In December 1891 Irving Stringham addressed the California Teachers' Association at Riverside, California. He chose as his topic The Past and Present of Elementary Mathematics and the address was published as:

Irving Stringham, The Past and Present of Elementary Mathematics, in Addresses delivered before the California Teachers' Association at Riverside, December 28-31, 1891 (University of California at Berkeley, Berkeley, California, 1892), 36-54.

It must be remembered that Irving Stringham gave this address in 1891. Much progress has been made in the understanding of the history of mathematics in the century and a quarter since it was delivered. For example, the work of the Arab/Islamic mathematicians was not appreciated at that time, so it is in no way a criticism of Stringham that he suggests they were "mere translators". Below is a version of part of Stringham's address:

In the long history of the world's intellectual progress nothing is more striking than the dependence of each succeeding age upon its predecessors for the materials out of which its own achievements are wrought. The raw materials we use in the manufacture of science are seldom dug up fresh from the earth, but are picked up by the wayside, where they were dropped by our predecessors, who could devise no use for them. It thus happens that the discoveries of Newton are old hints wrought over and made into the finished product of science by the master-mind, some of them dating backwards nineteen centuries, to the time of Archimedes. The most transcendent genius works under limitations and cannot see far into the future; if by fortuitous inspiration he anticipates discovery that, standing out of relation to the knowledge of his own time, belongs to a succeeding age, it remains unused, unfruitful, and of no current value till such time as it can be correlated with other scientific knowledge.

And so every science has its prehistoric stage, in which principles prospectively important are at first merely the incidents in apparently unrelated problems or investigations, their first users perhaps entirely unknown and never to be identified. Only later, possibly after repeated recurrence in investigations of which it is the true foundation, does the principle call attention to itself as of great importance. It is the recognition of its importance, not its first merely incidental use, that constitutes real discovery.

The prehistoric period of mathematics belongs to the centuries immediately preceding the earliest development of Greek philosophy, and appears to have been first cultivated in connection with land surveying and astronomy in Egypt and Assyria. It had its beginnings as a science in the latter part of the seventh century, B.C., at Miletus, where the Greek philosopher Thales, who had travelled extensively in foreign countries and had resided for a time in Egypt, first taught geometry deductively; but nearly contemporaneously also at Crotona, where Pythagoras first established the principles of the doctrine of proportion, and laid the foundations of a goodly part of elementary geometry. Thenceforward, so long as Greek civilization existed anywhere, the study of geometry never ceased to be an important factor in the world's intellectual progress, and for more than twelve centuries the continuity of its study was unbroken. First, Thales at Miletus, then Pythagoras at Crotona, then Hippocrates at Athens, then Euclid at Alexandria, then Archimedes at Syracuse, then Apollonius at Perga, then Pappus at Alexandria, wrought out in succession, classified and organized into scientific unity the great mass of propositions which constitute the subject-matter of geometrical study in our schools today.

Near the close of the Greek period, during which geometry, from the merest beginnings, had grown into a comprehensive system, the first published account of an algebraic analysis appears: the Arithmetica of Diophantos, the last contribution of the great Alexandrian school to mathematical science. This work, remarkable for its achievement of results but lamentably defective in method and organic unity, had in it the foreshadowings of a new science, but was in no respect the science itself. It possessed no power of further growth from within, and when algebra finally became scientific it was constructed on entirely new models.

But even if the work of Diophantos had been constructed upon correct scientific principles, it would still have passed speedily into oblivion, for the light of Greek civilization was already feeble and flickering and was soon completely extinguished by the fanaticism and bigotry that for seven centuries enveloped Europe in almost total darkness. In fact, the work of Diophantos was not rediscovered in Europe until the middle of the sixteenth century.

With the disappearance of the Greek schools there seemed to be no hope for the further cultivation of mathematics as a science on the face of the earth, for nowhere else in the ancient world, up to that time, had any results of a high order been achieved. But at this critical juncture a new light appears for us in the east. The Arithmetica of Diophantos had been published (conjecturally) in the fourth century of our era, and the Alexandrian school continued in existence until the Mohammedan conquest in 641 A.D., during which time mathematics was still cultivated, though feebly, in the form of commentary or perfunctory study, and without originality or fruitful result; and it was more than a century and a half before the final catastrophe - the capture of Alexandria and the burning of the great library by the Mohammedans - that there appeared in India a work on algebra and trigonometry by the astronomer Aryabhatta, of which no Greek mathematician of the earlier centuries would have been capable. From this time until the revival of learning in western Europe the Indians are the true discoverers in mathematical science. To what extent they were indebted to the Greeks for the raw materials out of which they constructed their algebra is not known, nor is it of consequence, for since the Greeks never succeeded in constructing an organic system of algebra, the indebtedness could, in any event, have been but small.

To the Indians, then, is due the credit of first creating algebra as a science. Two great works on mathematics and astronomy attest this claim; one by Brahmagupta, written in 628, which expounds a complete system of algebraic analysis, the other by Bhaskara, in 1150, upon arithmetic and algebra, in which the Indian system of arithmetic - the one we ourselves use - is employed. These works, however, record the highest achievements of the Indians in mathematics, and all subsequent progress in algebra has taken place in the west, and in modern times.

Though the Indians had shown themselves to be consummate algebraists, they were in no sense geometers; and far from adding any new thought to the science which the Greeks had created, they apprehended with difficulty and with much blundering what of geometry they received from Greek sources. Thus two great civilizations had stood on either side of a barrier which neither could pass. The Greeks had created geometry, but they could not invent algebra; the Indians invented algebra, but they could not add one proposition of importance to geometry.

Out of these two independent sources issued the two distinct streams of mathematical thought that flowed first sluggishly through the Mohammedan countries of Arabia, Africa, and Spain, thence finally into Christian Europe, where they were subsequently joined together. For it was from the Arabs that mediaeval Europe first acquired in the twelfth century some knowledge of both algebra and geometry. The Arabs, however, were not good conservers, or compilers, of the scientific knowledge accessible to them from Greek and Indian sources, and transmitted it in such imperfect form that many important principles had either to be rediscovered, or sought for in the Greek or Sanskrit, before Europe could come into full possession of the mathematical knowledge which Greece on the one hand and India on the other had contained. But be this as it may, Europe rapidly recovered, during the four centuries from the twelfth to the sixteenth, substantially all the mathematical knowledge that the ancient civilizations had bequeathed to their successors.

During the sixteenth century mathematics received its share of that activity in intellectual pursuits which has received the name of the Renaissance; but its development was strictly upon the lines that had been marked out for it in ancient times. Geometry did not free itself from the limitations as to method and scope that had been set for it by Euclid, Archimedes, and Apollonius. Algebra followed the Indian model, which had been recovered with much difficulty by the help of the Arabs, and, barring a few geometrical constructions of algebraic equations, mere translations into modern forms of expression of well-known Greek demonstrations, did not get beyond the limited definition of algebraic quantity as rational or commensurable number. Algebra and geometry were still two distinct sciences, and there were no indications, except in a few special instances, that the one was in any way susceptible of interpretation in terms of the other.

In the seventeenth century, however, three important steps were taken towards a recognition of the intimate relations that we now know these two main divisions of elementary mathematics bear to each other, and which play such an important part in all modern mathematical investigations. The first was through the invention of logarithms, in 1614, by Napier; the second through the invention of analytic geometry, in 1637, by Descartes; the third through the invention of the differential calculus, in 1665, by Newton.

The introduction of logarithms, however, important as it was as an essential part of the new analysis soon to be created, was recognized at this time only as simplifying the processes of arithmetical calculation.

Both the analytic geometry and the differential calculus seemed to be necessary prerequisites for a full understanding of logarithms as a part of analysis, and a century passed before their importance as such was recognized; furnishing, in fact, a remarkable instance of a premature discovery. The ability to recognize the true import of the logarithm was made possible only by discoveries that came half a century after the logarithm itself had been invented.

The discovery made known to the world by Descartes, in 1637, may be summed up in the words: Every function has a graph. To illustrate by a very simple example, the function $x^{2}$ has for its graph a parabola with its principal vertex at the origin of coordinates, and its principal diameter coincident with the $y$-axis. Thus:

The values of $x$ are represented by lines drawn horizontally from the origin $O$, to the left or right, from the extremities of which lines are drawn upwards to represent the values of $x^{2}$. Each pair of lines thus determined by a pair of values $x, x^{2}$ fixes a point in the plane, and the aggregate of all such points ranges itself along the curve known as the parabola. The function it is now customary to represent by a second letter, as $y$, and to speak of $x$ and $y$ as functions of each other. In a similar manner any algebraic equation between two quantities, $x$ and $y$, has its graph.
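Descartes's correspondence between a function and its curve can be illustrated numerically. The following is a minimal Python sketch, not part of the address; the sample values of $x$ are arbitrary:

```python
# Tabulate points (x, y) on the graph of y = x^2, pairing each value
# of x with the value of x^2, as in the construction described above.
def parabola_points(xs):
    """Return the list of points (x, x^2) for the given values of x."""
    return [(x, x * x) for x in xs]

points = parabola_points([-2, -1, 0, 1, 2])
# Each pair fixes a point in the plane, and every point satisfies
# the defining equation y = x^2.
assert all(y == x * x for x, y in points)
```

The aggregate of such points, taken for all values of $x$, is the parabola itself.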

Now mark the radical departure here taken in the interpretation of algebraic quantities. A straight line stands as the representative of any such quantity, an interpretation wholly repugnant to Greek geometry. So long as his practice conformed to the canon which, from time immemorial, had been his guide in the geometrical interpretation of quantity, $x^{2}$ could mean for the Greek only a square, an area, never a straight line. But his orthodoxy was the barrier to his further progress, and from the abandonment of that orthodoxy the modern mathematician dates the possibility of achievement beyond the limits of investigation which the Greeks had set for themselves. Henceforth, moreover, the interests of algebra and geometry were one and the same; progress in the one was to mean a simultaneous progress in the other.

Twenty-eight years pass, and we stand at the threshold of the greatest of the discoveries in mathematics of modern times - that of the differential calculus. The year is 1665, when Sir Isaac Newton communicated to some of his friends the fundamental ideas of the new method.

Taken in connection with the new interpretation of the algebraic equation by Descartes, the scope of this method was so great as eventually - that is, within another century at most - to reconstitute the entire body of mathematical science upon a new basis, and to completely change the attitude of mathematicians towards the problem of its further advancement. Henceforth its various parts are not looked upon as disassociated systems having no common meeting ground, but geometry, algebra, trigonometry, analytic geometry, the differential and integral calculus, are seen to constitute one organic whole, or rather one organic unit in a much larger whole. Henceforth mathematics thus newly constituted is to have its alphabet, its nomenclature, its language, and whoever would use it for any purpose must learn its nomenclature and speak its language. Henceforth it is not to be the plaything of the philosopher, nor the recreative and disciplinary study of the scholiast, but the instrument of research and of efficient accomplishment of practical ends in the hands of the astronomer, the physicist, and the engineer. Such was the unparalleled achievement of the seventeenth century.

But I must be more explicit. I am endeavouring to trace out, in brief outline, from the earliest times to the present day, the development of the foundation principles of the two great divisions of elementary mathematics, geometry and algebra (but in particular and chiefly the latter), and that which concerns us primarily in the work of Newton is his introduction of the continuous variable as one of the elements of algebraic analysis. For our present purpose it will be sufficient to describe the continuous variable as a straight line, having one of its extremities fixed at a point $O$, while the point $P$, which marks the other extremity, is free to move forwards or backwards in a straight line.

When $P$ moves without interruption in its path the variable quantity $OP$ is said to change continuously, and is called a continuous variable. It may or may not be possible to represent it by a rational, that is, a commensurable number.

At last the materials for a complete grounding of algebraic science, in logically fundamental principles, were at hand. Napier had given us logarithms, Descartes had put into our hands adequate means for the graphical representation and interpretation of the algebraic function, Newton had shown us how algebraic quantity could be freed from the limitation by which its meaning had always been confined to rational number. The materials for the work were indeed at hand, but they were in great part raw materials, and the energies of mathematicians were expended upon testing and reshaping the powerful instruments of analysis just discovered and enlarging the scope of their application, and little attention was given to a re-examination of the foundations of algebra. Yet, as a science, founded upon logically scientific principles, algebra was still in the formative stage and required this re-examination in the light of the new discoveries. Two centuries have been just sufficient to accomplish the task.

During the eighteenth century no important result was attained. The old algebra had acquired, during the centuries since the revival of learning, a momentum sufficient to carry its development forward upon the lines that had been originally set. But at the beginning of the present century a distinct step in advance was taken, by Argand and Gauss, through the introduction of the so-called imaginary as a quantity, having equal importance in algebra with so-called real quantity, and being susceptible of real geometrical interpretation. This step made it possible, as soon as its importance was fully understood, to define algebra for the first time.

When a series of elements operating upon each other in accordance with fixed laws produce only other elements belonging to the same series, they are said to constitute a group. Thus all positive integers, subjected only to the processes of addition and multiplication, produce only positive integers, and hence form a group.
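The closure property just described can be checked mechanically; here is a small Python illustration (not part of the address, and the sample range is an arbitrary choice) showing that addition and multiplication never lead outside the positive integers, while subtraction does:

```python
from itertools import product

def is_positive_integer(n):
    """True when n belongs to the series of positive integers."""
    return isinstance(n, int) and n > 0

# Closure check over a small sample: sums and products of positive
# integers are again positive integers, so the series forms a group
# under these two processes.
sample = range(1, 25)
for a, b in product(sample, repeat=2):
    assert is_positive_integer(a + b)
    assert is_positive_integer(a * b)

# Subtraction leads outside the series: 3 - 5 is not a positive integer.
assert not is_positive_integer(3 - 5)
```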

The effect of introducing into the arithmetic of positive integers the further processes of subtraction and division is to break the integrity of the old group and form a new one whose elements include, not only positive integers, but all rational numbers, both positive and negative, integral and fractional. A final step through evolution, or the extracting of roots, with its allied processes, leads in a similar way to imaginary and complex numbers - that is, numbers composed of both a real and an imaginary part.
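The enlargement from positive integers to all rationals can likewise be sketched in Python, using the standard Fraction type; the particular sample values below are arbitrary choices for illustration:

```python
from fractions import Fraction

# Starting from two positive integers, apply the four processes.
a, b = Fraction(2), Fraction(5)
results = [a + b, a - b, a * b, a / b]

# The enlarged group is closed: every result is again a rational number,
assert all(isinstance(r, Fraction) for r in results)
# but the integrity of the old group of positive integers is broken:
assert a - b == Fraction(-3)       # a negative element appears,
assert a / b == Fraction(2, 5)     # and a fractional one.
```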

Now if, as is legitimate, we regard all reals and imaginaries as special forms of complex quantities - reals having zero imaginary parts, imaginaries having zero real parts - then the algebraic processes of addition, subtraction, multiplication, division, evolution, involution, and the taking of logarithms, applied to complex quantities in any of their several forms, produce only other complex quantities. And hence:
The aggregate of all complex quantities - including all reals and imaginaries, both rational and irrational - operating upon each other in all possible ways by the rules of algebra, form a closed group.
If, in an algebra, the elements that constitute the subjects of its operations form a closed group when subjected to a complete cycle of such operations, such an algebra may be said to be logically complete.
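The closure of the complex quantities under the full cycle of operations can be spot-checked with Python's standard complex arithmetic and the cmath module; this is an illustration over arbitrary sample values, not a proof, and it is of course not part of the address:

```python
import cmath

z, w = complex(3, 4), complex(0, 1)   # arbitrary sample quantities
results = [
    z + w, z - w, z * w, z / w,   # the four rational operations
    z ** w,                       # involution with a complex exponent
    cmath.sqrt(w),                # evolution (root extraction)
    cmath.log(z),                 # the logarithmic process
]
# Closure: every result is again a complex quantity.
assert all(isinstance(r, complex) for r in results)

# Even the logarithm of a negative real stays inside the system:
assert cmath.isclose(cmath.log(-1), complex(0, cmath.pi))
```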

Now, in the elementary algebra we teach in the public school, involution and the logarithmic process form an essential part, and through them imaginary and complex quantities make their appearance as unavoidable subjects of its operations. They are necessarily elements coordinate with the real quantities of our algebra, which is therefore logically, and as we now agree also practically, only the fraction of an algebra if the complex quantity be left out of it. The inability of the earlier algebraists to recognize this fact made it also impossible for them to apply the algebraic processes of involution and the taking of logarithms to any except real and positive numbers.

Let us glance, for a moment, at what was further necessary in order to complete the foundations of our mathematical superstructure. The early works upon arithmetic and algebra had been little more than mere collections of rules for the solution of problems. Few principles were explained, none adequately, and this feature of logical incompleteness has remained prominent, with rare exceptions, in all text-books upon algebra up to the present time. By tradition, algebra became a mere mechanical device for turning out practical results; by careless reasoning, errors crept into the explanation of its principles and, through incompetent compilers, were perpetuated in the form of current literature; and thus, instead of becoming a classic like the geometry handed down to us from the Greeks in the form of Euclid's Elements, algebra became a collection of processes practically exemplified and of principles inadequately explained.

The binomial theorem was either assumed to be true for negative and fractional indices by mere analogy from one or two special cases, or a logically unsound proof was adduced and actually remains in nearly all of our textbooks today. No attempt at all was made to justify the associative, commutative, and distributive laws in the four fundamental processes.
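For the reader's orientation, the expansion at issue may be stated in modern notation rather than Stringham's:

$$(1+x)^{m} = \sum_{k=0}^{\infty} \binom{m}{k} x^{k}, \qquad \binom{m}{k} = \frac{m(m-1)\cdots(m-k+1)}{k!},$$

valid for an arbitrary (fractional, negative, indeed complex) exponent $m$ whenever $|x| < 1$; for a non-negative integral exponent the series terminates and the elementary theorem is recovered.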

Hence the necessity of a thorough overhauling of our algebraic system at the beginning of the nineteenth century. We have now accomplished the task; how well, posterity will decide. Abel, in 1829, examined thoroughly and laid down once for all time the conditions under which development by the binomial theorem is possible; and in 1870-71 Weierstrass and G Cantor gave us a new definition of irrational number, and established the doctrine of the irrational upon a strictly logical basis; and at about the same time (1870), Benjamin Peirce produced the final scientific formula into which our present definition of algebra is cast. It would take me too far afield to go over these matters here in detail. They require, in fact, days, rather than hours, of careful study for an adequate understanding of them, and I must content myself with referring you to the literature upon this part of the subject under discussion, which is now in existence and easily accessible.

The hurried survey we have now made of the course which mathematical thought has taken within the two great divisions of its work since the earliest times, shows how at every step in the progress of mathematical science, whether in the domain of algebra or geometry, it has seemed impossible for the mind to free itself from the tendencies, or avoid following out the line of development, which some previous age or generation has predetermined for it. Slow indeed has been our progress, if we merely count the centuries through which the struggle against difficulties has been maintained, though the achievement has undoubtedly been very great.

But standing, as we do now, near the close of the twenty-fifth century since mathematics became a science, and taking advantage of the discoveries of our predecessors, we may pass across the entire field that outlines the foundations of algebraic science, and seizing only upon those principles that mark its epochs of advance and are cardinal, construct an algebraic system which shall have all the logical rigour and completeness of Greek geometry; and for the accomplishment of this task no serious difficulty longer stands in our way. The materials are at hand for the purpose; it only requires some master's hand to mould them into coherence.

Last Updated October 2015