# George W Snedecor's books

We list below George W Snedecor's best-known books. The Statistical Methods book was very influential and ran to many editions, which we number as separate books in the list. As an indication of the popularity of this work we note that, as well as new editions, there were many reprintings: First Edition 1937; Second Edition 1938; Third Edition 1940; Fourth Edition 1946, Reprinted 1946, Second Reprinting 1948, Third Reprinting 1950, Fourth Reprinting 1953, Fifth Reprinting 1955, Sixth Reprinting 1955; Fifth Edition 1956, Reprinted 1957, Second Reprinting 1959, Third Reprinting 1961, Fourth Reprinting 1962 ... The book sold more than 125,000 copies. We also include in our list the book Statistical Papers in Honor of George W Snedecor.


Correlation and Machine Calculation (1925) with Henry A Wallace.

Correlation and Machine Calculation. Revised Edition (1931) with Henry A Wallace.

Calculation and Interpretation of Analysis of Variance and Covariance (1934)

Statistical Methods applied to Experiments in Agriculture and Biology (1937)

Statistical Methods applied to Experiments in Agriculture and Biology 2nd Edition (1938)

Statistical Methods applied to Experiments in Agriculture and Biology. 3rd Edition (1940)

Statistical Methods applied to Experiments in Agriculture and Biology. 4th Edition (1946)

Everyday Statistics (1950)

Everyday Statistics. 2nd Edition (1951)

Statistical Methods applied to Experiments in Agriculture and Biology. 5th Edition (1956)

Statistical Methods (6th Edition) (1967), with William G Cochran

Statistical Papers in Honor of George W Snedecor (1972), by T A Bancroft (Ed.).

Statistical Methods (7th Edition) (1980), with William G Cochran

Statistical Methods (8th Edition) (1989), with William G Cochran

1. Correlation and Machine Calculation (1925), by George W Snedecor and Henry A Wallace.
Henry A Wallace on the background.
Oral History Research Office of Columbia University (1951).

My work on cycles began in about 1913, when I began to study the relationship of weather to corn yields, of corn supply to corn prices, the relation of corn prices to hog prices, the cycle of hogs, the cycle of cattle, the cycle of horses, and so on. I did that as a preliminary to getting into more serious and careful statistical work. As a result of studying the relationship of corn weather to corn yields, I ran across the work of H L Moore, the Columbia University professor. He had put out some very careful statistical analyses involving the relation of independent variables to a dependent variable, expressing them by regression lines and correlation coefficients. Suffice it to say that I became proficient at doing work of that kind, using a key-driven calculating machine to facilitate matters.

I thought that the people at the Iowa State College of Agriculture and Mechanical Arts at Ames were not sufficiently current in that field. I went up and met with several of the professors and sold them on the idea that they should be able to evaluate their experimental work much more accurately if they had more adequate statistical background. As a result, they employed me for ten weeks to conduct a statistical course ... There was no one in the class of some twenty who was not either a professor or a post-graduate student.

Then I took another problem which was interesting to them as agricultural people - the relationship between farm land values in the different counties - for which there were census figures - and the yield of corn per acre. We used an average of ten years for which we had crop reporting figures. We used the percentage of the crop land in corn, for which we had census figures; the value of the buildings per acre, for which we had census figures; and so on. We took up various independent variables bearing on the dependent variable of the value of the farm land per acre. That was the problem which I set to them, which later was embodied in a bulletin put out by Iowa State College entitled Correlation and Machine Calculation.

1.1. Review by: Garnet Wolsey Forster.
Journal of Farm Economics 8 (1) (1926), 141-142.

This small volume by Wallace and Snedecor is a welcome contribution to the subject of statistical methods. The bulletin presents in a concise and clear form a time-saving method of handling simple linear correlation and multiple linear relation problems by the use of either the key-driven calculating machines, such as the Comptometer and Burroughs' Calculator, or the crank-driven machines, such as the Monroe or Marchant.

In addition to presenting a method of machine calculation, the authors give a clear and simple explanation of the various correlation coefficients, simple, partial, and multiple. The discussion, however, is limited to linear correlation. In the opinion of the reviewer this is a serious omission, inasmuch as the relationship between variables is often non-linear. It is true, of course, that any one familiar with the methods recently developed for treating non-linear relationships could use the machine methods outlined in this bulletin. Nevertheless, a non-technical presentation of the methods employed in solving problems involving non-linear relationships is badly needed. It is hoped that such a treatise will be forthcoming - if not from Wallace and Snedecor, then by someone as well qualified.

1.2. Review by: Henry Schultz.
Journal of Political Economy 35 (2) (1927), 309-311.

"The rapid extension during recent years of the ideas of simple correlation has imposed their use upon many scientists not trained in the mathematical theory underlying them. The present trend in all biological sciences, as well as in economics and psychology, is still further to extend the use of correlation, broadening its scope to include the associations among more than two variables. One object of this bulletin is to present in simple untechnical language some explanation of the meaning and uses of the various correlation coefficients, simple, partial and multiple.

The second and principal object of the bulletin is to set forth explicit directions for the use of the usual commercial forms of calculating machines ... in finding correlation coefficients or related constants."

In the attainment of both of these objects the authors have been eminently successful.

The general subjects explained by the authors are: simple correlation; multiple correlation - three variables; multiple correlation - more than three variables; partial correlation coefficients; and coding of observations. These are followed by some very sane "Precautions and Suggestions." The formulas used for the computation of the various coefficients and their related constants are not proved, as this would lie beyond the scope of this bulletin, but they are interpreted in non-technical terms, and each step in their computation is illustrated by a concrete problem - the estimation of the average value per acre in twenty-five Iowa counties on January 1, 1920, from a knowledge of one or more of five related factors.

2. Correlation and Machine Calculation. Revised Edition (1931), by George W Snedecor and Henry A Wallace.

2.1. From the Introduction.

The rapid extension during recent years of the ideas of simple correlation has imposed their use upon many scientists not trained in the mathematical theory underlying them. The present trend in all biological sciences, as well as in economics and psychology, is still further to extend the use of correlation, broadening its scope to include the associations among more than two variables. One object of this bulletin is to present in simple un-technical language some explanation of the meaning and uses of the various correlation coefficients, simple, partial and multiple.

The second and principal object of the bulletin is to set forth explicit directions for the use of the usual commercial forms of calculating machines, either key driven, such as the Comptometer and Burroughs Calculator, or crank driven, such as the Monroe or Marchant, in finding correlation coefficients or related constants.
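The "machine method" the bulletin describes amounts to accumulating the sums n, Σx, Σy, Σx², Σy² and Σxy in a single pass on the calculator and combining them at the end, from which the simple correlation coefficient falls out directly. A minimal sketch in Python, with invented figures standing in for the bulletin's county data:

```python
import math

def pearson_r(xs, ys):
    """Simple correlation from the accumulated sums a key-driven
    machine would produce: n, Sx, Sy, Sxx, Syy, Sxy."""
    n = len(xs)
    Sx, Sy = sum(xs), sum(ys)
    Sxx = sum(x * x for x in xs)
    Syy = sum(y * y for y in ys)
    Sxy = sum(x * y for x, y in zip(xs, ys))
    num = n * Sxy - Sx * Sy
    den = math.sqrt((n * Sxx - Sx ** 2) * (n * Syy - Sy ** 2))
    return num / den

# Invented illustrative data, not the bulletin's.
r = pearson_r([1, 2, 3, 4, 5], [1, 3, 2, 5, 4])
print(round(r, 3))  # 0.8
```

The single-pass form is exactly what suited the machines of the day: only five running totals need be carried, and no deviations from the mean are ever written down.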

3. Calculation and Interpretation of Analysis of Variance and Covariance (1934), by George W Snedecor.
3.1. From the Introduction.

Since its introduction by R A Fisher in 1923 (12), analysis of variance has proved itself a useful addition to statistical methods. It is a technique for segregating from comparable groups of data the variation traceable to specified sources. It furnishes an estimate of experimental error freed of that part of the variability whose origin is known. In conjunction with an associated test of significance, it affords a basis of judgment as to whether or not several groups are samples from a single homogeneous population. Through the use of degrees of freedom it is equally well adapted to the treatment of large or small samples. It lends itself readily to the design of efficient experimental procedures, and to tests of experimental technique. In the analysis of covariance, the method is extended to include two or more associated variables.

The method of this monograph is to present in simple fashion a group of successful applications of analysis of variance and covariance. Explanations of the processes of calculation and the results obtained are rather copious. Interpretations are necessarily limited, but may usually be found in the references cited. At the first reading, sections entitled "Computation" and "Notes on theory" may well be omitted. The "Computation" sections contain complete instructions for calculation.
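The technique the monograph teaches can be indicated in miniature: a one-way analysis of variance separates the total variation into a between-groups part and a within-groups part, and compares the two by the variance ratio F. A sketch in Python; the three small groups are invented for illustration, not Snedecor's examples:

```python
def one_way_anova(groups):
    """One-way analysis of variance: partition the variation into
    between-group and within-group parts and form the F ratio."""
    all_obs = [x for g in groups for x in g]
    n_total = len(all_obs)
    grand = sum(all_obs) / n_total
    means = [sum(g) / len(g) for g in groups]
    # Variation traceable to the grouping (the "specified source").
    ss_between = sum(len(g) * (m - grand) ** 2 for g, m in zip(groups, means))
    # Residual variation: the estimate of experimental error.
    ss_within = sum((x - m) ** 2 for g, m in zip(groups, means) for x in g)
    df_between = len(groups) - 1
    df_within = n_total - len(groups)
    f = (ss_between / df_between) / (ss_within / df_within)
    return ss_between, ss_within, f

ssb, ssw, f = one_way_anova([[5, 6, 7], [8, 9, 10], [4, 5, 6]])
print(round(ssb, 6), round(ssw, 6), round(f, 6))  # 26.0 6.0 13.0
```

A large F relative to the tabulated distribution is the ground for judging that the groups are not samples from a single homogeneous population; the degrees of freedom make the same comparison valid for small samples.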

3.2. Review by: Harold Hotelling.
Journal of the American Statistical Association 30 (189) (1935), 117-118.

The analysis of variance is a concept ascribable to R A Fisher which includes as special cases a large number of statistical methods, such as the comparison of the means of two samples or of two regression coefficients, and deals also with situations in which there are three or more samples, and in which there are cross-classifications. The distribution of ratios of independent estimates of variance discovered by Fisher makes possible the comparison of dispersion among class means with the dispersion to be expected on account of random sampling, in view of the observed intraclass variance, and thus to discover whether the principle of classification used was significantly related to the variate measured. Though the examples given by Fisher pertain chiefly to crop yield experiments, the method is applicable to a great variety of social and scientific observations. An extension needed where two or more variates are measured simultaneously is the analysis of covariance. The algebra and calculus of the analysis of variance were presented by J O Irwin in the Journal of the Royal Statistical Society for 1931. Professor Snedecor in this little volume describes the methods appropriate to various cases by means of arithmetical examples.

For a large number of potential users of modern efficient statistical methods, this manual will be an invaluable desk companion. It is clear, simple, explicit, and correct. The mathematics in it ought to be intelligible to a grammar-school graduate, yet it will be of service even to the most advanced research workers in economics, sociology, biology, psychology, medicine, physics, chemistry, and astronomy. There is, of course, a danger in the use of any mathematical method whose theoretical basis is not thoroughly clear to the user, but the chances of an intelligent reader of this book going wrong in the use of its methods do not appear to be serious.

3.3. Review by: Warren C Waite.
Journal of Farm Economics 16 (4) (1934), 728-730.

This excellent little book will be useful to all who wish to use the analysis of variance in an examination of a problem or seek to secure more knowledge of the nature of the procedures. It consists of the presentation of twelve examples carefully selected to show the principal problems arising in the use of the analysis. The numerical computations are completely illustrated and illuminating comments are made regarding the interpretation of the results. The plan of presentation is straightforward. Part I deals with cases involving a single criterion of classification, Part II deals with cases involving two criteria of classification, Part III with cases of three or more criteria of classification, and Part IV with the use of the procedures in the analysis of covariance. A useful table is included for testing the significance of the results. Adequate references are provided to the more technical literature for those who wish to pursue further.

The analysis of variance has been used most widely in biometry, but will be useful in many problems arising in agricultural economics. The question which it attacks is whether a set of mean values resulting from a classification of data in a particular way could result from chance or whether the resulting means as a group are significantly different. The general proposition upon which the procedure rests is that if we have a homogeneous population which is normally distributed and we know its standard deviation, we are able to calculate the distribution of means of samples of various sizes drawn from it. Knowing what our means would be in the case of a homogeneous population, we may then compare our actual means with those of our hypothetical case to determine whether the actual means might have arisen from chance variations. If there is a considerable probability that the means resulting from our classification arose from chance then we must conclude that our classification is non-significant and the data are homogeneous from the viewpoint of our classifying factor. If there is a considerable probability that the variation in our means could not arise from chance then our classification is significant and we must consider this factor in our conclusion with respect to the problem. The tests are worked most readily in terms of squared deviations and if we are dealing with a case of significant differences among means we have made a separation of the total variation in the series into two portions.

One part of the variability may be ascribed to the operation of forces which produce in general different values of the variable in the various classes into which we have sorted our cases. These elements we call heterogeneous. The remainder of the variation is expected to show no significant variation in its parts regardless of how we classify our cases. This latter is said to be homogeneous. It is evident that this is important information in any problem. If the analysis indicates we are drawing observations from a homogeneous universe, we may treat any random sample uniformly and indicate our errors specifically. If our sample comes from a heterogeneous universe, we may be led into error regarding our conclusions, unless we can provide a consistent probability of drawing samples from the various heterogeneous sections. If we have some knowledge of the classifying factor, we may generally improve our results by proper sampling, and with only an hypothesis of complete lack of knowledge so design our experiment or research as to yield a maximum possible variety of classifications to test the homogeneity. Elimination of the heterogeneous elements may enable us to combine the remaining homogeneous portions of several groups of data to produce a larger group of homogeneous data resulting in an improved estimate of the experimental error. Economic data are likely to have many heterogeneous elements and devices to distinguish these elements lead to considerable improvement in the precision of results.
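The "separation of the total variation in the series into two portions" that Waite describes is the algebraic identity SS_total = SS_between + SS_within, which holds however the cases are classified; only the share assigned to the between-class portion changes, and a classification is informative when that share is large. A small Python check on invented data:

```python
def sum_sq(xs, center):
    """Sum of squared deviations from a given centre."""
    return sum((x - center) ** 2 for x in xs)

# The same six invented observations, classified two different ways.
classifications = [
    [[3, 7], [4, 8], [5, 9]],   # classification unrelated to the values
    [[3, 4, 5], [7, 8, 9]],     # classification capturing a real difference
]
for groups in classifications:
    pooled = [x for g in groups for x in g]
    grand = sum(pooled) / len(pooled)
    ss_total = sum_sq(pooled, grand)
    ss_within = sum(sum_sq(g, sum(g) / len(g)) for g in groups)
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    # The partition holds either way; only the shares move.
    assert abs(ss_total - (ss_between + ss_within)) < 1e-9
    print(ss_between, ss_within)  # 4.0 24.0, then 24.0 4.0
```

In the first classification almost all the variation stays within classes (homogeneous from the viewpoint of the classifying factor); in the second, most of it is traceable to the classification.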

4. Statistical Methods applied to Experiments in Agriculture and Biology (1937), by George W Snedecor.
4.1. From the Preface.

The beginner in experimentation too often finds himself supplied with a pair of elaborate mechanisms. In the one hand is a mass of data demanding simplification and interpretation, while in the other is a complex statistical methodology said to be necessary to research. How shall the two be geared together? Since the data can be only inefficiently utilised without statistical method, and since method is futile until applied to data, it seems strange that greater effort has not been made to unite the two. For those of some experience there are adequate texts and journal articles. It is the novice to whose needs this book is directed. It is hoped that he may be furnished with a smoothly working combination of experimental data and statistical method.

Like all other sciences, statistics is in a stage of rapid evolution. During the last 20 years new discoveries have swiftly succeeded each other, fruitful syntheses have been effected, novel modes of thought have developed and a whole series of brand new statistical methods have been marketed. The biologist who has not been able to keep abreast of the progress of statistics finds himself a bit confused by the new ideas and technical terms. It is thought that he will welcome a statement of them in a form that will not require too much distraction of his attention from necessary professional duties.

It is a fundamental belief of the author that statistical method can be used competently by scientists not especially trained in mathematics. The conditions surrounding the mathematical theorems can be set forth in terms quite readily understood by the lay reader. Since mastery of two sciences is possible for only few, it is necessary for most of us to advance by cooperation. To the mathematical statistician must be delegated the tasks of developing the theory and devising the methods, accompanying these latter by adequate statements of the limitations on their use. None but the biologist can decide whether the conditions are fulfilled in his experiments and interpret the results. The only mathematics used in this book is arithmetic, supplemented by enough symbolism to make the exposition intelligible.

In the course of the development of each bit of scientific knowledge there comes a time when the experimental techniques must be questioned. Are they adequate to furnish the demanded precision of results? In what respects need they be improved? Is the most hopeful point of attack in the laboratory methods or in the experimental material? Fortunately, statistical methods supply answers, in many cases with little or no extra labour in collecting data, provided only that slight but necessary modifications be included in the plan of the experiment. Some of these tests of technique can be discussed even in this elementary presentation.

Small sample methods are prerequisite in most biological data. For that reason, they are introduced at the start. The classical theory of large samples receives scant attention. In most places where it is mentioned at all it is introduced as a simplified special case of the small sample.

The arrangement of the material in this text is not so much logical as developmental. The easiest ideas are put first, and only one new concept is presented at a time. The experienced reader will often feel a sense of inadequacy. It is believed that this will disappear as he continues, and that the inexperienced will be inducted with a minimum of difficulty. Numerous examples form an indispensable part of the presentation. In most of them the statistical method, with its meaning, is emphasised, the necessary drudgery of calculation being reduced to the lowest level.

Certain diligent but misguided enthusiasts have brought down upon statistics the opprobrious description, "dry as dust." Of course, one must take into consideration the point of view. Data on golf scores, operations, and babies are arid indeed to the listener, but of absorbing interest to the narrator. We have endeavoured to present the subject in a different aspect. Fundamentally, statistics is a mode of thought. Biometrics is a delineation of living things. While the mechanism of description is always likely to be tedious, the effort has been made to emphasise the subject portrayed rather than the technique of the portrayal.

Statistics at Iowa State College is a cooperative enterprise. In a sense the author is merely reporting the thinking of his colleagues. Their interest, advice and help have made possible the experience upon which this book is founded. Their generous contributions of experimental data and technical knowledge will, if I have succeeded in interpreting them adequately, be helpful to others engaged in research.

It is a pleasure to acknowledge the leadership of Professor R A Fisher. Even he who runs may read my appreciation of his unifying contributions to statistics. By his residences as guest professor in mathematics at Iowa State College, as well as through his writings, he has exercised a profound influence on the experimental and statistical techniques of the institution. He and his publishers, Messrs. Oliver and Boyd of Edinburgh, have been liberal in permitting the use of tables of functions.

My collaborators in the Statistical Laboratory have been unsparing of their help. To A E Brandt, Gertrude M Cox, H W Norton and Mary L Greenwood, I am indebted for valuable criticisms, suggestions, and computational assistance.

GEORGE W. SNEDECOR
Statistical Laboratory
Iowa State College
September, 1937

4.2. Review by: Alan E Treloar.
Journal of the American Statistical Association 33 (201) (1938), 271-273.

The author sets as his objective in this book the gearing together of experimental data and statistical method for the beginner. "It is the novice to whose needs this book is directed.... The only mathematics used ... is arithmetic, supplemented by enough symbolism to make the exposition intelligible.... The easiest ideas are put first, and only one new concept is presented at a time." With simplicity of verbal exposition as a keynote, the readers are addressed directly in the informal conversational style of a laboratory discussion between an understanding teacher and his responsive student. The use of this form by Snedecor will probably impart a sense of ease and confidence to the beginner.
...
The principle of instruction is apparently not to disturb the reader by mathematical generalisation but to extend him through the analysis of selected numerical examples closely following the given types. The aim is to demonstrate a formula achieving results, thus fostering enthusiasm for it as a tool. Perhaps it is in order to restrain that enthusiasm from too ardent expansion in the reporting of research that the chapter concludes with an admonition given repeatedly in the book in diverse forms: "Don't publish computational details or discussions of statistical methods." One wonders if it is not this detailed teaching in terms of computational procedure related to specific examples, at the expense of full consideration of the principles of reasoning on which statistical tests rest, which stimulates the all too common effusion of the tyro.
...
By all odds the most difficult field of statistical analysis, that of drawing sound conclusions from very limited information, is made to appear simple and without trace of hazard. Confidence gained through learning how to apply a formula does not convert a computer into a wise interpreter. If this missing information be supplied by a competent instructor, as surely it is in Snedecor's own classes, then the value of the book will rise to a high level. Students of agrobiology will appreciate learning statistical procedure from a painstaking and indulgent teacher, but they will not really know what they are up to until they can clearly state the reason why for each step in their own words.

5. Statistical Methods applied to Experiments in Agriculture and Biology. 2nd Edition (1938), by George W Snedecor.
5.1. Review by: Edward L Dodd.
Science, New Series 89 (2310) (1939), 317-318.

In the preface, the author writes: "It is a pleasure to acknowledge the leadership of Professor R A Fisher. ... By his residence as guest professor in mathematics at Iowa State College as well as through his writings he has exercised a profound influence on the experimental and statistical techniques of the institution." It is not surprising, then, that Snedecor's book should resemble R A Fisher's "Statistical Methods for Research Workers" more than it resembles the usual text-book on statistics. And the appearance of a sixth edition of the Fisher book (1936) testifies to the assistance that such a book can render to investigators, especially to those working in biological fields. Fisher's book is a reference book, almost devoid of mathematical proof, giving by detailed examination of numerous problems some of the most important principles of statistical research, with special emphasis upon significance tests. Snedecor's book proceeds along the same general lines, but the latter is a little more informal, it devotes a little more space to making plausible what it does not attempt to prove, and it inserts 417 examples, enough to make it a practical text-book as well as a reference book.
...
Although the book avoids mathematical proof, there are 175 citations, among which appear the names of Bartlett, Brandt, Davenport, Ezekiel, Fischer, R A Fisher, Galton, Gauss, Goulden, Irwin, Lipka, Neyman, Pearl, Karl Pearson, Egon Pearson, H L Rietz, Sheppard, Snedecor, "Student," W R Thompson, Tippett, Wallace, Wishart, Wilks and Yates. Even those who are interested primarily in theoretic probability and statistics may derive considerable benefit from a perusal of such a book as Snedecor's, where detailed consideration is given to particular problems.

5.2. Review by: Charles A Shull.
Botanical Gazette 100 (4) (1939), 874-875.

Those who may have found other works on statistical methods forbidding should examine this excellent exposition of the subject. It is addressed particularly to the novice, and the reviewer is impressed with the nontechnical language in which the various procedures are explained. The author has demonstrated that discussion of the mathematical principles involved in statistical studies is not necessarily uninteresting and unintelligible to the beginner. The more readily mastered principles and techniques are treated first, and new concepts are introduced one at a time; in this way progress is gradual, and each new step easy to take. Here is a manual of statistics which apparently can be used without excessive study of mathematics. It necessarily covers much the same ground as the well known work by R A Fisher, but the approach is somewhat simpler and less technical.

The first seven chapters deal with the simplest problems the investigator is likely to encounter: statistical study of attributes; comparative measurement of individuals; sampling of normal populations; statistical comparison of two groups; short cut methods and approximations; linear regression; and correlations. The succeeding chapters cover the more complicated cases: large sample theory; multiple degrees of freedom; variance; covariance; multiple regression; curvilinear regression; individual degrees of freedom; and binomial and Poisson distribution.
...
The book is recommended as a constant companion of those who are attempting to establish the validity of conclusions from mass study of responses. Most investigators sooner or later find that statistics are necessary to the solution of their problems; moreover, if the principles are mastered, more frequent use would naturally be made of these methods of testing results. The plane of experimental work would be raised considerably if the statistical methods outlined in this volume were more generally employed. No better work is available for the beginner.

5.3. Review by: W Edwards Deming.
Journal of the American Statistical Association 34 (206) (1939), 421-422.

A four-word description might well be "Fisher in the vulgate." The fact that a second edition is called for already is confirmation of the conjecture made last year that the book would enjoy a widespread adoption as a text. The style is lucid and colloquial, and anyone who has forgotten even a large part of his college algebra will have no difficulty following the arguments and learning to apply the technics: from that standpoint the book is an undoubted success. Yet such weapons in the hands of persons who are not such expert experimentalists as Mr Snedecor, are apt to do more harm than good; nowhere in the book is there any attention given to the question, "When do these methods apply?" Until randomness is established there is more to an experiment than simply the numbers that are recorded. A physicist often takes hundreds of observations while perfecting his apparatus, recording nothing at all; then, when he is satisfied, records six observations. Yet these six are worth more than six million taken earlier in the experiment, because only those at the end are amenable to the statistical treatments advocated. Mr Snedecor's assumption is that a parent population necessarily exists. Others insist that the parent population must be created. Whether it is normal or nearly normal is not so much of a worry as whether it exists at all; it does not until the steady state of statistical control has been attained. Usually this is a matter of long and careful experimentation with control chart technic.
...
It is always easy to find fault. The fact is that I have found the book to be of great assistance for reference, and I take pleasure in recommending it as a text, provided sufficient attention is given to guided supplemental reading to obtain perspective; the contents would then be ample for a year's course with students who already possess some training in one of the natural sciences.

6. Statistical Methods applied to Experiments in Agriculture and Biology. 3rd Edition (1940), by George W Snedecor.
6.1. From the Preface.

I seize this opportunity of expressing my grateful appreciation of the friendly and helpful comments received from readers in all parts of the world. In this third printing I have corrected a number of errors that have been called to my attention. New material has been added at the end of chapters 6 and 16, while an additional chapter 17 has been written to incorporate methods useful in some of the broader fields of sampling. My colleague, Professor W. G. Cochran, has given me indispensable help and criticism in revising and extending the text.
G. W. S.
September, 1940

6.2. Review by: Charles A Shull.
Botanical Gazette 102 (2) (1940), 415.

The third edition of this excellent work has been improved by correction of errors appearing in the second edition, by brief additions at the ends of chapters 6 and 16 (on linear regression and binomial and Poisson distributions), and by the addition of chapter 17, on design and analysis of samplings. This last chapter is a very practical one, taking up the problems of sampling from various types of populations, size of samples, etc. Actual examples are used to illustrate the methods. This volume is the best current source of information for those who need to examine results for their validity.

6.3. Review by: Cyril Harold Goulden.
Journal of the American Statistical Association 36 (214) (1941), 313.

Since the first editions of this book have been extensively reviewed, it is not necessary to deal with the main body of the text. In the third edition, several additions have been made that are of considerable value. Two short sections have been added to the chapter on linear regression. These are entitled, "Regression and rates," and "The standard error of a forecast." The title of the latter section may be misleading to some in that the section actually deals with standard errors of values that have been adjusted by linear regression equations. In many cases this is not strictly a matter of forecasting. It is unfortunate that there are mistakes in the formulas of this section. ...

The re-written section on "Test of homogeneity of variance in several groups" is extremely useful, as this is becoming a standard procedure in many projects requiring statistical analysis. Another new section has to do with transformations of data for tests of significance. This is also very timely as there is undoubtedly a good deal of confusion as to the need for transformations and how they should be made. The explanations are clear and logical and the necessary tables are given.

The chief addition to the book is an entirely new chapter "Design and Analysis of Sampling." This is a subject which is not dealt with in most books on statistical analysis and there seems to have been an impression that modern statistical methods, as exemplified chiefly by the analysis of variance technique, do not have a direct bearing on sampling problems. Snedecor illustrates how the principles of the analysis of variance throw a great deal of light on problems of this type. This chapter is excellent; the only suggestion that might be made is that stricter editing of the first few pages would have contributed to greater clarity of statement.
7. Statistical Methods applied to Experimenting in Agriculture and Biology. 4th Edition (1946), by George W Snedecor.
7.1. From the Preface.

In this edition the text has been largely rewritten, and the scope has been widened as follows: (i) greater emphasis has been placed on the theoretical conditions in which the various statistical methods have validity, and concurrently (ii) on the conduct of the experiment so as to incorporate in the data the information desired; (iii) estimates and fiducial statements have been brought into equal prominence with tests of hypotheses; (iv) there is increased reliance on experimental samplings to exemplify distribution theory; (v) the treatment of correlation and of experimental designs has been expanded; and (vi) the methods for disproportionate subclass numbers have been extended to include all those necessary for ordinary needs.

As before, I have leaned heavily on my colleagues in the Statistical Laboratory and on other members of the Iowa State College staff. My indebtedness to them is gratefully acknowledged.
G. W. S.
January, 1946

7.2. Review by: Frederick Mosteller.
The Annals of Mathematical Statistics 19 (1) (1948), 124-126.

Statistical Methods is a non-mathematical treatment of modern experimental statistics. Few non-mathematical books are available that treat such topics as confidence limits, use of transformations, and analysis of variance and covariance in the detail presented by Snedecor. The examples are largely, but not entirely, drawn from agriculture and animal husbandry. The exercises for students are extensive and thought-provoking.

Unlike most non-mathematical texts the book under review does not spend pages and pages on methods of recording frequencies and methods of computing countless moments which are seldom used in the later developments of the text. There is no long exasperating discussion of kurtosis and skewness; and there is no parade of qualitative Greek names for categorising frequency distributions.

The reviewer has used this book for teaching a second course in statistics to social science majors with reasonable success. The main disadvantage was the biological nature of most of the examples, but until some author writes a comparable book using social science examples, the reviewer will continue to use Snedecor's material for a large part of the course.

The main differences between the Third and Fourth Editions of this text have been adequately summarised by Snedecor:

"(i) greater emphasis has been placed on the theoretical conditions in which the various statistical methods have validity, and concurrently (ii) on the conduct of the experiment so as to incorporate in the data the information desired; (iii) estimates and fiducial statements have been brought into equal prominence with tests of hypotheses; (iv) there is increased reliance on experimental samplings to exemplify distribution theory; (v) the treatment of correlation and of experimental designs has been expanded; and (vi) the methods for disproportionate subclass numbers have been extended to include all those necessary for ordinary needs."

Some more obvious changes in the Fourth Edition are the entirely new type and summaries which are included at the end of some of the chapters. The practice of using random sampling numbers (iv) to help explain theory has long been employed by teachers of statistics, but few authors have taken as much advantage of this technique as has Snedecor. In the Fourth Edition confidence intervals are widely used (iii). The author uses the adjectives "confidence" and "fiducial" more or less interchangeably, but it is the reviewer's opinion that it is the Neyman concept rather than the Fisherian that predominates. It should be remarked that this is one of the few texts that give the students the idea that in linear regression we do not predict y with the same accuracy for every x even when linearity and homoscedasticity hold (v).

7.3. Review by: Bentley Glass.
The Quarterly Review of Biology 21 (3) (1946), 323.

The three earlier editions of this standard textbook and reference have been previously reviewed in this section. The new revision shows a considerable amount of rewriting and rearrangement, and the scope of the treatment has been enlarged to give "greater emphasis ... on the theoretical conditions in which the various statistical methods have validity, and ... on the conduct of the experiment so as to incorporate in the data the information desired; estimates and fiducial statements have been brought into equal prominence with tests of hypotheses; there is increased reliance on experimental samplings to exemplify distribution theory; the treatment of correlation and experimental designs has been expanded; and the methods for disproportionate subclass numbers have been extended to include all those necessary for ordinary needs."

The outstanding characteristics of the book remain its clear and easy style, its abundance of excellent, practical examples, its central emphasis on the analysis of variance, and its focus on the relation of statistics to experimental work in a variety of fields. Biologists will find it among the most valuable books on their shelves. Students should be grateful for so stimulating and effective an approach to the subject.

7.4. Review by: Francis Marion Wadley.
Science, New Series 103 (2674) (1946), 407-408.

This is the fourth edition of this well-known work, which has been widely used in the fields of agriculture and biological research since its first appearance in 1937. The general order of presentation is the same as before: simple variation and correlation, some large sample theory and more complex cases of chi-square, analysis of variance and covariance, multiple and curvilinear regression, and more complex concepts. There has been considerable minor rearrangement, and new emphasis has been placed on sampling, fiducial limits, estimation, and components of variance. The format is somewhat more attractive than in previous editions.

The book begins with several new sections on sampling of attributes, considerably more imposing than the former very elementary opening. A table of fiducial limits for binomial material is introduced (its theory being left to Chap. 16). There is also a new and useful table of random numbers. Some of the ideas brought out in the former Chapter 1 are then developed. Other chapters show less difference from former editions, but in all there are changes. Graphic tests of significance are omitted from chapters 2 and 4, and mathematical tests are treated more exhaustively. Fundamentals of regression and correlation are more fully discussed, and the Z-transformation is introduced. Chapter 9, on chi-square, contains some material originally in Chapter 1, as well as some new material. The chapters on analysis of variance contain material on basic assumptions, components, and disproportionate frequencies, not in older editions. Later chapters have fewer changes, but there is more attention to Gauss multipliers and more detail in discussion of single degrees of freedom, while the section on errors of "betas" is omitted. An obvious error in Example 14.7 is retained. Some familiar problems and sections are omitted and some new ones introduced.

These changes show the influence of development in the knowledge of statistics and of the work of associates on the strong staff at Ames. The book retains many of the characteristics of earlier editions. The informal language with its personal pronouns, adding to readability; the effort to develop logic "painlessly"; the presentation of tables in a form to appeal to experimenters more than to mathematicians; the strong practical emphasis and wealth of practical problems are all there. Analysis of variance is strongly emphasised, and other techniques are related to it. The close relation of the author's laboratory to experimental work in various fields is well reflected. Difficult questions are handled in an apparently easy manner, reversing the practice in some texts.

The changes superimposed on the former development have made logical outlining a little difficult. The text is definitely more valuable as a reference than earlier editions and seems more difficult to adapt to teaching. In studying and using the text, the scientific worker will feel anew the influence of the modest and unselfish work of the author and his associates, which has already contributed so much to progress.

7.5. Review by: J W.
Journal of the Royal Statistical Society 109 (4) (1946), 505-506.

This book first appeared (1937) when there had been time to settle down to the new conceptions of statistics as applied generally by biologists, which date from the appearance twelve years earlier of R A Fisher's epoch-making Statistical Methods for Research Workers. The author is head of the Statistical Laboratory of Iowa State College, and is both a mathematician and an essentially practical worker. The emphasis throughout was laid on methodology, and infinite pains were taken, even at the cost of seeming wordy, to explain everything in simple terms, and to illustrate profusely by means of examples.

The present edition is largely a re-written text, in which the scope has been widened in a number of directions. As explained in the preface, greater emphasis has been placed on the theoretical conditions in which the methods have validity, and on the conduct of the experiment; further, estimates and fiducial inference have been brought into equal prominence with tests of hypotheses. The treatment of correlation and of experimental design has been expanded, and the methods for dealing with disproportionate sub-class numbers have been extended.

The impression left on the reviewer's mind is that the book has been very much improved. It will startle the reader accustomed to the orthodox treatment of the subject as addressed to students of economics and commerce to find the $\chi^{2}$ test in Chapter 1, fiducial inference in Chapter 2, and the t-distribution in Chapter 3, while, on the other hand, the binomial and Poisson distributions are relegated to Chapter 16, and are left out of the short course of reading in the elements of statistical method suggested by the author. But the deliberately conceived plan is to use all that is known of the logic of statistical inference from the beginning, and in even the simplest classes of observational data. Mathematical proofs are necessarily absent from a book conceived as this one is, but the plan followed is logical. To take one simple case: if the classical methods of testing for significance in large samples must be replaced for small samples by different methods, it is clear that these new methods also apply to the former case. If, as is usually the case, they are not more difficult to apply, one can forget all about the classical methods. Most such tests require tables, even those for large samples; it is then only a case of replacing one set of tables by another.

It will be clear from what has been said that there is much in the book which can be studied with profit by any worker in statistics. Having said this, it remains to be added that students of agriculture and biology have to go much further into details of experimental design and analysis of variance applications than workers in other branches of the subject. That is done in this book. Chapters are devoted to analysis of variance in two or more groups of measurement data (the general extension of the t-test for the comparison of two groups), and with two or more criteria of classification (leading to the common designs). Further chapters cover the analysis of covariance, first as an extension of simple regression in two variables, and later, in association with multiple regression, where there are two, three or more independent variates. In the same sequence there is a chapter on individual degrees of freedom, covering the methods applicable to special experimental designs of factorial character. Apart from the special emphasis given to the biological applications there is little of the content of the modern methodological text-book on statistics that does not find a place; the economist will look in vain for a discussion on index numbers or time-series, but he will profit by study of the chapter on sampling.

7.6. Review by: David John Finney.
Journal of the American Statistical Association 41 (234) (1946), 262-265.

"Proofs! undoubtedly it is good to have proofs, but it is perhaps better not to have them" said General Greatauk of a famous Penguinian trial. Herein lies the great dilemma for the author of a book on applied statistics: he must avoid mathematical proofs, yet to convince his readers he must justify his procedure logically rather than dogmatically. Professor Snedecor has overcome the difficulty very successfully by careful choice of numerical examples and by empirical construction of the probability distributions required in elementary statistical tests. The non-mathematical student should gain in understanding of the tests very considerably by repeating his sampling experiments. Since its first publication in 1937, this book has been one of the few to combine successfully a sound theoretical basis with an exposition sufficiently clear and detailed for those without statistical experience. The rewritten and greatly improved fourth edition contains 60 pages more than the third and has a slightly smaller but better type; it is a valuable text for the experimentalist in any branch of biology, whether he be a novice or experienced in statistics, and no criticism of detail should be considered as disparaging to the whole.

In chapter 1, the concepts of population and sample parameters, null hypotheses, tests of significance, and fiducial limits are introduced through sampling experiments on a table of random numbers. Applications of the binomial distribution are discussed and a useful table of fiducial limits for binomial sampling is given. Professor Snedecor's insistence on the importance of problems of estimation is a valuable corrective to the absurdity of regarding tests of significance as the sole function of statistical analysis. The presentation of the logic of the fiducial argument, however, seems open to criticism in some places, since it may suggest to the reader that a probability of the population parameter lying between certain limits is being determined. In fact, the probability is that of obtaining a sample equally or more extreme from a population whose parameter is specified. Perhaps the reviewer attaches undue importance to the distinction between these two forms of statement, but he is convinced that the student who learns to appreciate this point will be saved from many future illogical arguments.

7.7. Review by: William J Youden.
Journal of the American Statistical Association 41 (234) (1946), 265-266.

Professor Snedecor's text on statistical methods is written for people without mathematical background with the purpose of teaching them how to apply statistical methods to the data of biology and agriculture. Mathematical statisticians, who are responsible in the first place for establishing the distributions of various statistics, sometimes feel that it is a dangerous business to turn people loose, equipped with a collection of formulas and with only a very imperfect understanding of the theoretical foundations of the subject matter of statistics. Certainly, scientific literature turns up an annual crop of inept applications of statistical techniques with a high yield of erroneous and even absurd conclusions. The reviewer is convinced that many of these mistakes might be avoided if the experimenter did not so often part company with intimate knowledge of his material when he applies a statistical formula. It seems obvious enough that when the results of a statistical analysis are in conflict with an interpretation based upon long experience and familiarity with similar data it is time to show caution and examine carefully the appropriateness of the statistical method used.

This particular hazard is not peculiar to statistics. Similar dangers exist in the use of chemical techniques. It is impossible for non-chemists to avoid chemical procedures; and since they are not chemists, they sometimes make chemical blunders. It may be hoped that Professor Snedecor's hint to seek expert advice when in doubt will be followed whether it be in the field of statistics, chemistry, or something else.
...
In conclusion, the question as to whether it is wise for experimental scientists to make elementary statistical applications without possessing a background in mathematical statistics must be passed upon by the scientists themselves. It is evident that ever larger numbers of scientists do find some familiarity with statistics of increasing value. Professor Snedecor's book has already earned the gratitude of many research workers and the new edition, with its inclusion of recent developments in statistics, should get a cordial reception.
8. Everyday Statistics (1950), by George W Snedecor.
8.1. From the Foreword.

This book is for the general reader who wishes to learn some of the fundamentals of modern statistics. Assailed as he is by opinion polls, by census enumerators, by reports of deaths from polio and automobile accidents, by advertising claims based on experimental evidence, he wonders if there are any common elements that weld these various numerical activities into a science. There are such elements, and they lie deep in the structure of our society. Our purpose is to present them to the lay reader in an informal style.

The book is frankly experimental. The author has had a growing conviction during the last quarter of a century that people would be better adjusted to their environment if they had some knowledge of statistics. During this time, public acceptance of statistics, as well as its impacts on the public, has been growing with startling rapidity. What yesterday was in advance of the times will be outmoded tomorrow. The experiment is to find out what is needed today.

No mathematics beyond elementary algebra is required. The attempt is made to present the logic of the science with only so much mathematical symbolism as is necessary for clarity. A considerable amount of even this can be skipped if desired.

The vocational opportunities in statistics are scarcely considered. There are plenty of texts on statistical methods in economics, business, sociology, psychology, education, biology, and so on. Also, there are excellent texts in mathematical statistics. Our emphasis is on the social aspects of the subject. This may not be obvious because statistics involves a lot of detail that tends to obscure the principles. Every now and then we try to get above the trees for a view of the mountain.

Heartfelt appreciation is felt for pioneer use of the text by Morris J Liss of the University of Miami, by John T Schneider of the University of California at Los Angeles and by John M Howell of the Los Angeles City College. Their students and mine have endured successive mimeographed editions with their immaturity, mistakes and unattractive typography. They have made helpful suggestions without which this edition would have been impossible.

I am indebted to Professor Ronald A Fisher, Cambridge, and to Messrs Oliver and Boyd, Ltd., Edinburgh, for permission to reprint Tables Nos. III and IV from their book "Statistical Methods for Research Workers."

George W Snedecor
Statistical Laboratory,
Iowa State College,
August, 1950

8.2. Review by: Leonard H C Tippett.
Journal of the American Statistical Association 47 (259) (1952), 551-553.

The author describes this book as "frankly experimental"; this reviewer cannot make up his mind about it. Here he records his conflicting impressions.

The main subjects of the book are: the ideas of sampling, probability, and the frequency distribution; the elementary theory and practical significance of the binomial, normal, and non-normal distributions; elementary sampling theory, and tests of significance and confidence statements based on the chi-square, t- and F-distributions; sampling techniques; regression, correlation, and the analysis of variation; the use of life tables for making insurance calculations. Only the very elements of the subjects are introduced, as must be the case in a volume of only 170 pages, but the reader is shown how to make the necessary calculations and to use the statistical tables provided, and there are very many examples and exercises. Through this book a working, if elementary, knowledge of the subject may be attained.

So described, this might appear to be merely another elementary text book, but it is more distinguished than that. Professor Snedecor has in mind the layman and so he introduces each bit of statistics in terms of some everyday problem, and explains the ideas underlying the statistical approach in everyday terms. And what a fascinating set of problems they are! Teachers and lecturers will value this feature of the book. The exposition is unhurried and the working of the illustrative examples carried through step by step, with full explanation of the meaning of each step. The author takes the reader on a series of excursions of rather exciting discovery. The book also is balanced in its parts so that anyone who can read the first few chapters should be able to work his way through to the end.

All this is excellent, but the reviewer has doubts that he cannot still. Although "no mathematics beyond elementary algebra is required" and "the attempt is made to present the logic of the science with only so much mathematical symbolism as is necessary for clarity," the "lay reader" has to learn, and learn quickly, a formidable amount of what he must regard as jargon. The use of "hypothesis," "confidence intervals," "chi-square," "degrees of freedom," and so on may be unavoidable in dealing with "everyday statistics," but they are not everyday words. By the time he is able to use these with the facility necessary to read this book the lay reader has certainly lost his Eden-like innocence. On p. 17 he is assumed to be familiar with the word "parameter." And on p. 15 there is reference to the fallacy of stating "The probability is 0.95 that π (a population parameter) lies within the (given) confidence interval": that is certainly a fallacy, but not a layman's fallacy.

Under the title "Everyday Statistics" and the sub-title "Facts and Fallacies" the general reader might expect to learn something about weighted averages, percentages, index numbers, and time charts; and such fallacies as mistaking seasonal or random fluctuations for trends, and correlations, particularly spurious correlations in time, for causal relationships. These are the very stuff of everyday statistics but they find little or no mention in this book.

However, we may remind ourselves that the book is experimental, as its format shows - although very well done, it is reproduced from typescript. It is a very interesting and valuable experiment in exposition; perhaps Professor Snedecor intends to write a more definitive edition after he has studied the reactions of people to this experimental one. This reviewer hopes so. And he suggests that the reactions of the readers for whom the book is intended are more important than the crabbing criticisms of a reviewer who is also a professional statistician.
9. Everyday Statistics. 2nd Edition (1951), by George W Snedecor.
10. Statistical Methods applied to Experimenting in Agriculture and Biology. 5th Edition (1956), by George W Snedecor.
10.1. From the Preface.

Recent years have brought amazing changes in statistics. New theory and new practice continually have come into being. Demands for new experimental designs never cease. To serve the needs of users of statistics in biology, I have included in this 5th edition a selection of the newer devices which promise to be most useful to the experimenter.

As in earlier editions, two groups of readers are envisioned - beginners and research workers in biology. For the first group, earlier parts of all chapters are kept simple, with somewhat elaborate explanations. The guideposts are retained in the text to direct them into the more elementary parts of the subject. Also there is the short course outlined on the page preceding the Table of Contents. To meet the needs of the second group, I have collected the methods which have been found most useful in my practice as consultant in experimental statistics. The meanings and limitations of the designs are stressed. Computations are explained so that they may be done by a clerk. References are given as guides to more advanced reading. Some of this material, occupying the latter portions of the chapters, is of necessity treated concisely.

The most notable change in this edition is the presentation of factorial experiments in chapter 12. Here I have gathered and augmented the pertinent methods formerly scattered throughout the book. I believe this will be found helpful to the many research workers who are using factorial arrangements of their treatments but who are not capitalising on all the information contained in such experiments.

Since heavy calculations tend to distract attention from the purpose of statistics, I have tried, especially in the earlier parts of the chapters, to lighten the burden of computation and to keep attention centred on the biological objectives.

Additional progress is made in diverting emphasis from tests of significance to point and interval estimates. How far this trend will continue remains to be seen. At present it is clear that the estimate is often more informative than the test.

A few of the multitudinous non-parametric tests are introduced for the information of the reader. Their various uses are discussed and their results compared with those of the more usual procedures.

It has seemed desirable to alter some of the notation and terminology in order to make it easier for the student to transfer his reading to current journals. Many of the changes serve to emphasise the distinction between population parameters and sample statistics. But always I have aimed at relieving the non-mathematical reader of the necessity of depending on algebraic symbolism. Explanations and directions for calculation can be followed without reference to formulas.

I am happy to present the newly written chapter 17 on "Design and Analysis of Samplings," prepared by Professor William G Cochran, a recognised authority on this rapidly developing subject. Professor Cochran has chosen illustrative material suitable to the biological orientation of this text.

10.2. Review by: Alphonse Chapanis.
The Quarterly Review of Biology 32 (1) (1957), 88.

This, the fifth edition of Snedecor's highly useful and successful book, should be even more popular than its predecessors. Among the technical changes made in this version are the introduction of a few of the newer nonparametric statistics which are so useful in dealing with biological data, the alteration of some of the notation and terminology to make it more consistent with current usage, a reorganisation of the material on factorial experiments, a more explicit discussion of the models of the analysis of variance and the way in which the model affects the significance test, and a chapter on sampling by one of the world's experts on the topic, William G Cochran.

Snedecor's style of writing continues to be generally free and easy. It is refreshing to find an author who is not afraid to use the personal pronoun "I" and to address the reader with a direct "you." I think more textbook writers should follow suit.

10.3. Review by: Michael J R Healy.
Journal of the Royal Statistical Society. Series A (General) 120 (2) (1957), 221.

"Snedecor" has been for a long time one of the most popular text-books for workers in biology and agriculture, and it is still among the best in its field. The fifth edition contains some new material, notably on non-parametric techniques and on multiple tests of significance. Discussion of factorial arrangements has been collected together in a rather formidable chapter which starts with the definition of a comparison or contrast, and finishes with the method of fitting constants to a 2-way table with unequal cell frequencies. An excellent chapter on sampling techniques is contributed by W G Cochran.

The style of the book is leisurely and discursive, with liberal use of the first person singular. The abundant worked examples are treated in great detail, and a large number of further examples are provided as exercises. The whole approach is admirably calculated to appeal to the non-mathematician. For a statistician, the "flavour" may best be appreciated by a typical quotation, dealing with the comparison of the means of two samples whose observed variances differ substantially.

"My recommendation is this: If it becomes evident that there is a large discrepancy between the mean squares, re-examine the experimental set-up to discover any possible explanation. It may be that you will find some cause for expecting the standard deviations to be different even though this cause had been overlooked in designing the experiment. In that case, use the Behrens-Fisher test or Cochran's approximation. If no reason for the discrepancy is found, stick to the regular tests but use greater caution in making decisions about the means because the sample estimates of $\sigma^{2}$ are questionable".

10.4. Review by: Don W Hayne.
The Journal of Wildlife Management 23 (3) (1959), 370-371.

Snedecor's Statistical Methods has been, in various editions, a standard work for over 20 years. Inclusion of new material and revision of old is the reason for including it here, as well as the wish to compare it with the other books. Entirely new, and very useful, is the chapter on sampling by William G Cochran; every sentence, almost, is directly applicable today in field biology. In the rest of the book, emphasis continues on the testing for differences by analysis-of-variance methods, but many other techniques are presented. Professor Snedecor has brought up to date the treatment of several subjects, and now includes among other topics a multiple-range test, a few nonparametric tests, and some discussion of Type II error and power.

The unique value of the book continues, as formerly, to be its profound understanding of biological problems and their useful solution. It is clear, however, that Statistical Methods has not been rewritten as a whole, and the lines of patching show more clearly than ever. The previous uneven quality of exposition continues; we might not see this so clearly were not some parts of such high excellence. Regardless of these faults, this book still leads in its field, and many biologists will use it as a sole, or principal, reference. They will wish to acquire this edition for the new material in it.

10.5. Review by: Kenneth Alexander Brownlee.
Journal of the American Statistical Association 52 (277) (1957), 100-102.

The appearance of a Fifth Edition of Snedecor's Statistical Methods draws one's attention rather forcibly to the fact that ten years have elapsed since the publication of the Fourth Edition, which was reviewed in Vol. 41 of this journal by D J Finney (21 pages) and W J Youden (2 pages), and nineteen years since the First Edition, published in 1937. Throughout this period Statistical Methods has been beloved by users of statistics in biology, and also other sciences, for talking to them in language that they can understand, without committing conceptual errors that bring down the wrath and scorn of the theoretical statistician.

The present edition states explicitly that it is written for two groups of readers:

(a) beginners in biology,
(b) research workers in biology.

To achieve this objective, the text contains signposts in each chapter at which the former type of reader can give up and proceed to the next chapter. The effect is to produce a rather wide range of difficulty, and for the reader capable of understanding the distinction between the various mixed models in a split plot situation an exposition of the Student t-test for comparison of means that first takes two pages for the case of equal sample sizes and then takes another two pages for the case of unequal sample sizes must appear maddeningly pedestrian. One is tempted to raise the question of whether a student for whom this type of approach is necessary should be studying statistics, or studying biology, or studying.

The general content, and the general approach, of Statistical Methods must be so well known to all statisticians that it must be entirely redundant either to summarize or to comment upon them. Those statisticians who find this approach satisfactory for the classes they have to teach, or the clients they have to advise, are grateful for the existence of this text, since it appears to remain much the best of its type despite many attempted imitations. Those statisticians who have better prepared students, or more sophisticated clients, are grateful, or should be, for this fact.

The publicity announcing this Fifth Edition does not, I think, give an adequate impression of the extent to which this edition has been revised and enlarged. Physically, the volume contains 49 more pages. There are now sections dealing with the following topics: multiple comparisons (following Tukey's unpublished procedure); Lord's studentised range; the sign test; Wilcoxon tests, for unpaired observations and for paired observations; regression through the origin with standard deviation of the dependent variable proportional to the independent variable; correction for bias of treatment mean squares in substitution of missing values; Tukey's test for additivity; Satterthwaite's treatment of degrees of freedom of linear combinations of mean squares. Further, there has been an extensive rewriting and rearrangement of parts of the original text. The last chapter in the book, on DESIGN AND ANALYSIS OF SAMPLINGS, which took 24 pages in the Fourth Edition, is now the responsibility of William G Cochran, who uses 35 pages.
11. Statistical Methods (6th Edition) (1967), by William G Cochran and George W Snedecor.
11.1. Review by: Owen L Davies.
Journal of the Royal Statistical Society. Series C (Applied Statistics) 17 (3) (1968), 294.

The authors have kept in mind the two purposes the book has served during the past thirty years - as texts for introductory courses in Statistics and as reference sources of statistical techniques helpful to research.

The book rightly makes extensive use of experimental sampling both to familiarise the reader with the basic sampling distributions that underlie modern statistical practice, and as a technique in its own right for solving problems which are intractable mathematically. Indeed, experimental sampling methods are nowadays used extensively in computer simulation of processes involving a stochastic element, often in preference to a mathematical treatment. There has been some re-arrangement in this new edition of the book resulting in improvement in the presentation, and some new topics are included which make the treatment more complete. Some of the new topics are: linear calibration, linear regression when both variables are subject to error, an introduction to probability, selection of variates for prediction in multiple regression, non-linear regression and applications to asymptotic regressions, discriminant functions.

The treatment of the subject is classical and conventional. It is also elementary and does not require a knowledge of mathematics beyond A-level. The methods are fully explained and liberally illustrated with practical examples which should make the book easily understood by students undertaking a first course in statistics. There is a distinct biological bias in the practical illustrations.

The treatment of sampling, which covers five chapters in all, is particularly good and the book deals adequately, at the level intended, with most of the techniques of statistical analysis required by an experimental scientist.

11.2. Review by: Tore Dalenius.
Revue de l'Institut International de Statistique / Review of the International Statistical Institute 36 (3) (1968), 361-362.

In the last few decades, the body of knowledge conceived of as "statistical methods" has undergone a development that has meant a quantitative as well as a qualitative revolution. As a consequence, most textbooks which appeared in the 1930s or even later, were soon out of date; today they have, at the very best, a purely historic value. But a few old-timers among the old textbooks have survived. To some extent this may be due to a series of revisions; but this explanation does not seem to suffice - the quality of the original version is obviously of a decisive importance. "Statistical Methods" by Snedecor and Cochran is an example of this. The first edition - prepared by the senior author - appeared in 1937. The present edition is the sixth! ... The authors have aimed at retaining the well proved features of the previous editions. Especially, the mathematical level required involves only elementary algebra; the exposition makes an extensive use of simple examples. On the other hand, the authors have introduced new material, e.g. a discussion of remedial measures for the effects of failures in the assumptions underlying the analysis of variance, and the discriminant function. In conclusion, I may add that the 30th anniversary of a classic textbook has been adequately dignified by the appearance of the present edition.

11.3. Review by: William R Buckland.
Journal of the Royal Statistical Society. Series D (The Statistician) 18 (4) (1968), 414-415.

The appearance of a new edition (almost decennially) of Snedecor is a distinct event in the standard textbook literature of statistical methods. When the new edition is coupled with a second well-known name as co-author, the occasion must be drawn to the attention of readers of this journal; hence, the somewhat unusual procedure of devoting a full measure of space to the sixth edition of a standard work.

This book has served well many generations of students and a wide variety of research workers who need a basic reference book to help collect, analyse and interpret their data. The level of mathematical difficulty is relatively low by many current views since only elementary algebra is required. This, however, has enabled the book to make the very wide appeal that it has and, in one way, contributed to the distinctive tone of the drafting. In order to help the reader's understanding of principles the algebraic proofs of formulae are supplemented or complemented by common-sense explanations of the role played by the different parts of the formula. This "down-to-earth" and "let me help you" approach is extended by the use of experimental sampling procedures, a good basis for using computers, and extensive fully-worked illustrations in the text into which are inserted many groups of examples for the reader's own participation. In order to encourage the student into wider fields, each chapter is provided with a full list of references: these also give valuable background to the research worker in another basic discipline who may wish to follow a particular topic at some length. The book now concludes with a standard collection of statistical tables and an index of the examples analysed in the text as well as the usual author and subject indexes.
...
This book cannot be too highly commended to the Institute's students at the appropriate level, their teachers and many others who come within the viewpoint of its distinguished authors.

11.4. Review by: Agnes M Herzberg.
Journal of the Royal Statistical Society. Series A (General) 132 (3) (1969), 442.

The first edition of Professor Snedecor's book appeared in 1937. Since then the book has become so well known that a review of the sixth edition, written in collaboration with Professor Cochran, need only consider the revisions and additions. The whole text has undergone minor revisions including various changes in layout and ordering of chapters. For example, the material on large sample methods which formerly comprised Chapter 8 has now been placed in earlier chapters. Also, the chapter on multiple regression now precedes that on covariance and multiple covariance. The statistical tables, previously scattered throughout the text, now appear in an appendix.
...
This edition with its new format maintains the mathematical level of previous editions and the student with no more than a knowledge of basic algebra should find no difficulty. The book will, no doubt, continue to be widely used, as the authors say, "both as texts for introductory courses in statistics and as reference sources of statistical techniques helpful to research workers in the interpretations of their data".

11.5. Review by: Ivan Bello.
Econometrica 38 (2) (1970), 372-373.

This book is certainly unique in the wide range of statistical methods which it presents. All subjects are fully illustrated and the mathematical models underlying each topic are provided. The small sample distributions, so important in social research, are particularly well analysed. The chapters on regression and correlation, as well as the one on analysis of variance, provide the research worker with all the important tools he needs. Even though its greatest value lies in research work, it is undoubtedly useful for teaching too.
12. Statistical Papers in Honor of George W Snedecor (1972), by T A Bancroft (Ed.).
12.1. Review by: J P Johnson.
Biometrics 29 (1) (1973), 227.

This book is a commemorative volume in tribute to Professor Snedecor, founder and first director of the Statistical Laboratory at Iowa State University, and is comprised of a set of nineteen papers contributed by twenty-seven statisticians.

The contributors were allowed freedom of choice of topic and hence no common theme exists. After a foreword consisting of remarks given at the dedication of the statistics centre at Iowa State University as Snedecor Hall, the papers are alphabetically ordered according to the surnames of the authors. The topics covered include selection of predictor variables, inference procedures, choice test by panels, binomial sequential design, recursive rules, observational studies, transversals in Latin squares, a test of fit, mixed analysis of variance model, forecasting by counts and measurements, inference and data analysis, history and future of statistics, systematically selected samples, distribution of primes, statistical appraisal in nutrition, transformations, graphic displays, and sensitivity.

This collection of statistical thoughts will be of interest to those who have at some time or other come into contact with Professor Snedecor and should be scanned by others as few will be disappointed in the contents.

12.2. Review by: W T F.
Biometrics 28 (2) (1972), 630-631.

This volume is presented as a tribute to George Waddel Snedecor who was the founder and first director of the Statistical Laboratory at Iowa State University. Its contributors include friends and former students and colleagues of Professor Snedecor. The contributors selected their own topics which resulted in considerable variability of statistical methodology and applications. The 19 papers included in the volume ...

12.3. Review by: M Stone.
Journal of the Royal Statistical Society. Series C (Applied Statistics) 22 (2) (1973), 255.

"Snedecor", the man and the book, richly deserve to be honoured. There can be few statistical workers who have not used "Snedecor" to facilitate some piece of data analysis. There must, however, be some doubts as to where exactly the honour lies in this collection of papers whose authors were "asked to select their own topics and provide their own scientific refereeing". The absence of a unifying theme gives many hostages to poor memory; in a few years, the reader may be racking his brains as to the location of Ray Mickey's beautiful paper on systematically selected samples. The regular journals provide readier access and would have been enriched by most of these papers. May this be the last such festschrift! ... An enjoyable collection, yes, but the enjoyments are quite isolated and paper-specific. In the end, individual cerebration is no substitute for social celebration. Better to have commissioned a musical - "Snedecor!".

12.4. Review by: G G K.
Technometrics 15 (2) (1973), 424-425.

George W Snedecor was founder and first director of the Statistical Laboratory at Iowa State University. This publication contains a collection of papers contributed in his honour by twenty-seven statisticians who were his friends, former students and colleagues. Contributors were allowed to select their own topics which include binomial sequential design, choice test by panels, distribution of primes, forecasting by counts and measurements, graphic displays, history and future of statistics, inference and data analysis, mixed analysis of variance model, observational studies, recursive rules, selection of predictor variables, sensitivity, statistical appraisal in nutrition, systematically selected samples, a test of fit and transformations and transversals in Latin squares.
13. Statistical Methods (7th Edition) (1980), by William G Cochran and George W Snedecor.
13.1. Review by: Richard C Campbell.
Biometrics 38 (1) (1982), 292.

This edition of the long-established introductory text was nearly complete when Professor Cochran died: Professor D F Cox completed the remaining authorial work.

The preliminary material has been expanded into three chapters to allow 'a more gradual immersion into the subject matter'. Nine topics appear for the first time, and other sections have been modified to take account of recent work and of the increased importance of computers; three new tables are useful in the detection of outliers.

The new topics are:
Probability paper,
The probability of at least one success in n trials,
Levene's robust test for the equality of a set of estimated variances,
Balancing the order in which treatments are given in a repeated measurements experiment,
Simultaneous study of the different effects of a transformation in the analysis of variance,
Yates's algorithm in factorial experiments,
Experiments with repeated measurements,
Mallows $C_{p}$ criterion for choosing a subset of predictor variables in multiple regression,
Nonsampling errors in sample surveys.

The main modifications are concerned with (i) the expected values of mean squares in the analysis of variance and (ii) the calculations and uses of multiple regression, in particular the sweep operation and the use of dummy variables to bring analyses of variance and covariance into multiple regression form. With these alterations the book, including Appendix Tables and Index, makes 507 pages; the scope is described in the Preface as 'ample material for a course extending throughout the academic year'. To this reviewer, that is an understatement. Indeed the book is a remarkably complete exposition of elementary statistical methods for the user, covering a wide range of topics, with some elementary algebra to help formalise the structure, plenty of working examples and a good supply of exercises for the reader to work. It would be a very diligent reader who really did cover all this material in one academic year! This has been a successful text for nearly 45 years: the new edition should have at least as many years of useful life as that preceding.

13.2. Review by: John A Cornell.
Technometrics 23 (3) (1981), 312-313.

Upon agreeing to review this book, it occurred to me that there is probably little I can add to what has already been said about a book that has served for several generations as a classroom text and as a reference for users of statistical methods. K A Brownlee said it very well in his 1957 review of the fifth edition when he wrote, "Statistical Methods has been beloved by users of statistics in biology, and also other sciences, for talking to them in a language that they can understand, without committing conceptual errors that bring down the wrath and scorn of the theoretical statistician." This edition follows the example set by the earlier editions. ... As a text, the book contains ample material for an introductory one-year course on statistical methods for graduate students. Students with only limited mathematical training can work the computations in the exercises. ... In summary, this book is outstanding in terms of coverage of traditional methods. Here is a text that, in my opinion, will continue to be the standard against which all other texts on statistical methods are compared.
14. Statistical Methods (8th Edition) (1989), by William G Cochran and George W Snedecor.
14.1. Review by: William A Williams.
Journal of the American Statistical Association 86 (415) (1991), 834.

This venerable textbook, authored by two past presidents of the American Statistical Association, has now been published in its eighth edition. It continues to be the leading general statistical text in the fields of biology and agriculture because of its clearness of exposition and sensitivity to the needs of student users as well as postgraduate researchers. The excellence of the eighth edition will serve to further enhance its favourable image.

As a student I was exposed to the fourth edition under the tutelage of Walter Federer, Cornell University, shortly after I returned from army duty in World War II. He had just finished leading our class through R A Fisher's Statistical Methods for Research Workers and The Design of Experiments, and we found Snedecor's style of presentation to be stimulating. That well-thumbed copy has remained on my desk-side bookshelf ever since, along with two more recent editions.

The first edition was written by Snedecor and published in 1937. Subsequent editions appeared in 1938, 1940, 1946, 1956, 1967, 1980, and 1989. They were published under the title Statistical Methods Applied to Experiments in Agriculture and Biology through the fifth edition, and excellent reviews were provided by C H Goulden, D J Finney, W J Youden, and K A Brownlee in this journal. Cochran's collaboration began with providing a chapter on sampling in the fifth edition. David F Cox, of Iowa State University, coordinated the revision of the seventh and eighth editions "guided by the principle that the work should remain the work of the original authors; thus much of the material remains as previously published." However, the seventh edition was substantially reworked, with the addition of four chapters but a reduction of 80 pages of text. It was reviewed comprehensively by J A Cornell in Technometrics in 1981.

Little change in content or style occurred between the seventh and the eighth edition, and roughly 90 percent of the pages are almost exactly the same. The chapter on multiple regression has had a matrix treatment of the subject explicitly added, and an eight-page introduction to matrix notation was appended. About ten pages of text apportioned among seven locations were deleted in a "tightening-up." The total number of pages remains almost the same. A major improvement in physical readability results from a larger page size and the use of a bigger font.

A few questions occurred to me about the choice of topics and the amount of explanation provided in the treatment of regression analysis. I felt that the discussion of variable selection methods was unduly brief (three pages) in view of the increasing importance of modelling applications in data analysis. References to modern response-surface methodology literature were omitted. Problems of coefficient instability and multicollinearity were not addressed. Path analysis received only passing reference, though it is rapidly gaining in usage for ecological and agricultural research. In view of the popularity of fitting nonlinear functions with the newly available, powerful graphics and estimation programs on PCs, further amplification regarding the functions requiring iterative solutions may be in order. Robust regression methods, so important to detecting outliers in a multivariate context, were not included. Principal components, cluster analysis, discriminant analysis, and spatial statistical methods were not presented, as perhaps is appropriate in a general text. In the area of experimental design, the topic of confounding, with discussions of incomplete block designs and fractional factorials, is not included, as has been indicated in previous reviews.

These comments regarding a few areas are not intended to detract from recognition of the overall excellence of this long-lived exposition of statistical methods and applications using real data. Surely this text is one of the "great books" of statistics and continues to set a high standard for other authors to try to measure up to in the future.

14.2. Review by: Douglas H Jones.
Journal of Educational and Behavioral Statistics 19 (3) (1994), 304-307.

Snedecor and Cochran published the first edition of Statistical Methods in 1937. Over the intervening years, the authors have added various statistical topics to bring the volume up to date. The eighth edition differs from the seventh edition, primarily, by the inclusion of the matrix approach to multiple regression with an appendix on matrix algebra.

The original Statistical Methods targeted established research workers who were novices in applying statistics to data. Whereas its intended audience has dwindled considerably, the book still remains a solid introduction to applied statistics. It makes available much wisdom and insight into why sound statistical methods work.
...
This book represents a traditional approach to introducing applied statistics with emphasis on computational statistics that can be done by hand with a simple calculator. Over the years, the authors have attempted to modernise this aspect. However, because of the progress in speed and power of computers since the seventies, the book is seriously behind in giving information on practical computation. Thus, material on testing portions of a model appears only intermittently throughout the book, and this results in a hard-to-follow treatise of this important statistical topic. In the 1930s, the authors targeted the intended audience of the book, established researchers unfamiliar with statistical methods. Although there were many researchers in this category from the 1930s to the 1970s, it is unlikely there remain many individuals in this category today, especially with the advent of powerful statistical packages running on personal computers. However, this book contains many gems of statistical advice that make this edition worth owning. This edition should be a successful reference book if its library usage follows the pattern of past editions. Indeed, I checked the Rutgers libraries and found that out of five copies of the seventh edition, three were on loan. Out of five copies of the sixth edition, three were on loan, and one was missing.

Last Updated September 2020