Rolando Chuaqui's books


Rolando Basim Chuaqui published two books: Axiomatic set theory. Impredicative theories of classes (1981), and Truth, possibility and probability. New logical foundations of probability and statistical inference (1991). We give below extracts from these books and from reviews of them.


1. Axiomatic set theory. Impredicative theories of classes (1981), by Rolando Basim Chuaqui.
1.1. From the Preface.

This book contains axiomatic presentations in first-order logic of versions of Set Theory based on an impredicative axiom of class specification. This axiom asserts the existence of the class of all sets which satisfy a given arbitrary first-order formula. This axiom is impredicative, because the defining formula may contain quantification over arbitrary classes, including the class being defined.
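In symbols (a standard rendering of such a schema, not a quotation from the book), writing M(x) for "x is a set", the axiom asserts, for each first-order formula φ(x) in which the class variable A does not occur free:

$$\exists A\,\forall x\,[\,x \in A \leftrightarrow M(x) \wedge \phi(x)\,],$$

where φ may contain quantifiers ranging over all classes, so that the class A being defined may itself lie within the range of a quantifier in φ.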

All theorems in the book can be deduced from an elegant and very strong axiomatic system BC of Bernays, which uses a reflection principle. However, for most of the book a weaker system (Morse-Kelley-Tarski or MKT) is sufficient. The theory based on this latter system has a complicated history; probably the first exposition was by A P Morse. His axiom system, however, is not standard. The first axiomatic version presented as a standard first-order theory is that appearing in the appendix to Kelley's book General Topology. The axioms used in the present book are basically due to A Tarski. The presentation owes much to Tarski's course on Set Theory at the University of California, Berkeley.
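For orientation (a common formulation of a Bernays-style reflection schema; the book's exact axioms may differ in detail): for each formula φ of the language of class theory,

$$\phi(A) \rightarrow \exists u\,[\,u \text{ is a transitive set} \wedge \phi^{u}(A \cap u)\,],$$

where φ^u is obtained from φ by restricting set variables to elements of u and class variables to subsets of u.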

The impredicative comprehension axiom for classes is stronger than the corresponding principles in the usual theories of Zermelo-Fraenkel and von Neumann-Bernays-Gödel. I have tried to use this extra strength as much as possible in order to simplify the development and thus show the technical advantages of impredicative theories. In order to isolate this feature, I have devised a subtheory of MKT which I call General Class Theory (G). This weak theory is slightly stronger than one with the same name which appears in my papers Chuaqui 1978 and 1980.
...
A few words are in order with respect to the style of presentation of the book. I have chosen to state theorems and definitions quite formally in first-order language. The proofs, however, are informal. In order to lighten the burden of understanding the formulas, I have generally added informal remarks explaining theorems and definitions. One of my friends has said to me that the book was written in the style of 1950 and not of 1980. I agree with this remark, but I believe that the 1950 style is better: I think that students should learn to read formulas, even complicated ones. There was a major advance in Mathematics when mathematicians learned to write and read equations. The reading and writing of logical formulas is also an advance, although perhaps not so crucial as that of equations.

1.2. From the Introduction.

There are two main points of view with respect to axiomatic systems. According to the point of view that may be called algebraic, the axioms are true for a large number of concepts. In this case, the axioms themselves characterise the corresponding mathematical theory completely. For instance, in Group Theory we define a Group as a set and operations on this set that satisfy the axioms of Group Theory.
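For illustration (the standard first-order group axioms, added here; they are not quoted from the book): a group is a set G with a binary operation · such that

$$\forall x\,\forall y\,\forall z\,[(x \cdot y) \cdot z = x \cdot (y \cdot z)] \quad \text{and} \quad \exists e\,\forall x\,[e \cdot x = x \cdot e = x \wedge \exists y\,(x \cdot y = y \cdot x = e)].$$

Any structure satisfying these sentences counts as a group; the axioms characterise the theory without singling out one intended model.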

The point of view of this book is different. We assume that sets and classes are objects existing independently of our minds. We choose some sentences which are true of these concepts as axioms. From these axioms, we try to derive as many true sentences (our theorems) as possible. The ideal situation would be to derive all true sentences about sets and classes. We know that this is impossible (by Gödel's Incompleteness Theorem). Therefore we have to be content with deriving all that we need for the purpose at hand.

In order to proceed according to this second point of view, it will be necessary to present the basic concepts of the theory and explain them enough so as to be able to show that the chosen axioms are true. Obviously, this will be an informal explanation not of a strictly mathematical character.

The basic notions are those of set and class, and the fundamental relation between sets and classes is that of elementhood. It is necessary to delimit these notions because the informal notions of set and class are not clearly determined; at least in principle, there are several possible notions of set or class.

1.3. Review by: Luiz Paulo de Alcantara.
Mathematical Reviews MR0629105 (83e:04003).

The book under review presents first-order versions of axiomatic class theories based on the impredicative axiom of class specification. Three theories are studied: (1) a weak general class theory which is equiconsistent with second-order number theory; (2) a version, due to A Tarski, of the full impredicative class theory (the Kelley-Morse system); (3) a version of the strong impredicative theory based on Bernays' reflection principle. The development of the basic points of set theory is quite complete; fully detailed proofs are given. One of the main features of the book is the extensive use of recursive definitions. For instance, the rank function is defined before ordinals, which in turn are obtained as the values of this function. There is no attempt at treating metamathematical questions, apart from a few informal comments. The book is intended to be used as a textbook for a graduate or advanced undergraduate course. Some knowledge of elementary logic and naive set theory is required. The author clearly tries to show that the axioms (mainly Bernays' reflection principle) are intuitively justified, and in the reviewer's opinion he has succeeded.
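For orientation (the standard recursion; the book's exact formulation may differ), the rank function is defined by recursion on the well-founded membership relation,

$$\mathrm{rank}(x) = \sup\{\mathrm{rank}(y) + 1 : y \in x\},$$

and the ordinals can then be obtained as the values taken by this function.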

1.4. Review by: F R Drake.
The Journal of Symbolic Logic 49 (4) (1984), 1422.

This is an unusual book in both presentation and content. In presentation it is very formal, relying on the symbolism of first-order logic so much that the customary index is replaced by an index of symbols, and every theorem is stated as a formula of the appropriate formal system. The proofs are quite informal, however, and there are words of explanation for most of the theorems. But a student learning from this text will certainly learn to read formulas (one of the avowed aims of the author).

The book is intended for graduates, or at least students who have met a fair amount of set theory before; it does treat its topics ab initio (it would be difficult not to in such a formal presentation), but the author does not expect a student to learn the elementary parts of set theory from this text. They are there as needed for the formal development, which tends to be at a fast pace in each topic.

In content, the book concentrates entirely on those parts of set theory that do not involve meta-mathematical ideas. So Gödel's constructible sets do not appear, nor does forcing, or any descriptive set theory; and the cardinals do not go beyond weakly compact cardinals (treated by giving Shelah's purely combinatorial characterisation). As the subtitle implies, class theories are used throughout, eventually including a strong reflection principle of Bernays that implies the existence of a proper class of weakly compact cardinals. One of the concerns of the author is to present the advantages of class formulations of set theory. But the effect of the areas chosen is to suggest that these advantages are very far from the main areas of research in set theory today; a research student would need much more background information than is found here, particularly in interpreting formulas within set theory. Thus, the only models considered are the natural models of the form R(α + 1) with α inaccessible; as a result, the few remarks on absoluteness could be very misleading.
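Drake does not reproduce Shelah's characterisation; for orientation, the classical combinatorial definition (not the one referred to in the review) is that an uncountable cardinal κ is weakly compact if and only if it satisfies the partition property

$$\kappa \rightarrow (\kappa)^{2}_{2},$$

that is, every partition of the two-element subsets of κ into two pieces has a homogeneous set of cardinality κ.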

One exception is the area of cardinal arithmetic without the axiom of choice: there is a substantial section presenting work of Bradford and earlier work of Tarski, which has not appeared before in a text. But, particularly in presenting Bradford's work, there is very little motivation: it is set out uncompromisingly in first-order formulas (up to ten lines long). This makes it (for this reviewer at least) the most difficult part of a difficult book, and leaves the question: Is it worthwhile? Some of the details presented in the book are not easily found in other texts, and some are not found at all. The range of detail on topics such as linear orderings, Hartogs' aleph function, and inaccessible cardinals is wide. But there is no index to help find such topics (the index of symbols generally lists only the first appearance), and not all the topics are apparent from the list of contents. So the formality militates against its use as a reference, while the overall bias of its contents restricts its use to students.

Perhaps this reviewer has much to learn in appreciating the advantages of formality, but he would not expect Chuaqui's lead in returning, as he says, to the style of the fifties to be widely followed.

1.5. Review by Newton C A da Costa.
Studia Logica: An International Journal for Symbolic Logic 45 (3) (1986), 329-330.

The book covers several different topics: a brief but complete treatment of the basic concepts of set theory, such as functions, relations, orderings, ordinal numbers, cardinal numbers, definitions by recursion, large cardinals up to weakly compact cardinals, filters, ideals and trees. Besides these standard topics, the author presents some specialised subjects, for example, very general principles of definitions by recursion (necessary for developing the metamathematics of the theory), an extensive treatment of cardinals without the axiom of choice, including some results of Tarski and, especially, of Bradford, not yet dealt with in any textbook. It is interesting to remark that some of these results were published by Tarski only for cardinal algebras (A Tarski, Cardinal Algebras, Oxford University Press, Oxford and New York, 1949) and not specifically for cardinal numbers without the axiom of choice. Also included is an extensive treatment of Hartogs' function, following ideas of Tarski. Shelah's treatment of weakly compact cardinals appears for the first time in detailed form.
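For reference (the standard definition, added here; not a quotation from the review): Hartogs' function assigns to each set x the least ordinal that cannot be injected into x,

$$\aleph(x) = \min\{\alpha : \text{there is no injection of } \alpha \text{ into } x\},$$

and it is a theorem that this operation is well defined without any use of the axiom of choice.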

The book has several original traits. Firstly, a detailed presentation of variable binding term operators, normally used in set theory without explanation. Secondly, a very general principle of definitions by recursion over well-founded relations. Thirdly, the author develops three basic general theories: 1) A General Class Theory based only on specification, extensionality, the axiom that asserts that the empty class is a set, and the axiom: if x and y are sets, then x ∪ y is also a set. In this theory, the principles of recursion mentioned above can be proved. 2) The standard Kelley-Morse Class Theory, with and without the axiom of choice. 3) Bernays' Class Theory, containing Bernays' strong reflection principle. In all these theories, the axiom of regularity is avoided, without losing any of the advantages of using it.

The book is intended for students who have a good background in elementary logic and naive set theory. No metamathematical topics are included; for these, the reader may be referred to the author's work 'Internal and Forcing Models for the Impredicative Theory of Classes', Dissertationes Mathematicae 176 (Warsaw, 1980).
...
Though some of the topics of the book may be out of the mainstream of research in the foundations of set theory, their intrinsic interest constitutes a good reason for their inclusion. Fashion is not the most important guide in Mathematics ... .
...
In the reviewer's opinion, this book is a relevant contribution to the literature, which can also be employed as a reference work.
2. Truth, possibility and probability. New logical foundations of probability and statistical inference (1991), by Rolando Basim Chuaqui.
2.1. From the Preface.

When one speaks about the foundations of probability, there are two subjects that come to mind: the axiomatic foundations of the calculus of probability, which is a well-developed, independent mathematical discipline, and the study of the possible interpretations for the calculus, especially in connection with its applications. The axiomatic foundations were laid out by Kolmogorov, and his axioms or similar ones are accepted by everyone. In the Preliminaries and Part 2, we discuss a simple version of Kolmogorov's axioms and some of their consequences.
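For readers who want the axioms in front of them (a standard finitely additive version, added here; Chuaqui's "simple version" may differ): a probability measure P on an algebra of events over a sample space Ω satisfies

$$P(A) \geq 0, \qquad P(\Omega) = 1, \qquad P(A \cup B) = P(A) + P(B) \ \text{whenever}\ A \cap B = \emptyset;$$

Kolmogorov's full axioms strengthen the last condition to countable additivity.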

On the other hand, as is well known, there are several conflicting interpretations of probability, such as considering probability as a measure of the degree of belief or as the limit of relative frequencies. In this book, I present a new interpretation of probability satisfying Kolmogorov's axioms, which I think captures the original intuitions on the subject. Thus, most of the content of the book deals with the second of the foundational subjects mentioned above, which is, in a large measure, philosophical in character. My new interpretation, however, leads to the construction of mathematical models, and a large part of the book deals with the mathematical development of these models.

The interpretation of probability presented in this book is new in the sense that it is different from the interpretations that are at this moment considered reasonable by those who work in the subject. I think, however, that it is rooted in the traditional view of probability dating back to the 17th and 18th centuries, when the calculus of probability originated.

Another purpose of this book is the study of the foundations of statistical inference and decision theory. The mathematical models by themselves cannot provide an interpretation of these subjects, in part, because they allow several interpretations. Hence, I need, besides the models, extra principles for this analysis. The same general philosophical position about probability that has led me to the models mentioned above has also led me to the formulation of principles which I believe are the basis for statistical inference and decision theory. I think that these principles, together with my probability models, provide a system for rationally deciding between the different statistical techniques that have been proposed.

I believe that the principles for statistical inference presented in this book are prevalent in usual statistical practice, although I think that they have never been explicitly stated in a coherent and complete fashion. It seems that their ultimate inspiration is in R A Fisher's justification of significance tests (but not in his fiducial probability), although my results are mostly in accord (with many exceptions) with the techniques of Neyman and Pearson. For decision theory, however, I accept a Bayesian foundation.

Since as human beings we need to understand what we do, the study of the foundations of a science is justified in its own right. I believe that it is especially important for the practitioners of a science to understand its foundations; in particular, to understand the meaning of the terms they are using, to realise the significance of the methods of the science and their limitations, to visualise the formal structure of the science, etc. Probability is extensively used in many sciences, not only in statistics, but also in theoretical physics and other sciences. Through statistics, it is also applied to all the natural and social sciences. Hence, the foundations of probability should be of interest to all of these different types of scientists. In fact, anyone involved in the philosophy of science is naturally drawn into the study of the foundations of probability.

There are many instances of philosophical ideas influencing the development of a science. But statistics is the only case I know of where philosophical ideas are not just relevant for the development of the science, but also have immediate practical implications. It is well-known that the different interpretations of the concept of probability, which are based on competing philosophical ideas, lead to different statistical techniques. For instance, those who favour a subjectivist or logical interpretation are Bayesian; the believers in a frequency interpretation choose the methods of Neyman-Pearson or Fisher. Although sometimes the application of the different techniques may steer us to similar results, it is also not infrequent that they may yield mutually contradictory consequences.

A "practical" field where foundations of probability has had importance in the last years, is that of artificial intelligence, especially the construction of expert systems. This is due to the fact that for many such applications the concept of probability needs to be clear. Although I do not discuss extensively this subject, the considerations in Chapter V are relevant to artificial intelligence.

I believe that each conception of probability is rooted in a particular epistemological position. To argue for my interpretation of probability from my epistemological position, however, and then argue in favour of this epistemological position, would take us too far afield in this book, and is better left for other publications. I believe, however, that my position is in accordance with common sense ideas about probability. So I will argue here for my conception mainly by appealing to intuitions about probability. I will often argue from the consequences of the different definitions and interpretations, for instance by the construction of examples and counterexamples based on them. Once they are produced, I shall often take the intuitive content of these examples to be obvious.

A few words are in order about the title of the book. Although it is called "Truth, possibility and probability", there is no exhaustive discussion of truth or possibility in it. Truth and possibility are only discussed with reference to probability, in fact, as the basis for the definition of probability advanced in this book. Truth, especially, is only briefly touched upon, and there is no treatment of the different theories of truth.

After a brief chapter with preliminaries, which contains a very short survey of the mathematical theory of probability, the book consists of four parts. Part 1 includes a philosophical discussion of the foundations of my interpretation of probability, a general outline of the models, and the main intuitive ideas that are basic to my approach, including what I accept as the basic principles of statistical inference and decision theory.

Part 2 contains a brief survey of the elementary mathematical theory of probability, including the notions of infinitesimal analysis that are needed for my development. The reason for this need of infinitesimal analysis, which is presented in its nonstandard version due essentially to A Robinson, is that I build only finite models, and, in order to take into account all probability spaces, I have to approximate infinite spaces by spaces with an infinite natural number of elements (what are called hyperfinite spaces). I believe that one of the main interests of the book is the building of probability theory upon a hyperfinite basis. This construction is partly inspired by Nelson's Radically elementary probability theory (1987).
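A minimal sketch of the idea (assuming the standard nonstandard-analysis setup, not Chuaqui's exact definitions): a hyperfinite probability space has a sample set of infinite hypernatural size N in which all outcomes are equally likely, so that for an internal event A

$$\Omega = \{1, 2, \ldots, N\}, \qquad P(A) = \frac{|A|}{N},$$

where |A| denotes the internal cardinality of A; infinite standard spaces are then approximated by such spaces.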

The study of nonstandard analysis in the literature has, for the most part, been devoted to the obtaining of new standard results and better proofs of old standard results. The point of view adopted in this book, which I think is also Nelson's, is the study of nonstandard analysis for its own sake. Thus, most results are left in their nonstandard form. For instance, the general representation theorem in Chapter XV, mentioned below, is an interesting nonstandard theorem, and I don't know whether it has an interesting standard version or not.

Part 3 includes a formal development of the models for my interpretation of probability. In Chapter XV, which is in this part, I prove a general representation theorem that states that all stochastic processes can be approximated, in a precise nonstandard sense, by processes defined over one of my models. This theorem also has an interpretation which is independent of my theory: it says that any stochastic process can be approximated by a process defined on a probability space based on equiprobable outcomes.

In Part 4, I give a short survey of statistical inference and decision theory, and I analyse the principles behind the practice of these disciplines, giving examples and applications. In particular, I shall develop a theory of provisional acceptance of hypotheses, which I think is consistent with current statistical practice. In fact, most of the Neyman-Pearson techniques for statistical inference are justified on the basis of principles that are different from those advanced by Neyman and Pearson themselves and other classical statisticians. On the other hand, for decision theory, which is sharply distinguished from statistical inference proper, a Bayesian approach is adopted. I believe that this division of labour between classical and Bayesian statisticians is reflected in the current practice: most statisticians that work in the testing of scientific hypotheses are classical, and most of those who work in areas where decision theory is important, such as economics, are Bayesian.

Also included are two appendices. Appendix A contains a complete set of axioms for the nonstandard analysis needed in the book, and a sketch of the proof that these axioms are consistent. Appendix B contains a brief sketch of the foundations of the theory of integration and measure from a nonstandard standpoint.

Part 1 is rather informal. The only prerequisites are some understanding of probability and statistical inference, although I believe that the Preliminaries provides a brief outline of the main ideas on probability. The rest of the book is heavily mathematical, but I believe that it can be followed by any person who has mathematical maturity. For Part 4, and also somewhat for Chapter IV in Part 1, some acquaintance with statistical techniques is desirable, but not essential.
...
Although I started working on foundations of probability in my Ph.D. dissertation (1965), done under Professor D Blackwell, actual work on the book began in 1984, while I held a Guggenheim fellowship at the Institute for Mathematical Studies in the Social Sciences at Stanford University. The probability seminar of Professor P Suppes, then and in the period 1986-1989, when I worked on a project there, was an inspiration for many of my ideas, and my conversations with him have helped me to clarify my views and state them in a more precise form.

Besides the Simon Guggenheim Foundation, whose support was important at a crucial moment, I must also thank, for partial financial support, the Program for Scientific and Technological Development of the Organization of American States, 1978-1985, Project "Theory of models and applications", and several projects from FONDECYT, the Chilean government foundation for science.
...
Special thanks must go to the Pontificia Universidad Católica de Chile, where I have worked during most of the preparation of the book. The constant support of my colleagues and the authorities of the university, through several (DIUC) research projects and otherwise, has been essential for the success of this enterprise.

Santiago, Chile, 1991

2.2. From the Publisher.

Anyone involved in the philosophy of science is naturally drawn into the study of the foundations of probability. Different interpretations of probability, based on competing philosophical ideas, lead to different statistical techniques, and frequently to mutually contradictory consequences.

This unique book presents a new interpretation of probability, rooted in the traditional interpretation that was current in the 17th and 18th centuries. Mathematical models are constructed based on this interpretation, and statistical inference and decision theory are applied, including some examples in artificial intelligence, solving the main foundational problems. Nonstandard analysis is extensively developed for the construction of the models and in some of the proofs. Many nonstandard theorems are proved, some of them new, in particular, a representation theorem that asserts that any stochastic process can be approximated by a process defined over a space with equiprobable outcomes.

2.3. Review by: Henry E Kyburg Jr.
Mathematical Reviews MR1159708 (93h:03021).

This work is excellent both as a philosophical book and as a mathematical book. The author has done an outstanding job both in understanding, in depth, a number of philosophical positions regarding probability, and in making use of sophisticated modern mathematical techniques in developing a mathematical treatment of probability. No other work exploits so effectively, and with such understanding, the machinery of nonstandard analysis in developing the foundations of probability and statistics.

There are four main parts: a philosophical part; a part discussing the elementary mathematical theory of probability, in its nonstandard form; a formal development of the logical models of probability in a nonstandard framework; and a fourth part that concerns statistical inference (which is pursued in the classical tradition) and decision theory, which is developed in a Bayesian framework. This last part does not examine the issues in depth but provides discussions of a number of basic examples, and compares the treatment of these examples that follows from the principles developed in the book to other common treatments.
...
One of the main interests of the book is building probability theory on a hyperfinite basis, the advantage being that the individual models can all remain finite. The representation theorem proved in Part III is striking, but should not be misunderstood. The theorem has a corollary that says that given any stochastic process, there is a "nearly equivalent" process defined over a space in which all elementary events are equiprobable.

One should think of this in connection with Bayes's theorem and compound experiments. A variation of an example, cited by the author and due to Janina Hosiasson-Lindenbaum (the author's scholarship is impressive), is this: let us choose a card from a deck of two red cards and one black card; if it is red, choose a ball from an urn containing one black and four white balls; if it is black, choose a ball from an urn containing four black and three white balls. The probability of getting a black ball is 34/105, but in the description of the problem there is no obvious set of 105 objects from which the choice is made. But the theorem does not say that there is a "natural" or a "semantically transparent" model based on equiprobability. In fact, the only way to get at an equiprobability model for this example is to consider one that has a number of elements proportional to the product of the possibilities in each case: the number of cards, the number of balls in the first urn, and the number of balls in the second urn.
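Spelled out (the arithmetic, under the setup as described, is implicit in the example): conditioning on the colour of the card gives

$$P(\text{black ball}) = \frac{2}{3} \cdot \frac{1}{5} + \frac{1}{3} \cdot \frac{4}{7} = \frac{14}{105} + \frac{20}{105} = \frac{34}{105},$$

and the denominator 105 = 3 · 5 · 7 is precisely the product of the three numbers of possibilities: cards, balls in the first urn, and balls in the second urn.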

The author frankly espouses inductive acceptance: "One of the main objectives of this book is to give a coherent account of acceptance and rejection of hypotheses based on probabilistic evidence." In this, he agrees entirely with R A Fisher, who distinguished dramatically between the statistics appropriate to scientific inquiry, and "statistics for shopkeepers." One may object to his acceptance of the dogma that the total of what we accept must be consistent; it is this that leads him to reject a Bayesian treatment of estimation. (One should not accept anything with probability less than 1.0, else conjunction can lead to difficulty!) Since Bayesian estimation can reproduce classical estimation procedures, this leaves the author in a somewhat uncomfortable position.
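Kyburg's parenthetical point can be made quantitative (a standard bound, added here for illustration): if each of the statements A_1, ..., A_n is accepted with P(A_i) ≥ 1 − ε, the most that can be guaranteed for their conjunction is

$$P(A_1 \cap \cdots \cap A_n) \geq 1 - n\epsilon,$$

so a body of individually highly probable accepted statements can jointly have arbitrarily small probability.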

The exposition is in general extremely clear. This clarity of exposition is enhanced by the use of nonstandard analysis, which allows all probability spaces to be finite. (This provides additional reasons to be thankful for the appendices.) ...

Last Updated November 2022