# Diaconis papers

Persi Diaconis's publication list contains around 200 items. Below we list sixteen of his papers (some single-authored, others jointly authored), and for each we give an extract from the authors' introduction or from a review. We should note that the papers we list are not really representative of Diaconis's work, since we have chosen them partly because the subjects they treat can be understood without too much technical knowledge.
1. (with Ron Graham) The analysis of sequential experiments with feedback to subjects (1981). Colin Mallows writes in a review:

A subject trying to guess the outcome of a sequence of draws without replacement from a known finite population may be given complete feedback (i.e. the identity of the object just guessed at) or only partial feedback (i.e. whether or not the guess was correct). Numerous results are established that throw light on the value of such feedback, and the structure of optimal guessing strategies, which can be extremely subtle in the partial-feedback case. ... The authors develop a method for fairly scoring such experiments; the number of correct guesses is compared to the conditional expected number given the available information at each stage. The authors take a new (and hard) look at a number of variations of an old problem, and obtain many exciting results.
2. (with Mehrdad Shahshahani) Generating a random permutation with random transpositions (1981). Lars Holst writes in a review:

Imagine $n$ cards, labeled 1 through $n$. Card 1 is at the left, Card 2 is next, and so on. Draw two cards at random (with replacement) and change their places; thus no change happens with probability $\frac{1}{n}$. This operation is called a random transposition. If many transpositions are made, the row of cards will tend to appear in random arrangement. The following problem is studied: How many transpositions are needed until the permutation is close to random?
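The process in the extract is easy to put into code. The sketch below is our own illustration, not taken from the paper; the answer Diaconis and Shahshahani prove is that about $\frac{1}{2}n\log n$ transpositions are needed.

```python
import math
import random

def random_transposition_shuffle(n, steps, seed=None):
    """Apply `steps` random transpositions to the row 1, 2, ..., n.

    Each step picks two positions uniformly and independently (so with
    probability 1/n they coincide and nothing changes), then swaps them.
    """
    rng = random.Random(seed)
    cards = list(range(1, n + 1))
    for _ in range(steps):
        i, j = rng.randrange(n), rng.randrange(n)
        cards[i], cards[j] = cards[j], cards[i]
    return cards

# Diaconis and Shahshahani's answer: about (1/2) n log n steps suffice.
n = 52
steps = int(0.5 * n * math.log(n))
print(steps)  # → 102
```

So for an ordinary 52-card deck, roughly a hundred random transpositions already bring the arrangement close to uniform.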
3. (with Bill Kantor and Ron Graham) The mathematics of perfect shuffles (1983). The authors write in the introduction:

There are two ways to perfectly shuffle a deck of $2n$ cards. Both methods cut the deck in half and interlace perfectly. The out shuffle leaves the original top card on top. The in shuffle leaves the original top card second from the top. Applications to the design of computer networks and card tricks are reviewed. The main result is the determination of the group generated by the two shuffles, for all $n$.
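Both shuffles are easy to write down. A minimal sketch (our own code, not the paper's notation), which also checks the well-known fact that eight out shuffles return a 52-card deck to its original order:

```python
def out_shuffle(deck):
    """Perfect out shuffle: cut a deck of 2n cards in half and
    interleave so the original top card stays on top."""
    n = len(deck) // 2
    return [card for pair in zip(deck[:n], deck[n:]) for card in pair]

def in_shuffle(deck):
    """Perfect in shuffle: the original top card ends up second."""
    n = len(deck) // 2
    return [card for pair in zip(deck[n:], deck[:n]) for card in pair]

# Under the out shuffle a card in position i (0-indexed, 1..50) moves to
# position 2i (mod 51), and 2^8 ≡ 1 (mod 51), so eight out shuffles
# restore a 52-card deck.
deck = list(range(52))
shuffled, k = out_shuffle(deck), 1
while shuffled != deck:
    shuffled, k = out_shuffle(shuffled), k + 1
print(k)  # → 8
```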
4. Bayesian statistics as honest work (1985). Persi Diaconis writes in the introduction:

A review of recent work on exchangeability is offered in the spirit of tolerance and respect for competing views that characterized the work of Jack Kiefer and Jerzy Neyman. The theme is this: If you do a piece of honest work, people will find uses for it no matter what paradigm you work in. This theme is illustrated by tracing the fruits of de Finetti's efforts to understand Hume's problem of induction. His resolution involves exchangeability and his well-known theorem that every exchangeable sequence is a mixture of coin tossing. Subsequent generalizations and an application to the study of human vision are sketched.
5. (with Mehrdad Shahshahani) Products of random matrices and computer image generation (1986). Stephen Demko writes in a review:

The authors consider the possibility of generating computer pictures of some natural objects by plotting the fixed points of products of affine maps. They employ a mathematical framework for the study of fractals proposed by Hutchinson and make interesting connections with stochastic processes and classical problems concerning singular measures.
6. (with Joseph Keller) Fair dice (1989). E Hertel writes in a review:

A convex polyhedron is said to be a fair die if it has the same chance of landing on each of its facets after an accidental throw. A first type of such polytopes is the well-known class of isohedra, which are polytopes with symmetries acting transitively on their facets (fair by symmetry). The authors characterize a second type of fair dice (fair by continuity). The question of further fair dice is still open.
7. (with Frederick Mosteller) Methods for studying coincidences (1989). The authors write in the introduction:

This article illustrates basic statistical techniques for studying coincidences. These include data-gathering methods (informal anecdotes, case studies, observational studies, and experiments) and methods of analysis (exploratory and confirmatory data analysis, special analytic techniques, and probabilistic modeling, both general and special purpose). We develop a version of the birthday problem general enough to include dependence, inhomogeneity, and almost and multiple matches. We review Fisher's techniques for giving partial credit for close matches. We develop a model for studying coincidences involving newly learned words. Once we set aside coincidences having apparent causes, four principles account for large numbers of remaining coincidences: hidden cause; psychology, including memory and perception; multiplicity of endpoints, including the counting of "close" or nearly alike events as if they were identical; and the law of truly large numbers, which says that when enormous numbers of events and people and their interactions cumulate over time, almost any outrageous event is bound to occur. These sources account for much of the force of synchronicity.
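The classical birthday problem underlying the authors' generalisation can be computed directly. This sketch is the standard textbook calculation (not the paper's generalised version), assuming 365 equally likely, independent birthdays; it recovers the familiar answer of 23 people for a better-than-even chance of a match.

```python
def birthday_collision_prob(k, days=365):
    """Probability that at least two of k people share a birthday,
    assuming birthdays are independent and uniform over `days` days."""
    if k > days:
        return 1.0  # pigeonhole: a match is forced
    p_all_distinct = 1.0
    for i in range(k):
        p_all_distinct *= (days - i) / days
    return 1.0 - p_all_distinct

# smallest group size with a better-than-even chance of a shared birthday
k = next(k for k in range(1, 366) if birthday_collision_prob(k) >= 0.5)
print(k)  # → 23
```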
8. (with Dave Bayer) Trailing the dovetail shuffle to its lair (1992). David Aldous writes in a review:

Rarely does a new mathematical result make both the New York Times and the front page of my local paper, and even more rarely is your reviewer asked to speak on commercial radio about a result, but such activity was caused by the preprint of this paper. In layman's terms, it says you should shuffle a deck of cards seven times before playing.
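The "seven shuffles" headline comes from computing the total variation distance to the uniform distribution after $m$ riffle shuffles. Bayer and Diaconis prove that the probability of a permutation after $m$ shuffles of $n$ cards depends only on its number $r$ of rising sequences, and equals $\binom{2^m+n-r}{n}/2^{mn}$. The sketch below is our own implementation of that published formula, using exact rational arithmetic; for a 52-card deck it gives a distance of about 0.334 at $m = 7$, the first value comfortably below the earlier ones.

```python
from fractions import Fraction
from math import comb, factorial

def eulerian(n):
    """E[k] = number of permutations of n symbols with k descents
    (equivalently, with k + 1 rising sequences)."""
    E = [1]
    for m in range(2, n + 1):
        E = [((k + 1) * E[k] if k < len(E) else 0) +
             ((m - k) * E[k - 1] if k >= 1 else 0)
             for k in range(m)]
    return E

def riffle_tv(n, m):
    """Total variation distance from uniform after m riffle shuffles
    of n cards, via the Bayer-Diaconis rising-sequence formula."""
    E = eulerian(n)
    uniform = Fraction(1, factorial(n))
    total = Fraction(0)
    for r in range(1, n + 1):  # r = number of rising sequences
        p = Fraction(comb(2 ** m + n - r, n), 2 ** (m * n))
        total += E[r - 1] * abs(p - uniform)
    return total / 2

print(float(riffle_tv(52, 7)))  # ≈ 0.334
```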
9. (with Susan Holmes) Three examples of Monte-Carlo Markov chains: at the interface between statistical computing, computer science, and statistical mechanics (1995). Arnoldo Frigessi writes in a review:

Markov chain Monte Carlo (MCMC) methods are iterative stochastic algorithms for sampling from highly multivariate probability distributions. In this paper the authors show, by means of three significant examples, that statisticians and theoretical computer scientists are developing similar ideas useful for the design of MCMC algorithms and for the study of their convergence properties. This is not a survey paper on MCMC, although some pointers to sources of relevant literature are given. All examples involve reversible Markov chains with discrete sample spaces.
10. From shuffling cards to walking around the building: an introduction to modern Markov chain theory (1998). Persi Diaconis writes in the introduction:

This paper surveys recent progress in the classical subject of Markov chains. Sharp rates of convergence are available for many chains. Examples include shuffling cards, a variety of simulation procedures used in physics and statistical work, and random walk on the chambers of a building. The techniques used are a combination of tools from geometry, PDE, group theory and probability.
11. Patterns in eigenvalues: the 70th Josiah Willard Gibbs lecture (2003). Persi Diaconis writes in the introduction:

Typical large unitary matrices show remarkable patterns in their eigenvalue distribution. These same patterns appear in telephone encryption, the zeros of Riemann's zeta function, a variety of physics problems, and the study of Toeplitz operators. This paper surveys these applications and what is currently known about the patterns.
12. Random walks on groups: characters and geometry: Lecture course at Groups St Andrews in Oxford (2003). Anders Karlsson writes in a review:

This paper gives a well-written and interesting overview of one general approach to the study of random walks on groups. It can also serve as an introduction to the subject. Only finite groups are discussed. The methods of the general approach involve character theory and the geometry of the group in various generating sets. The paper contains no major new results. In focus is the example of random transpositions on the symmetric groups, which is a subject the author has made many important contributions to. The paper points out many connections to other branches of mathematics as well as applications to other sciences. Indeed it has a long bibliography. The paper finishes with a list of several open problems. It is therefore altogether a valuable source of overview information.
13. (with Paul Erdős) On the distribution of the greatest common divisor (2004).
14. (with Susan Holmes and Richard Montgomery) Dynamical bias in the coin toss (2007). J E Marsden writes in a review:

In a classical paper, Joe Keller (1986) studied the mechanics of a coin, starting heads up, flipped about a horizontal axis with a given distribution of initial conditions, and showed that in many cases the probability of heads is close to 1/2. In the paper under review, the authors allow the more realistic possibility of precession and derive an explicit and very interesting formula for the probability of heads in terms of the angle between the normal to the coin and the angular momentum vector. Both the analysis and comparison with experiment reveal a fascinating but rather small bias that can occur in coin flipping.
15. (with Jason Fulman and Robert Guralnick) On fixed points of permutations (2008). Cheryl Praeger writes in a review:

The authors study the distribution of the number of fixed points of a random permutation in various families of finite permutation groups.
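For the symmetric group itself, the simplest case of the families the authors consider, the classical facts are that a uniform random permutation has one fixed point on average, and has no fixed point at all with probability tending to $1/e$. A quick sketch of the second fact (the standard inclusion-exclusion computation, not code from the paper):

```python
import math
from fractions import Fraction

def derangement_prob(n):
    """Exact probability that a uniform random permutation of n
    symbols has no fixed point: sum_{k=0}^{n} (-1)^k / k!."""
    return sum(Fraction((-1) ** k, math.factorial(k)) for k in range(n + 1))

print(float(derangement_prob(10)))  # already ≈ 1/e ≈ 0.3679
```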
16. (with Jason Fulman) Carries, shuffling, and an amazing matrix (2009). Kent Morrison writes in a review:

This article describes the surprising connection between two Markov chains that, at first glance, have no relationship to each other. The first is the "carries process" discovered and analyzed by J M Holte (1997). The second is the Markov chain that models riffle shuffles of a deck of cards.
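Holte's "amazing matrix" is small enough to build by brute force. The sketch below (our own enumeration, not Holte's closed formula) constructs the transition matrix of the carry when $n$ base-$b$ numbers are added column by column: the state is the current carry, which lies in $\{0, \ldots, n-1\}$. For two base-10 numbers it yields the matrix with entries 0.55 and 0.45, whose eigenvalues $1, 1/b, \ldots, 1/b^{n-1}$ are what tie the carries process to riffle shuffling.

```python
from fractions import Fraction
from itertools import product

def carries_matrix(n, b):
    """Transition matrix P[i][j] = probability the carry goes from i
    to j when n uniformly random base-b digits are added to it."""
    P = [[Fraction(0)] * n for _ in range(n)]
    weight = Fraction(1, b ** n)
    for carry in range(n):
        for digits in product(range(b), repeat=n):
            new_carry = (carry + sum(digits)) // b
            P[carry][new_carry] += weight
    return P

P = carries_matrix(2, 10)
print([[float(x) for x in row] for row in P])
# → [[0.55, 0.45], [0.45, 0.55]]
```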

Last Updated July 2011