Some David Donoho lectures


Below we give examples of lectures David Donoho has given over the last 20 years. We list ten such lectures, giving for each the title, date, venue and abstract.

1. The Romance of Hidden Components.
Date: Wednesday, 25 August 2004.
Time: 6:30 p.m. - 7:30 p.m.
Venue: LT 31 (Faculty of Science Auditorium), Blk S16, Level 3, 3 Science Drive 2, National University of Singapore, Singapore 117543.

Abstract.

Perhaps the most romantic and seductive idea in all of science is that, hiding behind the enormously complex structures we see in the world around us, there are hidden components that are on the one hand very simple and even elegant, and on the other hand easily combine to generate all the variety we see about us. Classical examples include Newton and the spectrum of light, and the eugenicists and the idea of IQ; modern examples include wavelets and quarks. The speaker will review some of the classical ideas of hidden components, starting from principal components or even before, and describe some of the most recent notions, such as independent components analysis, sparse components analysis, nonnegative matrix factorisations, and cumulant components. He will try to keep things at an elementary level, communicating the attractiveness of these ideas to scientists and engineers outside of statistics and the wide-ranging impact these ideas are having from high-tech industry to neuroscience and astronomy, and describing what he thinks is the much greater role that statisticians should be playing in developing and deploying these methods.
2. Precise asymptotics in compressed sensing.
Date: Wednesday, 6 July 2011.
Time: 9:00-10:00.
Venue: Foundations of Computational Mathematics, Budapest University of Technology and Economics, Building K, Budapest, Hungary.

Abstract.

I will describe recent work giving precise asymptotic results on mean squared error and other characteristics in a range of problems in compressed sensing; these include results for LASSO, group LASSO, and nonconvex sparsity penalty methods. A key application of such precise formulas is in deriving precise optimality results which were not previously known and, to our knowledge, are not available by other methods.
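As a rough illustration of the kind of reconstruction the abstract refers to (not the precise asymptotic analysis of the talk), the sketch below recovers a sparse vector from random linear measurements by solving the LASSO with iterative soft-thresholding (ISTA). The problem sizes, sparsity level, and penalty value are illustrative assumptions, not taken from the lecture.

```python
# Minimal illustrative sketch: sparse recovery from random linear measurements
# via the LASSO, solved by iterative soft-thresholding (ISTA).
# Sizes, sparsity, noise level, and the penalty lam are illustrative choices.
import numpy as np

rng = np.random.default_rng(0)
n, p, k = 100, 400, 10                          # measurements, unknowns, nonzeros
A = rng.standard_normal((n, p)) / np.sqrt(n)    # random sensing matrix, unit-norm columns (approx.)
x_true = np.zeros(p)
x_true[rng.choice(p, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true + 0.01 * rng.standard_normal(n)  # noisy measurements

def lasso_ista(A, y, lam=0.02, n_iter=500):
    """Minimize 0.5*||y - A x||^2 + lam*||x||_1 by iterative soft-thresholding."""
    L = np.linalg.norm(A, 2) ** 2               # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)                # gradient of the quadratic term
        z = x - grad / L                        # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x

x_hat = lasso_ista(A, y)
print("mean squared error:", np.mean((x_hat - x_true) ** 2))
```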
3. What's the big deal about "big data"? Emergent phenomena in high-dimensional data analysis.
Date: Wednesday-Friday 25-27 February 2015.
Venue: Fields Institute, University of Toronto.

Abstract.

In classical statistical analysis, one assumes that the number of variables of interest is small but the number of observations is large. In the big-data era the number of variables is almost as large as, or even larger than, the number of observations. In this new regime, several fascinating phenomena arise which both complicate life and render it more interesting.

I will illustrate the emergent new phenomena with vignettes showing how the big-data asymptotic overturns traditional statistics: from covariance estimation and its applications in signal processing and finance, to high-dimensional robust estimation of linear models and its use for outlier detection. For example, traditional optimal procedures are no longer optimal. I will show how the emergent new high-dimensional phenomena offer exciting new opportunities in science and technology, for example in compressed sensing.
4. 50 years of Data Science.
Date: Friday, 18 September 2015.
Venue: Tukey Centennial workshop, Princeton NJ.

Abstract.

More than 50 years ago, John Tukey called for a reformation of academic statistics. In 'The Future of Data Analysis', he pointed to the existence of an as-yet unrecognised science, whose subject of interest was learning from data, or 'data analysis'. Ten to twenty years ago, John Chambers, Bill Cleveland and Leo Breiman independently once again urged academic statistics to expand its boundaries beyond the classical domain of theoretical statistics; Chambers called for more emphasis on data preparation and presentation rather than statistical modelling; and Breiman called for emphasis on prediction rather than inference. Cleveland even suggested the catchy name "Data Science" for his envisioned field.

A recent and growing phenomenon is the emergence of "Data Science" programs at major universities, including UC Berkeley, NYU, MIT, and most recently the Univ. of Michigan, which on September 8, 2015 announced a $100M "Data Science Initiative" that will hire 35 new faculty. Teaching in these new programs has significant overlap in curricular subject matter with traditional statistics courses; in general, though, the new initiatives steer away from close involvement with academic statistics departments.

This paper reviews some ingredients of the current "Data Science moment", including recent commentary about data science in the popular media, and about how/whether Data Science is really different from Statistics.

The now-contemplated field of Data Science amounts to a superset of the fields of statistics and machine learning, which adds some technology for 'scaling up' to 'big data'. This chosen superset is motivated by commercial rather than intellectual developments. Choosing in this way is likely to miss out on the really important intellectual event of the next fifty years.

Because all of science itself will soon become data that can be mined, the imminent revolution in Data Science is not about mere 'scaling up', but instead the emergence of scientific studies of data analysis science-wide. In the future, we will be able to predict how a proposal to change data analysis workflows would impact the validity of data analysis across all of science, even predicting the impacts field-by-field.

Drawing on work by Tukey, Cleveland, Chambers and Breiman, I present a vision of data science based on the activities of people who are 'learning from data', and I describe an academic field dedicated to improving that activity in an evidence-based manner. This new field is a better academic enlargement of statistics and machine learning than today's Data Science Initiatives, while being able to accommodate the same short-term goals.
5. Factor Models and PCA in light of the spiked covariance model.
Date: Thursday, October 20, 2016.
Time: 4 p.m. - 5 p.m.
Venue: M3 1006, University of Waterloo. A reception will follow in the M3 Bruce White Atrium.

Abstract.

Principal components analysis and factor models are two of the classical workhorses of high-dimensional data analysis, used literally thousands of times a day by data analysts the world over. But now that we have entered the big data era, where vastly larger numbers of variables/attributes are being measured than ever before, the way these workhorses are deployed needs to change.

In the last 15 years there has been tremendous progress in understanding the eigenanalysis of random matrices in the setting of high-dimensional data, in particular progress in understanding the so-called spiked covariance model. This progress has many implications for changing how we should use standard 'workhorse' methods in high-dimensional settings. In particular, it vindicates Charles Stein's seminal insights from the mid-1950s that shrinkage of eigenvalues of covariance matrices is essentially mandatory, even though today such advice is still frequently ignored. We detail new shrinkage methods that flow from random matrix theory and survey the work of several groups of authors.
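The shrinkage rules surveyed in the lecture are not reproduced here; as a minimal sketch of the general idea, assuming unit-variance white noise in the spiked covariance model, the code below debiases sample covariance eigenvalues above the Marchenko-Pastur bulk edge and shrinks the remainder to the noise level. All dimensions and spike values are illustrative assumptions.

```python
# Rough illustrative sketch of eigenvalue shrinkage under the spiked covariance
# model, assuming unit-variance white noise; the lecture surveys a family of
# loss-specific optimal shrinkers, of which this simple debiasing rule is one example.
import numpy as np

rng = np.random.default_rng(1)
n, p = 500, 250                       # observations, variables
gamma = p / n                         # aspect ratio p/n
spikes = np.array([25.0, 10.0, 4.0])  # population spike eigenvalues (all > 1 + sqrt(gamma))

# Population covariance: identity plus a few spiked eigenvalues.
Sigma = np.eye(p)
Sigma[:len(spikes), :len(spikes)] = np.diag(spikes)

X = rng.multivariate_normal(np.zeros(p), Sigma, size=n)
S = X.T @ X / n                       # sample covariance
lam, V = np.linalg.eigh(S)
lam, V = lam[::-1], V[:, ::-1]        # sort eigenvalues in decreasing order

bulk_edge = (1 + np.sqrt(gamma)) ** 2

def debias(l):
    """Map a sample eigenvalue above the bulk edge back to its population spike."""
    if l <= bulk_edge:
        return 1.0                    # indistinguishable from noise: shrink to the noise level
    b = l + 1 - gamma
    return (b + np.sqrt(b * b - 4 * l)) / 2

lam_shrunk = np.array([debias(l) for l in lam])
Sigma_hat = V @ np.diag(lam_shrunk) @ V.T   # shrunken covariance estimate

print("top sample eigenvalues:", np.round(lam[:4], 2))
print("shrunken eigenvalues:  ", np.round(lam_shrunk[:4], 2))
print("true spikes:           ", spikes)
```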
6. Compressed Sensing: From Theory to Practice.
Date: Monday, 7 November 2016.
Time: 3:30-4:30 p.m.
Venue: University of Washington Department of Electrical & Computer Engineering, Paul Allen Center Atrium.

Abstract.

In the last decade, Compressed Sensing became an active research area, producing notable speedups in important practical applications. For example, Vasanawala, Lustig and co-workers at Stanford's Lucile Packard Children's Hospital produced roughly 8× speedups in the acquisition time of Magnetic Resonance Images using compressed sensing approaches, and even larger speedups have been reported in other practical applications, such as Magnetic Resonance spectroscopy. Over the same period, theory in both applied mathematics and information theory developed extremely precise and insightful formulas. However, there are gaps between the two bodies of work, because the rules that practitioners must play by are not always the ones that theorists envision. In this talk, Professor Donoho will survey some recent developments bringing theory and practice closer together, including multi-scale compressed sensing and Cartesian product compressed sensing.
7. High-dimensional statistics in light of the spiked covariance model.
Date: Tuesday, 8 November 2016.
Time: 10:30-11:30 a.m.
Venue: University of Washington Department of Electrical & Computer Engineering, Electrical and Computer Engineering Building 105.

Abstract.

Classical statistical methods have become workhorses of high-dimensional data analysis, used literally thousands of times a day by data analysts the world over. But now that we have entered the big data era, where there are vastly larger numbers of variables/attributes being measured than ever before, the way these workhorses are deployed needs to change.

In the last 15 years there has been tremendous progress in understanding the eigenanalysis of random matrices in the setting of high-dimensional data, in particular progress in understanding the so-called spiked covariance model. This progress has many implications for changing how we should use standard 'workhorse' methods in high-dimensional settings. In particular, it vindicates Charles Stein's seminal insights from the mid-1950s that shrinkage of eigenvalues of covariance matrices is essentially mandatory, even though today such advice is still frequently ignored. We detail new shrinkage methods that flow from random matrix theory and survey the implications being developed through the work of several groups of authors.
8. Deepnet Spectra and the Two Cultures of Data Science.
Date: 12 November 2019.
Time: 12 noon-1 p.m.
Venue: Al-Khawarizmi Distinguished Lecture Series, Building 9, Level 2, Hall 2, Room 2325.

Abstract.

Machine learning became a remarkable media story of the 2010s, largely owing to its ability to focus researcher energy on attacking prediction challenges like ImageNet. Media extrapolations forecast the complete transformation of human existence. Unfortunately, machine learning has a troubled relationship with understanding the foundation of its achievements well enough to face demanding real-world requirements outside the prediction challenge setting. For example, its literature is admittedly corrupted by anti-intellectual and anti-scholarly tendencies. It is beyond irresponsible to build a revolutionary transformation on such a shaky pseudo-foundation. In contrast, more traditional subdisciplines of data science, like numerical linear algebra, applied probability, and theoretical statistics, provide time-tested tools for designing reliable processes with understandable performance. Moreover, positive improvements in human well-being have repeatedly been constructed using these foundations. To illustrate these points, we will review a recent boomlet in the ML literature in the study of eigenvalues of Deepnet Hessians. A variety of intriguing patterns in eigenvalues were observed and speculated about in ML conference papers. We describe the work of Vardan Papyan showing that the traditional subdisciplines, properly deployed, can offer insights about these objects that ML researchers had missed.
9. ScreeNOT: Exact MSE-Optimal Singular Value Thresholding in Correlated Noise.
Date: 14 October 2021.
Venue: TRIPODS Distinguished Lecture, Institute of Data Science, Texas A&M University.

Abstract.

Truncation of the singular value decomposition is a true scientific workhorse. But where to truncate? For 55 years the answer, for many scientists, has been to eyeball the scree plot, an approach which still generates hundreds of papers per year. The speaker will describe ScreeNOT, a mathematically solid alternative deriving from the many advances in Random Matrix Theory over those 55 years. Assuming a model of low-rank signal plus possibly correlated noise, and adopting an asymptotic viewpoint with the number of rows proportional to the number of columns, the speaker shows that ScreeNOT has a surprising oracle property. It typically achieves exactly, in large finite samples, the lowest possible MSE for matrix recovery on each given problem instance: the specific threshold it selects gives exactly the smallest achievable MSE loss among all possible threshold choices for that noisy dataset and that unknown underlying true low-rank model. The method is computationally efficient and robust against perturbations of the underlying covariance structure. The talk is based on joint work with Matan Gavish and Elad Romanov, Hebrew University.
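ScreeNOT itself is not reproduced here; purely as a sketch of the oracle quantity the abstract describes, the code below simulates a low-rank matrix in noise and finds, by brute force, the hard threshold on singular values that minimizes the recovery error for that particular instance. The dimensions, rank, and noise scaling are illustrative assumptions.

```python
# Illustrative sketch (not the ScreeNOT algorithm itself): simulate a low-rank
# signal plus noise and find, by brute force, the oracle hard threshold for
# singular values, i.e. the cutoff minimizing recovery error on this instance.
import numpy as np

rng = np.random.default_rng(2)
n, p, r = 200, 100, 5                # rows, columns, true rank
U = rng.standard_normal((n, r))
V = rng.standard_normal((p, r))
X = U @ V.T / np.sqrt(n)             # low-rank signal
Y = X + rng.standard_normal((n, p)) / np.sqrt(n)  # observed: signal plus noise

def truncate(Y, thresh):
    """Zero out all singular values of Y below thresh."""
    u, s, vt = np.linalg.svd(Y, full_matrices=False)
    s_kept = np.where(s >= thresh, s, 0.0)
    return u @ np.diag(s_kept) @ vt

# Brute-force oracle: evaluate the squared Frobenius recovery error at every
# candidate threshold (one per observed singular value) and keep the best.
sing_vals = np.linalg.svd(Y, compute_uv=False)
errors = [np.linalg.norm(truncate(Y, t) - X, "fro") ** 2 for t in sing_vals]
best = int(np.argmin(errors))
print("oracle threshold:", round(sing_vals[best], 3),
      "rank kept:", int(np.sum(sing_vals >= sing_vals[best])),
      "MSE:", round(errors[best] / (n * p), 5))
```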
10. Data Science at the Singularity.
Date: 19 December 2023.
Time: 11 a.m. - 12 noon.
Venue: 2023 IMS International Conference on Statistics and Data Science, Small Auditorium, Lisbon, Portugal.

Abstract.

A purported "AI Singularity" has been much in the public eye recently, especially since the release of ChatGPT last November, spawning social media "AI Breakthrough" threads promoting Large Language Model (LLM) achievements. Alongside this, mass media and national political attention have focused on "AI Doom" hawked by social media influencers, with Twitter personalities invited to tell congresspersons about the coming "End Times".

In my opinion, "AI Singularity" is the wrong narrative; it drains time and energy with pointless speculation. We do not yet have general intelligence, we have not yet crossed the AI singularity, and the remarkable public reactions signal something else entirely.

Something fundamental to science really has changed in the last ten years. In certain fields which practice Data Science according to three principles I will describe, progress is simply dramatically more rapid than in those fields that don't yet make use of it.

Researchers in the adhering fields are living through a period of very profound transformation, as they make a transition to frictionless reproducibility. This transition markedly changes the rate of spread of ideas and practices, and marks a kind of singularity, because it affects mindsets and paradigms and erases memories of much that came before. Many phenomena driven by this transition are misidentified as signs of an AI singularity. Data Scientists should understand what's really happening and their driving role in these developments.

Last Updated June 2024