We are indebted to Alexander Rakhlin, who pointed us to the early generalization bound for the Perceptron algorithm. This result, both in its substance and its historical position, shaped our understanding of machine learning. Kevin Jamieson was the first to point out to us the similarity between the structure of our course and the text by Duda and Hart. Peter Bartlett provided many helpful pointers to the literature and historical context about generalization theory. Jordan Ellenberg helped us improve the presentation of algorithmic stability. Dimitri Bertsekas pointed us to an elegant proof of the Neyman-Pearson Lemma. We are grateful to Rediet Abebe and Ludwig Schmidt for discussions relating to the chapter on datasets. We are also grateful to David Aha, Thomas Dietterich, Michael I. Jordan, Pat Langley, John Platt, and Csaba Szepesvari for giving us additional context about the state of machine learning in the 1980s. Finally, we are indebted to Boaz Barak, David Blei, Adam Klivans, Csaba Szepesvari, and Chris Wiggins for detailed feedback and suggestions on an early draft of this text. We’re also grateful to Chris Wiggins for pointing us to Highleyman’s data.
We thank all the students of UC Berkeley’s CS 281a in the Fall of 2019, 2020, and 2021, who worked through various iterations of the material in this book. Special thanks to our graduate student instructors Mihaela Curmei, Sarah Dean, Frances Ding, Sara Fridovich-Keil, Wenshuo Guo, Chloe Hsu, Meena Jagadeesan, John Miller, Robert Netzorg, Juan C. Perdomo, and Vickie Ye, who spotted and corrected many mistakes we made.