Foundations of Data Science

This book provides an introduction to the mathematical and algorithmic foundations of data science, including machine learning, high-dimensional geometry, and analysis of large networks. Topics include the counterintuitive nature of data in high dimensions, important linear algebraic techniques such as singular value decomposition, the theory of random walks and Markov chains, the fundamentals of and important algorithms for machine learning, algorithms and analysis for clustering, probabilistic models for large networks, representation learning including topic modelling and non-negative matrix factorization, wavelets, and compressed sensing. Important probabilistic techniques are developed, including the law of large numbers, tail inequalities, analysis of random projections, generalization guarantees in machine learning, and moment methods for analysis of phase transitions in large random graphs. Additionally, important structural and complexity measures are discussed, such as matrix norms and VC-dimension. This book is suitable for both undergraduate and graduate courses in the design and analysis of algorithms for data.
Contents
Introduction | 1 |
Best-Fit Subspaces and Singular Value Decomposition (SVD) | 29
Random Walks and Markov Chains | 64 |
Machine Learning | 109 |
Streaming | 159 |
Clustering | 182 |
Random Graphs | 215 |
Topic Models, Nonnegative Matrix Factorization, Hidden Markov Models, and Graphical Models | 274
Other Topics | 318 |
Wavelets | 341 |
Background Material | 360 |