# Data Science and Machine Learning Mathematical and Statistical Methods

As part of my teaching for AI at the University of Oxford, I read a large number of books on the maths of data science. *Data Science and Machine Learning: Mathematical and Statistical Methods* is a book I recommend if you are interested in the maths of data science. There is a PDF version at the author’s site, and you can also buy the book from Amazon; I provide both links at the end.

The maths of data science is not an easy topic to explain; the trick is to get the balance right. If you start with the very basics (matrices, vectors, probability, etc.), you end up with a long but not very useful book.

In my view, this book does get the balance right, although the flow is a little unusual.

The book starts with Importing, Summarizing, and Visualizing Data.

It then goes on to Statistical Learning and introduces the reader to some common concepts and themes in statistical learning. The authors discuss the difference between supervised and unsupervised learning, and how we can assess the predictive performance of supervised learning. The chapter also examines the central role that linear and Gaussian properties play in the modeling of data, and concludes with a section on Bayesian learning.

This is then followed by Monte Carlo Methods. I am unclear why these appear so early, but the chapter gives an introduction to the three main uses of Monte Carlo simulation: (1) simulating random objects and processes in order to observe their behavior, (2) estimating numerical quantities by repeated sampling, and (3) solving complicated optimization problems through randomized algorithms.
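As a minimal illustration of use (2), estimating a numerical quantity by repeated sampling, here is a small sketch (my own, not from the book) that estimates π from the fraction of random points falling inside the unit quarter circle:

```python
import random

def estimate_pi(n_samples: int, seed: int = 0) -> float:
    """Monte Carlo estimate of pi: sample points uniformly in the unit
    square and count the fraction landing inside the quarter circle."""
    rng = random.Random(seed)
    inside = 0
    for _ in range(n_samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            inside += 1
    return 4.0 * inside / n_samples

print(estimate_pi(100_000))  # close to 3.14159
```

The estimate improves slowly, at a rate of roughly 1/√n, which is exactly the kind of trade-off the chapter discusses.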

Next, we cover Unsupervised Learning techniques such as density estimation, clustering, and principal component analysis. Important tools in unsupervised learning include the cross-entropy training loss, mixture models, the Expectation–Maximization algorithm, and the Singular Value Decomposition.
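To show how two of those tools connect, here is a sketch (my own, assuming NumPy) of principal component analysis computed via the SVD of the centered data matrix:

```python
import numpy as np

def pca_svd(X: np.ndarray, k: int):
    """Project data onto its top-k principal components, obtained as
    the top-k right singular vectors of the centered data matrix."""
    Xc = X - X.mean(axis=0)                          # center each feature
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    components = Vt[:k]                              # (k, n_features)
    return Xc @ components.T, components             # scores, loadings

# Toy data: 100 points lying near a line in 2D
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 1)) @ np.array([[2.0, 1.0]]) \
    + 0.1 * rng.normal(size=(100, 2))
scores, comps = pca_svd(X, k=1)
```

A single component captures almost all the variance here, which is the dimensionality-reduction idea the chapter develops.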

This is followed by Regression. The purpose of this chapter is to explain the mathematical ideas behind regression models and their practical aspects. The book analyzes the fundamental linear model in detail, and also discusses nonlinear and generalized linear models.
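The fundamental linear model the book analyzes can be fitted by ordinary least squares; a minimal sketch (my own, assuming NumPy):

```python
import numpy as np

def fit_linear(x: np.ndarray, y: np.ndarray) -> np.ndarray:
    """Ordinary least squares: prepend an intercept column and solve
    the least-squares problem for the coefficient vector."""
    Xb = np.column_stack([np.ones(len(x)), x])       # design matrix
    beta, *_ = np.linalg.lstsq(Xb, y, rcond=None)
    return beta                                      # [intercept, slope]

x = np.linspace(0.0, 1.0, 50)
y = 1.0 + 2.0 * x                                    # noiseless line
beta = fit_linear(x, y)                              # approx [1.0, 2.0]
```

Nonlinear and generalized linear models replace the design matrix or the error distribution, but this least-squares core is the starting point.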

The chapter on regression is followed by Regularization and Kernel Methods (although I am not exactly sure why these two things are combined in one chapter).

Regularization provides a natural way to guard against overfitting, and kernel methods offer a broad generalization of linear models.

We then have a Classification chapter, which explains the mathematical ideas behind well-known classification techniques such as the naïve Bayes method, linear and quadratic discriminant analysis, logistic/softmax classification, the K-nearest neighbors method, and support vector machines.
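Of these, K-nearest neighbors is the simplest to write down; a minimal sketch (my own, assuming NumPy) that classifies a point by majority vote among its k nearest training points:

```python
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x, k=3):
    """Classify x by majority vote among the k training points
    closest in Euclidean distance."""
    dists = np.linalg.norm(X_train - x, axis=1)
    nearest = np.argsort(dists)[:k]
    return Counter(y_train[nearest]).most_common(1)[0][0]

X_train = np.array([[0.0, 0.0], [0.1, 0.2], [1.0, 1.0], [0.9, 1.1]])
y_train = np.array([0, 0, 1, 1])
print(knn_predict(X_train, y_train, np.array([0.05, 0.1])))  # prints 0
```

There is no training step at all, which makes KNN a useful contrast to the model-based classifiers in the same chapter.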

The book follows with a discussion of Decision Trees and Ensemble Methods. Statistical learning methods based on decision trees have gained tremendous popularity due to their simplicity, intuitive representation, and predictive accuracy. This chapter gives an introduction to the construction and use of such trees, and also discusses two key ensemble methods, namely bootstrap aggregation and boosting, which can further improve the efficiency of decision trees and other learning methods.
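Bootstrap aggregation can be sketched compactly with one-split regression stumps as the base learner (my own toy example, assuming NumPy, not the book's code):

```python
import numpy as np

def fit_stump(x, y):
    """Fit a one-split regression stump: pick the threshold that
    minimizes the squared error of the two leaf means."""
    best = None
    for t in np.unique(x):
        left, right = y[x <= t], y[x > t]
        if len(left) == 0 or len(right) == 0:
            continue
        err = ((left - left.mean()) ** 2).sum() \
            + ((right - right.mean()) ** 2).sum()
        if best is None or err < best[0]:
            best = (err, t, left.mean(), right.mean())
    _, t, lo, hi = best
    return lambda xs: np.where(xs <= t, lo, hi)

def bagged_stumps(x, y, n_trees=50, seed=0):
    """Bootstrap aggregation: fit each stump on a resampled copy of
    the data, then average all the stump predictions."""
    rng = np.random.default_rng(seed)
    stumps = []
    for _ in range(n_trees):
        idx = rng.integers(0, len(x), len(x))    # sample with replacement
        stumps.append(fit_stump(x[idx], y[idx]))
    return lambda xs: np.mean([s(xs) for s in stumps], axis=0)

x = np.linspace(0.0, 1.0, 40)
y = (x > 0.5).astype(float)                      # step function
model = bagged_stumps(x, y)
```

Averaging over resampled fits reduces the variance of the individual stumps, which is the key point the chapter makes about bagging.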

Finally, the book concludes with Deep Learning, where neural networks are described as ‘a rich class of approximating functions’.
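Viewed that way, a one-hidden-layer network is just a parameterized function: an affine map, a nonlinearity, and another affine map. A forward-pass sketch (my own, assuming NumPy; the weights here are random, untrained placeholders):

```python
import numpy as np

def mlp_forward(x, W1, b1, W2, b2):
    """One-hidden-layer neural network: affine map, tanh
    nonlinearity, then a second affine map."""
    return np.tanh(x @ W1 + b1) @ W2 + b2

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(1, 8)), np.zeros(8)    # 1 input, 8 hidden units
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)    # 1 output
y = mlp_forward(np.array([[0.5]]), W1, b1, W2, b2)
```

Training then amounts to choosing the weights to make this function approximate the data, which is where the book's earlier material on optimization and regression comes back in.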

I like the flow of the book, and if you are interested in the maths of machine learning, I recommend it as a source.

The author’s site is HERE, where you can download a version of the PDF.

Alternatively, you can buy the book HERE.