
Free New Book by Andrew Ng: Machine Learning Yearning

Machine Learning Yearning is a new book by Andrew Ng, still in progress. Andrew Yan-Tak Ng is a computer scientist and entrepreneur, and one of the most influential figures in artificial intelligence and deep learning. He founded and led Google Brain and was formerly VP & Chief Scientist at Baidu, where he built the company’s Artificial Intelligence Group into a team of several thousand people. He is an adjunct professor (formerly associate professor and Director of the AI Lab) at Stanford University. Ng is also an early pioneer of online education, which led him to co-found Coursera.


Machine Learning Yearning is a deeplearning.ai project. The book’s subtitle is “Technical Strategy for AI Engineers in the Era of Deep Learning.”

Content of the book

  1. Why Machine Learning Strategy
  2. How to use this book to help your team
  3. Prerequisites and Notation
  4. Scale drives machine learning progress
  5. Your development and test sets
  6. Your dev and test sets should come from the same distribution
  7. How large do the dev/test sets need to be?
  8. Establish a single-number evaluation metric for your team to optimize
  9. Optimizing and satisficing metrics
  10. Having a dev set and metric speeds up iterations
  11. When to change dev/test sets and metrics
  12. Takeaways: Setting up development and test sets
  13. Build your first system quickly, then iterate
  14. Error analysis: Look at dev set examples to evaluate ideas
  15. Evaluating multiple ideas in parallel during error analysis
  16. Cleaning up mislabeled dev and test set examples
  17. If you have a large dev set, split it into two subsets, only one of which you look at
  18. How big should the Eyeball and Blackbox dev sets be?
  19. Takeaways: Basic error analysis
  20. Bias and Variance: The two big sources of error
  21. Examples of Bias and Variance
  22. Comparing to the optimal error rate
  23. Addressing Bias and Variance
  24. Bias vs. Variance tradeoff
  25. Techniques for reducing avoidable bias
  26. Techniques for reducing variance
  27. Error analysis on the training set
  28. Diagnosing bias and variance: Learning curves
  29. Plotting training error
  30. Interpreting learning curves: High bias
  31. Interpreting learning curves: Other cases
  32. Plotting learning curves
  33. Why we compare to human-level performance
  34. How to define human-level performance
  35. Surpassing human-level performance
  36. Why train and test on different distributions
  37. Whether to use all your data
  38. Whether to include inconsistent data
  39. Weighting data
  40. Generalizing from the training set to the dev set
  41. Addressing Bias and Variance
  42. Addressing data mismatch
  43. Artificial data synthesis
  44. The Optimization Verification test
  45. General form of Optimization Verification test
  46. Reinforcement learning example
  47. The rise of end-to-end learning
  48. More end-to-end learning examples
  49. Pros and cons of end-to-end learning
  50. Learned sub-components
  51. Directly learning rich outputs
  52. Error Analysis by Parts
  53. Beyond supervised learning: What’s next?
  54. Building a superhero team – Get your teammates to read this
  55. Big picture
  56. Credits
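To give a flavor of the book’s practical bent, chapter 8 above argues for a single-number evaluation metric so a team can rank models at a glance. As a minimal sketch (not taken from the book), the F1 score is one common way to collapse precision and recall into a single number:

```python
# Illustrative sketch (not from the book): combining precision and recall
# into one number, in the spirit of "Establish a single-number evaluation
# metric for your team to optimize". F1 is the harmonic mean of the two.

def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall; defined as 0 if both are 0."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Two hypothetical classifiers with different precision/recall trade-offs:
# a single metric makes the comparison unambiguous.
classifier_a = f1_score(precision=0.95, recall=0.90)  # ~0.924
classifier_b = f1_score(precision=0.98, recall=0.85)  # ~0.910
print(classifier_a > classifier_b)  # True: A wins on the single metric
```

The classifier names and numbers here are made up purely for illustration; the book discusses when such a combined metric is appropriate and when to change it.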

You can find more about this book project, and access the draft chapters, on GitHub; to get updates, sign up on the book’s website.
