

I am teaching myself ML for fun.

I am going through the Deep Learning book by Goodfellow, Bengio, and Courville. I must admit they lost me in chapter 2.

Any suggestions on how to get over this part? Any YouTube videos or books that might help me get through it? Thanks.



Replies to This Discussion

You may want to take a course in Linear Algebra before getting into ML.  It won't make a lot of sense unless you have a basic understanding of matrices.  

A good starting point: Andrew Ng's Machine Learning course also has a linear algebra section in week 1 to refresh your knowledge.

There are really some great books out there. I love Mathematics: From the Birth of Numbers, which is brilliant in that you get great scope and history. I also highly value the Schaum's Outlines series: Linear Algebra (fully solved problems) walks you through certain problems and how to solve them, then gives you problems to try yourself, with the answers provided so you can verify your work. I think we also have to give a shout-out to the For Dummies series, which is written by great people with great imaginations who convey a thought process in a fun manner.

Get to the library and have some fun!

Greetings Frederick,

What the product operation (equation 2.5) is saying is that the entry C_{i,j} is defined as the dot product of row i of A and column j of B.
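As a concrete check, here is a small NumPy sketch (function name is illustrative) that computes each entry exactly as that definition says, and compares the result against NumPy's built-in matrix product:

```python
import numpy as np

def matmul_by_definition(A, B):
    """C[i, j] = dot product of row i of A and column j of B (equation 2.5)."""
    m, n = A.shape
    n2, p = B.shape
    assert n == n2, "inner dimensions must match"
    C = np.zeros((m, p))
    for i in range(m):
        for j in range(p):
            # One entry = row of A dotted with column of B
            C[i, j] = np.dot(A[i, :], B[:, j])
    return C

A = np.array([[1., 2.], [3., 4.]])
B = np.array([[5., 6.], [7., 8.]])
print(matmul_by_definition(A, B))                       # [[19. 22.] [43. 50.]]
print(np.allclose(matmul_by_definition(A, B), A @ B))   # True
```

Writing out the loops once by hand makes the textbook formula much less abstract; after that, `A @ B` is just the fast version of the same thing.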

A how-to video is Intro to matrix multiplication on Khan Academy. This is just the tip of the iceberg for linear algebra. As other commenters noted, a refresher course on linear algebra might be helpful.

For a mathematical reason why the definition of matrix multiplication is the way it is, see

I'd suggest some background in machine learning and neural networks before you start reading the book.

1) Linear algebra is a must have!
2) Look into the history of neural networks. Start with the perceptron and a feed-forward network with one hidden layer before you move on to other architectures. The others are fancy, but learning the limitations of the perceptron and feed-forward networks will truly inspire you to read more.
3) Learn how to interpret weights of neural networks. Hidden layer weights may seem insignificant, but they tell you exactly what/how the network learns.

IMHO these 3 are necessary to understand why other architectures are required and the type of problems that each architecture can solve. I admit that deep learning is a beast, but it can be tamed by using a systematic approach. Ideally, go through a course on deep learning (there are many in YouTube) and use the book as primary reference material.
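As a concrete starting point for point 2 above, here is a minimal perceptron sketch (variable names are illustrative) that learns the AND function, which is linearly separable and therefore within the perceptron's power; XOR famously is not, which is exactly the limitation that motivated multi-layer networks:

```python
import numpy as np

# Training data for AND: output is 1 only when both inputs are 1.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])

w = np.zeros(2)   # weights
b = 0.0           # bias

for epoch in range(20):
    for xi, target in zip(X, y):
        pred = 1 if np.dot(w, xi) + b > 0 else 0
        # Perceptron update rule: nudge the decision boundary
        # toward any misclassified point (no change if correct).
        w += (target - pred) * xi
        b += (target - pred)

preds = [1 if np.dot(w, xi) + b > 0 else 0 for xi in X]
print(preds)  # → [0, 0, 0, 1]
```

Trying to train the same loop on XOR targets `[0, 1, 1, 0]` never converges, no matter how many epochs you run; seeing that failure firsthand is the best motivation for hidden layers.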

The matrix product is basic to understanding ML. I think this may help you visualize what is happening. In addition, follow the suggestions given in the comments above.



© 2018   Data Science Central
