
An overview of gradient descent optimization algorithms

This article was written by Sebastian Ruder. Sebastian is a PhD student in Natural Language Processing and a research scientist at AYLIEN. He blogs about Machine Learning, Deep Learning, NLP, and startups.

Gradient descent is one of the most popular algorithms to perform optimization and by far the most common way to optimize neural networks. At the same time, every state-of-the-art Deep Learning library contains implementations of various algorithms to optimize gradient descent (e.g. lasagne’s, caffe’s, and keras’ documentation). These algorithms, however, are often used as black-box optimizers, as practical explanations of their strengths and weaknesses are hard to come by.

This blog post aims to give you intuitions about the behaviour of different algorithms for optimizing gradient descent so that you can put them to use. We are first going to look at the different variants of gradient descent. We will then briefly summarize challenges during training. Subsequently, we will introduce the most common optimization algorithms by showing their motivation to resolve these challenges and how this leads to the derivation of their update rules. We will also take a short look at algorithms and architectures to optimize gradient descent in a parallel and distributed setting. Finally, we will consider additional strategies that are helpful for optimizing gradient descent.

Gradient descent is a way to minimize an objective function J(θ) parameterized by a model’s parameters θ by updating the parameters in the opposite direction of the gradient of the objective function ∇J(θ) w.r.t. the parameters. The learning rate η determines the size of the steps we take to reach a (local) minimum. In other words, we follow the direction of the slope of the surface created by the objective function downhill until we reach a valley. If you are unfamiliar with gradient descent, you can find a good introduction on optimizing neural networks here.
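To make the update rule θ ← θ − η∇J(θ) concrete, here is a minimal sketch of vanilla (batch) gradient descent on a toy least-squares objective. The data X and y, the learning rate eta, and the number of steps are hypothetical placeholders chosen purely for illustration; they are not part of the original article.

```python
import numpy as np

def gradient_descent(X, y, eta=0.1, n_steps=500):
    """Minimize J(theta) = (1/2n) * ||X @ theta - y||^2 with batch gradient descent."""
    n_samples, n_features = X.shape
    theta = np.zeros(n_features)                      # initialize parameters
    for _ in range(n_steps):
        grad = X.T @ (X @ theta - y) / n_samples      # gradient of J w.r.t. theta
        theta = theta - eta * grad                    # step opposite to the gradient
    return theta

# Toy usage: recover the parameters of a noise-free linear model
X = np.random.randn(100, 3)
true_theta = np.array([1.0, -2.0, 0.5])
y = X @ true_theta
print(gradient_descent(X, y))                         # should approach true_theta
```

The same loop structure underlies the variants discussed later in the article; they differ mainly in how much data is used to compute the gradient at each step and in how the step size is adapted.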

Below is an illustration comparing various learning-rate methods on two different configurations of extrema; the adaptive methods generally perform better.

[Animated figures: optimizer trajectories compared on two loss surfaces]

Table of contents:

Gradient descent variants

Challenges

Gradient descent optimization algorithms

Parallelizing and distributing SGD

Additional strategies for optimizing SGD

Conclusion

To read the full article, click here
