Below are some extracts from an interesting Quora discussion on this topic.
Quotes:
One contributor wrote:
The Newton method is obtained by replacing the direction matrix in the steepest descent update equation with the inverse of the Hessian. The steepest descent algorithm is

theta(k+1) = theta(k) - mu * D * g(k)

where theta is the vector of independent parameters, D is the direction matrix, mu is the step size, and g is the gradient of the cost functional I(theta) (I itself does not appear in the update equation).
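The update above can be sketched as follows, with the direction matrix D taken as the identity and a fixed step size mu; the quadratic cost is an illustrative assumption, not taken from the discussion:

```python
import numpy as np

# Hypothetical quadratic cost I(theta) = 0.5 * theta^T A theta - b^T theta
# (an assumption for illustration; any differentiable cost would do).
A = np.array([[3.0, 0.5],
              [0.5, 1.0]])
b = np.array([1.0, 2.0])

def gradient(theta):
    # g = A theta - b, the gradient of I(theta)
    return A @ theta - b

def steepest_descent(theta0, mu=0.1, n_steps=500):
    # Update rule: theta <- theta - mu * D * g, with D = identity here
    theta = theta0.copy()
    for _ in range(n_steps):
        theta = theta - mu * gradient(theta)
    return theta

theta_star = np.linalg.solve(A, b)        # exact minimizer, for comparison
theta_gd = steepest_descent(np.zeros(2))  # approaches theta_star slowly
```

Note that hundreds of iterations are needed even on this tiny problem, which illustrates the slowness the author mentions next.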
Gradient descent is very slow. For convex cost functionals, a faster method is Newton's method, given below:
For Newton's method, the update equation above becomes

theta(k+1) = theta(k) - H^{-1} * g(k)

where H is the Hessian of I(theta).
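As a sketch of the Newton update on the same kind of quadratic cost (an assumption for illustration): solving the linear system H d = g is preferred to forming H^{-1} explicitly, and on a quadratic a single Newton step lands exactly on the minimizer.

```python
import numpy as np

# Hypothetical quadratic cost I(theta) = 0.5 * theta^T A theta - b^T theta;
# its Hessian is H = A, constant, so Newton converges in one step.
A = np.array([[3.0, 0.5],
              [0.5, 1.0]])
b = np.array([1.0, 2.0])

def gradient(theta):
    return A @ theta - b

def newton_step(theta):
    # theta <- theta - H^{-1} g ; solve H d = g rather than inverting H
    H = A  # Hessian of the quadratic cost
    d = np.linalg.solve(H, gradient(theta))
    return theta - d

theta = newton_step(np.zeros(2))  # equals A^{-1} b, the exact minimizer
```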
If the first and second derivatives of a function exist, a positive definite Hessian implies strict convexity; the converse is weaker: strict convexity only guarantees a positive semidefinite Hessian (f(x) = x^4 is strictly convex, yet f''(0) = 0).
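The positive-definiteness condition can be checked numerically via the eigenvalues of the (symmetric) Hessian; the two matrices below are illustrative assumptions:

```python
import numpy as np

def is_positive_definite(H):
    # A symmetric matrix is positive definite iff all eigenvalues are > 0.
    return bool(np.all(np.linalg.eigvalsh(H) > 0))

# Hessian of a strictly convex quadratic: Newton's step is a descent step.
H_convex = np.array([[3.0, 0.5],
                     [0.5, 1.0]])
# Indefinite Hessian (a saddle point): the Newton step can move uphill.
H_saddle = np.array([[1.0, 0.0],
                     [0.0, -2.0]])
```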
Drawbacks of the Newton method: the Hessian must be computed and inverted at every iteration, which is expensive in high dimensions, and away from the minimum the Hessian may not be positive definite, in which case the Newton step need not be a descent direction. To mitigate these problems, several modifications (quasi-Newton methods) that approximate the Hessian and its inverse have been developed.
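One well-known modification of this kind is the BFGS quasi-Newton update, which maintains an approximation of the inverse Hessian built from successive gradient differences. A minimal sketch follows, again on an assumed quadratic cost, with an exact line search whose closed form is valid only for quadratics:

```python
import numpy as np

# Hypothetical quadratic cost I(theta) = 0.5 * theta^T A theta - b^T theta.
A = np.array([[3.0, 0.5],
              [0.5, 1.0]])
b = np.array([1.0, 2.0])

def gradient(theta):
    return A @ theta - b

def bfgs(theta0, n_steps=10):
    theta = theta0.copy()
    Binv = np.eye(len(theta0))          # initial inverse-Hessian estimate
    g = gradient(theta)
    for _ in range(n_steps):
        if np.linalg.norm(g) < 1e-10:   # converged
            break
        d = -Binv @ g                   # quasi-Newton search direction
        # Exact line search; this closed form holds only for quadratics.
        alpha = -(d @ g) / (d @ A @ d)
        s = alpha * d                   # parameter change
        g_new = gradient(theta + s)
        y = g_new - g                   # gradient change
        rho = 1.0 / (y @ s)
        I = np.eye(len(theta))
        # BFGS update of the inverse-Hessian approximation:
        # no Hessian evaluation and no matrix inversion is needed.
        Binv = (I - rho * np.outer(s, y)) @ Binv @ (I - rho * np.outer(y, s)) \
               + rho * np.outer(s, s)
        theta, g = theta + s, g_new
    return theta

theta_bfgs = bfgs(np.zeros(2))
```

In practice one would use a library implementation (e.g. a BFGS or L-BFGS routine) with a proper inexact line search rather than this sketch.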
Read the full discussion here.
© 2020 Data Science Central ®