Hi Everyone,
Recently I gave an interview for a data science opening, and the interviewer asked which optimization algorithm is used for logistic regression. I answered gradient descent and explained it.
He said that these days nobody uses gradient descent. Is that true, and if so, what are the alternatives?
What are the disadvantages of gradient descent that have caused it to fall out of use?
Thanks for your help in advance.
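For context, here is a minimal batch gradient descent for logistic regression, roughly what I described in the interview (the data and learning rate are illustrative, not from any real problem):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_logistic_gd(X, y, lr=0.1, n_iters=1000):
    """Batch gradient descent on the mean cross-entropy loss."""
    w = np.zeros(X.shape[1])
    for _ in range(n_iters):
        p = sigmoid(X @ w)                 # predicted probabilities
        grad = X.T @ (p - y) / len(y)      # gradient of the mean loss
        w -= lr * grad
    return w

# Toy separable data: intercept column plus one feature, label 1 when x > 0
X = np.array([[1.0, -2.0], [1.0, -1.0], [1.0, 1.0], [1.0, 2.0]])
y = np.array([0.0, 0.0, 1.0, 1.0])
w = fit_logistic_gd(X, y)
preds = (sigmoid(X @ w) > 0.5).astype(float)
```

Each update uses the full dataset, which is what distinguishes this "batch" version from the stochastic variants discussed below.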
Tags: algorithms, datascience, logistic, optimisation, regression
Here is a great alternative: swarm optimization, see https://www.datasciencecentral.com/profiles/blogs/swarm-optimizatio...
An animated picture in the linked post shows how it works.
Genetic Algorithms may also be of interest. In general, the problem domain is that of Global Optimization.
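As a sketch of what one such global optimizer looks like, here is a basic global-best particle swarm applied to the logistic loss (the coefficients and toy data are illustrative assumptions, not taken from the linked article):

```python
import numpy as np

def logistic_loss(w, X, y):
    """Mean cross-entropy loss for logistic regression weights w."""
    p = 1.0 / (1.0 + np.exp(-(X @ w)))
    eps = 1e-12  # avoid log(0)
    return -np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))

def pso_minimize(loss, dim, n_particles=30, n_iters=200, seed=0):
    """Basic global-best particle swarm optimization."""
    rng = np.random.default_rng(seed)
    inertia, c1, c2 = 0.7, 1.5, 1.5             # commonly used PSO coefficients
    pos = rng.uniform(-1.0, 1.0, (n_particles, dim))
    vel = np.zeros((n_particles, dim))
    pbest = pos.copy()                          # each particle's best-so-far
    pbest_val = np.array([loss(p) for p in pos])
    gbest = pbest[np.argmin(pbest_val)].copy()  # swarm's best-so-far
    for _ in range(n_iters):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        vel = inertia * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        vals = np.array([loss(p) for p in pos])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = pos[better], vals[better]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest

# Toy separable data: intercept column plus one feature, label 1 when x > 0
X = np.array([[1.0, -2.0], [1.0, -1.0], [1.0, 1.0], [1.0, 2.0]])
y = np.array([0.0, 0.0, 1.0, 1.0])
w = pso_minimize(lambda w: logistic_loss(w, X, y), dim=2)
```

Note that the swarm never evaluates a gradient, only the loss itself, which is why this family of methods suits non-differentiable or multimodal objectives better than a smooth convex one like logistic regression.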
I think the interviewer was looking for some variation of gradient descent, because to my knowledge gradient descent is still used, and used heavily, just with some tweaks. Check this article on momentum, RMSProp, and Adam; it will give you the idea.
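To illustrate those tweaks: the Adam update combines momentum with RMSProp-style per-parameter scaling. A sketch of a single update step follows (the hyperparameters are the commonly cited defaults; the quadratic demo is purely illustrative):

```python
import numpy as np

def adam_step(w, grad, m, v, t, lr=0.001, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update: momentum (m) plus RMSProp-style scaling (v)."""
    m = b1 * m + (1 - b1) * grad              # first moment: momentum
    v = b2 * v + (1 - b2) * grad ** 2         # second moment: RMSProp term
    m_hat = m / (1 - b1 ** t)                 # bias correction (t starts at 1)
    v_hat = v / (1 - b2 ** t)
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return w, m, v

# Illustrative use: minimize f(w) = w**2, whose gradient is 2*w
w, m, v = np.array([5.0]), np.zeros(1), np.zeros(1)
for t in range(1, 501):
    w, m, v = adam_step(w, 2 * w, m, v, t, lr=0.1)
```

Setting b1=0 recovers RMSProp-like behavior, and dropping the v term leaves plain momentum, so the one function covers all three tweaks the article discusses.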
I think the interviewer is confused ... Stochastic gradient descent is heavily used and is the "de facto" method for learning.
Of course there are very good alternatives, like swarm optimization, but I would not say that nobody uses SGD nowadays ...
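A minimal sketch of what SGD looks like for logistic regression: update on one randomly ordered example at a time instead of the full batch (the toy data and learning rate are illustrative assumptions):

```python
import numpy as np

def fit_logistic_sgd(X, y, lr=0.1, n_epochs=50, seed=0):
    """Stochastic gradient descent: one example per weight update."""
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    idx = np.arange(len(y))
    for _ in range(n_epochs):
        rng.shuffle(idx)                     # reshuffle each epoch
        for i in idx:
            p = 1.0 / (1.0 + np.exp(-(X[i] @ w)))
            w -= lr * (p - y[i]) * X[i]      # gradient of one example's loss
    return w

# Toy separable data: intercept column plus one feature, label 1 when x > 0
X = np.array([[1.0, -2.0], [1.0, -1.0], [1.0, 1.0], [1.0, 2.0]])
y = np.array([0.0, 0.0, 1.0, 1.0])
w = fit_logistic_sgd(X, y)
preds = ((X @ w) > 0).astype(float)
```

The per-example updates are noisy but cheap, which is exactly why SGD (and its mini-batch form) scales to datasets where full-batch gradient descent is impractical.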
Thanks Vincent, I will try to implement it as a replacement for gradient descent.
Vincent Granville said:
Here is a great alternative: swarm optimization, see https://www.datasciencecentral.com/profiles/blogs/swarm-optimizatio...
An animated picture in the linked post shows how it works.
I also think so; I will try to compare both stochastic gradient descent and swarm optimization. Thanks.
Eduardo Di Santi said:
I think the interviewer is confused ... Stochastic gradient descent is heavily used and is the "de facto" method for learning.
Of course there are very good alternatives, like swarm optimization, but I would not say that nobody uses SGD nowadays ...
I don't fully agree with the interviewer. Adam, AdaGrad, and SGD are some of the popular flavors of the gradient descent method used in solving real-world problems with regression techniques of any kind.
Maybe there is a specific problem he had in mind...
That is a very nice article. Thanks for sharing.
surya prakash Sahu said:
I think the interviewer was looking for some variation of gradient descent, because to my knowledge gradient descent is still used, and used heavily, just with some tweaks. Check this article on momentum, RMSProp, and Adam; it will give you the idea.
© 2019 Data Science Central ®