Hello and Welcome!
This is my attempt to start cataloging all the interesting articles, industry reports, whitepapers, and news related to technology and data science that I read every month. There is a ton of material published every day. Of course, I can't read it all because I am human! But I want to share everything that I find valuable.
I have been collecting these since October 2014, so a few posts will be backdated.
Jumping straight in -
The MIT Technology Review is one of my favorite publications. The link takes you to an interactive page that lets you click on the top 50 companies and learn cool facts about each one. I was intrigued by the number of companies investing in analytics, robotics, and/or AI. Besides the usual suspects that I expected to find here and did - Facebook, Google, Netflix, Microsoft, and Amazon - I also found the following companies interesting, and here is why:
Weather forecasting has traditionally been done using convoluted models running on supercomputers that churn through insanely large amounts of data. With today's developments in machine learning, simpler ML algorithms are beginning to replace these supercomputers while still producing better results. The reason weather forecasting is key in the renewable energy context is that in order to integrate more renewable energy sources into the grid, we need to be able to predict their supply very accurately. But their supply depends on the weather, so we need to be able to predict the weather accurately.
Oh, there is also the butterfly effect, but let us save that for later.
Last week, at the European Control Conference held in Linz, Austria, scientists from IBM and the National Renewable Energy Laboratory (NREL) announced that they will be sharing their forecasting model with the public. (news link) (IBM's video link)
Much of the excitement in Artificial Neural Networks (ANNs) comes from a field called deep learning. I came across this review paper published in Nature by three pioneers of the field - Yann LeCun, Yoshua Bengio, and Geoffrey Hinton.
We know that in a simple neural network, each node applies a sigmoid function to a weighted linear combination of its inputs. Therefore, to build the neural network, we need to learn the weights.
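To make that concrete, here is a minimal sketch of a single node. The weights, bias, and inputs are made-up numbers for illustration, not from any trained network:

```python
import math

def sigmoid(z):
    """Squash a real number into the interval (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

def neuron_output(weights, bias, inputs):
    """One node: sigmoid of a weighted linear combination of the inputs."""
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return sigmoid(z)

# Hypothetical node with two inputs and hand-picked weights.
out = neuron_output([0.5, -0.25], 0.1, [1.0, 2.0])
```

A whole network is just layers of these nodes, with each layer's outputs feeding the next layer's inputs.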
In the 1980s, LeCun and others developed the backpropagation method to compute the weights of a neural network for a supervised learning problem. In the 1990s, techniques like Support Vector Machines emerged and began to outperform backpropagation, and ANNs fell into disuse. In 2006, Geoffrey Hinton found an efficient way to compute these weights, and ANNs are back to an all-time high in popularity.
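The core idea of backpropagation is to push the prediction error back through the network via the chain rule and nudge each weight downhill. A minimal sketch, using a single sigmoid node on a toy made-up dataset (the OR function) rather than a real multi-layer network:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Toy supervised problem: learn OR on two binary inputs (made-up data).
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]

w = [0.0, 0.0]   # weights, initialized to zero
b = 0.0          # bias
lr = 1.0         # learning rate

def loss():
    """Sum of squared errors over the dataset."""
    return sum((sigmoid(w[0]*x[0] + w[1]*x[1] + b) - y) ** 2
               for x, y in data)

before = loss()
for _ in range(1000):
    for x, y in data:
        p = sigmoid(w[0]*x[0] + w[1]*x[1] + b)
        # Chain rule: dL/dz = 2(p - y) * p(1 - p); dz/dw_i = x_i
        grad = 2 * (p - y) * p * (1 - p)
        w[0] -= lr * grad * x[0]
        w[1] -= lr * grad * x[1]
        b    -= lr * grad
after = loss()
```

In a deep network, the same chain-rule step is applied layer by layer from the output back to the input, which is where the name comes from.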
The review paper is really good and I recommend reading it.
By now, you may have already seen the plethora of deep dream images out on the web. Google engineers - Mordvintsev, Tyka and Olah - discovered a really beautiful application of deep learning in image processing. 19 of the best images produced as a result of deep dream are here.
Remember when you perform matrix decompositions (SVD or eigenvalue decomposition) on images of faces and get eigenfaces as a result? Now imagine superimposing the eigenface onto the original image, then learning the eigenface of that new image, and so on. I think that with Deep Dream, Google was trying to do something similar. Google uses its already-trained image recognition neural network. Assuming this network has eigen-images calculated, when you input a new image into Deep Dream, it checks which of the eigen-images are closest to the input image and then iteratively superimposes them onto the original image. This happens to produce some stunning results.
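The iterative "extract a pattern, blend it back in, repeat" loop can be sketched in miniature. Everything here is a toy stand-in: the image is a short list of pixel values, and `dominant_pattern` is a made-up placeholder for whatever the trained network "sees" in the image (the published Deep Dream code actually amplifies layer activations by gradient ascent, not eigen-images):

```python
def dominant_pattern(image):
    """Stand-in feature extractor: each pixel's deviation from the mean."""
    mean = sum(image) / len(image)
    return [p - mean for p in image]

def dream_step(image, strength=0.3):
    """Blend the extracted pattern back into the image."""
    pattern = dominant_pattern(image)
    return [p + strength * q for p, q in zip(image, pattern)]

# A tiny made-up "image" of four pixel intensities.
image = [0.2, 0.8, 0.5, 0.9]
for _ in range(10):
    image = dream_step(image)  # each pass reinforces what was found before
```

Because each pass feeds on its own output, whatever the extractor responds to gets progressively exaggerated, which is exactly why Deep Dream's images drift toward hallucinatory repetitions of the patterns the network knows.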
That is all for this post. Please share your thoughts and comments below. Do you have additional interesting stuff to share? Please share those too! I will call you out on my next post to say thank you. :)