This may be a little late given that we have already embarked upon a new year, but it's worth looking back for a moment...
2016 was definitely the year of AI in the recent technology timeline. If that sounds a little far-fetched, considering the wide use of drones and the advances in VR/AR and blockchain, that's because of the 'bias' (read: enthusiasm) in my neurons. I haven't been in this field for long, but after DeepMind's paper a few years back, this was among the first years to show the commercial viability of AI and how well poised it is for a few established problems. I have tried to distil some major events from 2016 here, but, you know, like all networks, my brain might have missed some signals. Feel free to add your own events in the comments.
Events that made news:
Engineers’ top picks:
5. OpenAI Gym launched: Democratizing AI became the pillar of Musk's effort to make AI safe. The result: an environment that's as much a gym as a playground. Learn from peers on Gitter, try a new algorithm, or laugh at Humanoid's gait. I was pleasantly surprised to see Infosys as one of the lead backers of this venture.
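To make the "try some new algorithm" part concrete, here is a minimal sketch of the interaction loop Gym standardizes (reset, then step with sampled actions until done). So the sketch runs without Gym installed, `ToyEnv` is a hypothetical stand-in I wrote that mimics the same interface; with the real library you would pass in `gym.make("CartPole-v0")` instead.

```python
import random

class ToyEnv:
    """Hypothetical stand-in for a Gym environment: episode ends after 10 steps."""
    class _Space:
        def sample(self):
            return random.choice([0, 1])  # two discrete actions, like CartPole

    def __init__(self):
        self.action_space = self._Space()
        self._t = 0

    def reset(self):
        self._t = 0
        return 0  # observation

    def step(self, action):
        self._t += 1
        done = self._t >= 10
        return 0, 1.0, done, {}  # observation, reward, done, info

def run_random_episode(env, max_steps=100):
    """The canonical Gym loop: reset, then step with random actions until done."""
    env.reset()
    total_reward, steps = 0.0, 0
    for _ in range(max_steps):
        action = env.action_space.sample()
        _obs, reward, done, _info = env.step(action)
        total_reward += reward
        steps += 1
        if done:
            break
    return total_reward, steps

print(run_random_episode(ToyEnv()))  # prints (10.0, 10)
```

A random agent is of course the "laugh at the gait" baseline; the point is that any environment exposing `reset`/`step`/`action_space` plugs into the same loop.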
6. Google releases ML APIs (Cloud Vision and others): Again, bringing the power of ML to developers and not-so-equation-friendly geeks. You just call the APIs the way you would any other web service. A great relief for projects that need a small piece of ML functionality but can't invest in the infrastructure.
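As a rough sketch of what "just calling the API" looks like, the snippet below builds the JSON body that Cloud Vision's `images:annotate` REST endpoint expects (base64-encoded image content plus a feature list). Actually sending it requires an API key, so this only constructs the request; the helper name and the fake image bytes are mine.

```python
import base64
import json

def build_vision_request(image_bytes, feature="LABEL_DETECTION", max_results=5):
    """Build the JSON body for Cloud Vision's images:annotate endpoint.

    The endpoint (POST https://vision.googleapis.com/v1/images:annotate)
    takes base64-encoded image content; sending it needs an API key,
    which is omitted here.
    """
    return {
        "requests": [{
            "image": {"content": base64.b64encode(image_bytes).decode("ascii")},
            "features": [{"type": feature, "maxResults": max_results}],
        }]
    }

# Fake bytes standing in for a real image file read from disk.
body = build_vision_request(b"\x89PNG...fake image bytes")
print(json.dumps(body, indent=2))
```

No model training, no GPU, no infrastructure: the entire ML workload lives behind one HTTP POST.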
7. AMD Instinct: This one is a personal observation. Little do we realise how tightly bound we become to a certain brand, technology or device over time. I felt the same about Nvidia. The standard deep-neural-net setup was: launch an AWS GPU instance, install CUDA and cuDNN, then Theano/TF or the libraries of your choice. Nvidia GPUs became the standard (largely because Nvidia GPUs are good). But stability also brings stagnation and complacency. Enter AMD, to give Nvidia a run for their money. Either way, the AI community benefits.
8. AWS now houses P-type instances: Speaking of AWS GPU instances, engineers and hacker/makers have a tough time ahead. Here's why: 16 GPUs with 192 GB of GPU memory and 732 GB of RAM? Now you can't start a training run and tell your boss, "will get back to you in a week ;)".
From Academia: (I will not dabble too much in this section, but here are a couple of mentions)
9. Generative models seemed to be the theme of a lot of papers (including a resurfacing of autoencoders).
And finally, 10. This year's ICML conference was great: seeing companies like Microsoft, Facebook and Google vying for researchers' attention. All the talks and workshops can be found at http://techtalks.tv/events/
That's all, folks. Thanks for reading. What do you think of this list? Are there events I missed (especially in academia)? Comment and let other readers know.