Data Science and Technology Monthly – December 2015

A whole bunch of incredible things have happened in Machine Learning and Artificial Intelligence since November.

1. Google Open-Sources TensorFlow

In November, Google open-sourced TensorFlow, the machine learning technology that powers a number of their products, including Google Photos search, Smart Reply, speech recognition and more.

TensorFlow is the successor to DistBelief, an earlier system that was tightly coupled to Google's internal infrastructure and hence couldn't be open-sourced. TensorFlow, by contrast, was developed from the start with open sourcing in mind.
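
For a flavor of what the released library looks like, here is a minimal sketch in the graph-and-session style TensorFlow shipped with at launch. It builds a tiny computation graph and evaluates it in a session; the values and names are illustrative, not from Google's announcement.

    import tensorflow as tf

    # Build a tiny dataflow graph: y = W * x + b
    x = tf.placeholder(tf.float32, name="x")
    W = tf.Variable(3.0, name="W")
    b = tf.Variable(-1.0, name="b")
    y = W * x + b

    with tf.Session() as sess:
        # Variables must be initialized before the graph can run
        sess.run(tf.initialize_all_variables())
        print(sess.run(y, feed_dict={x: 2.0}))  # prints 5.0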

Some analysts believe this strategy mirrors the one Google adopted for Android: open-sourced Android has grabbed roughly 80% of the smartphone market. As pointed out in this Forbes article, Google is probably open sourcing TensorFlow to help it become the gold standard in machine learning. In a few years, as the artificial intelligence and machine learning market grows, a robust open source platform will be an attractive starting point for new users. And this is exactly what Google is hoping for.

2. Hardware Infrastructure

When Google open-sourced TensorFlow, they shared the algorithms that run on top of their very advanced hardware infrastructure, but not the infrastructure itself. Without the hardware design, the algorithms are limited in their application. Facebook has now stepped into that gap, announcing that it is open sourcing the hardware design of the server it uses to train deep learning algorithms, codenamed Big Sur. Check out their press release here. That addresses the hardware side of the equation. The world is moving very quickly towards better AI.

3. Project Oxford

Along the same lines, Microsoft is opening up its machine learning technology too. Project Oxford is Microsoft's suite of machine learning APIs spanning a range of productivity and user-experience applications. Last week, Microsoft announced that the Speaker Recognition APIs and Video APIs from Project Oxford are now available for developers to build applications on. See press release.
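
As a sketch of how a developer might call one of these APIs, here is a minimal REST request in Python. The endpoint path and request body below are illustrative assumptions, not taken from Microsoft's documentation; the subscription-key header is Project Oxford's standard authentication mechanism.

    import requests

    # Illustrative Project Oxford-style call: create a speaker
    # verification profile. The URL path and JSON body are assumptions;
    # consult Microsoft's docs for the real Speaker Recognition routes.
    ENDPOINT = "https://api.projectoxford.ai/spid/v1.0/verificationProfiles"
    headers = {
        "Ocp-Apim-Subscription-Key": "YOUR_API_KEY",
        "Content-Type": "application/json",
    }
    resp = requests.post(ENDPOINT, headers=headers, json={"locale": "en-us"})
    print(resp.status_code, resp.json())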

With companies racing each other to open source their algorithms, hardware designs, and APIs, what all this really signals is not the triumph of open source, but the triumph of data. When it comes to machine learning and artificial intelligence, the real value now lies not so much in the algorithms as in the data needed to make the algorithms smarter.

It doesn’t matter whether you have the algorithms or the architecture; you only win if you have the data.

4. AI vs. Humans

As we have seen so far, data is everything. Traditionally, machines need thousands of training examples to learn a concept. A featured story in this week’s MIT Technology Review profiles a deep learning startup called Geometric Intelligence. The human mind, unlike today’s machines, learns quickly even from small amounts of data, deriving deeper abstractions from relatively few examples. Geometric Intelligence is developing algorithms that give machines the ability to learn such abstractions just as quickly.

Real-world classification problems are full of exceptions. Humans encode exceptions into their cognitive models far better than we can code them into algorithms for machines. Typically, these exceptions have to be captured in the data used to train the algorithm, so the more exceptions there are, the more training data we need. This is what makes real-world problems so difficult to teach to machines.

Meanwhile, researchers from NYU, the University of Toronto and MIT published a seminal paper in Science called “Human-level concept learning through probabilistic program induction”. They describe a model that learns as quickly as humans do, from just one training example, and the results are stunning: on some tasks it achieves better-than-human performance.
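
To make “one training example” concrete, here is a toy sketch of the one-shot classification task using a naive nearest-neighbor baseline. This is not the paper’s Bayesian Program Learning model, just an illustration of the setup: one labeled example per class, and a query classified by whichever example it is closest to.

    import numpy as np

    def one_shot_classify(support_images, support_labels, query_image):
        """Classify a query given exactly one labeled example per class.

        A naive baseline: pick the label of the support image with the
        smallest pixel-wise Euclidean distance to the query. The paper's
        model instead infers a generative program for each character,
        which is what closes the gap to human performance.
        """
        distances = [np.linalg.norm(img - query_image) for img in support_images]
        return support_labels[int(np.argmin(distances))]

    # Toy data: three 8x8 "characters", one example each.
    rng = np.random.RandomState(0)
    support = [rng.rand(8, 8) for _ in range(3)]
    labels = ["alpha", "beta", "gamma"]
    query = support[1] + 0.05 * rng.rand(8, 8)  # a noisy copy of "beta"
    print(one_shot_classify(support, labels, query))  # -> "beta"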

While Elon Musk and Y Combinator plan to stop computers from taking over, Yann LeCun, the head of Facebook AI Research, calls it as it is and busts 3 myths about AI. If you have to pick one of the two to read, my vote goes to the myth-busting article.

5. NIPS 2015

The Neural Information Processing Systems (NIPS) conference took place in Montreal in December. Brad Neuberg of Dropbox shares some of the deep learning trends he observed at the conference here.

And to end, here are all the reasons why 2015 was so big for AI.
