
The Three-Way Race to the Future of AI: Quantum vs. Neuromorphic vs. High Performance Computing

Summary:  There’s a three-way technology race to bring faster, easier, cheaper, and smarter AI.  High Performance Computing is available today, but so are new commercial versions of actual quantum computers and neuromorphic spiking neural nets.  These two new entrants are going to revolutionize AI and deep learning, starting now.


AI and Deep Learning Have a Problem – Three, Actually.

Time:  The amount of time needed to train a deep net like a CNN or an RNN can run to weeks.  That doesn’t count the weeks or months spent defining the problem, or the iterative successes and failures in programming deep nets before they reach the required performance thresholds.

Cost:  Weeks of continuous compute time on hundreds of GPUs is expensive.  Renting 800 GPUs from Amazon’s cloud computing service for just a week would cost around $120,000 at list price.  And that doesn’t begin to account for the manpower costs.  Spinning up an AI project can mean months, or a year or more, of the highest-cost talent.
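
As a quick sanity check on that figure, the arithmetic is below.  The rate assumes roughly $0.90 per GPU-hour, about the list price of a single-GPU AWS instance at the time; the exact rate is an assumption and will vary.

```python
# Back-of-the-envelope cost of renting 800 GPUs for one week.
# Assumes ~$0.90 per GPU-hour (approximate single-GPU AWS list
# price at the time; an assumption, not a quoted rate).
gpus = 800
hours = 24 * 7                        # one week = 168 hours
price_per_gpu_hour = 0.90             # USD

print(f"${gpus * hours * price_per_gpu_hour:,.0f}")   # -> $120,960
```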

Data:  In many cases the unavailability of labeled data in sufficient quantity simply makes a project a non-starter.  There are plenty of good ideas out there that go unexplored because it’s clear the training data just won’t be available at an affordable price.

So while we have made good advances in the commercial realm, mostly in image processing and text and speech recognition, as often as not these startups have exploited the work of Google, IBM, Microsoft, and others that have made many good image and speech models available by API.

A Three-Way Race for the Future of AI

If you’re following the field you’ll know that we’ve sprinted out ahead using CNNs and RNNs, but that progress beyond these applications is only now emerging.  The next wave of progress will come from Generative Adversarial Nets (GANs) and Reinforcement Learning, with some help thrown in from Question Answering Machines (QAMs) like Watson.  There’s a good summary of this in our recent article “The Three Ages of AI – Figuring Out Where We Are.”


Here’s the most frequently expressed vision of how we move forward: use what we know, that is, increasingly complex deep neural nets with architectures evolved from the now common CNNs and RNNs, and just make them run faster.

Actually, though, the future may be quite different.  What we see shaping up is a three-way race for the future of AI based on completely different technologies:

  1. High Performance Computing (HPC)
  2. Neuromorphic Computing (NC)
  3. Quantum Computing (QC)

One of these, high performance computing, is the focus of most of what we’re seeing today.  There’s a flat-out race among chip makers, plus some less likely non-hardware players like Google, to build chips designed to accelerate deep learning.  The other two, neuromorphic computing (also known as spiking neural nets) and quantum computing, always seemed to be years away.  The fact is, however, that there are commercial neuromorphic chips and quantum computers in operational machine learning roles today.

Depending on how you like your metaphor, this is either the tip of the iceberg or the camel’s nose under the tent.  Hot or cold, both of these new technologies are going to disrupt what looked like a straight path to artificial intelligence, but disrupt in a good way.

High Performance Computing (HPC)

The path everyone has been paying the most attention to is high performance computing: stick to the deep neural net architectures we know, just make them faster and easier to access.

Basically that has meant two things: better general purpose environments like TensorFlow, and greater utilization of GPUs and FPGAs in larger and larger data centers, with the promise of even more specialized chips not far away.
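
To make the GPU part of that concrete, here is a minimal sketch in the TensorFlow 1.x style of the era, explicitly placing a matrix multiply on a GPU; the shapes are arbitrary placeholders.

```python
# Minimal TensorFlow 1.x-era sketch: explicitly placing a matrix
# multiply on a GPU.  Shapes are arbitrary; real deep nets chain
# thousands of such ops, which is why faster chips matter so much.
import tensorflow as tf

with tf.device('/gpu:0'):
    a = tf.random_normal([1024, 1024])
    b = tf.random_normal([1024, 1024])
    c = tf.matmul(a, b)

# allow_soft_placement falls back to CPU if no GPU is present.
with tf.Session(config=tf.ConfigProto(allow_soft_placement=True)) as sess:
    print(sess.run(c).shape)          # (1024, 1024)
```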

The new business model in AI is open source.  During the first six months of 2016, just 12 months ago, essentially every major player in AI made its AI platform open source.  These are all competitors with enormous investments in data centers, cloud services, and AI intellectual property.  The strategy behind open source is simple: whoever has the most users (platform adopters) wins.

While Intel, Nvidia, and other traditional chip makers were rushing to capitalize on the new demand for GPUs, others like Google and Microsoft were swimming in completely new waters by developing proprietary chips of their own to make their own deep learning platforms a little faster or a little more desirable than the competition.

Google certainly threw a good punch with TensorFlow as its powerful, general purpose solution, combined with its newly announced proprietary chip, the TPU (Tensor Processing Unit).

Microsoft has been touting its use of non-proprietary FPGAs and just released a major 2.0 upgrade of its Cognitive Toolkit (CNTK).  CNTK offers a Java API for direct integration with Spark, among other features.  It supports code written in Keras, essentially a front end for TensorFlow, making it easy for users to migrate away from Google.  CNTK is reported to be somewhat faster and more accurate than TensorFlow and also provides Python APIs.
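
To see why that migration path matters, here is a minimal sketch (layer and input sizes are placeholders): the same Keras model definition runs on either backend, switched by a single environment variable set before Keras is imported.

```python
# The same Keras model runs on CNTK or TensorFlow; only the backend
# setting changes.  Set KERAS_BACKEND before importing keras.
import os
os.environ["KERAS_BACKEND"] = "cntk"   # or "tensorflow"

from keras.models import Sequential
from keras.layers import Dense

# Placeholder architecture; input and layer sizes are illustrative.
model = Sequential([
    Dense(64, activation="relu", input_shape=(100,)),
    Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy")
model.summary()                        # identical either way
```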

Spark integration will continue to be an important driver.  Yahoo has already brought TensorFlow to Spark.  Spark’s main commercial provider, Databricks, now has its own open source package integrating deep learning with Spark.

The key drivers here address at least two of the three impediments to progress.  These improvements will make deep nets faster and easier to program, with more reliably good results, and faster chips in particular should shorten the raw machine compute time.

The question, much as with the limits of Moore’s Law, is just how far these improvements will take us.  They’re available today and they will keep moving forward.  Will they be sufficient to break us out into GANs and Reinforcement Learning?  Probably yes, at least insofar as we know how to use those deep learning architectures today.

Neuromorphic Computing (NC) or Spiking Neural Nets (SNNs)

Neuromorphic or spiking neural nets are on the pathway to strong AI.  They are based on several observations about the way brains actually work, which is significantly different from the way we’ve designed our deep neural nets so far.

To start with, researchers observe that in a brain not all neurons fire every time.  Neurons send selective signals down the chain, and it appears that the data is actually encoded in some way in the spikes of potential in that signal.  These signals consist of a train of spikes, so research is proceeding on whether the information is encoded in the amplitude, the frequency, or the latency between the spikes in the train, or perhaps all three.

In our existing deep neural nets, all neurons fire every time, according to relatively simple activation functions such as sigmoid or, increasingly, ReLU.
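
The contrast is easy to see in code.  Below is a toy sketch: a conventional ReLU unit produces an output on every pass, while a (much simplified) leaky integrate-and-fire neuron accumulates input over time and emits a spike only when its membrane potential crosses a threshold.  All constants are illustrative, not drawn from any particular SNN implementation.

```python
import numpy as np

def relu(x):
    """Conventional deep-net unit: produces an output on every pass."""
    return np.maximum(0.0, x)

def lif_neuron(inputs, threshold=1.0, leak=0.9):
    """Toy leaky integrate-and-fire neuron: integrates input over time
    and spikes only when membrane potential crosses the threshold.
    Constants are illustrative."""
    potential, spikes = 0.0, []
    for x in inputs:
        potential = potential * leak + x   # integrate with leak
        if potential >= threshold:
            spikes.append(1)               # fire...
            potential = 0.0                # ...and reset
        else:
            spikes.append(0)               # stay silent this step
    return spikes

print(relu(np.array([0.3, -0.2, 0.5])))       # an output every time
print(lif_neuron([0.3, 0.4, 0.5, 0.1, 0.9]))  # sparse spike train: [0, 0, 1, 0, 0]
```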

Neuromorphic computing has already demonstrated several dramatic improvements over our current deep learning NNs.

  1. Since not all ‘neurons’ fire each time, a single SNN neuron could replace hundreds in a traditional deep NN, yielding much greater efficiency in both power and size.
  2. Early examples show they can learn from their environment using only unsupervised techniques (no tagged examples) and very few examples, making them very quick learners.  A toy sketch of one candidate learning rule follows this list.
  3. They can generalize about their environment, learning in one setting and applying the lessons in another.  They can remember and generalize, a genuine breakthrough capability.
  4. They are much more energy efficient, which opens a path to miniaturization.
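
On the unsupervised learning point above, one widely studied candidate mechanism is spike-timing-dependent plasticity (STDP): a synapse strengthens when the presynaptic neuron fires just before the postsynaptic one, and weakens when the order is reversed.  The toy sketch below uses illustrative constants and is not BrainChip’s or anyone else’s production rule.

```python
import math

def stdp_update(w, dt_ms, a_plus=0.01, a_minus=0.012, tau_ms=20.0):
    """Toy spike-timing-dependent plasticity (STDP) rule.
    dt_ms = t_post - t_pre.  If the presynaptic neuron fired just
    before the postsynaptic one (dt_ms > 0), strengthen the synapse;
    if it fired just after (dt_ms < 0), weaken it.  All constants
    are illustrative."""
    if dt_ms > 0:
        return w + a_plus * math.exp(-dt_ms / tau_ms)
    else:
        return w - a_minus * math.exp(dt_ms / tau_ms)

w = 0.5
print(stdp_update(w, dt_ms=5.0))    # pre just before post -> weight increases
print(stdp_update(w, dt_ms=-5.0))   # pre just after post -> weight decreases
```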

So changing this basic architecture can solve all three of the fundamental problems facing deep learning today.

Most importantly, you can buy and utilize a Neuromorphic Spiking NN system today.  This is not a technology that’s far in the future.

BrainChip Holdings (Aliso Viejo, CA) has already rolled out a commercial security monitoring system at one of Las Vegas’ largest casinos and has announced other applications about to be delivered.  In Las Vegas, its function is to visually and automatically detect dealer errors by monitoring the video streams from standard surveillance cameras. It learns the rules of the game entirely by observation. 

BrainChip is a publicly traded company on the Australian Securities Exchange (ASX: BRN) and claims significant patent protection for its SNN technology.  It’s rolling out a series of its own gambling-monitoring products and pursuing licensing agreements for its IP.

Yes, there are many improvements yet to come, but SNNs are a commercial reality and a real option for developing AI today.

Quantum Computing

Some things you may not realize about Quantum Computing:

  • It’s available today and has been in commercial operational use by Lockheed Martin since 2010.  Several other companies are launching commercial applications, all based on the D-Wave quantum computer, the first to reach the commercial market.  D-Wave has been doubling the size of its quantum computer every year and is on track to continue doing so.
  • In May IBM announced the commercial availability of its quantum computer, IBM Q.  This is a cloud-based subscription service, which will undoubtedly lead the way in vastly simplifying access to these otherwise expensive and complex machines.  IBM says users have already run 300,000 experiments on its machines.
  • Google and Microsoft are on track to commercially release their own quantum machines within the next two or three years, as are a host of independents and academic institutions.
  • Open source programming languages have been introduced by D-Wave and some independent researchers, making programming these devices much more accessible.
  • Quantum computers excel at all types of optimization problems, a category that includes the whole family of algorithms based on stochastic gradient descent.  They readily mimic Restricted Boltzmann Machines, which you’ll recognize as one of many deep neural net architectures, and they are currently being used in deep learning configurations to do image classification much as CNNs do.  Due to some differences in architecture we need to differentiate these as QNNs (Quantum Neural Nets).  A sketch of how an optimization problem is posed to one of these machines follows this list.
  • According to a 2015 research report in which Google benchmarked a D-Wave quantum computer against traditional computers, the quantum machine outperformed a traditional desktop by a factor of 10^8, making it one hundred million times faster.  “What a D-Wave does in a second would take a conventional computer 10,000 years to do,” said Hartmut Neven, director of engineering at Google, during a news conference announcing the results.
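
Optimization problems of the kind mentioned above are typically posed to a D-Wave-style machine as a QUBO (quadratic unconstrained binary optimization).  Here is a minimal, hedged sketch using D-Wave’s open source dimod package; the tiny two-variable problem is purely illustrative, and on real hardware you would swap the brute-force solver for a D-Wave sampler.

```python
# Hedged sketch: posing a tiny optimization problem as a QUBO for a
# D-Wave-style annealer, using D-Wave's open source dimod package.
import dimod

# Energy: E(a, b) = -a - b + 2ab, minimized when exactly one of a, b is 1.
linear = {'a': -1.0, 'b': -1.0}       # per-variable biases
quadratic = {('a', 'b'): 2.0}         # coupling penalizes a = b = 1
bqm = dimod.BinaryQuadraticModel(linear, quadratic, 0.0, dimod.BINARY)

# ExactSolver enumerates every state; real hardware would use a
# quantum sampler instead.
sampleset = dimod.ExactSolver().sample(bqm)
print(sampleset.first.sample)         # lowest-energy assignment, e.g. {'a': 1, 'b': 0}
print(sampleset.first.energy)         # -1.0
```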

So quantum computing represents a third path forward to strong AI, one that overcomes the speed and cost issues.

How Will This All Shake Out?

The fact is that both neuromorphic and quantum computing are laying out competitive roadmaps for getting to deep learning, and to even newer versions of artificial intelligence, faster and perhaps more easily.

  1. First of all, the timeline.  High performance computing is here today and will likely continue to improve in performance over the next several years, based on new types of chips just now being introduced.  However, the huge investments in new fabs and data centers could be disrupted soon by advances in quantum and neuromorphic computing.
  2. Deep learning platforms exemplified by Google’s TensorFlow and Microsoft’s Cognitive Toolkit (CNTK) are here today, and other competitors will undoubtedly join the effort to win the most users.  These platforms will be adapted to include quantum and neuromorphic back ends as those capabilities spread.
  3. Both Neuromorphic Spiking Neural Nets (SNNs) and quantum computing are just appearing commercially.  Each will offer extraordinary new capabilities to AI.
  4. SNNs promise to be powerful self-learners, opening up vast efficiencies through smaller, unlabeled training sets and the ability to transfer knowledge from one domain to the next.
  5. Quantum computers will all but eliminate the time barrier, and eventually the cost barrier, reducing time-to-solution from months to minutes.  Importantly, the style of learning currently being used is called Enhanced Quantum Computing because it is based on our current deep learning algorithms and enhances their performance.  Yet to come are totally new types of machine learning based on wholly different capabilities unique to these machines.

My personal sense is that with both quantum and neuromorphic computing we are at a point in time much like 2007, the year Google’s Big Table became open source Hadoop.  At first we didn’t quite know what to do with it, but within three years Hadoop had largely taken over advances in data science.  I think the next three years, starting today, are going to be amazing.

For more understanding of these new technologies try these recent articles:

Quantum Computing and Deep Learning. How Soon? How Fast?

Understanding the Quantum Computing Landscape Today – Buy, Rent, or Wait

Quantum Computing, Deep Learning, and Artificial Intelligence

Beyond Deep Learning – 3rd Generation Neural Nets

More on 3rd Generation Spiking Neural Nets

About the author:  Bill Vorhies is Editorial Director for Data Science Central and has practiced as a data scientist and commercial predictive modeler since 2001.  He can be reached at:

[email protected]