Summary: As we become ever more enamored with DNNs, whose accuracy and utility have been paced only by their complexity, we will need to answer the question of whether we will ever really be able to explain what goes on inside.
Added by William Vorhies on May 11, 2020 at 2:41pm
Summary: The ability to train large-scale CNNs directly on your cell phone, without sending the data on a round trip to the cloud, is the key to next-gen AI applications like real-time computer vision and safe self-driving cars. The problem is that our current GPU-based AI chips won't get us there. But neuromorphic chips look like they will.
Added by William Vorhies on June 4, 2019 at 9:00am
Summary: Recurrent Neural Nets (RNNs) are at the core of the most common AI applications in use today, but we are rapidly recognizing broad time series problem types where they don't fit well. Several alternatives are already in use, and one that's just been introduced, ODE net, is a radical departure from our way of thinking about the solution.
Added by William Vorhies on March 11, 2019 at 7:30am
Summary: In the literal blink of an eye, image-based AI has gone from high-cost, high-risk projects to quick and reasonably reliable. C-level execs looking for AI techniques to exploit need to revisit their assumptions and move these up the list. Here's what's changed.
For data scientists, these are miraculous times. We tend to think of miracles as something that occurs instantaneously, but in our world that's not quite so. Still, the rate…
Added by William Vorhies on March 4, 2019 at 9:41am
Summary: Not enough labeled training data is a huge barrier to realizing the equally large benefits that could be had from deep learning applications. Here are five strategies for getting around the data problem, including the latest in One-Shot Learning.
Summary: This may be the golden age of deep learning but a lot can be learned by looking at where deep neural nets aren’t working yet. This can be a guide to calming the hype. It can also be a roadmap to future opportunities once these barriers are behind us.
Summary: There are some interesting use cases where combining CNNs and RNN/LSTMs seems to make sense, and a number of researchers are pursuing this. However, the latest trends in CNNs may make this obsolete.
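The CNN-plus-RNN pairing described above can be sketched in miniature: a per-frame convolutional feature feeds a recurrent state that carries context across frames. This is a toy pure-Python illustration of the architecture's data flow, not any researcher's actual model; the kernel, the recurrence weights, and the toy "frames" are all invented for illustration.

```python
import math

# Toy "video": 4 timesteps, each frame a 1-D signal of length 6.
frames = [[float((t + i) % 3) for i in range(6)] for t in range(4)]

# CNN part (per frame): one 1-D convolution filter followed by
# global max-pooling, reducing each frame to a single feature value.
kernel = [0.5, -1.0, 0.5]

def conv_feature(frame):
    responses = [
        sum(k * v for k, v in zip(kernel, frame[i:i + len(kernel)]))
        for i in range(len(frame) - len(kernel) + 1)
    ]
    return max(responses)  # global max-pool

# RNN part (across frames): a single tanh recurrence over the
# per-frame CNN features, carrying state through time.
def run_sequence(frames, w_h=0.8, w_x=1.0):
    h = 0.0
    for frame in frames:
        h = math.tanh(w_h * h + w_x * conv_feature(frame))
    return h  # final state summarizes the whole sequence

print(round(run_sequence(frames), 4))
```

In a real model both parts would be learned jointly (e.g. a convolutional base feeding an LSTM), but the division of labor is the same: the CNN sees space within a frame, the RNN sees order across frames.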
Summary: Deep Learning, based on deep neural nets, is launching a thousand ventures but leaving tens of thousands behind. Transfer Learning (TL), a method of reusing previously trained deep neural nets, promises to make these applications available to everyone, even those with very little labeled data.
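The reuse at the heart of Transfer Learning can be shown in miniature: keep a "pretrained" feature extractor frozen and train only a small new head on a handful of labeled examples. This is a pure-Python sketch under that assumption; the extractor, toy data, and hyperparameters are all invented for illustration, standing in for a real pretrained convolutional base.

```python
import math

# "Pretrained" feature extractor: frozen, reused as-is. In practice
# this would be the convolutional base of a large trained net; here
# it is a fixed nonlinear map, purely for illustration.
def pretrained_features(x):
    return [math.tanh(x), math.tanh(2 * x - 1)]

# New task head: a tiny logistic-regression layer trained from
# scratch on very little labeled data while the extractor stays frozen.
def train_head(data, epochs=500, lr=0.5):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in data:
            f = pretrained_features(x)
            z = w[0] * f[0] + w[1] * f[1] + b
            p = 1 / (1 + math.exp(-z))
            g = p - y  # gradient of the log-loss w.r.t. z
            w = [wi - lr * g * fi for wi, fi in zip(w, f)]
            b -= lr * g
    return w, b

def predict(x, w, b):
    f = pretrained_features(x)
    return 1 if w[0] * f[0] + w[1] * f[1] + b > 0 else 0

# Only six labeled examples: positive when x > 0.5 (toy task).
data = [(0.0, 0), (0.2, 0), (0.4, 0), (0.6, 1), (0.8, 1), (1.0, 1)]
w, b = train_head(data)
print([predict(x, w, b) for x, _ in data])  # → [0, 0, 0, 1, 1, 1]
```

Because the frozen extractor already produces useful features, the head needs only a few labeled examples, which is exactly the appeal TL holds for teams without large labeled datasets.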
Added by William Vorhies on April 17, 2018 at 12:25pm
Summary: There are several things holding back our use of deep learning methods and chief among them is that they are complicated and hard. Now there are three platforms that offer Automated Deep Learning (ADL) so simple that almost anyone can do it.
Summary: As a profession we do a pretty poor job of agreeing on good naming conventions for really important parts of our professional lives. "Machine Learning" is just the most recent case in point. It's had a perfectly good definition for a very long time, but now the deep learning folks are trying to hijack the term. Come on, folks. Let's make up our minds.
Summary: There's a three-way technology race to bring faster, easier, cheaper, and smarter AI. High Performance Computing is available today, but so are new commercial versions of actual Quantum computers and Neuromorphic Spiking Neural Nets. These two new entrants are going to revolutionize AI and deep learning starting now.
Summary: We are approaching a time when we need to be concerned that our AI robots may indeed harm us. The rapid increase in the conversation about what ethics should apply to AI is appropriate but needs to be focused on the real threats, not just the wild imaginings of the popular press. Here are some data points to help you in thinking about this, what our concerns should be today, and what our concerns should be in the future.
Added by William Vorhies on October 24, 2017 at 9:26am
Summary: We are swept up by the rapid advances in AI and deep learning, and tend to laugh off AI’s failures as good fodder for YouTube videos. But those failures are starting to add up. It’s time to take a hard look at the weaknesses in AI and where that’s leading us.
Added by William Vorhies on April 18, 2017 at 8:04am
Summary: The data science press is so dominated by articles on AI and Deep Learning that it has led some folks to wonder whether Deep Learning has made traditional machine learning irrelevant. Here we explore both sides of that argument.
Summary: Convolutional Neural Nets are getting all the press but it’s Recurrent Neural Nets that are the real workhorse of this generation of AI.
We've paid a lot of attention lately to Convolutional Neural Nets…
Added by William Vorhies on October 24, 2016 at 3:53pm
Summary: Here’s some background on how 3rd generation Spiking Neural Nets are progressing and news about a first commercial rollout.
Summary: What comes next after Deep Learning? How do we get to Artificial General Intelligence? Adversarial Machine Learning is an emerging space that points in that direction and shows that AGI is closer than we think.
Deep Learning's Convolutional Neural Nets (CNNs) have given us dramatic improvements in image, speech, and text recognition over the last two years. They suffer, however, from the flaw that…