Summary: Since BERT NLP models were first introduced by Google in 2018, they have become the go-to choice for many NLP tasks. New evidence, however, shows that LSTM models may outperform BERT in many cases, meaning you may need to evaluate both approaches for your NLP project.
Added by William Vorhies on September 21, 2020 at 12:00pm
Summary: In the blink of an eye, image-based AI has gone from high-cost, high-risk projects to quick and reasonably reliable ones. C-level execs looking for AI techniques to exploit need to revisit their assumptions and move these up the list. Here's what's changed.
For data scientists, these are miraculous times. We tend to think of miracles as something that occurs instantaneously, but in our world that's not quite so. Still, the rate…
Added by William Vorhies on March 4, 2019 at 9:41am
Summary: A shortage of labeled training data is a major barrier to realizing the equally large benefits that deep learning applications could deliver. Here are five strategies for working around the data problem, including the latest in One Shot Learning.
Summary: Remember when we used to say data is the new oil? Not anymore. Now training data is the new oil. Training data is proving to be the single greatest impediment to the wide adoption and creation of deep learning models. We'll discuss current best practice, but more importantly, new breakthroughs in fully automated image labeling that are proving superior even to hand labeling.
Added by William Vorhies on August 28, 2018 at 7:27am