When AI (Artificial Intelligence) Goes Wrong...

The intelligence in AI is computational intelligence; a better term might be Automated Intelligence. When it comes to good judgment, AI is no smarter than the human brains that designed it. Many automated systems perform so poorly that one wonders whether AI stands for Artificial Innumeracy.

Critical systems - automated piloting, running a power plant - usually do well with AI and automation, as considerable testing is done before these systems are deployed. But for many mundane tasks - spam detection, chatbots, spell checking, detecting duplicate or fake accounts on social networks, detecting fake reviews or hate speech, search engine technology (Google), AI-based advertising - a lot of progress remains to be made. It works much like the drunken robot in the video below.

Why can driverless cars recognize a street sign, while Facebook's algorithms cannot recognize whether a picture contains text? Why can't Alexa understand the command "Close the lights" when it understands "Turn off the lights"? Sometimes the limitations of AI simply reflect the lack of knowledge of the people implementing these solutions: they may not know much about the business operations and products, and are sometimes glorified coders. In some cases, the systems are so poorly designed that they can be used in unintended, harmful ways. For instance, some Google algorithms automatically block websites that use tricks to be listed at the top of search results pages. But you can use those same tricks against your competitors to get them blocked, defeating the purpose of the algorithm.

Why is AI still failing on mundane tasks?

I don't have a definitive answer. But tasks that are not critical to the survival of a business (such as spam detection) receive little attention from executives, and even the employees working on them may be tempted to avoid anything revolutionary and keep a low profile. Imagination is not encouraged beyond a limited level. It is a case of "if it ain't broke, don't fix it."

For instance, if advertising dollars are misused by a poorly designed AI system (assuming the advertising budget is fixed), the negative impact on the business is limited. If, on the contrary, it is done well, the upside could be great. The fact is that for non-critical tasks, businesses are not willing to significantly change their routine, especially for projects whose ROI is deemed impossible to measure accurately. For tiny companies where the CEO is also a data scientist, things are very different, and the incentive to have well-performing AI (to beat the competition or reduce workload) is high.




Comment by Kevin Kinsey on October 23, 2017 at 12:12pm

William Vorhies wrote this a couple days ago (in "Have you heard about unsupervised decision trees"):

"We observed that there’s no standard for when the imbalance becomes so great that it ought to be considered an anomaly.  Like great art, you’ll know it when you see it."

I kind of think that "standards" are somewhat at fault for some of the faults in AI.  Human programmers have been raised in a system that rewards a "high score" instead of a "perfect score".  As a case in point (which I'm familiar with):

At OMBE.com we use a sort of AI, perhaps what used to be called an "expert system," to automatically categorize online office product listings based on textual clues in the description, the nature of the seller's business, the price, and so on. When we tested it on small samples (say, a few hundred products), we found it was roughly 94% accurate for wholesale sellers of office machines. That's a "passing grade", even an "A", in most of our school experiences, right?

Extend that to over a hundred thousand products and suddenly a lot of human effort is being spent correcting errors the machine makes (and since it *remembers* its previous decisions, they can be perpetuated for a LONG time)...

AI needs to be *perfect* in order to run at scale/speed in the real world, and we're conditioned to think that getting a 99% on a test is good enough.
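The point about scale can be made concrete with a quick back-of-the-envelope calculation (the sample sizes here are illustrative assumptions, not OMBE's actual figures):

```python
# Back-of-the-envelope: how a "passing grade" accuracy behaves at scale.
def misclassified(n_items: int, accuracy: float) -> int:
    """Expected number of misclassified items at a given accuracy."""
    return round(n_items * (1.0 - accuracy))

# A 94%-accurate classifier looks fine on a few hundred products:
small_sample_errors = misclassified(300, 0.94)    # about 18 items to fix by hand

# But on a full catalog the absolute error count explodes:
catalog_errors = misclassified(100_000, 0.94)     # about 6,000 items to fix
```

The error *rate* never changed; only the volume did, which is why a score that earns an "A" in school can still bury a team in manual corrections.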

Comment by AB Patki on August 21, 2017 at 5:14am

A very timely article and a real need of the hour.

In the past, the theme of AI was primarily the methodologies described in academic books such as Artificial Intelligence: A Modern Approach by Russell & Norvig, which all through neglected the COGNITIVE aspects of AI. The sheer processing power of multicore processors and parallel distributed processing hardware infrastructures was demonstrated and deployed to revive AI interest and funding. This brought limited success and gave rise to the "crises through AI" brought out in the article.

The need to provide
(1) cognitive support, and
(2) Large Scope Computing instead of Large Scale Computing
was undermined. Some journal reviews even suggested this was deliberate. We also noticed that practices recommended by textbooks such as "Fundamentals of the New Artificial Intelligence" by Toshinori Munakata were ignored completely, even among academicians. No other book emphasizes the technical fundamentals of the newer AI areas.

Now that AI systems have failed and crises have surfaced, the search for solutions is on. AI has to overcome academic-community biases.
Well-performing AI can be achieved only by embracing Large Scope Computing in place of the current practice of Large Scale Computing.
This calls for a changeover in hardware processor architecture. Multicore only created the problem of "dark silicon". No hardware exists to support AI, and the existing hardware simply gives up on real AI applications.
Let us look forward to computing hardware that supports Daniel Pink's vision (Why Right-Brainers Will Rule This Century): traditional AI cannot support the migration from the Information Age to the Conceptual Age, and Toshinori Munakata's New AI is the first step in that direction.
Prof. Arunkumar B Patki
COEP, Pune, India
Comment by Bill Schmarzo on August 19, 2017 at 3:47am

This lack of interest from the business might be tied to understanding the business and financial ramifications of Type I and Type II errors - that is, the business and financial costs of false positives and false negatives. As you point out, the cost of showing the wrong website ad is minimal, but the cost of giving a patient the wrong medicine could be catastrophic.

I created a simple matrix to help my clients understand those costs. It may not be entirely accurate, but it at least gets my clients to have that conversation.
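A minimal sketch of that kind of cost matrix, reduced to a formula. The error rates and dollar figures below are illustrative assumptions, not real client data:

```python
# Expected cost = volume x (FP rate x FP cost + FN rate x FN cost).
def expected_error_cost(n_cases: int, fp_rate: float, fn_rate: float,
                        fp_cost: float, fn_cost: float) -> float:
    """Expected business cost of false positives and false negatives."""
    return n_cases * (fp_rate * fp_cost + fn_rate * fn_cost)

# Showing the wrong website ad: both error types are cheap, even at volume.
ad_cost = expected_error_cost(1_000_000, fp_rate=0.05, fn_rate=0.05,
                              fp_cost=0.01, fn_cost=0.02)

# Dispensing medicine: a false negative is catastrophic, so even a tiny
# FN rate dominates the total cost.
rx_cost = expected_error_cost(10_000, fp_rate=0.01, fn_rate=0.001,
                              fp_cost=50.0, fn_cost=100_000.0)
```

Plugging each use case into the same formula is what forces the conversation: the ad system can tolerate sloppy errors, while the medical system cannot, regardless of how similar their accuracy numbers look.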
