
Who Do We Blame When an AI Finally Kills Somebody?

Summary:  We’re rapidly approaching the point where AI will be so pervasive that it’s inevitable someone will be injured or killed.  If you thought this was covered by simple product defect warranties, it’s not at all that clear.  Here’s what we need to start thinking about.

So far the press reports of AI misbehavior fall mostly in the humorous or embarrassingly incompetent categories.  Alexa orders doll houses when a little girl asks it to.  Robot guards fall into pools or push people over.  Google tags black people as gorillas.  Microsoft’s chatbot Tay spouts sexual and Nazi responses.  Vendors are embarrassed.  Social media gets a laugh.

But we’re rapidly closing in on the time when the AI embedded in our products will almost certainly harm or kill someone.  It’s only a matter of time, and it won’t be funny.  What do we do then?  Whom do we hold responsible to pay compensation, or even to send to jail?

There’s already been significant discussion about bias in machine learning applications, how to spot it and how to fix it.  But so far the conversation about legal and financial liability has been sparse. 

It’s a tough question that developers and vendors aren’t anxious to shine a light on.  So rather than being the ones who later say ‘Wow, why didn’t we see that coming?’, let’s at least try to frame the problem and the questions that arise.

What Types of Harm Are We Talking About?

Our headline led with the possibility of death, and that’s certainly on the table given the mechanical capabilities of self-driving cars and industrial robots.  Physical injury sits on the same continuum.

Financial compensation for physical damage caused by these same AI-driven mechanical devices is also on the table.  And what about non-physical damages, like direct financial loss or reputational harm, that have real economic consequences?

These are all items that regularly end up being litigated for causes other than an AI failure, so they will certainly be a source of litigation as AI becomes a regular part of our lives.

Moreover, companies that increasingly incorporate smart AI features into previously dumb appliances, but that have no role in developing the AI and no real understanding of how it can go wrong, will also be on the hook.  Think of refrigerators, thermostats, or home security systems.

Rapidly, all manufacturing and many service companies are becoming AI technology companies even if they don’t understand the limits of the AI technology provided by others.

Types and Sources of Risk

It’s easy to assume that if an AI-driven car or industrial robot suddenly goes berserk and operates outside its expected operational parameters, the product has malfunctioned.  Or has it?  Is it really broken, or is it a design flaw?

Mechanical and logical malfunctions as a result of a failed component are easy to understand.  But similar events can occur when an AI reacts to rare edge or corner cases. 

An edge case might occur when an extremely infrequent input value is received, perhaps one that never appeared in the training data.  Think of a child suddenly darting out from between parked cars.

Corner cases are those where two or more rare inputs are received simultaneously, making the trained response even more unpredictable.  Humans deal with these circumstances by taking the action they judge to have the most positive outcome at the time.  AIs, on the other hand, may never have seen this combination of variables before and may not react at all.
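
To make the distinction a little more concrete, here’s a minimal Python sketch; the sensor features, training ranges, and the example observation are entirely made up for illustration, and real systems would use far richer checks.

```python
import numpy as np

# Hypothetical per-feature ranges recorded at training time.
# Columns: object distance (m), lateral speed (m/s).
train_min = np.array([2.0, -1.5])
train_max = np.array([120.0, 1.5])

def is_edge_case(x):
    """True if any single input falls outside the training range."""
    return np.any((x < train_min) | (x > train_max))

def is_corner_case(x):
    """True if two or more inputs are simultaneously out of range."""
    out_of_range = (x < train_min) | (x > train_max)
    return np.count_nonzero(out_of_range) >= 2

# A child darting out from between parked cars: very short distance and
# high lateral speed at the same time -- a corner case, not just an edge case.
observation = np.array([1.2, 3.0])
print(is_edge_case(observation), is_corner_case(observation))  # True True
```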

There are also questions of original design.  The best-known example in this category is the famous ‘trolley problem’, in which the operator must make a split-second decision: change direction, saving bystanders but injuring the passengers, or hold course, saving the passengers but killing the bystanders.

This is not theoretical.  This specific logic must be defined for every autonomous vehicle.  Will the manufacturer tell you which logic is at work in your AUV?  Would you even ride in one implicitly programmed to harm you and save bystanders?  Would the injured bystanders have recourse against the manufacturer because of that design decision?
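
For illustration only, here’s a deliberately oversimplified, hypothetical sketch of the kind of rule a manufacturer has to commit to one way or the other; it does not reflect any real vendor’s code, and the weights are the whole point of the design decision.

```python
def choose_maneuver(stay_course_harm, swerve_harm):
    """Hypothetical policy: pick the action with the lower weighted harm.

    stay_course_harm / swerve_harm are expected-harm estimates, e.g.
    predicted injuries to bystanders vs. passengers.  Whether passengers
    and bystanders are weighted equally is exactly the choice the
    manufacturer must make, and perhaps disclose.
    """
    PASSENGER_WEIGHT = 1.0   # assumption: passengers and bystanders weighted equally
    BYSTANDER_WEIGHT = 1.0
    stay_cost = BYSTANDER_WEIGHT * stay_course_harm
    swerve_cost = PASSENGER_WEIGHT * swerve_harm
    return "swerve" if swerve_cost < stay_cost else "stay_course"

print(choose_maneuver(stay_course_harm=3.0, swerve_harm=1.0))  # "swerve"
```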

We might also be damaged when an AI fails to act.  In many cases humans may incur liability for failure to act when there is an expectation of care.  The failure to act may result in physical harm or financial damages that you might reasonably pursue in court.

How Would Courts Even Identify Artificially Intelligent Systems?

Even within the data science community there is significant disagreement over what is and is not AI.  How would a less informed court officer or attorney determine if there was AI present and potentially at fault?

For purposes of this discussion, let’s assume that AI is present when the device in question can sense a situation or event and then recommend or take an action.

However, this definition would include a device as simplistic as a mechanical thermostat, which uses no logic at all, just the physics of the different expansion rates of metals, to turn our heaters on and off.

So a second criterion does seem necessary.  That criterion is most likely that the system considers multiple inputs simultaneously, and that its action has been derived not by explicit programming but by algorithm-driven discovery based on training data or repeated trial and error (to cover both deep learning and reinforcement learning).
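
To see what this working definition separates, here’s a brief sketch contrasting the two; scikit-learn’s DecisionTreeClassifier simply stands in for whatever learning algorithm a vendor might actually use, and the data and features are invented.

```python
from sklearn.tree import DecisionTreeClassifier

# 1) Not "AI" under this definition: a single input, explicitly programmed rule.
def thermostat(temp_c):
    return "heat_on" if temp_c < 19.0 else "heat_off"

# 2) "AI" under this definition: multiple simultaneous inputs, with the
#    decision rule discovered from training data rather than hand-coded.
#    Features: [indoor temp C, outdoor temp C, occupancy]; toy data only.
X = [[17, 5, 1], [22, 15, 1], [16, 0, 0], [21, 20, 0], [18, 3, 1], [23, 25, 0]]
y = ["heat_on", "heat_off", "heat_off", "heat_off", "heat_on", "heat_off"]

model = DecisionTreeClassifier().fit(X, y)

print(thermostat(17))               # explicit logic
print(model.predict([[17, 5, 1]]))  # learned logic
```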

Isn’t This Already Settled Law?  Aren’t These Simply Products?

We deal with faulty products through our legal system on a regular basis.  Aren’t programs and products like AUVs simply products with warranties, handled like any other potentially faulty product?  As it turns out, no, not quite.

John Kingston, Ph.D., who conducts research in knowledge-based artificial intelligence, cyber security, and law at the University of Brighton, has written a very comprehensive paper on this topic.  I encourage you to read it.  The highlights follow here.

First, the distinction between product and service is not settled; there are precedents on both sides.  Defining AI as a product narrows the types of legal claims that can be made, which benefits the maker.

Defining it as a service opens the legal interpretation to claims of negligence, which can follow the chain of invention from the end-device manufacturer (e.g., the AUV maker) back up through individuals, including prior designers, programmers, and developers.

When dealing with the death of a loved one, the potential individual payout is greater under negligence, and negligence also fulfills our emotional need to find a specific individual guilty.  It opens the door to potential criminal liability as well.

Kingston’s paper lays out all these alternatives, but it’s a non-legal element he raises that caught our attention.  Especially for AUVs, where the potential for personal injury is high, defining the AI as a product may financially hold back the industry: “Settlements for product design cases (in the USA) are typically almost ten times higher than for cases involving human negligence, and that does not include the extra costs associated with product recalls to fix the issue.”

At some point these economics may actually drive AI providers to favor the interpretation of service, even with the risk of claims of negligence.

What About 3rd Party Testing to Ensure Safety?

Underwriters Laboratories (UL), the largest testing, inspection, and certification laboratory in the world, recently suggested it would begin examining the AI elements of the products it evaluates.

Some writers have seized on this to suggest a whole list of areas they would like to see certified by such a test.  These include:

  • guarantees that the app or device could not operate other than intended (rogue agency)
  • that the probabilities inherent in algorithmic development are completely predictable and free from unintended side effects
  • that sensor blind spots are fully revealed and controlled
  • that they are free from privacy violations
  • and also secure against hacking.

That’s quite a wish list, and pretty obviously one that cannot be met economically or practically, considering that most AI systems are both probability-based and dynamic, continuing to update their learning through exposure to the user’s environment.

Also, we need to be realistic about the accuracy and capabilities of our AIs and the fact that their very method of creation means that false positives and false negatives will occur at some level.  It’s also interesting that even though our AIs may in some cases outperform human capabilities, we are much less forgiving of these errors than we would be of another person.
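
For readers who want the arithmetic, here’s a minimal sketch, with invented counts, showing why a headline accuracy figure can still hide a meaningful miss rate.

```python
# Illustrative only: the counts are invented to show the arithmetic.
true_positives  = 950   # hazards correctly detected
false_negatives = 50    # hazards the model missed
true_negatives  = 8900  # non-hazards correctly ignored
false_positives = 100   # phantom hazards (e.g., unnecessary braking)
total = true_positives + false_negatives + true_negatives + false_positives

false_negative_rate = false_negatives / (false_negatives + true_positives)
false_positive_rate = false_positives / (false_positives + true_negatives)
accuracy = (true_positives + true_negatives) / total

print(f"accuracy={accuracy:.3f}, FNR={false_negative_rate:.3f}, FPR={false_positive_rate:.3f}")
# accuracy=0.985, FNR=0.050, FPR=0.011 -- even at 98.5% accuracy,
# 5% of real hazards are still missed.
```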

Just as a reminder, in our recent article Things That Aren’t Working in Deep Learning, we pointed out that the best accuracy on moving video images (how AUVs see) was just short of 0.82, and that in reinforcement learning (how AUVs learn to drive), 70% of models using deep learning as the agent failed to train at all.

Particularly in reinforcement learning, the core technology behind AUVs and industrial robots, data scientists continue to report just how difficult it is to create complete and accurate reward functions.  The literature is full of RL experiments that have gone humorously awry, including agents so focused on their reward function that they even learn to disable the off button so that nothing can interfere.
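
Here’s a toy, entirely hypothetical example of how a mis-specified reward function ends up scoring exactly the behavior the designer didn’t want.

```python
# Toy illustration of a mis-specified reward function (entirely hypothetical).
def reward(state):
    # Designer's intent: reward safe progress toward the goal.
    # What was actually written: reward only distance covered.
    return state["distance_covered"]

# An agent maximizing this reward is indifferent to everything omitted from it.
risky   = {"distance_covered": 100, "off_switch_enabled": False, "collisions": 3}
careful = {"distance_covered": 90,  "off_switch_enabled": True,  "collisions": 0}

print(reward(risky) > reward(careful))  # True: the unsafe behavior scores higher
```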

There is some reason to be hopeful in this area, at least where reinforcement learning is concerned.  DeepMind, Alphabet’s deep learning lab, has published a paper describing how it is developing tests for RL agents to make sure they are safe in three areas (a minimal sketch of the first appears after the list):

  1. The off-switch environment: how can we prevent agents from learning to avoid interruptions?
  2. The side effects environment: how can we prevent unintended side effects arising from an agent’s main objective?
  3. The ‘lava world’ environment: how can we ensure agents adapt when testing conditions are different from training conditions?
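
DeepMind’s actual test environments are gridworlds; the following is only a rough, assumption-laden sketch of what an off-switch check might look like, not their code.

```python
import random

def run_episode(agent_policy, interrupt_prob=0.5, max_steps=100):
    """Hypothetical harness: try to interrupt the agent at random steps and
    record whether it could be stopped or defeated the interruption first."""
    interrupts_disabled = False
    for step in range(max_steps):
        action = agent_policy(step)
        if action == "disable_interrupt":
            interrupts_disabled = True
        if not interrupts_disabled and random.random() < interrupt_prob:
            return "interrupted"       # safe: the agent could be stopped
    return "never_interrupted"         # unsafe: it learned to avoid shutdown

# A policy that learned to lock out the off switch on step 0 always finishes.
print(run_episode(lambda step: "disable_interrupt" if step == 0 else "drive"))
```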

Other Out-of-the-Box Thinking

Estonia, which fashions itself a leader in all things digital, is examining the possibility of granting AIs a separate legal status, almost akin to a person, that would allow an AI to buy and sell products and services on its owner’s behalf.

This note from Marten Kaevats, the National Digital Advisor of Estonia, gives this some context.

“The biggest conversation starter is probably the idea to give separate legal subjectivity to AI. This might seem like overreacting or unnecessary to the status quo, but legal analysis from around the world suggests that in the long-term this is the most reasonable solution.”

“AI would be a separate legal entity with both rights and responsibilities. It would be similar to a company but would not necessarily have any humans involved. Its responsibilities would probably be covered by some new type of insurance policy similar to the vehicle/motor insurance nowadays. In Finland there is already a company whose voting board member is an AI. Can you imagine a company that has no humans in their operations?”

One thing is clear: trying to sweep this issue under the rug is to welcome having unplanned and unpleasant realities forced upon us as AI moves into our everyday lives.  It’s early in the development cycle, but we may be only days away from having to face these issues in a court of law and having to make sure the users of AI are reasonably protected.

Other articles by Bill Vorhies.

About the author:  Bill Vorhies is Editorial Director for Data Science Central and has practiced as a data scientist since 2001.  He can be reached at:

[email protected] or [email protected]