The section of the blog that fueled the most comments stems from a scene in the movie I, Robot in which Detective Spooner (played by Will Smith) explains to Doctor Calvin (who is responsible for giving robots human-like behaviors) why he distrusts and hates robots. He describes an incident in which his police car crashed into another car and both cars were thrown into a cold, deep river – certain death for all occupants. However, a robot jumped into the water and decided to save Detective Spooner over a 10-year-old girl (Sarah) who was in the other car. Here is the dialogue between Detective Spooner and Doctor Calvin about the robot’s “decision” to save Detective Spooner instead of the girl:
Doctor Calvin: “The robot’s brain is a difference engine. It’s reading vital signs and it must have calculated that…”
Spooner: “It did…I was the logical choice to save. It calculated that I had a 45% chance of survival. Sarah had only an 11% chance. She was somebody’s baby. 11% is more than enough. A human being would have known that.”
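The robot’s purely probabilistic choice can be sketched in a few lines. This is an illustrative toy, not real robotics code; the function name, data structure, and numbers are all hypothetical, drawn only from the survival percentages quoted in the scene.

```python
# Hypothetical sketch of the robot's "difference engine" logic:
# save whoever has the highest estimated survival probability.
# All names and numbers are illustrative, taken from the movie dialogue.

def choose_rescue(candidates):
    """Return the candidate with the highest survival probability."""
    return max(candidates, key=lambda c: c["survival_prob"])

victims = [
    {"name": "Spooner", "survival_prob": 0.45},
    {"name": "Sarah", "survival_prob": 0.11},
]

print(choose_rescue(victims)["name"])  # the purely "logical" choice: Spooner
```

The point of the scene is precisely that this one-line `max()` captures everything the robot considered – and nothing a human would have.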
One of the readers, Warren, shared an MIT site (http://moralmachine.mit.edu/) that lets readers compare their answers with others’ across various autonomous-vehicle life-and-death scenarios. Some of the scenarios are fairly straightforward…unless you’re a cat lover (see Figure 1):
However, the scenarios get increasingly more complex (see Figure 2).
Another reader, Swen, provided an interesting perspective about the potential insurance ramifications related to the “life-and-death” analytic models pre-programmed into the autonomous vehicle:
“But, there is another, very important party involved which has not been mentioned before. It is the very powerful insurance companies. Based on a general “zero law” they will have a very decisive impact on what will be and what not. They will only insure you – and the damage you make – if you have driving software version “XYZ” that complies with their regulations. Else you will not get insured.”
Hacking the Autonomous Vehicle
Maybe my favorite perspective came from Patrick Henz, who shared with me the article “Compliance Tasks Related to Self Driving Technology.” The article poses another challenge facing the autonomous vehicle industry – the hacking of its “life-and-death” analytic models:
“Today chip-tuning is already used to change the management of the engine and find additional horsepower. This is in most cases legal, but liberates the car manufacturer from its guarantee. When self-driving cars are a relevant market, it is a question of time, when programmers will offer software to ensure a higher safety for their owners, programmed preference for the passenger against the pedestrians.”
In the same way that there are after-markets for computer chips that override the engine performance settings that come with the automobile out of the factory, will there evolve an after-market for technicians who can “hack” the life-and-death settings that are pre-programmed into an autonomous vehicle?
We are already seeing situations where customers are resorting to “hacking” their vehicles. Farmers are hacking their John Deere tractors’ firmware in order to perform their own maintenance repairs. They are struggling with the John Deere software license, which only allows Deere dealers and “authorized” shops to perform maintenance repairs on tractors.
According to some farmers, John Deere “charges out the wazoo” for repairs. Plus “authorized” mechanics might not arrive to fix a broken tractor in a timely manner, which can affect a farmer’s operations and eventually, their finances.
Will smart mechanics hack the life-and-death decisions pre-programmed into an autonomous vehicle? Or maybe there’ll be a “Death Selector” user setting in the autonomous vehicle preferences (see Figure 3).
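A pre-programmed (or after-market “hacked”) preference could be sketched as a single weight applied to the same survival-probability logic. Everything below is hypothetical – no real vehicle exposes such a “Death Selector” setting – but it shows how a small parameter change could flip a life-and-death outcome:

```python
# Hypothetical sketch of how a pre-programmed (or hacked) preference
# weight could tip a rescue decision toward the vehicle's own occupants.
# Names, fields, and numbers are illustrative only.

def choose_rescue(candidates, passenger_bias=1.0):
    """Pick the highest-scoring candidate, where score is survival
    probability multiplied by passenger_bias for passengers.
    A bias > 1.0 favors the vehicle's own occupants."""
    def score(c):
        weight = passenger_bias if c["is_passenger"] else 1.0
        return c["survival_prob"] * weight
    return max(candidates, key=score)

people = [
    {"name": "passenger",  "is_passenger": True,  "survival_prob": 0.3},
    {"name": "pedestrian", "is_passenger": False, "survival_prob": 0.6},
]

print(choose_rescue(people)["name"])                      # pedestrian
print(choose_rescue(people, passenger_bias=3.0)["name"])  # passenger
```

With the default bias of 1.0 the model saves the pedestrian (0.6 > 0.3); bump the bias to 3.0 and the passenger wins (0.3 × 3.0 = 0.9 > 0.6). That one number is exactly the kind of setting an after-market “tuner” might change.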
On September 6th, the United States House of Representatives voted to speed the introduction of self-driving cars by giving the federal government authority to exempt automakers from safety standards not applicable to the technology.
I’m not sure how this will end, but I’m certain that this is not an issue that should be decided by technology companies. And now I have concerns about the federal government’s ability to address this issue, given how quick it was to shield automakers from safety liabilities associated with autonomous vehicles.
However, I also know that I don’t want “machines” making these decisions themselves. Machines don’t fear death, and I’m not certain how to program an autonomous vehicle operating system that fully appreciates the moral consequences and ramifications of death.