
Ethics in machine learning is what comes to mind when we imagine a worst-case scenario in the context of artificial intelligence. Think of HAL 9000 from ‘2001: A Space Odyssey’, Skynet from the ‘Terminator’ films, or more recently Ultron from the ‘Avengers’. Sadly, the main premise behind most of these depicted self-aware artificial intelligences is that they become sentient with the sole purpose of destroying the human race to ensure their own survival.

While such a scenario is not impossible, it is thankfully a long way off. There are, however, pressing ethical matters in AI that we need to consider right now. In the Terminator franchise, the machines weren’t frightening because they were learning from data and acting on it; that is the purpose of AI, after all. They were frightening because they were learning from the data and acting on it in an unintended way. These unintended consequences are one of the biggest ethical dilemmas facing the AI community today. This could of course happen because of a programming error, but with AI the error is more likely to occur while specifying or training the model.

Training Data

In the rush to deliver on the hype of cool new AI, Microsoft researchers created Tay, a natural-language-processing experiment: a chatbot that would train on the nuances of teen slang. On March 23rd, 2016, Microsoft released Tay on Twitter, and within 16 hours users aged fourteen to eighteen had turned the bot into a Nazi, forcing Microsoft to pull the experiment. Microsoft later replaced it with Zo, a chatbot that is so far still running and mimicking a teenage girl’s conversational patterns; it is available on most popular chat services.

It is clear that ethics training is essential for AI designers and developers. The technical aspects of a project are not the only things that matter; keeping in mind the users with whom the conversation will occur is a high priority too.

Bias

In June 2015, Google found itself in an embarrassing situation when its new photo-tagging algorithm mistakenly categorized a black couple as gorillas. The engineers responded quickly and apologetically, but the bias had already shown through: there simply weren’t enough people of color in the training data for the algorithm to categorize the photo correctly.


Nikon has run into a similar issue, and the lack of diversity in training data will continue to be a major ethical problem, manifesting itself in ways developers couldn’t imagine. Even when it is unintentional, implicit bias plays a huge role. Comprehensive ethics training should include sections on diversity and implicit bias.
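A crude but useful first defense against the under-representation problem described above is simply auditing the label distribution of a training set before training. The sketch below is a minimal illustration, not a complete fairness audit; the function name, labels, and the 5% threshold are all hypothetical choices for the example.

```python
from collections import Counter

def audit_label_balance(labels, threshold=0.05):
    """Flag classes that make up less than `threshold` of the training set.

    Severe under-representation of a class is one common source of the
    kind of misclassification bias described above.
    """
    counts = Counter(labels)
    total = sum(counts.values())
    # Report only the classes whose share falls below the threshold.
    return {cls: n / total for cls, n in counts.items() if n / total < threshold}

# Hypothetical label list: one class is badly under-represented.
labels = ["cat"] * 480 + ["dog"] * 500 + ["rabbit"] * 20
print(audit_label_balance(labels))  # {'rabbit': 0.02}
```

A real audit would go further, checking representation across demographic attributes and not just class labels, but even a check this simple would surface the kind of gap that caused the mislabeling incidents above.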

Safety

Safety is another major concern, though it should be noted that AI can also greatly improve our safety, from Tesla’s crash-prediction and warning system to the federally mandated positive train control program designed to stop head-on collisions between trains. AI can do things we as humans cannot, and it can step in where humans are prone to error. It must be understood, though, that the more we lend our trust to AI, the more aware we need to be of the inherent risk.

Something like a Roomba may pose no greater harm than scaring the cat or scratching the floor, but when we move up to something like Tesla’s Autopilot the stakes increase dramatically. One man has already been killed because he trusted Autopilot entirely while watching a movie; although the system warned him as many as seven times, he ignored the warnings and died in the crash. Not only must we as engineers integrate safety features into our algorithms, but we must also make sure our users know the risks and how to operate our products safely.
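The kind of safeguard alluded to above can be sketched as a simple escalation policy: warn an inattentive operator, and fail safe by disengaging after repeated ignored warnings. Everything here is a hypothetical illustration of the pattern, not any vendor’s actual logic; the class name, the sensor signal, and the threshold of 7 are assumptions chosen to echo the incident described.

```python
class AttentionMonitor:
    """Toy escalation policy: warn, then disengage after repeated ignored warnings."""

    MAX_IGNORED_WARNINGS = 7  # illustrative threshold, echoing the seven warnings above

    def __init__(self):
        self.ignored = 0
        self.engaged = True

    def tick(self, hands_on_wheel: bool) -> str:
        """Called periodically with a sensor reading; returns the action taken."""
        if not self.engaged:
            return "disengaged"
        if hands_on_wheel:
            self.ignored = 0          # driver responded; reset the counter
            return "ok"
        self.ignored += 1
        if self.ignored >= self.MAX_IGNORED_WARNINGS:
            self.engaged = False      # fail safe: hand control back to the driver
            return "disengage"
        return "warn"
```

The design choice worth noting is that the system never silently tolerates inattention indefinitely: every ignored warning moves it closer to a safe-state handover, which is the engineering half of the obligation; documenting that behavior for users is the other half.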

You may be familiar with Isaac Asimov’s Three Laws of Robotics: first, a robot may not injure a human being or, through inaction, allow a human being to come to harm; second, a robot must obey the orders given to it by human beings except where such orders would conflict with the first law; and third, a robot must protect its own existence as long as such protection does not conflict with the first or second law. Asimov formulated these laws in the 1940s, further laws were added later, and they have been the subject of much debate ever since. But the fact of the matter is that these are not laws enacted by any government or international community; they are guidelines from a science fiction writer.

In reality, autonomous machines that cause harm are already here; we certainly have the technology to develop them. There are extensive debates to be had about the laws we put around autonomous machines made to harm humans. The topic is largely unprecedented, perhaps rivaled only by something as outlandish, yet not inconceivable, as a space war. The only guidance we have going into this uncharted territory is our existing grasp of ethics, which is a problem if we haven’t developed an ethical foundation yet.

So What Are Good Ethics?

Through the previous case studies, you can begin to understand that there is a need for ethics in some form. This is where it gets tricky: we part ways with the binary ones and zeros of machine learning and stumble into the nebulous universe of philosophy. Aristotle argues in the first book of his Nicomachean Ethics that we can all agree the goal of our actions is to attain some good; where we differ is on what that good is. Defining good ethics is still an unsolved problem, but it is nonetheless an urgent matter that must be debated. With advances in processor technology, the sophistication of machine-learning algorithms has improved drastically in recent years. While it feels like AI has been around all along, we are in fact only at the beginning of truly useful AI, and only within the last few years at that.

Morality & Regulation


Regulating AI is hard. As Elon Musk put it:

“I keep sounding the alarm bell but until people see robots going down the street killing people they don’t know how to react because it seems so ethereal”

~Elon Musk 

This quote should capture everyone’s attention: Musk and his ventures use some of the most advanced AI out there, so he is a credible source on the subject. David Ha, a researcher at Google Brain, is no less credible, and he believes the real issues will lie in how humans mask unethical activity using statistics and machine learning. I believe Ha’s concerns are the more pressing in the immediate term, whereas Musk’s are more of a future concern. At any rate, governments will have a hard time regulating AI.

In October 2012, an Italian court sentenced six scientists to six years in prison for manslaughter, for failing to predict an earthquake that killed more than 300 people. The sentence was eventually overturned, and scientists argue that it set a dangerous precedent that may well discourage others from important work for fear of retaliation over wrong answers. There is a balance to be struck between discouraging negligent behavior and encouraging research and discovery. This is likely to be an uphill moral battle.

"Computer scientists and engineers must examine the possibilities for machine ethics because, knowingly or not, they’ve already engaged in some form of it."

~J.H. Moor

This is true: any time you build these very advanced capabilities, you are engaging in some kind of ethical, or for that matter unethical, behavior. It therefore becomes necessary to integrate ethics training into the core curriculum of computer science, and perhaps into all technical curricula. Taking this small step can have a big impact on future generations of researchers and engineers.

In Conclusion

Silicon Valley pioneers forging the bleeding edge of AI, and for that matter anyone involved in designing this kind of technology, need to take it upon themselves to institute strong ethical standards. Creating an overall culture of high ethical standards sets a solid foundation on which AI can thrive without causing harm, intentional or unintentional. But the time for consideration is now: if we wait any longer to bring ethics into the conversation, it may be too late.

Article originally posted on mlbits.com by Youness ECHCHADI
