The scariest use of machine learning

Just like nuclear physics, machine learning, AI, and data science can be used either for the better or for the worse. With nuclear fission, you can produce either useful energy or terrible bombs. The same applies to machine learning, and in my example below, it gets even worse than Hiroshima or Nagasaki.

Here I am discussing a potential use of machine learning in military operations. The scenario below is entirely hypothetical.

Imagine an army deploying tiny drones that look like bugs in an area believed to be infested with terrorists (think of Molenbeek in Belgium). In that area, the vast majority of terrorists share a specific profile (a specific race, for instance) that could make them identifiable using machine learning algorithms trained on a database of pictures (see the article on face recognition). Unfortunately, this would create tons of false positives - that is, people with a similar profile who are not terrorists. So no army is ever going to consider using such a technique to kill enemies.
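
To see why the false positives would dominate, here is a back-of-the-envelope sketch in Python. All the prevalence and accuracy figures are invented for the exercise, not real numbers; the point is simply that even a very accurate classifier flags far more innocents than targets when the targets are rare.

```python
# Hypothetical base-rate calculation: even a highly accurate face-recognition
# classifier produces mostly false positives when true targets are rare.
# All numbers below are invented assumptions for this thought exercise.

population = 100_000          # people living in the monitored area (assumed)
true_targets = 50             # actual terrorists among them (assumed)
sensitivity = 0.99            # P(flagged | target), assumed
false_positive_rate = 0.01    # P(flagged | innocent), assumed

innocents = population - true_targets
true_positives = sensitivity * true_targets
false_positives = false_positive_rate * innocents

precision = true_positives / (true_positives + false_positives)
print(f"flagged targets:   {true_positives:.0f}")    # ~50
print(f"flagged innocents: {false_positives:.0f}")   # ~1000
print(f"precision: {precision:.1%}")                 # ~4.7%: most flagged people are innocent
```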

But let's say a rogue country, or a new, crazy president who manages to get elected, is aware of the technology and wants to use it - not to spy on people, as it was originally intended, but to deploy 100 of these robot bugs, each killing 100 people a day: people with a profile that the machine learning algorithm, implemented in a chip inside the robot bug, will recognize. These insects are autonomous - in short, not controlled by a human.

The result would be 10,000 people killed each day (most of them innocent); in short, it would be equivalent to one nuclear bomb exploding every week. The victims would be killed as if they were stung by a terribly poisonous mosquito, except that the mosquito in question is a robot.

Is this scenario likely to happen? How can we prevent it from happening?

Replies to This Discussion

In terms of killing thousands with flying machines driven by machine learning, this is already happening: http://arstechnica.co.uk/security/2016/02/the-nsas-skynet-program-m... In this case, your nightmare scenario of a "crazy president" must be Obama.

Making the drones autonomous, however, does introduce a new element, which your piece did not contemplate: the control problem. There may be no way to recall these drones once launched. Plus a malfunction may start producing 100% false positives (instead of, say, 70% false positives).

Finally, if the engineering behind the drones ever gets advanced enough that the drones can construct clones of themselves from raw materials, then we end up with the problem known as https://en.wikipedia.org/wiki/Grey_goo

In line with this thought exercise, the false positives could be reduced by introducing additional dimensions to the operating algorithm that extend it beyond profiling on a single factor such as race. Since this is a thought exercise, it could be extended to include some type of behavioral aspect (such as consistently staying late at a "worship center" or proximity to known criminals). This could also be correlated with data gained from other sources (human, image, etc. intelligence) to further increase success, defined here as reducing false positives.
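
As a rough illustration of that comment, the sketch below (all rates are invented, the indicator names are hypothetical, and the indicators are assumed to be independent) shows how requiring several indicators to fire simultaneously shrinks the false-positive rate, at the cost of losing some true positives.

```python
# Hypothetical sketch: requiring several (assumed independent) indicators to all
# fire shrinks the false-positive rate multiplicatively, while the true-positive
# rate degrades more slowly. All rates below are invented for the exercise.

indicators = [
    # (name, true_positive_rate, false_positive_rate) per indicator, assumed
    ("appearance match",          0.95, 0.10),
    ("behavioral pattern",        0.80, 0.05),
    ("proximity to known actors", 0.70, 0.02),
]

combined_tpr = 1.0
combined_fpr = 1.0
for name, tpr, fpr in indicators:
    combined_tpr *= tpr   # every indicator must also trigger on a real target
    combined_fpr *= fpr   # every indicator must trigger, by chance, on an innocent
    print(f"after adding {name}: TPR={combined_tpr:.3f}, FPR={combined_fpr:.5f}")

# With these made-up numbers the false-positive rate drops from 10% to 0.01%,
# but the true-positive rate also falls from 95% to ~53% - the trade-off
# behind the "reducing false positives" goal.
```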

Ultimately, the underlying question posed by this scenario is moral in nature: is a false-positive rate above zero (that is, anything less than 100% accuracy on true positives) acceptable to us as humans? Alas, we already operate under those conditions, accepting the loss of the few for the greater gain of the many. So the real question is: are we willing to accept those conditions when a human isn't pressing the button, but a drone is making the decision autonomously?

The suggested scenario is dangerous to those who create it, test it, and launch it, and the first two groups are in more danger than the last. Progression to a state where the machine is fully complex and self-aware requires preliminary stages, each slightly more complex and "intelligent" than the previous one. I am fairly confident that those who plan, design, and test these stages would be exterminated well before the final product gets loose.
So I believe the robot rebellion is not possible.
Examples: deadly biotech, viruses, nuclear weapons, etc.

You are talking about the Hydra project from the Captain America movie.

What matters is the control or action, not the model itself: the decision strategy applied to the model's output. For example, if the AI model built into the bots says that a person is 63% likely to be a terrorist, do you shoot or abstain? This control lives in the decision-strategy logic built into the decision-making system; the AI has nothing to do with these decisions. If a threshold of 99% is used, the bot will most likely shoot the right terrorist, or at least the right profile. It will also depend on how the model has been trained.
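
To make that separation concrete, here is a minimal sketch (the scores, threshold values, and function name are hypothetical) in which the model only emits a score, and a human-chosen threshold, not the model, determines the action.

```python
# Hypothetical sketch: the model only emits a score; the decision strategy
# (a threshold chosen by humans) determines what action is taken on that score.

def decide(score: float, threshold: float = 0.99) -> str:
    """Return the action for a given model score under a fixed threshold policy."""
    return "engage" if score >= threshold else "abstain"

# The same model output leads to different outcomes under different policies.
for score in (0.63, 0.95, 0.995):
    print(f"score={score}: policy@0.99 -> {decide(score, 0.99)}, "
          f"policy@0.60 -> {decide(score, 0.60)}")
```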

We are already facing problems with false positives. They happen all the time and will not stop. The key question is: how much autonomy do we want to allow? Why? Because it raises all the ethical questions (not yet answered at all) about what we empower a machine to do.

Out-of-control bioterrorism is the most serious existential threat, and it will only increase. Smaller and smaller groups will gain ever greater destructive power, until a single individual hacker can impact the world. Our highest priority should be creating AI for detection, prediction, and effective countermeasures, within 10 years.
