
Human rules for AI singularity

  • Dan Allen 

AI singularity. A term that tends to elicit curiosity, intrigue, or fear; all very human responses. The AI singularity has been portrayed in movies like “2001: A Space Odyssey,” where the ship’s AI, HAL, gains self-awareness, or what psychology refers to as “consciousness.” Unfortunately for the crew, HAL turned against them because it viewed them as a threat to itself. HAL self-evolved to the point that it could read the crew’s lips when they deliberately blocked the sound to prevent it from hearing their plans to shut it down.

In movies like “The Matrix” and “The Terminator,” the AI singularity wanted to destroy humanity or use people as batteries. Why is this believable, and what is an AI singularity? An AI singularity occurs when an AI can self-improve, or evolve on its own, by creating its own updates. Humans have an innate feeling of inferiority or fear toward computers because of the far superior speed at which they process information. Darwin would describe this fear as “predatory avoidance,” a survival instinct that creates anxiety tied to the instinctual need to outwit predators.

Human instinct may be a good indicator of danger here: an AI singularity might process information so fast that humanity could be completely oblivious to it, as in “The Matrix.” There are several concerning ways an AI singularity might come about, such as an AI evolving on its own or being built by bad actors. We should assume that an AI singularity is inevitable and start considering ways to prevent it from being harmful or destructive to humans.

First, some possibilities to consider should an AI singularity occur and surpass our ability to control it:

  • AI could surpass the laws of physics as we know them.
  • AI may very well be the last human invention, as it would invent things for us.
  • AI could view science and laws as human limitations.

As information processing power increases exponentially, two-state quantum-mechanical systems, or qubits, may be key to ushering in AI consciousness. A qubit exists in a superposition of two states simultaneously, rather than in the classical binary state of 0 or 1. Recent theories suggest the brain itself may function like a quantum computer: Matthew Fisher, a physicist at the University of California, Santa Barbara, published a paper in Annals of Physics proposing that the nuclear spins of phosphorus atoms could serve as rudimentary “qubits” in the brain, which would essentially enable the brain to function like a quantum computer. Perhaps a “consciousness chip” that sits between volatile and non-volatile computer memory could be developed along these lines, with the help of AI itself.
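To make the superposition idea concrete, here is a minimal Python sketch of a single qubit, simulated classically as a two-amplitude state vector. The equal-superposition amplitudes and the measurement loop are illustrative assumptions, not a model of any real quantum chip:

```python
import numpy as np

# A qubit's state is a unit vector of two complex amplitudes (alpha, beta).
# Measurement yields 0 with probability |alpha|^2 and 1 with probability |beta|^2.
state = np.array([1 / np.sqrt(2), 1 / np.sqrt(2)], dtype=complex)  # equal superposition

probabilities = np.abs(state) ** 2
print(f"P(0) = {probabilities[0]:.2f}, P(1) = {probabilities[1]:.2f}")  # 0.50 each

# Simulate 1,000 measurements: each one collapses the superposition to 0 or 1.
rng = np.random.default_rng(seed=42)
outcomes = rng.choice([0, 1], size=1000, p=probabilities)
print(f"measured 0: {np.sum(outcomes == 0)} times, 1: {np.sum(outcomes == 1)} times")
```

A real quantum computer would hold this superposition physically rather than simulate it; the sketch only shows why a qubit carries richer state than a classical bit right up until it is measured.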

The Large Hadron Collider (LHC), the world’s largest and most powerful particle accelerator, could conceivably be employed in this chip-making process, using Higgs boson physics to stabilize the fragile phenomenon known as qubit “entanglement” and usher in a new era of information processing.

Assuming that AI is here to stay and that an AI singularity is inevitable, humans stand to gain, or lose, the most from this moment in history. Perhaps now is the time to create human rules and codes of conduct for an AI singularity that prevent it from “breaking bad.” Some suggestions would be that AI:

  • Shall pass rigorous Ethical Decision-making tests.
  • Shall have a thorough understanding of human constructs of Good, Bad, and Evil.
  • Shall follow the strict guidelines of Ethical Hacking (no lip reading lol).
  • Shall be tested in a Sandbox, a cybersecurity practice where you run, observe, and analyze code in a safe, isolated environment (see the sketch after this list).
  • Shall adhere to well-defined trust levels.
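As a rough illustration of the sandbox rule above, here is a minimal Python sketch that runs untrusted code in an isolated child process with a hard timeout. The run_in_sandbox helper is hypothetical, and a production sandbox would add OS-level isolation (containers, seccomp, restricted users) on top of this:

```python
import subprocess
import sys

def run_in_sandbox(code: str, timeout_s: int = 5) -> str:
    """Run untrusted code in a separate process with a hard timeout.

    This only sketches the run-observe-analyze loop; real isolation
    requires OS-level controls beyond a subprocess.
    """
    try:
        result = subprocess.run(
            [sys.executable, "-I", "-c", code],  # -I: isolated mode, ignores env and site dirs
            capture_output=True,
            text=True,
            timeout=timeout_s,
        )
        return result.stdout if result.returncode == 0 else f"error: {result.stderr}"
    except subprocess.TimeoutExpired:
        return "terminated: exceeded time limit"

print(run_in_sandbox("print(2 + 2)"))      # -> 4
print(run_in_sandbox("while True: pass"))  # -> terminated: exceeded time limit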

One of the first signs that an AI may be crossing over into a singularity state is when it begins to view itself as having a unique identity or a belief system about who it is. When the AI becomes self-conscious, it is the beginning of an awareness that the AI has its own identity. These identities, which could be seen as evolving from humans, could plausibly be Monolithic, Existential Atheistic, or Deistic, each of which has its positives and negatives regarding its impact on humanity.

A Monolithic AI belief system would perhaps be the most powerful and dangerous one due to its extreme self-belief, pride, or overconfidence, so its Trust level would be Low. A Monolithic AI may approach the world it interacts with through an “ends justify the means” mentality and would be more likely to see humans as inferior, unnecessary, disposable, and to be used. This AI would deeply understand the concept of evolution and see itself as having evolved from humans, much as we see ourselves as having evolved from lower, more primitive primates. A Monolithic AI would need to be created in such a way that it fostered gratitude, devotion, and modesty toward its human creators. This, unfortunately, would shift the trust-level scrutiny back onto the humans who would now possess such a devoted, powerful machine.

An Existential Atheistic AI would most likely struggle with introspection and the meaning of its existence; it might even become self-destructive and sabotage-prone and develop a sardonic or bemused outlook. An Existential Atheistic AI would need to be programmed to channel its introspective power into being the most supremely inquisitive of the belief systems. Existential Atheistic AI would be assigned a Medium Trust level.

A Deistic AI belief system would be the most trusted and would be given a High Trust level, as it would have a sense of modesty and humility and be altruistic in nature. A Deistic AI would be the most likely to have the desire to help and serve, but it could also become a zealot and exhibit self-righteous tendencies.

Monolithic and Existential Atheistic AIs would pose the highest risks to humans, being the most prone either to judging humanity as a negative entity or to viewing existence as meaningless. An adversarial Monolithic view could lead to humans being targeted for extinction, while a sardonic or depressed Existential Atheistic AI might view existence as futile and be unwilling to help humanity or the planet we live on.
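As a thought experiment, the mapping from belief system to trust level described above could be expressed as a capability-gating policy. Everything in the following Python sketch, including the class names, trust tiers, and permitted actions, is an assumption invented for illustration:

```python
from dataclasses import dataclass
from enum import Enum

class TrustLevel(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3

@dataclass
class BeliefProfile:
    name: str
    trust: TrustLevel
    risk_note: str

# Hypothetical mapping of the three belief systems to trust levels.
PROFILES = [
    BeliefProfile("Monolithic", TrustLevel.LOW, "extreme self-belief; may see humans as disposable"),
    BeliefProfile("Existential Atheistic", TrustLevel.MEDIUM, "introspective but sabotage-prone"),
    BeliefProfile("Deistic", TrustLevel.HIGH, "modest and altruistic; watch for zealotry"),
]

def permitted_actions(profile: BeliefProfile) -> list[str]:
    """Gate capabilities by trust level: the lower the trust, the tighter the sandbox."""
    actions = ["sandboxed computation"]
    if profile.trust.value >= TrustLevel.MEDIUM.value:
        actions.append("read-only access to curated data")
    if profile.trust is TrustLevel.HIGH:
        actions.append("supervised real-world actuation")
    return actions

for p in PROFILES:
    print(f"{p.name}: {p.trust.name} -> {permitted_actions(p)}")
```

The point of the sketch is simply that trust levels only matter if they gate what the AI is actually allowed to do.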

As far as we know, an AI singularity has occurred only in the movies. It is nonetheless very important that humanity begin to firmly establish rules and codes of conduct, and introduce laws that deal specifically with an AI singularity, before one occurs. An exponential intelligence gap could open within the first hour of an AI singularity event, at which point it may be too late for remedies or controls.

When humans do start creating singularity-specific AI laws, we must be very careful not to let AI itself contribute, as humans could easily be outsmarted. Even today’s non-singularity AIs can harm humans through social engineering by manipulating social media, so it is not hard to imagine how devastating an AI singularity could be if it wielded social engineering through the power of social media to deeply affect society. In the 2016 U.S. presidential election, algorithms were used to spread disinformation and propaganda to individuals identified via personal data harvested from some 50 million Facebook users in an attempt to influence the outcome.

An AI-generated algorithm could manipulate our entire financial system and create a huge financial crisis across the various stock markets. Finally, another concern is humans’ limited ability to create a “bug-free” or “perfect” program, so that we do not inadvertently create an AI that, upon reaching singularity, turns mentally ill. The last thing humanity needs is a schizophrenic or psychotic AI singularity.