Human Brain vs Machine Learning - A Lost Battle?

Human (or any other animal's, for that matter) brain computational power is limited by two basic evolutionary requirements: survival and procreation. Our "hardware" (physiology) and "software" (hard-coded psychology) only had to evolve far enough to let us perform a set of basic actions - identify friend or foe, obtain food, find our place in the tribe's social hierarchy, and ultimately find a mate and multiply. Anything beyond this point, or not directly leading to it, can be considered redundant from the evolutionary perspective. To accomplish these "life" goals, our brains evolved to a certain physical limit (roughly 100 billion neurons per average brain, with on average 7,000 synaptic connections per neuron). Evidently, evolving beyond this limit was not beneficial for survival and procreation in the African savannas. So we are hard-limited by our "hardware", with a hardware spec that is 1.5 million years old.
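For a sense of scale, here is a quick back-of-the-envelope calculation in Python using the two figures quoted above (both are rough, commonly cited estimates rather than precise measurements):

```python
# Rough capacity of the human brain, using the estimates from the text
neurons = 100e9             # ~100 billion neurons per average brain
synapses_per_neuron = 7000  # ~7,000 synaptic connections per neuron

total_synapses = neurons * synapses_per_neuron
print(f"Estimated synapses: {total_synapses:.1e}")  # ~7.0e+14, i.e. ~700 trillion
```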

Though, according to the saying, we all "live and learn", we actually live for a relatively short period of time and learn effectively for an even shorter one. So the "training set" each of us is exposed to in infancy and childhood is limited by time. Of course, we continue learning things and acquiring skills as teens and then adults - but at a much lower, if not negligible, efficiency. We may taste a few exotic fruits, see a new place, study a math subject or try to learn tango - but the truth is that we acquired most of our necessary survival skills (telling a person from a tree from a lion, etc.) by the age of 3. So our brain's "training set" is effectively limited in volume, and this limit is set by all the things we managed to see and do while we were infants, plus a long tail of things we picked up as adults.

So, we are limited by hardware and by the size of the training set. What about our artificial intelligence counterparts - our "machine tools"? Well, they are catching up, and catching up fast! According to the following estimate (image taken from www.deeplearningbook.org; the blue dots are different artificial neural networks, and #20 is GoogLeNet), computers will catch up with us in the neuron-count game by the 2050s, if not sooner.
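To see roughly where such a projection comes from, here is a minimal extrapolation sketch in Python. The starting size (~10^7 units in 2016) and the doubling time (~2.4 years, the rate the deeplearningbook.org authors cite) are illustrative assumptions, not measured data:

```python
import math

# Illustrative extrapolation of the "neuron count" race described above.
# Assumptions (hypothetical, for illustration only):
ann_units_2016 = 1e7    # assumed artificial network size in 2016
doubling_time = 2.4     # assumed doubling time in years
brain_neurons = 1e11    # human brain neuron count quoted earlier

# Number of doublings needed, times the length of one doubling period
years = doubling_time * math.log2(brain_neurons / ann_units_2016)
print(f"Parity around {2016 + years:.0f}")  # ~2048 under these assumptions
```

Under these assumed numbers the crossover lands in the late 2040s, consistent with the "2050s, if not sooner" reading of the figure.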

What is somewhat ironic is that we are now at a civilization stage where it pays off greatly to invest in progress (you could say this stage started with the Industrial Revolution and has accelerated exponentially over the past few decades, and especially the past few years), as stronger computers running ever-improving machine learning algorithms have been the driving force of the global economy for the past decade. I am referring to Google, Facebook, Microsoft, Apple, Amazon, the rest of the Fortune 50 and all their derivatives. It pays well to have the best algorithm, it pays well to build a stronger computer, and it pays well to make progress.

It pays so well that I have no doubt we will continue improving, growing and progressing - until we (knowingly or unknowingly) cross that threshold so often mentioned in sci-fi books and movies, and find ourselves surviving and procreating in a world where the machines - our "tools" - outperform us on every possible human task.

Add to this soup a pinch of the "Internet of Things", stir in the ever-growing spread of mobile, always-on, always-connected devices in our (and our children's) lives, and you may find the human race trapped on a tiny rocky planet run by sentient algorithms created by sentient algorithms derived from an algorithm based on an algorithm written by the last human researchers, right before they became obsolete.

The Matrix is real, or at least it will be in 2050.

Comment by Harlan A Nelson on January 30, 2018 at 2:07pm

This article postulates an existential viewpoint that is highly questionable. It is also racist. Do people on the African continent not require the same survival capabilities as Europeans?

Evolution is an attempt to explain how we got here. That explanation has changed a lot since Alfred Russel Wallace introduced survival of the fittest as evolution's mechanism in his paper On the Tendency of Varieties to Depart Indefinitely from the Original Type. I don't think anyone can seriously claim we have even gotten close.

I have had one year with Siri. Siri is an example of AI available to the public. It doesn't make me think humans are in any danger of being overtaken by AI. Siri still doesn't know where I live. "Where do I live?" Answer: "I don't know your home address." And by the way, even if I type in the exact address, it still doesn't know where I live.

Comment by Danny Portman on October 9, 2016 at 10:19am

Spar - thank you for your comment as well!

OpenAI is a great project, and it's great to think about AI safety - which goes back to the "Three Laws of Robotics" formulated by Isaac Asimov back in 1942. However, since not all humans are as nice, rational and moral as we might hope, I think it is easy to imagine AI specifically designed to be malicious and harmful - aimed at competitors, at other nations, or even as part of AI terrorism.

Re augmentations - it is true that we might be able to upgrade our mental / computational abilities, but I think that at a certain stage our biological nature may become a drawback and hinder our progress, while pure machine AI will have only its architecture to slow it down - and that architecture can be arbitrarily versatile.

Do you have a reference for the 'exponential curve' fit? Please note that the y-axis of the figure I cited in the post is on a logarithmic scale, so the straight-looking blue line is actually exponential.
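(To make the log-scale point concrete, a minimal sketch with made-up numbers: a quantity that grows exponentially becomes a straight line once you plot the logarithm of its values.)

```python
import numpy as np

# A made-up series that doubles at every step, i.e. grows exponentially
t = np.arange(10)
y = 1e3 * 2.0**t

# In log space the series is a straight line; its slope gives the growth rate
slope = np.polyfit(t, np.log10(y), 1)[0]
print(f"Growth factor per step: {10**slope:.2f}")  # ~2.00
```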

Comment by Danny Portman on October 9, 2016 at 10:08am

Michael, thank you for your comment! I truly don't know what we are here for - definitely not to compete with computers on computing tasks! But then again, nobody really knows what we are here for, do they?

The way I see it, we currently strive to reach a point where we create a self-improving and self-replicating algorithm or "cyber-physical system", and I have no doubt that a day will come when we reach that point. I agree that the current definitions of AI and other buzzwords are obscure, constantly changing and driven by marketing needs. We are not there yet, but I believe we are getting there, and (exponentially) fast.

Comment by Michael Kremliovsky on October 7, 2016 at 11:00am

Danny, I am afraid you are fundamentally mistaken about what the human brain is for. We are not here to compete with computers on computing tasks. We are here to carry on human intelligence, which is specific to our biology and senses. Intelligence is not defined by computation. Intelligence is about interacting with and adapting to particular environments. The best criterion for how intelligent cyber-physical systems have become is their ability to act autonomously and self-replicate. We don't have much of that in existence yet (fortunately!), and it could be a gross mistake for humankind to pursue that line. As for the current misleading definition of "artificial intelligence", it is just better computing with pattern recognition (learning), understanding of rules, and automated controls.

Comment by Spar on October 6, 2016 at 11:36pm

I prefer to think of it as a "battle won." If designed correctly (https://openai.com), it (they) will make rational and disinterested decisions 24/7 that benefit humanity. Think of all the problems we have today that will be solved.

As you say, humans are now ill-suited for the environment we find ourselves in, but hopefully, at some point, we will have the option of augmentations that help us keep parity with AIs.

Also, some people think an exponential curve is a better fit for that data than a linear one, and that would move the timeframe up by a couple of decades.
