Do you think that one day humans will find a way not to work and simply enjoy life, relying on robots to meet their needs, much as dogs no longer have to spend their time finding food or protecting themselves against nature, disease, and other threats, because humans take care of them?
Here is an interesting answer posted by David Taylor (PhD in psychology, UC Irvine):
Here, in summary form, are my reasons for believing that this will happen.
1. Neurons are excruciatingly slow compared to silicon. Depending on the way you do the comparison, current computers are something on the order of a million times faster than brains. So if you could get the same cognitive processes to work in silicon, dealing with the resulting AI would be like trying to run fast enough to keep up with a rocket.
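The order-of-magnitude comparison in point 1 can be checked with a line of arithmetic. A minimal sketch, assuming a firing rate of a few hundred hertz and a clock of a few gigahertz; both are round illustrative figures, not measurements:

```python
# Rough arithmetic behind the speed comparison above.
# Both numbers are order-of-magnitude assumptions, not measurements.
neuron_hz = 200    # a neuron fires at most a few hundred times per second
silicon_hz = 3e9   # a modern CPU core cycles a few billion times per second

ratio = silicon_hz / neuron_hz
print(f"silicon is roughly {ratio:.0e}x faster")
```

On these assumptions the ratio comes out above ten million, so "a million times faster" is, if anything, conservative.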
2. I strongly believe that we are physical beings, just like the other animals on the planet, and that we do not contain a separate, non-physical entity called a mind. Simply put, what we call the mind is nothing more or less than the brain’s inner perspective on its own workings. So we don’t have to create something as nebulous as a mind; we just have to create a brain. And I don’t see any hint of a brain process that could not be replicated in silicon.
3. Artificial neural networks provide the basic architecture for a silicon brain. Using just the most rudimentary principles of neural functioning, we now have machines that can beat humans at chess and Go. Moreover, they do it in a way that suggests deeper insights and greater creativity than any symbol-based program of the past. And they do it by teaching themselves, starting with nothing more than the rules and the goal of the game.  Imagine what can be done when we start to incorporate more sophisticated aspects of brain functioning.
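The "teach themselves from nothing but the rules and the goal" idea in point 3 can be illustrated at a much smaller scale. Below is a toy sketch, invented here purely for illustration and not the method of any real system: a tabular self-play learner for one-pile Nim (take 1 or 2 stones; whoever takes the last stone wins). It is given only the rules and the win condition, and improves by playing against itself.

```python
import random

# Toy self-play learner for 1-pile Nim (take 1 or 2 stones; taking the
# last stone wins). The agent gets only the rules and the goal and
# improves by playing itself -- a drastically simplified illustration
# of the self-play idea, not any real system's algorithm.

def legal_moves(stones):
    return [m for m in (1, 2) if m <= stones]

def train(episodes=20000, start=10, eps=0.2, alpha=0.1, seed=0):
    rng = random.Random(seed)
    # value[s] = estimated chance that the player to move from s wins
    value = {s: 0.5 for s in range(start + 1)}
    value[0] = 0.0  # no stones left: the player to move has already lost
    for _ in range(episodes):
        s = start
        history = []  # states from which a move was made, in order
        while s > 0:
            moves = legal_moves(s)
            if rng.random() < eps:
                m = rng.choice(moves)  # explore
            else:
                # greedy: move to the state worst for the opponent
                m = min(moves, key=lambda m: value[s - m])
            history.append(s)
            s -= m
        # The player who made the last move won; back up the result,
        # alternating win/loss as we step back through the game.
        result = 1.0
        for s in reversed(history):
            value[s] += alpha * (result - value[s])
            result = 1.0 - result
    return value

def best_move(value, stones):
    return min(legal_moves(stones), key=lambda m: value[stones - m])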
4. The human brain has 86 billion neurons with thousands of synapses for every neuron, so building a silicon brain on that scale will be a challenge. But the neural networks cited above are already using graphics processing units (GPUs) to simulate billions of neurons, and the neural hardware now coming onto the market will allow these networks to grow larger still.
5. We humans are weak. If we can build something powerful, no matter how dangerous, we will do it. Even if that something poses an existential threat to its creators. Building a machine with human-level intelligence will make its creators feel like gods, and there is no resisting that kind of pull. We can rail against it, and we can even pass laws against it (which some people are calling for even now), but all that will do is push the effort underground and beyond regulation.
6. Once we have created a machine with intelligence approaching our own, it will be smart enough to take over the process of designing future versions of itself. That will be a positive feedback loop that will accelerate rapidly and leave us standing there wondering what happened.
This answer leaves many vital questions to be addressed. My favorite one is: Will the evolution of machine superintelligences necessarily be a bad thing? I have thoughts about that, but I’ll hold off on that answer until I see the question asked.
Technology comes with a number of advantages that have the power to substantially improve the way we live and do business. The most important benefits of AI are these:
In the future, both these robots and humans altogether will work on many beneficiary projects or reduce risks in all the sectors, especially in the military sector.