Do you think that one day humans will find a way to stop working and simply enjoy life, relying on robots to meet their needs, much like dogs, which no longer have to spend their time finding food or protecting themselves against nature, disease, and other threats because humans take care of them?
Here is an interesting answer posted by David Taylor (PhD in psychology, UC Irvine):
Here, in summary form, are my reasons for believing that this will happen.
1. Neurons are excruciatingly slow compared to silicon. Depending on the way you do the comparison, current computers are something on the order of a million times faster than brains. So if you could get the same cognitive processes to work in silicon, dealing with the resulting AI would be like trying to run fast enough to keep up with a rocket.
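The "million times faster" claim in point 1 can be sanity-checked with rough numbers. The firing rate and clock speed below are illustrative assumptions of mine, not figures from the answer:

```python
# Back-of-the-envelope comparison of signaling rates.
# Assumptions (mine): a cortical neuron fires at most a few hundred
# times per second; a modern CPU core cycles at a few GHz.
neuron_rate_hz = 200     # assumed peak neuron firing rate
cpu_clock_hz = 3e9       # assumed 3 GHz clock

speedup = cpu_clock_hz / neuron_rate_hz
print(f"Silicon clock vs. neuron firing rate: ~{speedup:,.0f}x")
```

With these assumptions the ratio lands in the tens of millions, so "on the order of a million times faster" is, if anything, a conservative reading.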
2. I strongly believe that we are physical beings, just like the other animals on the planet, and do not include a separate, non-physical entity called mind. Simply put, what we call the mind is nothing more or less than the brain’s inner perspective on its own workings. So we don’t have to create something as nebulous as a mind; we just have to create a brain. And I don’t see any hint of a brain process that could not be replicated in silicon.
3. Artificial neural networks have laid the basic architecture for a silicon brain. Using just the most rudimentary principles of neural functioning, we now have machines that can beat humans at chess and Go. Moreover, they do it in a way that suggests deeper insights and greater creativity than any symbol-based program in the past. And they do it by teaching themselves, starting with nothing more than the set of rules and the goal of the game. Imagine what can be done when we start to incorporate more sophisticated aspects of brain functioning.
4. The human brain has 86 billion neurons with thousands of synapses for every neuron, so building a silicon brain on that scale will be a challenge. But the neural networks cited above are already using graphics processing units (GPUs) to simulate billions of neurons, and the neural hardware now coming onto the market will allow these networks to grow larger still.
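The scale in point 4 can be made concrete using the numbers the answer itself gives; taking the low end of "thousands" of synapses is my assumption:

```python
# Scale of the wiring problem, using the figures quoted above:
# 86 billion neurons, each with thousands of synapses.
neurons = 86e9
synapses_per_neuron = 1_000   # "thousands" -- using the low end

total_synapses = neurons * synapses_per_neuron
print(f"Total connections to model: ~{total_synapses:.1e}")
```

Even at the low end, that is roughly 10^14 connections, which is why the paragraph calls building a silicon brain on that scale a challenge.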
5. We humans are weak. If we can build something powerful, no matter how dangerous, we will do it. Even if that something poses an existential threat to its creators. Building a machine with human-level intelligence will make its creators feel like gods, and there is no resisting that kind of pull. We can rail against it, and we can even pass laws against it (which some people are calling for even now), but all that will do is push the effort underground and beyond regulation.
6. Once we have created a machine with intelligence approaching our own, it will be smart enough to take over the process of designing future versions of itself. That will be a positive feedback loop that will accelerate rapidly and leave us standing there wondering what happened.
This answer leaves many vital questions to be addressed. My favorite one is: Will the evolution of machine superintelligences necessarily be a bad thing? I have thoughts about that, but I’ll hold off on that answer until I see the question asked.
The idea of silicon mimicking biological neurons isn't as straightforward as merely posing a scaling challenge. Biological neurons float in glia and can physically move about to form new connections; I don't know of any silicon technology that can do that. On top of that, biological systems have three degrees of spatial freedom, whereas silicon is essentially planar, which bottlenecks the number of connections you can establish. Silicon can try to compensate for the missing spatial degree of freedom by using indexing, but that requires infrastructure that takes up space and suffers a bottleneck of sequential queuing. We can reduce that bottleneck by increasing the bus width: some CPUs already have buses of several hundred to a couple of thousand wires, which could conceivably be pushed to millions. But an AGI doesn't have to be a mimicry of biological systems at all; it can take approaches that reproduce the functions of cortical tissue with digital equivalents, or even improvements.
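The queuing bottleneck and the effect of widening the bus can be sketched with a toy model. The one-message-per-wire-per-cycle rule and all the numbers here are my simplifying assumptions:

```python
import math

# Toy model: delivering N simultaneous "synaptic" messages over a
# shared bus of W wires, assuming one message per wire per cycle.
def cycles_needed(messages: int, bus_wires: int) -> int:
    """Cycles spent queuing when messages share a bus of given width."""
    return math.ceil(messages / bus_wires)

fanout = 1_000_000  # messages that would travel in parallel in 3D tissue
for wires in (1_024, 100_000, 1_000_000):
    print(f"{wires:>9} wires -> {cycles_needed(fanout, wires)} cycles")
```

In this model, widening the bus toward millions of wires collapses the queue to a single cycle, which is the trade-off the paragraph describes.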
However, the argument that AI will take over the world is a form of anthropomorphization. While most psychologists and psychiatrists don't project ape social features onto humans, humans do carry ape troop mentalities and social paradigms; that is our evolutionary legacy. An AGI doesn't have to adopt such social paradigms, so intelligence alone doesn't necessarily lead to domination, or to an alpha mentality that wants to control humanity.
Technology comes with a number of advantages that can substantially improve the way we live and do business. Among the most important benefits of AI: in the future, robots and humans will work together on many beneficial projects and reduce risks across all sectors, especially the military.