The media has brainwashed us into thinking of AGI as something akin to the ‘Terminator’
But when (and if) AGI comes – what would it really look like?
It’s tempting to think that AGI may be human-like
But I read two books recently which suggest otherwise
Recently, The Economist listed the five best books to understand AI
Their premise in selecting these books was that “Specialists outside the field do better at explaining the implications (than AI experts do)”
While I do not fully agree with this premise – their choice of books to explain AI is quite interesting
Two of these books point to a future of AGI that you may not quite expect
The first is “The Age of AI” by Henry Kissinger, Eric Schmidt and Daniel Huttenlocher
Kissinger is the famed diplomat, and the book argues that AI marks a major epoch – one in which AI will replace humans at the centre of knowledge
But what would this AI look like?
The book talks of an “AI enabled network platform” – which looks to me like an evolution of a search engine (after all, Eric Schmidt of Google is one of the co-authors!)
The change could start with AI bot assistants
Children would grow used to allocating tasks to these assistants
This would continue in their adult life
The bots themselves would ultimately become all-knowing and solve problems
All this sounds plausible to me based on current technology – especially driven by reinforcement learning and large language models
But intriguingly, would the bots choose their own goals?
That takes us into the realms of AGI – but AGI which looks like a search engine
The second book on this list that paints an interesting picture of AGI is “Novacene: The Coming Age of Hyperintelligence” by the late James Lovelock with Bryan Appleyard
James Lovelock originated the Gaia hypothesis – the idea that the Earth acts as a self-regulating, living organism. This book essentially proposes that cyborgs will save humanity (from climate change) – in their own self-interest, so that both cyborgs and humans survive
In this book too, the cyborgs are superior to humans – and James Lovelock even suggests that they may keep us as pets. Their answers would be very intricately reasoned
In both these cases, the concept of AGI is not embodied in humanoid form
The idea itself is not new – there has already been work on substrate-independent (and distributed) intelligence
In any case, what we think of as ‘AGI’ may well end up looking very different from what the movies tell us!
Image source: Image by 0fjd125gk87 from Pixabay