
Approach to AGI: how can a “synthetic being” learn?

Three inspirational articles, by Nikko Ström on Symbolic AI [1], Melanie Mitchell on AI-based understanding [2], and Yoshua Bengio on the Consciousness Prior [3], have inspired me to write my own small contribution. For some years now I have been thinking about how our brain learns and how that relates to humanity's goal of approaching AGI. Abstraction learning could be a key part of the equation. Human beings obtain extremely detailed information from the real world, yet what we actually learn are abstractions, built by filtering and selecting the most common features (analogies [2]).

The role of abstractions in the learning process

Abstractions are a very efficient way to represent knowledge in our brains because they are simpler than raw perceptions. We add complexity by attaching features to an abstraction, gradually building a hierarchy of real-world objects. Intuitively, concepts and abstractions are tightly related: two concepts are related if one or more of their abstracted features match. Furthermore, reasoning and intuition could correspond to the efficient use of these abstraction trees and their interrelationships. I apologize, but I have an unstoppable technical soul and need to mention a concrete implementation: hierarchical GNN techniques and attention mechanisms could be key here. They can mimic both the learning of abstractions and the most relevant relationships between real-world objects and those abstractions.
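To make that idea concrete, here is a minimal sketch in plain PyTorch, a toy rather than a production GNN stack: one graph-attention layer followed by an attention-based pooling step that collapses node embeddings into a single "abstraction" vector. All class names, dimensions, and the toy adjacency matrix are my own illustrative assumptions, not something taken from the referenced articles.

```python
# A minimal sketch, not a definitive implementation: one graph-attention
# layer plus attention pooling, in plain PyTorch. Everything below
# (names, sizes, the toy graph) is an illustrative assumption.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GraphAttentionLayer(nn.Module):
    """One round of message passing where each node attends to its neighbors."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.proj = nn.Linear(in_dim, out_dim, bias=False)
        self.attn = nn.Linear(2 * out_dim, 1, bias=False)

    def forward(self, x, adj):
        h = self.proj(x)                                      # (N, out_dim)
        n = h.size(0)
        # Attention logits for every (node, neighbor) pair.
        pairs = torch.cat(
            [h.unsqueeze(1).expand(n, n, -1), h.unsqueeze(0).expand(n, n, -1)],
            dim=-1,
        )
        logits = F.leaky_relu(self.attn(pairs)).squeeze(-1)   # (N, N)
        logits = logits.masked_fill(adj == 0, float("-inf"))  # real edges only
        weights = torch.softmax(logits, dim=-1)
        return weights @ h                                    # aggregate neighbors

class AbstractionPooling(nn.Module):
    """Pool node embeddings into one 'abstraction' vector via attention,
    so the most informative features dominate the summary."""
    def __init__(self, dim):
        super().__init__()
        self.score = nn.Linear(dim, 1)

    def forward(self, h):
        w = torch.softmax(self.score(h), dim=0)  # (N, 1) importance weights
        return (w * h).sum(dim=0)                # weighted summary, shape (dim,)

# Toy usage: four perceived "objects" with eight raw features each.
x = torch.randn(4, 8)
adj = torch.tensor([[1, 1, 0, 0],
                    [1, 1, 1, 0],
                    [0, 1, 1, 1],
                    [0, 0, 1, 1]], dtype=torch.float)
layer = GraphAttentionLayer(8, 16)
pool = AbstractionPooling(16)
abstraction = pool(layer(x, adj))  # one level of the abstraction hierarchy
print(abstraction.shape)           # torch.Size([16])
```

Stacking several such layer-plus-pooling stages would give the hierarchy described above, with each level attending over the abstractions produced by the level below.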

Continue working on Symbolic AI

We should continue working on Symbolic AI as a first approach to AGI. It could allow us to represent concepts and their interrelationships in a way similar to how the brain does. I like to imagine that we could develop a prototype that learns from scratch following this simple approach: a hierarchical representation is created and evolved through the “synthetic being’s” ability to perceive the real world. The being extracts the relevant features, matches them against its current abstraction tree, and finally updates the tree. At time zero there is nothing, so the “synthetic being” learns every feature it is able to perceive. From then on, it behaves “bottom-up”: it discards irrelevant details and simplifies its “universe”, updating the tree by weighting the highest-scoring features.
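As a thought experiment only, that bottom-up loop could be sketched like this, under heavy simplifying assumptions: "perception" is reduced to a set of feature strings, and the abstraction tree is flattened into a single weighted feature store. The names AbstractionTree, update, and abstraction are hypothetical, invented here for illustration.

```python
# A minimal sketch of the bottom-up loop described above. The "tree" is
# flattened to a weighted feature store; a real system would keep a
# hierarchy. All names and the toy observations are illustrative.
from collections import Counter

class AbstractionTree:
    def __init__(self, keep_top=5):
        self.weights = Counter()   # feature -> accumulated evidence
        self.keep_top = keep_top   # how many features count as "abstract"

    def update(self, observed_features):
        """Match an observation against the store and reweight features."""
        for f in observed_features:
            self.weights[f] += 1   # at time zero, everything is learned

    def abstraction(self):
        """The simplified 'universe': only the highest-scoring features."""
        return [f for f, _ in self.weights.most_common(self.keep_top)]

# Toy usage: the synthetic being perceives three "real-world objects".
tree = AbstractionTree(keep_top=3)
for observation in (
    {"has_legs", "furry", "barks"},
    {"has_legs", "furry", "meows"},
    {"has_legs", "feathered", "sings"},
):
    tree.update(observation)

print(tree.abstraction())  # e.g. ['has_legs', 'furry', ...]
```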

This humble text is intended to be more inspirational than rigorous. Even so, I hope it sparks some useful thoughts.

[1] Nikko Ström on Symbolic AI: https://www.amazon.science/blog/whats-next-for-deep-learning

[2] Melanie Mitchell on AI-based understanding: https://www.quantamagazine.org/melanie-mitchell-trains-ai-to-think-with-analogies-20210714/

[3] Yoshua Bengio on the Consciousness Prior: https://arxiv.org/abs/1809.03956