Three inspirational articles, by Nikko Ström on Symbolic AI, Melanie Mitchell on AI-based understanding, and Yoshua Bengio on the Consciousness Prior, have inspired me to write my own small contribution. For some years now I have been thinking about how our brain learns and how that relates to humanity's approach to AGI. Abstraction learning could be a key part of the equation. Human beings obtain extremely detailed information from the real world; however, what we actually learn are abstractions, built by filtering and selecting the most common features (analogies).
The role of abstractions in the learning process
Abstractions are a very efficient way to represent knowledge in our brains because they are simpler than raw perceptions. Representations become more complex as we add features to an abstraction and start building a hierarchy of real-world objects. Intuitively, concepts and abstractions are tightly related: two concepts are related if one or more of their abstracted features match. Furthermore, reasoning and intuition could correspond to the efficient use of these abstraction trees and their interrelationships. I apologize, but I have an unstoppable technical soul and need to mention a concrete implementation: hierarchical GNN techniques and attention mechanisms could be key. They can mimic both the learning of abstractions and the most relevant relationships between real-world objects and those abstractions.
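To make the attention idea slightly more concrete, here is a minimal sketch (not a full GNN, just the attention step) of how a perceived object's feature vector could be softly matched against the abstracted features of candidate concepts. All names and the toy feature encoding are hypothetical, invented purely for illustration.

```python
import numpy as np

def attention_scores(query, keys):
    """Scaled dot-product attention: how strongly a perceived object's
    feature vector (query) attends to each abstraction's features (keys)."""
    d = query.shape[-1]
    logits = keys @ query / np.sqrt(d)   # one similarity logit per abstraction
    exp = np.exp(logits - logits.max())  # numerically stable softmax
    return exp / exp.sum()

# Hypothetical toy abstractions: rows are concepts ("animal", "vehicle"),
# columns are abstracted features (moves, is-alive, has-wheels).
abstractions = np.array([[1.0, 1.0, 0.0],
                         [1.0, 0.0, 1.0]])
perceived_dog = np.array([1.0, 1.0, 0.0])

weights = attention_scores(perceived_dog, abstractions)
# the "animal" abstraction receives the larger attention weight
```

In a hierarchical GNN, weights like these would decide which nodes of the abstraction tree a new perception is routed to and strengthens.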
Continue working on Symbolic AI
We should continue working on Symbolic AI as a first approach to AGI. It could allow us to represent concepts and their interrelationships in a way similar to how the brain does. I like to imagine that we could develop a prototype that learns from scratch following this simple approach. A hierarchical representation would be created and evolved through a “synthetic being’s” ability to perceive the real world: it extracts the relevant features, matches them against its current abstraction tree, and finally updates it. At time zero there is nothing, so the “synthetic being” learns every feature it is able to perceive. From then on, it behaves “bottom-up”: it discards irrelevant details and simplifies its “universe”, updating the tree by weighting the highest-scored features.
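The perceive–extract–match–update loop above can be sketched very crudely. This is a toy model under strong assumptions (features as plain strings, relevance as simple frequency); the class and threshold are hypothetical, chosen only to illustrate the bottom-up filtering of irrelevant details.

```python
from collections import Counter

class AbstractionNode:
    """A toy abstraction: accumulates counts of perceived features and
    exposes only the most frequent ones as the learned abstraction."""
    def __init__(self):
        self.feature_counts = Counter()
        self.observations = 0

    def perceive(self, features):
        # At time zero the node is empty, so every feature is recorded;
        # later perceptions reweight the counts bottom-up.
        self.feature_counts.update(features)
        self.observations += 1

    def abstraction(self, threshold=0.5):
        # Keep features seen in at least `threshold` of observations,
        # discarding irrelevant details and simplifying the "universe".
        return {f for f, c in self.feature_counts.items()
                if c / self.observations >= threshold}

dog = AbstractionNode()
dog.perceive({"four_legs", "fur", "barks", "red_collar"})
dog.perceive({"four_legs", "fur", "barks"})
dog.perceive({"four_legs", "fur", "wet_nose"})
# incidental details ("red_collar", "wet_nose") fall below the
# threshold and are filtered out of the learned abstraction
```

A real prototype would of course need a hierarchy of such nodes and a learned similarity measure rather than raw counts, but the update rule captures the spirit of the text.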
This humble text is intended to be more inspirational than rigorous. However, I hope it can spark some useful thoughts.