
A neurosymbolic AI approach to learning + reasoning

By Alan Morrison

Image by 10302144 from Pixabay

Eric Baum in his book What Is Thought? defines understanding as “a compressed representation of the world.” Another word for a representation is a model. 

Understanding in Baum’s sense is a form of distillation and abstraction. Humans refine their understanding of a topic by reviewing examples of people, geographic locations, things, and ideas interacting. They capture the essence of those examples with an overarching model of the traits those examples share. 

What’s common among a group of examples? Computer scientists would call a set of those shared characteristics a class. A class hierarchy consists of layers of abstraction. 
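
To make those layers of abstraction concrete, here’s a minimal Python sketch (the classes and traits are invented for illustration): each level of the hierarchy captures the traits its examples share.

```python
# A minimal sketch of a class hierarchy as layers of abstraction.
# The classes and traits here are invented for illustration.

class Animal:
    """Most abstract layer: traits every example shares."""
    def __init__(self, name):
        self.name = name

class Dog(Animal):
    """Narrower layer: traits shared by all dogs."""
    def speak(self):
        return f"{self.name} barks"

class Beagle(Dog):
    """Most concrete layer: a specific kind of dog."""

print(Beagle("Rex").speak())  # -> Rex barks
```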

Conceptualization, persistence, and cross-domain generalization

Deep learning and other kinds of machine learning are frequently used for classification tasks. But Gadi Singer, VP & Director, Emergent AI Research at Intel Labs, points out in an August 2022 blog post that conceptualization is beyond the reach of deep learning on its own. 

What’s the difference between classification as machines do it and conceptualization? Machine classification, according to Singer, predicts a class label output in response to input data. 
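
A minimal sketch (using scikit-learn and an invented toy data set) makes the narrowness of that notion visible: the model’s output is just a class label, and everything it “knows” is tied to the data set it was fitted on.

```python
# A minimal classification sketch: predict a class label from input data.
# The toy data set ([height_cm, weight_kg] -> species) is invented.
from sklearn.tree import DecisionTreeClassifier

X = [[25, 4], [30, 5], [60, 25], [70, 30]]   # input data
y = ["cat", "cat", "dog", "dog"]             # class labels

clf = DecisionTreeClassifier().fit(X, y)
print(clf.predict([[28, 4.5]]))  # -> ['cat'], a label, not a concept
```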

A concept, by contrast, is an abstract idea, a notion, as Oxford Languages defines it. The adult human mind is naturally able to conceptualize. A concept, Singer says, isn’t tied to particular data sets. The ability to conceptualize is pivotal to the ability to understand.

Key aspects of a concept include these:

  • A concept can be unbounded in terms of its dimensions, “a sponge that absorbs relevant knowledge over time and experiences.”
  • A concept inherently stays the same while continuing to acquire more and more properties. Singer points to the example of a lawyer who sells his Ferrari and decides to become a monk. That lawyer is still the same person.
  • A concept can be applied across different, unrelated domains. Singer provides the example of traversability. A human might first conceptualize traversability as it applies to rock climbing, but the concept then soaks up more knowledge through examples in unrelated domains, such as its unexpected utility in a game of Risk or in locating someone in a company who can fix your laptop (see the sketch after this list). 
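
Here’s a minimal Python sketch of that cross-domain quality (the org chart is invented): one generic notion of traversal works unchanged whether the underlying graph models climbing holds, a Risk board, or who to ask about a broken laptop.

```python
# A minimal sketch of one concept applied across unrelated domains:
# a single breadth-first traversal works on any adjacency mapping.
from collections import deque

def traverse(graph, start):
    """Generic BFS: yield every node reachable from start."""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        yield node
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)

# The same function could walk climbing routes or a Risk board; here it
# walks an invented org chart to find who can fix your laptop.
org_chart = {"you": ["IT desk"], "IT desk": ["laptop fixer"]}
print(list(traverse(org_chart, "you")))  # -> ['you', 'IT desk', 'laptop fixer']
```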

Ultimately, it’s the ability to abstract at higher levels and to generate new knowledge more broadly that distinguishes human conceptualization from what statistical machine learning currently offers. No wonder concepts such as “dog,” “democracy,” or “uncle” can be elusive when you try to work with them in a machine learning scenario. And no wonder the statistical machine learning variety of AI alone lacks generalizability.

Neurosymbolic AI: Blending learning and reasoning abilities for better machine understanding

One stumbling block for machine understanding is that neural networks haven’t been known for their ability to reason logically. Years ago, deep learning luminary Geoffrey Hinton, formerly of Google and an emeritus professor at the University of Toronto, even asserted that deep learning networks did not have the cognitive understanding necessary for logical reasoning or for identifying causal relationships. 

More recently, researchers have found ways to build logical reasoning capabilities into neural networks. Artur d’Avila Garcez is a professor of computer science at City, University of London and the author of two books on neurosymbolic learning and reasoning, a topic he’s been researching for over 20 years. 

In 2023, Garcez and co-author Son Tran of the University of Tasmania published their research on a system that interprets propositional logic formulae and enables reasoning over them with the help of restricted Boltzmann machines (RBMs): two-layer neural networks consisting of one visible and one hidden layer. (Tran and Garcez, “Neurosymbolic Reasoning and Learning with Restricted Boltzmann Machines,” February 2023.)

The researchers tested the system and confirmed that the RBM network can represent these logical formulae and reason with them over data and knowledge. They also confirmed that the network can learn from that data and knowledge.
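
To give a flavor of how logic can live inside an RBM’s energy function, here is a minimal NumPy sketch. It is not the authors’ exact construction; it follows the general penalty-style encoding in which a hidden unit represents a clause, so that assignments satisfying the formula receive the lowest free energy. The confidence value c and margin eps are invented for illustration.

```python
# A minimal sketch: encode the clause (x AND NOT y) in one RBM hidden
# unit so satisfying assignments get the lowest free energy. Not the
# authors' exact construction; c and eps are invented for illustration.
import numpy as np

c, eps = 5.0, 0.5
W = np.array([[c], [-c]])        # visible-to-hidden weights for [x, y]
b = np.array([-c * (1 - eps)])   # hidden bias: -c * (num positive literals - eps)

def free_energy(v):
    """RBM free energy F(v) = -sum_j log(1 + exp(b_j + v . W_j))."""
    return -np.sum(np.logaddexp(0.0, b + v @ W))

for x in (0, 1):
    for y in (0, 1):
        v = np.array([x, y], dtype=float)
        print(f"x={x} y={y}  F(v) = {free_energy(v):+.3f}")
# Only the satisfying assignment (x=1, y=0) gets markedly lower free
# energy, so inference over the RBM favors models of the formula.
```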

The implication is that the reasoning capabilities coders have relied on in common programming languages for decades can now be brought into neural network environments. In the process, the massive parallelism of neural nets allows for more complex problem solving and question answering.

A data quality gap remains

In a previous post, I pointed out that pervasive data quality is mostly lacking in machine learning efforts. In the paper described above, Garcez and Tran mention both “data and knowledge” in their neurosymbolic training sets, which implies to me that the predicate logic inherent in relationship-rich, logically consistent knowledge graphs is, to them, an essential part of effective neurosymbolic AI as well. 

Others in the knowledge graph community make the point that knowledge graphs deliver the context necessary for contextual computing. Without context, there can be no generalizability. Rich context becomes a key data quality advantage of knowledge graph approaches that should not be overlooked in machine learning environments.
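
As a closing illustration, here’s a minimal sketch (assuming the rdflib library; the ex: vocabulary and the facts are invented) of how a knowledge graph supplies that kind of context: the answer to a question emerges from combining relationship-rich facts rather than from any single data point.

```python
# A minimal knowledge graph sketch with rdflib. The ex: vocabulary and
# the facts themselves are invented for illustration.
from rdflib import Graph, Namespace

EX = Namespace("http://example.org/")
g = Graph()

# Relationship-rich triples: one entity carries context from several domains.
g.add((EX.Alice, EX.worksAt, EX.AcmeCorp))
g.add((EX.Alice, EX.canFix, EX.Laptops))
g.add((EX.Alice, EX.climbs, EX.RockFaces))

# "Who in the company can fix laptops?" -- answered by combining facts.
results = g.query("""
    PREFIX ex: <http://example.org/>
    SELECT ?person WHERE {
        ?person ex:worksAt ex:AcmeCorp .
        ?person ex:canFix ex:Laptops .
    }
""")
for row in results:
    print(row.person)  # -> http://example.org/Alice
```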