How hybrid AI can help LLMs become more trustworthy

By Alan Morrison
Image by Anemone123 from Pixabay

Back in 2015, Pedro Domingos of the University of Washington’s computer science department published The Master Algorithm. In the book, Domingos explored the possibility that one master algorithm could indeed rule them all. The main challenge, he said, was to bring the AI tribes together so that the strengths of the various approaches could be combined. 

In 2023, more data scientists and engineers are coming to understand the value of bringing together the reasoning capabilities of the Symbolists and the predictive prowess of the Connectionists (Domingos' name for the neural network tribe).

In 2017, John Launchbury, then head of DARPA's Information Innovation Office, echoed Domingos' thinking about bringing the Symbolists and the Connectionists together. He posted a video that explained the evolution of AI in terms of three waves:

  • Wave I is Good Old Fashioned AI (GOFAI). The Symbolists dominated GOFAI and made genuine progress on the reasoning front using knowledge representation (declarations of facts, for example) and rules. No wonder Boolean logic is still so commonly used in commercial software.
  • Wave II is Statistical Machine Learning. Launchbury pointed out that statistical machine learning (ML), like symbolic approaches, had been around since the 1950s. But ML didn't come into its own until the 1990s, because it took forty years for compute, networking, and storage to improve enough for ML to scale.
  • Wave III is a form of Contextual Computing. Launchbury's vision was that GOFAI and statistical machine learning (particularly neural nets) could complement one another by bringing together the power of deterministic, probabilistic, and description logic, as the sketch after this list illustrates.
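
To make Wave III less abstract, here is a minimal sketch of that pairing in Python. Everything in it is hypothetical and invented for illustration: the `predict_proba` stub stands in for a trained statistical model, and the rule table stands in for symbolic constraints; neither comes from Launchbury or DARPA.

```python
# A toy version of the Wave III pairing: a statistical model proposes,
# symbolic rules dispose. The classifier stub, rule table, and labels
# are all hypothetical, invented for this sketch.

def predict_proba(document: str) -> dict[str, float]:
    """Stand-in for a trained statistical model (Wave II)."""
    return {"invoice": 0.55, "contract": 0.40, "memo": 0.05}

# Hard constraints a valid label must satisfy (Wave I-style rules)
RULES = {
    "invoice": lambda doc: "total due" in doc.lower(),
    "contract": lambda doc: "party" in doc.lower(),
    "memo": lambda doc: True,
}

def hybrid_classify(document: str) -> str:
    """Return the highest-probability label that also passes its symbolic rule."""
    ranked = sorted(predict_proba(document).items(), key=lambda kv: -kv[1])
    for label, _score in ranked:
        if RULES[label](document):
            return label
    return "unknown"

print(hybrid_classify("The party of the first part agrees to the terms."))
# contract: the model preferred "invoice", but the rules vetoed it
```

The division of labor is the point: the probabilistic side ranks hypotheses, and the deterministic side guarantees that whatever comes out satisfies known constraints.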

To my mind, generative AI's challenges can be traced back to the tribalism of Waves I and II. Some data scientists tend to be dismissive of symbolic logic approaches. The truth is that statistical machine learning on its own just doesn't bring enough logic into the mix to create machine-understandable context.

The continuing influence of Doug Lenat and Cyc

I met the late Doug Lenat, the founder of Cycorp, at my first TTI/Vanguard event, which as I recall was in 2009, while I was gathering research for an issue of the PwC Technology Forecast quarterly. TTI/Vanguard's board then included many computer science luminaries besides Lenat. For example:

  • (The late) John Perry Barlow, co-founder of the Electronic Frontier Foundation and board member of the pre-web but post-internet online community The Well
  • Gordon Bell, former VAX minicomputer lead at Digital Equipment Corporation and, later at Microsoft, an advocate for the fully recordable life
  • Alan Kay, Xerox PARC GUI and Smalltalk OOP pioneer
  • Len Kleinrock, UCLA professor known for packet switching technology used in the ARPANET and today’s internet

These were the folks whose company Lenat kept. Most had their heyday in the 1970s and 80s and were semi-retired by the 2000s, but they remained deeply curious about emerging technology, which is why they helped out with the TTI/Vanguard events and, in the process, made them worthwhile.

One of the main points Lenat made during my interview with him had to do with the pervasive observational bias in enterprise business intelligence systems, also known as the Streetlight Effect. A drunk has lost his keys and is searching for them under the streetlight, even though he thinks he dropped them somewhere else. Why? “Because that’s where the light is,” the drunk says.

Over the years, I’ve used the Streetlight Effect metaphor to argue for scalable semantic graph integration, or knowledge graph-based systems, along the lines of what web pioneer Tim Berners-Lee has advocated. While application-centric architectures and relational database management systems by design can undermine large-scale integration, semantic graphs are all about starting with, and adding to, logical connections between entities, as the sketch below illustrates.
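
As a minimal illustration, here is how facts from two separate source systems merge in a semantic graph. The sketch uses the open-source rdflib library for Python; the entities, predicates, and source systems are all made up for the example.

```python
# A sketch of semantic graph integration with the open-source rdflib library.
# The entities (a supplier, a product) and predicates are invented examples.
from rdflib import Graph, Namespace
from rdflib.namespace import RDF

EX = Namespace("http://example.org/")
g = Graph()
g.bind("ex", EX)

# Facts as they might arrive from two separate source systems,
# merged into one graph simply by adding triples
g.add((EX.acme, RDF.type, EX.Supplier))      # from a vendor system
g.add((EX.acme, EX.locatedIn, EX.Ohio))      # from a vendor system
g.add((EX.widget9, EX.suppliedBy, EX.acme))  # from a product catalog

# Traversing the merged connections is a query, not a schema migration
results = g.query("""
    PREFIX ex: <http://example.org/>
    SELECT ?product ?region WHERE {
        ?product ex:suppliedBy ?supplier .
        ?supplier ex:locatedIn ?region .
    }
""")
for row in results:
    print(f"{row.product} comes from {row.region}")
# http://example.org/widget9 comes from http://example.org/Ohio
```

Because both sources express their records as triples against shared identifiers, joining across systems is just another edge traversal rather than a schema migration.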

Hybrid AI: More math in the knowledge graph

Before he passed away in August 2023, Lenat co-published a paper with cognitive scientist Gary Marcus titled “Getting from Generative AI to Trustworthy AI: What LLMs Might Learn from Cyc”. The paper enumerates the kinds of reasoning that the Cyc project (a symbolic machine learning system, as Lenat and Marcus point out) harnesses through its ontology and knowledge base but that LLMs (statistical language models that predict the next token in a sequence) do not: explanation, deduction, induction, analogy, abductive reasoning, and theory of mind, to name some examples. The simplest of these, deduction, is sketched below.
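
Here is a toy forward-chaining deducer to show what deduction looks like as code. It is a sketch of the bare mechanism only, in no way Cyc's actual inference engine, and the facts and rule are invented for the example.

```python
# A toy forward-chaining deducer: a sketch of deduction only, not Cyc's
# inference machinery. Facts and rules are (subject, predicate, object)
# triples; "?x" is the single variable a rule may bind.
facts = {("Socrates", "isa", "Human")}

# Rule: if ?x isa Human, then ?x isa Mortal
rules = [
    (("?x", "isa", "Human"), ("?x", "isa", "Mortal")),
]

def forward_chain(facts, rules):
    """Apply every rule to every fact until no new facts are derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for (ps, pp, po), (cs, cp, co) in rules:
            for (fs, fp, fo) in list(derived):
                # The premise matches if predicate and object agree; bind ?x to fs
                if fp == pp and fo == po:
                    new_fact = (fs if cs == "?x" else cs, cp, co)
                    if new_fact not in derived:
                        derived.add(new_fact)
                        changed = True
    return derived

for fact in sorted(forward_chain(facts, rules)):
    print(fact)
# ('Socrates', 'isa', 'Human')
# ('Socrates', 'isa', 'Mortal')
```

Notice that every derived fact traces back to a rule and a premise, which is the kind of explanation a sampled token from an LLM cannot provide.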

Lenat’s legacy isn’t merely forty years of Cyc and counting. It also includes all the ontologists who worked at Cycorp and the influence they are now having at other companies, for example by adding more reasoning capability to knowledge graphs.

That capability is what makes machine understanding possible, which in turn allows knowledge graphs to become a findable, accessible, interoperable, and reusable (FAIR) resource. There’s a reason FAIR knowledge has taken so long to get to this point: true artificial general intelligence isn’t easy. It requires all sorts of thinking, trial and error, and collaboration across tribes.