
Textual predictive coding: Do LLMs and the human mind compare?

  • David Stephen 

There is a new letter in TIME, What Generative AI Reveals About the Human Mind, in which a professor wrote, “Natural brains must learn to predict those sensory flows in a very special kind of context—the context of using the sensory information to select actions that help us survive and thrive in our worlds. This means that among the many things our brains learn to predict, a core subset concerns the ways our own actions on the world will alter what we subsequently sense.”

“At first sight, this suggests that ChatGPT might more properly be seen as a model of our textual outputs rather than (like biological brains) models of the world we live in. That would be a very significant difference indeed. Still missing, however, is that crucial ingredient—action.”

If comparisons of predictive similarity between the human mind and generative AI are necessary, they should be limited to texts, for fair assessment. Why does the mind not confabulate or hallucinate the way LLMs do, even in cases of memorization?

There are examples of people who have read about things they have never experienced and then faced situations, like exams, where they had to recall those pure abstractions. Why does the mind not simply conjure whatever is most likely to come next?

The emergence of generative AI should have rebutted the possibility that the human mind predicts. Even if the observation seems to fit, and generative AI does predict, the “occasional catastrophic failures” should have shown that there is no real comparison there, and that the label for what the brain might be doing should have changed.

If the brain were predicting, how would it do so? If the answer is that it is not the brain but the mind, then how would the mind do so?

What is the human mind, as distinct from the brain or the body? How is it possible for the mind to present options that look like predictions yet can be corrected when inaccurate?

Someone might step on the actual tail of a cat, or step on something that only seems like it. In both cases, the mind might present options: if the input matches, after looking down, then one of the presented options is followed either way; if not, the incoming input corrects it. This applies to texts as well, for humans, where correction often follows, or where distributions are not just about what comes next, as sketched below.
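
As a rough illustration only, here is a minimal Python sketch of the difference between committing to the single most likely next option and preparing several options that are checked against the incoming input and corrected on a mismatch. The function names, options, and scores are assumptions made for the example, not anything drawn from the letter or from brain science.

```python
# Toy sketch (illustrative assumptions only): "pick the next likely" vs.
# "prepare options, check them against incoming input, and correct".

def next_likely(options_with_scores):
    """Pure completion: commit to the single highest-scored option."""
    return max(options_with_scores, key=lambda pair: pair[1])[0]

def prepare_and_correct(options_with_scores, incoming_input):
    """Prepare several options; follow one only if it matches the input,
    otherwise defer to the incoming input itself (the correction step)."""
    ranked = sorted(options_with_scores, key=lambda pair: pair[1], reverse=True)
    for option, _score in ranked:
        if option == incoming_input:
            return option          # a prepared option matched: follow it
    return incoming_input          # no match: the input corrects the guess

# Stepping on something underfoot: prepared options vs. what is actually there.
options = [("cat tail", 0.6), ("cable", 0.3), ("toy", 0.1)]
print(next_likely(options))                   # "cat tail", regardless of reality
print(prepare_and_correct(options, "cable"))  # corrected to "cable"
```

The point of the sketch is only the second branch: the prepared options do not have to win; the incoming input can overrule them.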

If the human mind were predicting, there would be nothing like “using the sensory information to select actions that help us survive and thrive in our worlds,” because hallucinations or confabulations would have resulted in many mistakes. So, what might be happening?

The human mind is postulated to be the collection of all the electrical and chemical impulses of nerve cells, with their features and interactions. Impulses carry out functions in sets, within clusters of neurons.

It is established in brain science that electrical impulses leap from node to node in myelinated axons, in a process called saltatory conduction. It is proposed that, in sets, some electrical impulses [in the incoming bundle] split from others to go ahead and interact with chemical impulses, as before, within a set or elsewhere. If the input [cat tail] matches what went ahead, the second part of the bundle follows in the same direction [perhaps weakly, or in pre-prioritization] toward what may result; if not, the incoming one [not cat tail] goes in the right direction, and there are no “next sensory stimulations.”

This explains the observation of predictive coding, predictive processing, and prediction error. The brain is not predicting; electrical impulses, conceptually, are splitting early. This applies to internal and external senses. Splits [of electrical impulses] lead distributions, and this applies as well to holding things in mind while speaking or writing.
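
Again purely as illustration, here is a minimal sketch of the split described above, under the assumption that an “advance” part of a signal goes ahead while the trailing part either follows on a match or gives way to the actual input on a mismatch. The `Signal` class, the `route` function, and the strength values are hypothetical names invented for this example.

```python
# Toy sketch of the proposed split (hypothetical names and values):
# an "advance" part of a signal goes ahead; the trailing part follows on a
# match, or the actual input is routed instead on a mismatch.

from dataclasses import dataclass

@dataclass
class Signal:
    label: str       # what the advance part went ahead with, e.g. "cat tail"
    strength: float  # how strongly the trailing part would follow on a match

def route(advance: Signal, actual_input: str) -> str:
    if actual_input == advance.label:
        # Match: the trailing part follows the path already taken,
        # possibly weakly or in "pre-prioritization", as the text puts it.
        return f"follow '{advance.label}' (trailing strength {advance.strength})"
    # Mismatch: the incoming input goes in the right direction instead;
    # from the outside, this would look like a prediction error.
    return f"redirect to '{actual_input}' (advance path dropped)"

print(route(Signal("cat tail", 0.4), "cat tail"))
print(route(Signal("cat tail", 0.4), "cable"))
```

The mismatch branch is what, in this framing, gets reported in the literature as prediction error, even though nothing was predicted in the LLM sense.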

The human mind can be said to prepare options rather than predict, even if changes to outputs remain probable at the last minute. The outcomes are also often possibilities, not simply a prediction to complete, as with LLMs, which may sometimes be inaccurate.