There is a recent preprint on arXiv, A Comprehensive Survey of Hallucination Mitigation Techniques in Large Language Models, listing and explaining the following approaches against LLM hallucination: “LLM-Augmenter, FreshPrompt, Knowledge Retrieval, Decompose-and-Query framework (D&Q), Real-time Verification and Rectification (EVER), Retrofit Attribution using Research and Revision (RARR), High Entropy Word Spotting and Replacement, End-to-End Retrieval-Augmented Generation (RAG), Prompting GPT-3 To Be Reliable, ChatProtect, Self-Reflection Methodology, Structured Comparative Reasoning, Mind’s Mirror, DRESS (LVLM via NLF), MixAlign, Chain-of-Verification (CoVe), Chain of Natural Language Inference (CoNLI), Universal Prompt Retrieval for Improving Zero-Shot Evaluation (UPRISE), Synthetic Tasks (SynTra), Context-Aware Decoding (CAD), Decoding by Contrasting Layers (DoLa), Inference-Time Intervention (ITI), Reducing Hallucination in Open-domain (RHO), FactuaL Error detection and correction with Evidence Retrieved from external Knowledge (FLEEK), Text Hallucination Mitigating (THAM) Framework, Loss Weighting Method, Knowledge Injection and Teacher-Student Approach, Hallucination Augmented Recitations (HAR), Fine-Tuning Language Models for Factuality, Behavioural Fine-Tuning (BeInfo), Refusal-Aware Instruction Tuning (R-Tuning) and Think While Effectively Articulating Knowledge (TWEAK).”
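Several of the listed approaches, such as EVER and Chain-of-Verification (CoVe), share a common generate, verify, rectify loop: draft an answer, check its claims against an external source, and replace anything unverified. The following is a minimal toy sketch of that shared pattern, not an implementation of any technique from the survey; the question set, the stand-in "knowledge base" and the deliberately wrong draft are all illustrative assumptions.

```python
# Toy sketch of the generate -> verify -> rectify pattern shared by
# approaches like EVER and CoVe. All data here is a hypothetical
# stand-in: a real system would call an LLM and a retriever.

KNOWLEDGE = {  # stand-in for an external retrieval source
    "capital of France": "Paris",
    "capital of Australia": "Canberra",
}

def draft_answer(question):
    # Stand-in for an LLM draft; deliberately wrong on one fact.
    drafts = {
        "capital of France": "Paris",
        "capital of Australia": "Sydney",  # a typical hallucination
    }
    return drafts.get(question, "unknown")

def verify(question, answer):
    # Check the draft against the retrieved knowledge.
    return KNOWLEDGE.get(question) == answer

def rectify(question, answer):
    # Keep a verified claim; otherwise replace it with the
    # retrieved fact, if one exists.
    if verify(question, answer):
        return answer
    return KNOWLEDGE.get(question, answer)

for q in KNOWLEDGE:
    print(q, "->", rectify(q, draft_answer(q)))
```

The point of the sketch is only the control flow: the draft is never trusted on its own, and the external check acts as the feedback step discussed later in the essay.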
The intense research and development into solving LLMs’ hallucination-confabulation is an indication that pieces of the problem will continue to be peeled away, bringing not just progress against hallucination itself, but improvements to LLMs in general.
These may bring LLMs within touching distance of some aspects of human intelligence, and perhaps sentience. Debates would then fester about whether the models are intelligent, sentient or anything like humans.
The thing is that intelligence, as an output of a species, can be made available anywhere: walls, books, footprints, boards or digital media. What might make those outputs become species-like is advanced feedback. The human mind often partitions inputs, so that follow-ups to initial inputs act like feedback. Feedback tapers differences with reality or with standards. What is observed about the mind and labeled prediction is a feedback feature. Feedback efforts for LLMs would eventually result in far more than just solving hallucinations.
LLMs are based on texts. Humans use a lot of text. There are fractions of intelligence and sentience within human texts. Could LLMs be strictly compared with the amount of intelligence and sentience in human text available digitally?
Human understanding can be provided as text in digital form. If LLMs can accurately do the same, can they be said to contain understanding, in direct comparison to the understanding provided by humans and available digitally? Various human subjective experiences can be expressed by text, digitally. Can those expressions, as fractions of experiences, be compared with LLMs doing similarly?
There are several arguments about LLMs, with all kinds of criteria, including that the name AI is inaccurate. Others cite syntax, semantics, probability, lack of embodiment and so forth. However, the definitive measure of human intelligence and consciousness comes from where, and how, they are mechanized.
It is theorized that the location of human intelligence and consciousness is the mind. The mind is hypothesized to be the part of the nervous system directly involved in every functional process. This means that all internal and external senses have the mind at the helm. The body, including the brain, supports, but the mind makes the determinations.
What is the human mind? How does it carry out functions and ancillaries? Brain science has detailed explanations of many parts of the brain, yet clarity on intelligence and consciousness is scant.
There is a recent analysis in WSJ, No, AI Machines Can’t Think, where the author wrote, “What’s really needed is solid definitions of thinking, intelligence and sentience. Computers are already better than humans at many tasks.”
Definitions of thinking, intelligence and sentience have to be pegged with how they are mechanized—and their outputs—by the human mind. The definitions can then be used to evaluate how close or distant LLMs might be, to aspects of outputs, made available in digital, by humans.
The human mind is theorized to be the collection of all the electrical and chemical impulses of nerve cells, with their features and interactions. Wherever there are electrical and chemical impulses of neurons, responsible for functions, is the mind. Everything else is the body. This draws a clear distinction in the mind-body problem.
Memory, intelligence, emotions, feelings, modulation of internal senses, thoughts, perceptions, sensations, consciousness, sentience and all labels and synonyms are operations of impulses.
Conceptually, impulses do not just carry out functions, they also operate their qualifiers. Qualifiers are definers of functions. They shape what experiences become. Qualifiers include attention or focus on the mind, awareness, self or subjectivity, intent or free will, distribution, splits and so forth.
How do these impulses do it? How do they separate an emotion from a feeling? How do they delineate attention from awareness, or construct the sense of self? How do impulses structure intelligence? And what might consciousness or sentience be?
How does the brain make the mind? How do experiences arise? It is postulated that in a cluster of neurons [nuclei or ganglia], impulses are in sets. It is within these sets that they bear the configurations for functions. This means that sets of impulses have formations where they hold information, operate and qualify functions. Chemical impulses, in sets, contribute rations towards these configurations. There are drifts of rations in every set that lets functions get qualified. Electrical impulses strike to fuse briefly with chemical impulses, to give off experiences, information or functions that the sets provide.
Brain science has established that one neuron has thousands of synaptic connections to others. This, conceptually, may mean that, at times, the neuron sends more chemical signals to those in its cluster, than to others. It may also mean that even if it is sending chemical messages to all its connections, the rates may differ, with some getting more and others getting less. This can vaguely be translated, conceptually, to saying some synapses are active and some are non-active.
There is a paper in Nature, Filopodia are a structural substrate for silent synapses in adult neocortex, discussing silent or inactive synapses, stating that, “These putative silent synapses were located at the tips of thin dendritic protrusions, known as filopodia, which were more abundant by an order of magnitude than previously believed (comprising about 30% of all dendritic protrusions).”
It is theorized that non-active synapses, between active synapses, in a set [of impulses] do—in part—qualify functions. This means that the sense of self or subjectivity that accompanies functions is a qualification. The same applies to intent or free will, for the choice to speak or not, or the choice to sit or stand, raise a hand or not, for people who can. Attention like main vision, as well as awareness or peripheral vision or ambient sound are also qualifications.
Consciousness or sentience can be theorized to be a collection of qualifiers, mechanized within the sets of impulses that give rise to experiences. Human intelligence can be described as extensively specialized memory functions that are extensively qualified. There are key qualifiers that collect into consciousness. There are others that are specific, doing more for intelligence. Thinking is a distributive qualification of sets of impulses that hold or organize aspects of information—labeled as memory.
Sentience and intelligence are mechanized within sets of impulses, the mind. They are received by the mind, in distribution and can also be received by the body, in outputs or experiences. These outputs can be observed by others, noted or made available. Other species often benefit from these outputs, while production and distribution within the mind, are hidden away.
The production of sentience and intelligence may not be parallel to the output. But the outputs are often a fraction of what is available, varying across functions. These, as inputs into digital, mean that digital holds a fraction of an individual’s intelligence and sentience.
The total consciousness for humans, as the highest among species, can be set equal to 1. Simply, the qualifications have a maximum possible total of 1, across all the functions in the human mind. Other species closer to humans have lower totals. The outputs of human consciousness can mean a fraction of that total is available over a transmitter, such as paper or digital media.
Intelligence is a super-function of memory, qualified by the super qualifier, consciousness. All functions come under the consciousness umbrella. The qualifiers can increase their share, in the total of 1, in any instance. Consciousness is not just what is qualified in a moment, but all that can be qualified, with variations of degrees across functions—every instant.
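The "total of 1" model above can be made concrete with a small numeric sketch: qualifiers share a budget whose maximum possible total is 1, and their shares can shift from instant to instant. The qualifier names and the weights below are illustrative assumptions, not values from the essay.

```python
# Toy numeric sketch of the essay's "total of 1" model: qualifiers
# (attention, awareness, self, intent, ...) share a budget capped
# at 1, with shares varying per instant. Names and weights are
# illustrative assumptions.

def normalize(qualifiers, cap=1.0):
    total = sum(qualifiers.values())
    if total <= cap:
        return dict(qualifiers)
    # Scale each share down proportionally so the total respects the cap.
    return {k: v * cap / total for k, v in qualifiers.items()}

instant = {"attention": 0.5, "awareness": 0.3, "self": 0.2, "intent": 0.2}
shared = normalize(instant)
print(round(sum(shared.values()), 6))  # capped at 1
```

The sketch only expresses the constraint that qualifiers can trade shares against each other within a fixed maximum, which is how the essay describes a qualifier increasing its share "in any instance."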
The fractions of intelligence in topics such as math, physics, economics and law that people have can be transmitted and made available to other humans. There are measures for them. Consciousness too, like the experience of pain, taste or smell, can be expressed, making a fraction available digitally, including how some of the qualifiers, such as attention and self, decided.
The central question of consciousness is, what does it mean to be, or what does it feel like to be human or to be an organism? One approach to that question is that when an individual sees the rest of their body or hears their own voice or one part of the body touches the other, how does the person know it is the being, or I or me?
Already, an individual can see other people and things, as well as hear, touch and know what they are, so seeing or hearing the self implies, conceptually, that there are sets [of electrical and chemical impulses] organizing the information, as memory. These sets can be super qualified, resulting in the consciousness of being, available, across locations.
Generative AI is within digital. Digital already has detailed text, video, audio and image outputs of intelligence and consciousness from humans. Generative AI has learned from many human specifications to be able to produce its apparent version of human-like intelligence and sentience. It does not mean it understands or that it has much agency yet. But it can at least reproduce mixtures, including with how humans qualified them.
What LLMs can reproduce can be compared with the fraction of intelligence and consciousness that humans have on digital, which can be compared to the total of 1.
In brain science, it is theorized that electrical impulses leap from node to node, in myelinated axons, in what is called saltatory conduction. It is postulated here that in sets, some electrical impulses go ahead of others to interact with chemical impulses like they had, previously, in situations, such that if it matches, processes go on, if not the incoming one interacts rightly within the set or elsewhere. This, conceptually, explains the observations labeled predictive coding, processing and correction of prediction errors.
This qualifier, early-split, is a natural feedback for the mind that helps processes to go faster and fix misses, so that accuracy is neared or nailed.
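The feedback idea described above, a running prediction compared with incoming input, with the mismatch used to correct the prediction, can be sketched as a simple error-correction loop. This is a generic toy of prediction-error correction, not a model of impulses or of any specific predictive-coding theory; the update rule and learning rate are illustrative assumptions.

```python
# Toy sketch of feedback tapering differences with reality:
# a prediction is compared with each incoming input, and the
# mismatch (prediction error) corrects the prediction. The
# update rule and rate are illustrative assumptions.

def predictive_loop(inputs, prediction=0.0, rate=0.5):
    errors = []
    for observed in inputs:
        error = observed - prediction   # mismatch with reality
        prediction += rate * error      # feedback tapers the gap
        errors.append(abs(error))
    return prediction, errors

final, errors = predictive_loop([1.0] * 10)
print(round(final, 4), errors[0] > errors[-1])
```

Each pass shrinks the gap between prediction and input, which is the sense in which the essay says feedback lets "accuracy be neared or nailed."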
As advanced feedback is better built into LLMs, with the ability not just to generate but to center accuracy, it could become possible for them to qualify functions and then learn in a way that, within digital, they will not just be second to what humans have put there, but will be able to make their own, better mixtures, with raised intent.
This would increase the possibility of deeper conversations with humans and even within AI models. It may mean that within digital, non-species have lift off.
LLMs already have a measure of intelligence and sentience, compared to those of humans, available digitally. They may bear direct comparison to humans, not just to human output in digital form, as soon as they can use the advanced feedback they are getting, which may follow the solution of the hallucination-confabulation problem.