
Embodied AI: Would LLMs and robots surpass the human brain?

  • David Stephen 

It is International Brain Awareness Week 2024, with events at institutes globally from March 11–17.

This week is a good time to examine the brain's pedestal against the astounding rise of machines. AI embodiment was recently featured in Scientific American: "AI Chatbot Brains Are Going Inside Robot Bodies. What Could Possibly Go Wrong?"

Large language models have made it obvious that the digital world is far more manipulable than the physical world. Editing anything digital [text, images, videos, and audio] is easier than adjusting things physically, giving digital another advantage besides its excellent memory. Artificial memory has long been available in books, walls, sculptures, objects, and so forth, but their physical state is a disadvantage for snappy editing.

This ease, in part, made LLMs the first non-living things to show a semblance of intelligence in the domain of organisms. Though robots have been around for decades and can do regimented tasks, they operate in the physical world, where manipulation is complex and where organisms rely on their senses for navigation and then avoidance, because of the extensive sources of inputs.

Any living cell relates with its environment to an intelligent degree, with a memory that ensures it avoids irreversible damage. Any organism in its habitat has multiple senses, processed in a manner that seems like more than the sum of each, ensuring that it stays aware and makes quick decisions: toward prey, against predators, and for other purposes.

Simply, multiple sensory sources make surviving and thriving in the physical world easier. While digital devices can detect smoke, capture events and sounds, and allow for touch—with screens and the like—they do so in one unitary form, digital, missing whatever cannot be subsumed into that form [taste, smell] and missing whatever they were not trained on.

By contrast, human memory can make various sensory detections, including balance while jumping or sliding, and can recognize things digitally or physically across sensory modes. The memory can navigate new environments because of familiarity, provisioned—conceptually—by collecting the common elements of things together while keeping what is unique separate. This makes relays within the collected information, or intelligence, hyper-sophisticated.

Also, the human mind, conceptually, is always distributing across memory, emotions, and feelings, to ensure that what is good, including socially, is detected early and preserved, since aspects of the physical world are dangerous and will not simply change.

What makes the human brain special?

Other ways to frame this question include: how does the human brain organize and distribute information in a way that makes humans superior to other organisms? What makes the brain energy efficient? Why does the brain learn faster than LLMs? Why does the brain process much more than LLMs do?

Conceptually, the mind is the basis of all the information functions of the brain. The mind is the collection of all the electrical and chemical impulses of nerve cells, with their features and interactions, in sets, in the central and peripheral nervous systems.

The interactions of the impulses can be described as the functions. The features can be described as qualifiers, modifiers, stipulators, or characterizers. These qualifiers, as a category, constitute consciousness, the super qualifier. Functions of the brain are graded by qualifiers in every instant, saving energy. It is by the mind that humans bear superiority. Whatever is wrong with the brain has to affect the mind to affect the individual.

Interactions of electrical and chemical impulses in sets form a configuration or formation. It is this configuration that the mind uses to organize information. Simply, chemical impulses, in sets, often have respective rations that represent a function. The striking of electrical impulses on chemical impulses, in sets, results in a brief fusion or meshing, which allows the formation of the chemical impulses in that set to be accessed, leading to the function.

Qualifiers are obtained within each set of impulses, with spaces between the sources of the rations as well as side-to-side shifts that vary the concentration of rations. Qualifiers include attention, awareness [or peripheral vision], self or subjective experience, free will, control or intent, non-intent, and so forth. Qualifiers also include distillation, sequences, thick sets of impulses, thin sets, splits, and a principal spot.

This, conceptually, is how the human mind works.
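The conceptual model above—sets of chemical impulses whose rations represent functions, struck by electrical impulses and graded by qualifiers such as attention—can be sketched as a toy simulation. Everything here is a hypothetical illustration of the article's concept; the class, names, and numbers are invented for the example, not drawn from neuroscience or any library.

```python
from dataclasses import dataclass, field

# Toy sketch of the conceptual model: a "set" of chemical impulses holds
# rations (relative shares) that represent a function; an electrical
# impulse striking the set briefly fuses with it, accessing the formation,
# and qualifiers (attention, awareness) grade the resulting function in
# that instant. All names and values are hypothetical.

@dataclass
class ImpulseSet:
    function: str                     # what this set's rations represent
    rations: dict                     # chemical impulses and their shares
    qualifiers: dict = field(default_factory=dict)

    def strike(self, electrical_strength: float) -> dict:
        """An electrical impulse strikes the set: the brief fusion gives
        access to the formation, graded by the current qualifiers."""
        grade = electrical_strength * self.qualifiers.get("attention", 0.1)
        return {"function": self.function, "grade": round(grade, 2)}

# A set whose rations might represent, say, recognizing a face.
face_set = ImpulseSet(
    function="recognize-face",
    rations={"dopamine": 0.4, "glutamate": 0.6},
    qualifiers={"attention": 0.9, "awareness": 0.5},
)

print(face_set.strike(electrical_strength=1.0))  # graded in attention
face_set.qualifiers["attention"] = 0.1           # same set, less attention
print(face_set.strike(electrical_strength=1.0))  # lower grade, energy saved
```

The point of the sketch is the grading step: the same set of impulses produces a stronger or weaker function depending on the qualifier in that instant, which is how the article proposes the brain saves energy.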

Would robots or LLMs match the brain?

Some people say LLMs do not have a world model, a mental model, or an observation of the world. The problem is that much of what is rewarding in this advanced world is not simply how to wash dishes or how to set up a room. Education is offered on valuable knowledge, much of which can be disseminated digitally. That knowledge, within the reach of LLMs, and more so as they improve, gives them an advantage in that domain, like the advantage an aquatic organism has in water or an arboreal one in trees. Also, the world has converged on digital, providing as much there as it can take, giving LLMs a boost.

Robots have functions. Some functions of robots have qualifiers that are parallel to those of the human mind. The functions of robots are at least qualified in ways that are different from a table, or even a human-driven automobile. But no robot has a thalamus.

The human brain has clusters of neurons across centers. These clusters are postulated to have sets of impulses. The thalamus has several clusters of neurons, responsible for integrating information. It is theorized that sets of impulses in the thalamus structure, then initially qualify, incoming streams of inputs, determining whether they fall in attention or awareness, to what extent they may be subjective, and how intent or non-intent might decide. Distribution is then made, before functions are finalized and modified in other sets.

This relay [by sets of impulses] ensures that the mind can collect so much and use some of it, even without registration [by some qualifiers]. This is unlikely for robots in the physical world, aside from what happens at the final destinations for inputs: the cortex, hippocampus, amygdala, and others.
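The relay described above—a thalamic stage that tags incoming streams with qualifiers before distributing them, while unqualified streams are still collected—can be sketched as a small routing function. The thresholds, stream names, and destination labels are invented for illustration; this is a sketch of the article's concept, not a model of actual thalamic physiology.

```python
# Hypothetical sketch of the relay: a "thalamus" stage tags each incoming
# stream with a qualifier (attention vs. peripheral awareness) before
# distributing it onward. Streams that fall below every threshold are
# still collected -- stored without registration by any qualifier.

def thalamic_relay(streams, attention_threshold=0.7, awareness_threshold=0.3):
    registered, unregistered = [], []
    for name, salience in streams:
        if salience >= attention_threshold:
            registered.append((name, "attention", "cortex"))
        elif salience >= awareness_threshold:
            registered.append((name, "awareness", "cortex"))
        else:
            unregistered.append(name)  # collected, but never qualified
    return registered, unregistered

inputs = [("oncoming-car", 0.9), ("birdsong", 0.4), ("hum-of-fridge", 0.1)]
tagged, stored = thalamic_relay(inputs)
print(tagged)  # qualified streams, routed to final destinations
print(stored)  # collected without registration
```

The design choice worth noting is the third branch: inputs below both thresholds are not discarded but kept, mirroring the article's claim that the mind collects more than it registers.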

Emotions are probably more important to pursue than a model of the world, for LLMs and robots. Self-driving cars cannot anticipate that they will get hurt if they crumple, an anticipation that would push them to avoid errors. LLMs cannot know how much deepfakes hurt people, so they output as prompted. Computer vision cannot crackle, as an emotion, when filming injustice. When a smartphone falls, which may break the screen, the smartphone, with all its digital intelligence, does not prospect for a safe landing.

Even neurotechnology, for now, does not guarantee that the codes of impulses [or the configuration for some functions] can be copied and replicated elsewhere. LLMs and robots already have a couple of parallel qualifiers, but the human brain may remain unparalleled.