
Human Brain. Credit: NIH
How does AI cure a major disease? Or, how does AI solve the national debt problem without unbearable economic side effects? These questions are another way of asking: how does AI get to superintelligence?
AI is already excellent enough to generate accurate answers across knowledge areas, matching credible sources in its training data. This means it has a way to [say] tune self-attention into memory and pick what is right [or match patterns that are useful].
Now, if it is excellent at selection but cannot create something absolutely novel, what if the focus should not just be the ability to select but the stack from which it selects?
Simply, the [transformer] transport is really great, but something about classical memory limits how far the transport can go.
So, while classical memory is useful, it is too repetitive to allow enough thoroughfare for the observations that lead to solutions. Yes, classical memory is exceptional, but it allows for generation parallel to the original information, not for creative or innovative novelty.
The point is this: generative AI is excellent because it can select information from classical memory, but it is not original enough to solve major problems. What if the arrangement of data in classical memory is too repetitive for creativity? Human intelligence, for example, is creative and innovative, yet human memory is not as accurate as digital memory. Also, how will generalization become easier, such that AI can learn from less data, especially if there are already conceptual groups of similarities?
Human Memory
Conceptually, memory in the brain is obtained in sets of electrical and chemical signals. Sets are available in clusters of neurons. There are two kinds of sets for memory: thin sets and thick sets.
A thin set holds the most unique information about anything, while thick sets collect whatever is common between two or more thin sets. For example, a desk is a thick set. This means the basic features of a [certain type of] desk are obtained in that set. Most of the time, in interpreting a sensory input of a desk, the thick set is used. However, the unique features of a familiar desk are obtained in thin sets.
In simple terms, human memory has thin and thick sets. Thick sets collect whatever is common between two or more thin sets. This means there are specific configurations for memory in the brain.
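To make the thin/thick distinction concrete, here is a minimal sketch in Python, under the simplest possible reading: a thin set as the unique features of one specific item, and a thick set as whatever those thin sets share. All names and features here (ThinSet, thick_set_from, the desks) are invented for illustration, not a claimed mechanism.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ThinSet:
    """Holds the most unique information about one specific item."""
    label: str
    features: frozenset

def thick_set_from(*thin_sets):
    """A thick set collects whatever is common between two or more thin sets."""
    return frozenset.intersection(*(t.features for t in thin_sets))

desk_a = ThinSet("my office desk", frozenset({"flat top", "four legs", "drawer", "coffee stain"}))
desk_b = ThinSet("school desk", frozenset({"flat top", "four legs", "attached seat"}))

print(thick_set_from(desk_a, desk_b))  # the shared desk features: flat top, four legs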
When two or more are the same, they collect into a larger set, conceptually. Hence this organization. For example, say desk A has 11001122, desk B has 11001223, desk C ends with 1334, desk D with 1445, and so forth. While some other desks may have longer codes, similarities in their features make them defined by a specific electrical and chemical combination, such that any [such] desk [in the memory] in the brain has a similar arrangement.
Now, there would be collections for everything that is similar, especially for all the numbers that repeat. A few desks may form the basis of the thick set for [that type of] desk. However, thin sets form when there is a lot of familiarity with some [outstanding] components of a particular desk.
So, thick sets collect the similarities that every desk should have, while thin sets are for desks that are familiar enough, with specific features, for their thin sets to exist.
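As a rough illustration of the digit example, the positions where two desk codes agree can be read off directly; a short sketch, with the codes taken from the paragraph above and everything else assumed:

```python
# The desk codes from the example above; 8-digit strings standing in for
# electrical and chemical combinations. Everything here is illustrative.

desk_a = "11001122"
desk_b = "11001223"

def thick_template(code_x, code_y, wildcard="*"):
    """Keep digits that agree position by position; mark the rest as thin."""
    return "".join(x if x == y else wildcard for x, y in zip(code_x, code_y))

print(thick_template(desk_a, desk_b))  # 11001*2* -- shared positions form the thick set
```

The positions that differ stay specific to each desk's thin set; the shared template is what a thick set would collect.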
There are also different kinds of desks: those at schools, offices, certain organizations and so forth. Every kind of desk has a thick set, but it may sometimes be subsumed under an overall thick set.
Now, in general, humans can sketch a desk, a door, a bag, an automobile, a fan, a building and so on because thick sets are used to do so. Sketching a specific vehicle brand may be harder than sketching just any vehicle, because the thick set gives readily while the thin set requires familiarity and further transport.
Some thick sets also mesh in ways that make travel across them easier.
For example, bags are a thick set. But within that thick set, there might also be smaller thick sets like backpacks, handbags, traveling bags and so forth. Since all bags have storage areas, handles, zippers and partitions, those similarities may also mesh or, say, distill.
Simply, even if they are different, interrelated thick sets have overlapping features. They may blend, distill or mesh in some form. This means that thick sets, while collecting thin sets, may also have overlays of similarities across sets.
So, for a type of office desk, with the central thick set being 8 digits, and school desks being the same or fewer, some of the similarities between the thick sets may also overlap, so that when relays are in motion, they make their way more easily, making it possible to be creative or to link both up. Simply, the store is well prepared for creativity.
Another example: wheels, for vehicles, jetliners, bikes, carts and so forth. Wheel is a thick set, but it also allows for some blends, meshes or distillations wherever it is represented in different thick sets. So, even as tractor wheels are huge, dimensions, materials, color, shape, texture and so forth are similarities that collect into a larger mesh across the thick sets of wheels.
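A small sketch of that meshing idea, assuming the crudest reading: the wheel features as represented inside different vehicle thick sets, with the overlap collected into one larger wheel mesh. The feature strings are invented placeholders.

```python
# How "wheel" might appear inside different vehicle thick sets; the overlap
# is the larger wheel mesh described above.

wheel_in = {
    "car":     {"round", "rubber", "hub", "spokes", "small"},
    "tractor": {"round", "rubber", "hub", "treads", "huge"},
    "bike":    {"round", "rubber", "hub", "spokes", "thin"},
}

# The mesh: whatever every representation of a wheel shares.
wheel_mesh = set.intersection(*wheel_in.values())
print(wheel_mesh)  # round, rubber, hub -- shared across all wheel thick sets
```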
Thick sets are the reason that studying for exams is tough, or watching a lecture video to learn, while it is easier to screen a movie or understand the news. For exams, there is a necessity for new thin sets, as well as paving paths within thick sets or extricating some aspects. This is also what makes learning a new language tough as an adult.
News, movies, radio and social media often use thick sets for interpretation, so something like the wheel of a bicycle or the desk at an office can just use a thick set and proceed, rather than the details of a thin set or learning them as new, separate things. Thick sets are another reason that, after interpretation, it is easier to transport to emotional or feelings sets, making some of those [news, social media, movies and so forth] more affective.
Rote memorization is the making of thin sets and the new sequences [or paths] to them, which takes time and is easy to mix up. It is almost like trying to make human memory digitally accurate. Language and spelling use several thin sets but have thick sets as well, with certain pronunciations, spellings and so on. Language is often bundled into multimodal thick sets: say, the spelling of a school desk, the sound, the sight, the feel of it and so on. Though meshes of that thick set with others abound.
Human Intelligence
The advantages of human intelligence in the brain are the thick sets and their blends. This means that when memory is used for operational or improvement intelligence, it is easy for relays to get into thick sets as well as their meshes or blends.
Simply, human intelligence is how memory is used for an expected, desired or advantageous outcome. However, human intelligence is excellent because memory is also very accessible. So, relays are not going through too many repetitive thin sets for interpretations or for intelligence.
So, when thinking about a door, the common features are first made available from the thick set. Also, when exploring creativity and innovation, the thick set blends already prepare the ground, so that transport finds it easy to connect the dots or arrive at a new insight.
So, for the human brain, memory is already arranged for ease of access. Memory also picks up so many similarities that it is built almost for creativity, rather than for accuracy of all events.
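That retrieval order, thick set first and thin set only when unique detail is needed, can be sketched as a toy two-level lookup. The table layout and lookup rule here are assumptions for the sketch, not a claimed brain mechanism.

```python
# A toy two-level memory: common features in a "thick" table, unique
# features in a "thin" table.

memory = {
    "thick": {"door": {"rectangular", "hinges", "handle"}},
    "thin":  {"my front door": {"blue paint", "squeaky hinge"}},
}

def interpret(item, category, need_detail=False):
    # Interpretation reaches the thick set first: cheap, common features.
    features = set(memory["thick"].get(category, set()))
    if need_detail:
        # Only unique detail requires the longer trip into a thin set.
        features |= memory["thin"].get(item, set())
    return features

print(interpret("my front door", "door"))                   # thick set only
print(interpret("my front door", "door", need_detail=True)) # plus thin detail
```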
Quantum Storage
Quantum properties include superposition and entanglement. Superposition is the existence of multiple states at the same time for a particle, and entanglement is the correlation of two particles.
So, human brain storage can be remotely linked to this, such that storage can be in multiple states, like thin sets, thick sets and the blends between thick sets, in what can be described roughly as some superposition. [The desk as a thick set, a specific part as a thin set, with both operating simultaneously for some processes.] Then, thin and thick sets are often interlinked, in what can be described roughly as entanglement as well.
Now, one of the major use cases for quantum storage is to design new classical memory architectures, to ensure that every similarity between data patterns is collected.
Classical Memory from Quantum Storage
Research toward robotic superintelligence, world models and an AI scientist to cure diseases could hinge on storage. The goal is to collect data in different partitions: so, for example, about a bad cell, or a bad tissue, then the bad cell and the bad tissue together, then their environment, the good ones, and so forth.
Then ensure that whatever is common within one partition is collected. And then, separately, between that partition and another, so that their similarities are collected too.
Then, they can be used for training, then fine-tuning, then inference toward breakthroughs. The expectation is that, because patterns are already collected by storage before being used for training, those patterns bring out hidden or missing links. A minimal sketch of this partition-then-collect step follows.
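Here is that step under invented data: collect what is common within each partition, then what is common between partitions. Partition names and features are placeholders, not real biology.

```python
# Each set is the features observed in one hypothetical sample.

partitions = {
    "bad_cell":   [{"membrane", "nucleus", "mutation_X"},
                   {"membrane", "nucleus", "mutation_Y"}],
    "bad_tissue": [{"membrane", "inflammation"},
                   {"membrane", "fibrosis"}],
}

# Step 1: collect what is common within each partition.
within = {name: set.intersection(*records) for name, records in partitions.items()}
print(within)   # bad_cell -> {membrane, nucleus}; bad_tissue -> {membrane}

# Step 2: collect what is common between one partition and another.
between = within["bad_cell"] & within["bad_tissue"]
print(between)  # {membrane}
```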
For example, take explaining how the brain works: the patterns may show that electrical and chemical signals, not neurons, can explain brain states, at least conceptually.
How will it be possible to have several similar magnetic orientations on a track, sector or platter surface? Or, if that is not possible, how will other patterns of magnetic directions be made so that, instead of so much specificity per datum, say the text [audio, image or video] of a biological cell, it is possible to have one for all cells, where only certain of their orientations are at play, like directions for certain organelles, membranes and so forth?
If this is achieved, then there can be a general cell storage, with all the features but not the key specifics. So, instead of full details on types, locations, mitochondria, lysosomes or others, it is possible to have cell collected as a thick set, then good cell as a thick set and bad cell as a thick set, so that magnetic directions, so to speak, can result in partitions for what is common.
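In software terms, the nearest everyday analogue is deduplication: write the shared cell features once as a thick record, and let each specific cell store only what differs. A sketch under that assumption, with ordinary set operations standing in for the magnetic idea:

```python
# The shared cell features, written once, standing in for the thick set.
CELL_THICK_SET = {"membrane", "cytoplasm", "nucleus"}

def store_cell(full_features):
    """Store only the specifics; the thick set is implied for every cell."""
    return set(full_features) - CELL_THICK_SET

def load_cell(thin_record):
    """Recover the full description by recombining thin with thick."""
    return thin_record | CELL_THICK_SET

bad_cell = store_cell({"membrane", "cytoplasm", "nucleus", "mutation_X"})
print(bad_cell)             # {mutation_X} -- far less stored per cell
print(load_cell(bad_cell))  # the full feature set back
```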
This can be advanced to diseases, and if all the separations exist, it is possible to train a base model on them, explore what becomes of it, then iterate the model.
In principle, say a certain disease is to be solved. The first goal will be to sort all the components of the disease separately, so that within the respective components it is possible to collect what is common. Then layer one component with another and seek what is common, so that too is collected. And then a third, and so forth.
Then, with the overall similarities collection, train a base model on it. The first reason for this is to copy what the human brain does; the next reason is to reduce the repetitions and see what comes out, and what to make of it. A rough sketch of the layering follows.
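The layering reduces to folding components together one at a time and keeping the running overlap as the overall similarities collection. Component names and features below are invented placeholders, not real disease data.

```python
from functools import reduce

# Hypothetical disease components; each record is one observed feature set.
components = {
    "cells":   [{"glucose_uptake", "signal_A", "marker_1"},
                {"glucose_uptake", "signal_A"}],
    "tissues": [{"glucose_uptake", "signal_A", "swelling"}],
    "organs":  [{"glucose_uptake", "signal_A", "enlargement"}],
}

# Within each component, collect what is common.
common = {name: set.intersection(*recs) for name, recs in components.items()}

# Layer one component with another, then a third, keeping the running overlap.
overall = reduce(set.intersection, common.values())
print(overall)  # {glucose_uptake, signal_A} -- the overall similarities collection
```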
Even if the cure is not found, it is possible that the collected patterns would tell something different. Again, it also provides another direction of research, with storage, rather than continuously assuming that AGI is simply a deep learning problem. This will also be integral to the newly announced Genesis Mission for AI, which can shape how science problems are solved, as well as result in energy efficiency for data centers and progress toward sustainable energy.
New storage architecture for AI models, labs
The first prototype for this lab can be ready in less than 6 months, so if work begins by December 1, 2025, it is possible to have some progress by May 1, 2026.
The first months would be for designing the memory architecture, and the rest for building it for simple storage but practical cases, like solving a problem cell or tissue. This could also feed more into quantum computing research, toward wider application before 2029 or 2030.
There is a recent [November 21, 2025] story in New Scientist, Quantum computers need classical computing to be truly useful, stating that, “A vital ingredient for making quantum computers truly useful just might be conventional computers. That was the message from a gathering of researchers this month, which explained that classical computers are vital for controlling quantum computers, decoding the results of their calculations and even developing new techniques for manufacturing quantum computers in future.”
There is a recent [November 20, 2025] story on Reuters, IBM, Cisco outline plans for networks of quantum computers by early 2030s, stating that, “Quantum computers hold the promise of solving problems in physics, chemistry and computer security that would take existing computers thousands of years. But they can be error-prone and making a reliable one is a challenge that IBM, Alphabet’s Google and others are pursuing. IBM is seeking to have an operational machine by 2029. Earlier this year, Cisco opened a lab to investigate how to connect quantum machines.”
“The challenge begins with a problem: Quantum computers like IBM’s sit in massive cryogenic tanks that get so cold that atoms barely move.”
