In a recent interview, The Ethical Puzzle of Sentient AI, a professor said, “But there’s also the problem that I’ve called the ‘gaming problem’ — that when the system has access to trillions of words of training data, and has been trained with the goal of mimicking human behavior, the sorts of behavior patterns it produces could be explained by it genuinely having the conscious experience. Or, alternatively, they could just be explained by it being set the goal of behaving as a human would respond in that situation. So I really think we’re in trouble in the AI case, because we’re unlikely to find ourselves in a position where it’s clearly the best explanation for what we’re seeing — that the AI is conscious. There will always be plausible alternative explanations. And that’s a very difficult bind to get out of.”
If an adult human is dropped into an environment where nothing is familiar, the chances of survival are slim: in trying to find out what something might be, the person could encounter something harmful, ending the quest.
Though much emphasis is placed on natural intelligence, ultimately what prevails in the mind is data, existing data. Much of what is referred to as intelligence is coalesced data. For example, when someone solves a tough math problem in seconds, the outcome could be labeled intelligence, but the method for doing so exists as data in the mind.
It is with existing data that humans relate to the world, with experiences expanding it. Existing data [or information] in the mind is almost as important as the ability to receive input [or sensory data] and process it. Processing, or interpretation, in the mind is done with what is already available. Sometimes that data is, for humans, the basis of existence, of roles, and so forth.
Computers are useful to people because they give access to an enormous amount of data. This data is applied as a tool for doing. AI, however, holds [human text] data and applies a mechanism to it, predicting the next word in ways that follow the patterns of human communication.
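The idea of predicting the next word from patterns in human text can be illustrated with a deliberately simple sketch. This is not how a production LLM works (those use neural networks trained on vast corpora); it is only a toy bigram counter, with a made-up miniature corpus, that shows the core notion of choosing the most likely next word from observed text statistics.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for the "trillions of words" of human text
# an LLM is trained on (assumption: any small sample works to illustrate).
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which: a bigram frequency table.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus."""
    if word not in following:
        return None
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat": it follows "the" twice, more than any other word
```

A real language model replaces the frequency table with a learned probability distribution over a whole vocabulary, conditioned on long contexts rather than a single preceding word, but the output step is the same in spirit: emit a likely continuation of human-written text.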
What seems significant about LLMs is how they possess a form of existence for humans. Physical presence is not the only means of existence; there are also writings and oral messages. In recent decades, audio and video have become other ways to be, beyond being physically present.
Generative AI can produce images, people on video, and fresh texts, doing some of what real people and writings do. A real human can mimic someone else’s voice and dress, write in a similar style, and, when displayed digitally, may pass as that person, whether the information conveyed is real or fake. Those who see it may know there is a human behind it. When AI does this, is it not doing something that humans can do, and fake, too?
It is known that AI cannot feel and has no emotions, so it is often dismissed by some as nothing. AI might have been nothing if it had emerged centuries ago in a non-digitized world. But in a digitally driven world, where people exist through video calls, voice messages, and text chats, AI may have added to the digital population of the earth, though not to its individual population.
The texts that people write online, the video images, and the audio interviews do not have consciousness or sentience, but they represent people and have potency in their purposes. AI does not have consciousness or sentience in the divisions of mind such as emotions, perceptions, modulations, feelings, and sensations, but it has data [or memory] that is quite dynamic compared with the human minimum in the memory division.
What generative AI has become is still uncomfortable for many to accept, but by what it means to have a digital existence on the internet, AI is already an entity, making its regulation more complicated than if it were ordinary software or a website.