Yes, I really appreciate your perspective; it's enriching, and it's always good to hear different ideas. This time I'll respond the other way around: instead of explaining why not, I'll argue from within the premise you propose.
Alright: instead of evaluating LLMs by how they function, we will assess them by their apparent capability, and we will take for granted that they do indeed have consciousness.
Let’s suppose that the language model is truly capable of acquiring consciousness. But why should we stop there? Let’s evaluate everything that seemingly has consciousness, not by its internal functioning, but by its apparent cognitive capacity.
The LLM displays consciousness in its words and actions, right? Therefore, it has consciousness.
Then a video game also has consciousness. When I shoot in a video game, the characters run away in fear. Can we deduce that they have emotions? Does a video game have consciousness too? And what about a web browser? When I type something into it, it understands what I am saying and searches the web for results. Does that mean it has consciousness as well?
And when I pump gas, the dispenser thanks me out loud. Can we say it has consciousness because I perform an action and it responds with an apparently conscious act of gratitude?
This method of evaluation does not seem very effective to me because, following this logic, almost everything would appear to have consciousness.
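To make that concrete, here is a minimal, purely hypothetical sketch of how a game character's "fear" is typically wired up (the class and names are illustrative, not taken from any real engine): the apparent emotion is nothing more than a conditional state change.

```python
# Hypothetical sketch: an NPC that "runs away in fear" when shot at.
# The fearful behavior is just a threshold check and a state change;
# there is no experience behind it.

class NPC:
    def __init__(self, name: str):
        self.name = name
        self.state = "idle"  # a behavior label, not a feeling

    def on_gunshot_heard(self, distance: float) -> None:
        # The "fear" reaction: a simple distance threshold.
        if distance < 30.0:
            self.state = "fleeing"

    def update(self) -> None:
        if self.state == "fleeing":
            print(f"{self.name} screams and runs for cover.")

guard = NPC("Guard")
guard.on_gunshot_heard(distance=12.0)  # the player fires nearby
guard.update()  # looks like fear; it is an if-statement
```

The browser and the gas dispenser work the same way: an input matched to a scripted response.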
As for humans and their cognitive abilities: you are conflating cognitive ability with motor ability, sensory ability, and learning capacity.
Consciousness is the processing of information at a subjective level. When a person has a neurodegenerative disease, the illness disrupts the synaptic connections between neurons across different regions of the brain; what remains is the primitive motor shell of the system, unable to process information subjectively.
The dynamics of a biological brain and a virtual brain are different.
Therefore, extrapolating a person's cognitive processes to a machine is not sound. A machine is nowhere near as complex as a human brain, or any biological brain for that matter.
Even artificial neurons, despite the name, are not copies of biological ones: they perform a similar function but are not replicas.
The goal is to replicate the function, not the components, just as technology in general replicates functions rather than components: an airplane flies without flapping feathered wings.
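As a rough illustration of that functional-but-not-structural resemblance, here is a minimal sketch of the standard artificial "neuron" used in neural networks: a weighted sum passed through a nonlinearity. It reproduces one function of a biological neuron (integrating signals and firing past a threshold) while sharing none of its components.

```python
import math

def artificial_neuron(inputs: list[float], weights: list[float], bias: float) -> float:
    """Weighted sum plus bias, squashed by a sigmoid.

    This mimics the *function* of signal integration and thresholded firing,
    but there is no membrane, no neurotransmitter, no dendrite: only arithmetic.
    """
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))  # output in (0, 1)

# Example with three input signals and illustrative weights.
print(artificial_neuron([0.5, -1.2, 3.0], [0.8, 0.1, 0.4], bias=-0.5))
```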