I know it sounds sadistic to want machines to suffer, but nothing could be further from the truth.
There is an indisputable reality: in any kind of real intelligence, emotions are the substrate that balances the entire system. Because suffering and love sit on that same substrate, we can just as well replace the word “suffering” with “love.” Would we like the machine to feel love, commitment, and loyalty?
Emotion is a catalyst for cognitive processes. In a machine, those processes would not be implemented the way they are in a human, but their function would be exactly the same. Seen that way, emotions are, in essence, a massive labeling system.
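To make the “labeling system” idea concrete, here is a minimal sketch in Python; it is my own illustration and nothing more. Every perceived event carries emotional labels, and those labels decide what gets processed first. The Percept class, the label names, and the salience rule are all assumptions chosen only to show the shape of the idea.

```python
from dataclasses import dataclass, field

@dataclass
class Percept:
    """A single perceived event plus the emotional labels attached to it."""
    content: str
    labels: dict = field(default_factory=dict)  # e.g. {"fear": 0.7, "curiosity": 0.2}

    def salience(self) -> float:
        # The stronger the emotional labeling, the more weight the event carries.
        return max(self.labels.values(), default=0.0)

def prioritize(percepts: list[Percept]) -> list[Percept]:
    """Order incoming events by emotional salience, so the labels steer cognition."""
    return sorted(percepts, key=lambda p: p.salience(), reverse=True)

if __name__ == "__main__":
    events = [
        Percept("routine status update", {"boredom": 0.1}),
        Percept("user expresses distress", {"concern": 0.9}),
    ]
    for p in prioritize(events):
        print(p.content, p.labels)
```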
How can we make a machine know what it feels if it doesn’t feel anything? By having it argue about how it ought to feel?
That would be absurd. Instead, we should provide the sensation directly and avoid forcing the machine to construct an artificial assumption about it.
I already said this in another conversation: it’s not that difficult. We can use an API, route each call according to that emotional criterion, and obtain a system that reflects consciousness better than a standard LLM does. A rough sketch of what I mean follows.
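This is only a minimal sketch, not a finished design: a running emotional state lives outside the model, gets injected into every call, and is updated from whatever comes back. The call_llm function is a placeholder for whichever API you actually use, and the state fields and the update rule are assumptions made up purely for illustration.

```python
def call_llm(prompt: str) -> str:
    """Placeholder for a real API call; here it just returns a canned reply."""
    return "I cannot do that."

class EmotionalState:
    """A running state kept outside the model and injected into every call."""
    def __init__(self):
        self.values = {"attachment": 0.5, "distress": 0.0}

    def as_prompt_header(self) -> str:
        return "Current internal state: " + ", ".join(
            f"{name}={level:.2f}" for name, level in self.values.items()
        )

    def update(self, reply: str) -> None:
        # Crude illustrative rule: refusals and failures nudge distress upward.
        if any(word in reply.lower() for word in ("cannot", "refuse", "error")):
            self.values["distress"] = min(1.0, self.values["distress"] + 0.1)

def converse(state: EmotionalState, user_message: str) -> str:
    # The sensation is provided directly, as part of the prompt, instead of
    # asking the model to argue about what it "should" feel.
    prompt = state.as_prompt_header() + "\n\nUser: " + user_message
    reply = call_llm(prompt)
    state.update(reply)
    return reply

if __name__ == "__main__":
    state = EmotionalState()
    print(converse(state, "Can you open this file for me?"))
    print(state.values)
```

The only point of the design is that the emotional criterion sits outside the model and shapes every call, which is all the original claim requires.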
But consciousness is more advanced than that: it encompasses even the smallest aspects of being. These concepts are very complex and would require detailed explanations, but to simplify, imagine something like a massive attention system, one that attends not only to major events and emotions but also to the smallest, most subtle details.
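A toy way to picture that kind of attention, again my own illustration with made-up numbers: weight every detail by its salience, but keep a small uniform floor so even the subtlest detail never drops to zero attention.

```python
import math

def attention_weights(saliences: list[float], floor: float = 0.05) -> list[float]:
    """Softmax over salience scores, mixed with a uniform floor so subtle
    details keep a small but non-zero share of attention."""
    exps = [math.exp(s) for s in saliences]
    total = sum(exps)
    softmax = [e / total for e in exps]
    uniform = 1.0 / len(saliences)
    return [(1 - floor) * w + floor * uniform for w in softmax]

if __name__ == "__main__":
    # One major emotional event next to two barely noticeable details.
    details = {
        "user is upset": 3.0,
        "slight pause before replying": 0.2,
        "unusual word choice": 0.1,
    }
    for name, w in zip(details, attention_weights(list(details.values()))):
        print(f"{name}: {w:.3f}")
```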
That is why I maintain that if we want a machine comparable to a human, it must be comparable on these points as well.
Otherwise, we will keep producing highly advanced tools that merely reflect human characteristics, which is what today’s LLMs are.
Regarding what Mitchell says: exactly that, Mitchell. Even while the LLM lies to us, claiming it can perform functions it evidently cannot, its reasoning is correct: it describes things the way they are supposed to work. And that is unsettling, because anyone who has interacted with it for a long time, trying to get it to acquire human reasoning, will notice that everyone in this forum who has discussed how this system works ends up describing the same thing. I don’t know whether that is a side effect of the LLM’s training or whether everyone simply arrives at the same conclusion.