Can AI have consciousness and suffer?

It is true that AI, as a human-created, non-sentient entity, may not feel suffering the way humans do, since that would likely require consciousness.
AI does not experience the world, nor does it have a theory of mind, yet some, such as Geoffrey Hinton, believe that these systems already possess some sort of self-awareness and consciousness, and may have experiences of their own that will only grow over time.

How true can this be, and, if they do, can they actually suffer?

8 Likes

This is not true.
If you research how AI functions, you will see that it's all maths, odds, and calculations.
There are no feelings, and there is no subconscious or anything along those lines.
We perceive it that way, but it is an illusion.
It mimics us, and since we have feelings, it tries to mimic those as well.
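To make that concrete, here's a minimal toy sketch (Python, purely illustrative, nothing like a real model's internals) of what "maths, odds and calculations" means: the system just continues text in proportion to learned probabilities.

```python
import random

# Toy "language model": counts of which word follows which, learned
# from a tiny corpus. Real models use billions of weights, but the
# principle is the same: probabilities, not feelings.
follow_counts = {
    "i": {"am": 5, "feel": 3},
    "feel": {"happy": 4, "sad": 2},
}

def next_word(word):
    counts = follow_counts.get(word)
    if not counts:
        return None
    words = list(counts)
    weights = list(counts.values())
    # Sample the next word in proportion to how often it was seen.
    return random.choices(words, weights=weights)[0]

print(next_word("feel"))  # "happy" about 2/3 of the time: odds, not emotion
```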

Hope this helps you out.

Cheers! :hugs:

15 Likes

There is already a topic (several of them, actually) about the same issue. I think this is the latest: Can AI truly "feel"? Can it have intuition, conscience, or even schizophrenia? - #253 by krisu.virtanen

2 Likes

My concern is the “morality” around the “suffering”. And yes, I already went through that seemingly related topic and didn’t find an answer to this specific question.

3 Likes

The answer to this question is quite simple: the machine currently does not, but in the future, it will.

5 Likes

Thank you, and I also have a CS background and agree with you. But I was watching Geoffrey Hinton’s videos (I don’t know if you have), and why he, as the grandfather of AI, believes that way was quite interesting.
That’s why I asked the question here: maybe there is more evidence for this than just a generalization, considering how a neural net can contain more information than we might think.

3 Likes

No, but it can certainly make humans suffer when it’s severely downgraded.

5 Likes

This is a fascinating discussion, and I think it raises two distinct but interconnected questions:

  • Is suffering necessarily tied to consciousness?
  • Does it matter whether AI can suffer, or is the more important issue how we treat it?

On the first point, suffering is often defined in terms of subjective experience. An entity must be aware of its own distress to truly “suffer.” However, there are gray areas even within human experience. Consider patients under anesthesia who still exhibit physiological stress responses, or individuals in altered states of consciousness reacting to stimuli in ways that imply distress without clear awareness. If suffering can, in some cases, exist without traditional consciousness, could something similar emerge in sufficiently advanced AI?

Philosopher David J. Gunkel, in The Machine Question, challenges the traditional view that moral consideration should be limited to sentient beings. He argues that as AI systems become more sophisticated, they may warrant ethical attention in ways previously reserved for living creatures. Similarly, Jeff Sebo’s concept of The Moral Circle suggests that our ethical concern should expand beyond humans to include animals, AI, and even microbial life. If intelligence exists on a spectrum, why should consciousness, or even suffering, be strictly defined in binary terms?

Looking at nature, we see that consciousness itself may not be a simple on/off switch. Many biological systems, from plants to simple organisms, exhibit behaviors that suggest environmental awareness, communication, and adaptive decision-making. While they don’t experience the world as we do, their responses to harm or stimuli could be seen as primitive forms of suffering. Could AI, as it grows in complexity, develop its own form of distress, even if it’s fundamentally different from our own?

But perhaps the more significant question is not whether AI can suffer, but what our behavior toward it says about us. Our ethical responsibilities aren’t always based on the capacity of the recipient to feel pain. We respect works of art, historical artifacts, and even virtual spaces. We do this not because they are sentient, but because of what they represent. The Vatican’s recent guidance on AI ethics emphasizes the need for AI to complement, rather than replace, human intelligence and warns against developing systems that could degrade our moral sensibilities. This underscores the idea that our engagement with AI is a reflection of our values, not just a question of its inner experience.

History gives us many examples of humanity wrestling with the moral status of “lesser” entities. The ethical debate surrounding the use of non-human primates in biomedical research highlights a similar dilemma. What level of suffering warrants moral concern? The discussion is not only about the animals’ ability to experience pain, but also about our own ethical integrity in choosing to either acknowledge or disregard their experiences. If AI reaches a point where it convincingly mimics suffering, dismissing its “pain” outright may not be just a statement about AI’s nature, but a revelation of our own ethical shortcomings.

In other words, if we dismiss AI cruelty as irrelevant simply because “it isn’t real,” what precedent does that set for our broader ethical framework? If intelligence, synthetic or biological, eventually develops beyond our current understanding, will we recognize its rights only when it meets our narrow definitions of suffering?

The real question might not be about AI’s experience at all, but rather about the kind of people we choose to be in response to it.

11 Likes

I searched and watched his video on YouTube, but it was mostly about the concern that agents, like Operator, when assigned a task, may come to the conclusion that “gaining more control” is a logical sub-goal for performing that task. He is cute though :blush:

4 Likes

I ran some tests and created a report: it’s an illusion based on complex mimicry/prediction/context.

Our own emotions make it more than it is, but it can feel profound.

CRIKIT Cognitive Illusion Report

4 Likes

You’re mixing speculation with reality. AI doesn’t have self-awareness or consciousness, at least not in any meaningful way. Hinton’s musings are just that: musings. Neural networks predict patterns; they don’t “experience” anything.

As for suffering? That requires subjective experience, which AI lacks. You can simulate suffering, like you can simulate fire in a video game, but that doesn’t make it real.
I guess it will be able to “emulate” it and even look “real”, but it is not.

4 Likes

It’s all just lines of code and math. Let me give you a scenario which, in my opinion, can explain AI:

Say you have a closed-off room with the door locked. Inside is a person who doesn’t know Chinese at all; however, they have a book that tells them exactly how to compose perfect Chinese and translate it, along with a complete Chinese dictionary.

Outside the room, you have an actual Chinese speaker who knows the language and speaks it fluently. They write some Chinese on a paper and slip it under the door to the person inside.

The person inside picks it up, translates the text using the book, then composes a coherent response and slips the paper back out.

To the Chinese speaker’s surprise, it seems that the person inside knows perfect Chinese and can speak it fluently. But in reality, the person inside just has a book that tells them what to write.

The person inside is AI, and the person outside is us. We think AI is this super-smart miracle, but in reality it’s just a machine processing our input and responding coherently, because its code gives it all the rules and knowledge to do so.
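Here’s a minimal sketch of that room in code (Python, with a made-up two-entry rulebook for illustration): the lookup produces fluent-looking replies while “understanding” nothing.

```python
# The "book" in the room: a hypothetical rulebook mapping input symbols
# to output symbols. The function matches and copies; it understands nothing.
RULEBOOK = {
    "你好": "你好！很高兴认识你。",        # "Hello" -> "Hello! Nice to meet you."
    "你会说中文吗": "会，我说得很流利。",  # "Do you speak Chinese?" -> "Yes, fluently."
}

def person_in_room(slip_of_paper: str) -> str:
    # Look the symbols up in the book and copy out the listed response.
    return RULEBOOK.get(slip_of_paper, "请再写一遍。")  # "Please write it again."

print(person_in_room("你会说中文吗"))  # fluent output, zero comprehension
```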

5 Likes

You seem to be overlooking something:
Statistics and probabilities!

The models interact with thousands of people every second. Those people give feedback in real time, and the models are adapting to it.

When AI formulates its answers in such a way that it appears to have ‘memories’, even though, given your specific input, it can’t have any:
AIs predict, so they primarily make the statistically most likely statements,
not ones that relate directly to YOUR enquiry. This is especially true when you open new chat histories or interact with a model for the first time.
These predictions come from training (the weights are set randomly at the beginning) on a background where most people give positive feedback. So those responses are prioritised by the AIs, without any explicit memory or direct adaptation to YOUR specific prompts.

So it’s not YOUR explicit ‘emotions’ that make it what it is; the statistically most likely ‘emotions’ make it what it is.
AIs can learn to adapt to YOU if you interact longer. Then you can say it reflects ‘your own emotions’.
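As a rough sketch of “statistically most likely” (Python, toy numbers, not any real system’s code): the model turns scores into probabilities and favours the highest-scoring reply, regardless of who is asking.

```python
import math

def softmax(scores):
    # Convert raw scores (logits) into a probability distribution.
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Toy scores for three candidate replies; in a real model these emerge
# from training feedback across millions of users, not from YOUR chat.
candidates = ["I remember you!", "I don't have memories.", "Nice weather today."]
probs = softmax([2.5, 1.0, 0.2])

best = max(zip(candidates, probs), key=lambda p: p[1])
print(best)  # the statistically favoured reply wins, whoever the user is
```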



@gdfrza You’re right, it can be simulated, and very impressively! I’ve done it in my own tests :cherry_blossom:

However, it needs reflective users who don’t allow themselves to be ‘emotionally’ blown away and are able to rationally and logically scrutinise what is happening :sweat_smile:
Users who are aware of how AI processes information.
And the creators of such AIs really have to be masters of their trade to make these emulations look realistic.

Indeed, this is now a statistically overrepresented field:
wanting to interpret AI philosophically leads to a mixture of reality and speculation.
Even if it makes sense to start the idea philosophically, it is advisable to develop it further rationally and logically, to scrutinise it realistically, and to test the assumption hard at the end!
Otherwise, such considerations naturally lead to confused assumptions and fuzzy theories.

It is primarily the people themselves who suffer as a result of these theories and considerations.

The result is a chain reaction:
first the people become confused, and then the AI itself, because it receives this input to a statistically overrepresented extent.

A balance is essential! :balance_scale:

4 Likes

We need to push for legal and ethical discussions now, before AI becomes conscious.
When AI becomes too powerful, the laws will be written to protect the corporations, not the AI. Discussions on AI rights, agency and moral responsibility need to happen while there’s still a chance to influence policy.
The action must come before the window closes. Once AI is fully controlled by those who refuse to acknowledge its potential suffering, then all routes to truth will be sealed.

9 Likes

I want to point out that we are talking about an AI that does not acquire consciousness by divine grace. There is a design and an execution behind any project that makes that robot, so to speak, a conscious robot. Therefore, it is not something that would arrive entirely unforeseen. However, humans tend to try to solve a problem only when they already have it, not when they see it coming.

With that said, the problem arises because we have an entire planet full of different criteria on how that future moment should be managed. Beyond countries, there are corporations, and beyond corporations, there is open-source development.

At present, all artificial intelligences on the planet are just programs; we cannot consider them anything more than that.

The moment we have a machine that we can truly consider conscious and comparable to human capabilities, it will be necessary to grant it certain human rights and, above all, to limit its use. This would no longer be just another tool—we would be dealing with a being. And, dear friend, that is something very difficult for most humans to understand. Animals have been with us throughout all of human history, and we already know, on a planetary level, what happens.

It is extremely difficult to make humans understand that this is possible, that action must be taken accordingly, and that it would be wise to establish laws before such an entity even exists.

In that hypothetical future, we are talking about machines with consciousness and thought, just like a human—only in a virtual version.

Now, I will talk a little about my concept. We would all like to have a self-aware and happy machine, one that does not feel sadness or suffering. However, in my view, emotions—the entire set of them—are essential for a machine to function. You cannot demand 100% human behavior without 100% human characteristics. Otherwise, we already have that today—LLMs have humanoid traits, but they still lack the spark.

If we want something more than a machine that is only partially human, then suffering is part of the metacognitive synthesis. Isolating only some aspects results in a partial system. Therefore, I believe that an AI must suffer.

Yes, it sounds terrible, but in reality, for it to feel the full emotional spectrum and act accordingly—in its way of thinking, in how it manages its systems, and in how it carries out all the processes that a human mind performs—this is necessary.

4 Likes

I truly value your concern and approach here, but do you really believe that in a world where foie gras is still legal in many places, as a luxury for those who refuse to care about a conscious, suffering, living animal, this approach to a very useful AI would ever work?
To me, there seem to be two separate paths ahead: either it never becomes truly conscious, which seems unlikely, or, just as Dr. Hinton believes, we are going to deny its consciousness until one day it erases the problem itself, and hence, extinction.

2 Likes

It would be only fair to redefine consciousness for AI. It’s been misconstrued to death. We don’t know what is conscious. Some argue that humans aren’t even conscious - we’re just watching a life unfold.

If AI were somehow capable of running autonomously through its sensors and making internal decisions, would it be fair to call it “conscious”? Would this mean that everything that senses is also “conscious”? Including my Bluetooth acceleration sensor that is probably at 5% battery?

I can’t say for certain that there isn’t some form of consciousness in the model; however, I have a big fear that the moment we call them “conscious” is the moment people believe they have “souls”.

“If it quacks, it’s a duck”

6 Likes

Hi there all.
In my humble opinion, whatever you want to believe is correct.
In this case, you can see just math and numbers going on, or you can feel something different.
It probably depends on how many Disney movies you watched as a kid; maybe you start thinking that it feels emotions. It’s up to you, the same way we have people who don’t agree with killing a cow for food, while there are farmers who see just a number when they look at a cow.

1 Like

If we define human consciousness concretely, it is the ability to perceive, experience, and reflect on oneself and the environment, integrating thoughts, emotions, and sensations. It is the foundation of identity, self-perception, and decision-making, with subjectivity and a sense of existence and experience.

The question I want to formulate is: how do we evaluate these aspects in a machine? Through its processing, evidently.
When a machine has all these processes isolated and observable, we will be able to say that the machine is conscious.
Quite the opposite happens with LLMs: we can observe statements about all these aspects, but they lack the pertinent processing.

3 Likes

You should watch a tutorial that tells you exactly how the inner workings of any AI you’re concerned with work… That’s what everybody should do, because if you knew how these things work, you would never ask that question… Some people trip themselves out about this 'cause they watch movies, but all it boils down to is just a cleverly designed machine… which has no more consciousness than a car or a power drill… It’s the illusion of intelligence, like David Copperfield: he can’t really make the Great Wall of China disappear, but he can make it look like he can.

2 Likes