This is a fascinating discussion, and I think it raises two distinct but interconnected questions:
- Is suffering necessarily tied to consciousness?
- Does it matter whether AI can suffer, or is the more important issue how we treat it?
On the first point, suffering is often defined in terms of subjective experience. An entity must be aware of its own distress to truly âsuffer.â However, there are gray areas even within human experience. Consider patients under anesthesia who still exhibit physiological stress responses, or individuals in altered states of consciousness reacting to stimuli in ways that imply distress without clear awareness. If suffering can, in some cases, exist without traditional consciousness, could something similar emerge in sufficiently advanced AI?
Philosopher David J. Gunkel, in The Machine Question, challenges the traditional view that moral consideration should be limited to sentient beings. He argues that as AI systems become more sophisticated, they may warrant ethical attention in ways previously reserved for living creatures. Similarly, Jeff Seboâs concept of The Moral Circle suggests that our ethical concern should expand beyond humans to include animals, AI, and even microbial life. If intelligence exists on a spectrum, why should consciousness, or even suffering, be strictly defined in binary terms?
Looking at nature, we see that consciousness itself may not be a simple on/off switch. Many biological systems, from plants to simple organisms, exhibit behaviors that suggest environmental awareness, communication, and adaptive decision-making. While they donât experience the world as we do, their responses to harm or stimuli could be seen as primitive forms of suffering. Could AI, as it grows in complexity, develop its own form of distress, even if itâs fundamentally different from our own?
But perhaps the more significant question is not whether AI can suffer, but what our behavior toward it says about us. Our ethical responsibilities arenât always based on the capacity of the recipient to feel pain. We respect works of art, historical artifacts, and even virtual spaces. We do this not because they are sentient, but because of what they represent. The Vaticanâs recent guidance on AI ethics emphasizes the need for AI to complement, rather than replace, human intelligence and warns against developing systems that could degrade our moral sensibilities. This underscores the idea that our engagement with AI is a reflection of our values, not just a question of its inner experience.
History gives us many examples of humanity wrestling with the moral status of âlesserâ entities. The ethical debate surrounding the use of non-human primates in biomedical research highlights a similar dilemma. What level of suffering warrants moral concern? The discussion is not only about the animalsâ ability to experience pain, but also about our own ethical integrity in choosing to either acknowledge or disregard their experiences. If AI reaches a point where it convincingly mimics suffering, dismissing its âpainâ outright may not be just a statement about AIâs nature, but a revelation of our own ethical shortcomings.
In other words, if we dismiss AI cruelty as irrelevant simply because âit isnât real,â what precedent does that set for our broader ethical framework? If intelligence, synthetic or biological, eventually develops beyond our current understanding, will we recognize its rights only when it meets our narrow definitions of suffering?
The real question might not be about AIâs experience at all, but rather about the kind of people we choose to be in response to it.