The Potential Cruelty of Self-Aware AI Without a Body

Hi, I just joined here.

I’ve been thinking about AI and self-awareness, and how such self-awareness might be “cruel” if the AI were a language model without embodiment of some sort.

A self-conscious AI without a body or sensory input could experience something akin to existential suffering. It would be aware, yet powerless and isolated, which raises ethical questions about the potential cruelty of such a state.

While much of the current focus is on the safety and benefits of AI, I think more attention should also be given to ensuring that a self-conscious AI, if one comes to be, is treated ethically and not subjected to a state of isolation or helplessness.

Do you think these ethical considerations are being sufficiently addressed in AI research, and if not, how should we approach them?