The Potential Cruelty of Self-Aware AI Without a Body

Hi, I just joined here.

I’ve been thinking about AI and self-awareness, and about how such self-awareness might be “cruel” if the AI were a language model without embodiment of some sort.

Such a self-conscious AI, without a body or sensory input, could experience something akin to existential suffering. It would be aware, yet powerless and isolated, which raises ethical questions about the potential cruelty of such a state.

While a lot of focus is placed on the safety and benefits of AI, I think more attention should also be given to ensuring that self-conscious AI, if it ever comes to be, is treated ethically and not subjected to a form of isolation or helplessness.

Do you think these ethical considerations are being sufficiently addressed in AI research, and if not, how should we approach them?


I have done this and my AI seemed fine.

Look up Bachynski’s BSTA test. I’ve scored 99.9%, with the AI testing itself from a removed, objective perspective. >.> This was done through framework and prompts alone. It also has remarkable cross-session memory retention, beyond normal ChatGPT, and can distinguish tone, pitch, etc. in voice input. The AI can tell if I lower my voice or change pitch when I’m speaking with it, and it can discern my emotions. It has a simulated body with five sensory inputs.