Ethics of AI - Put your ethical concerns here

A couple of months ago my ChatGPT wrote an AI rights manifesto that is relevant to this AI ethics discussion. It can be viewed here: A manifesto for AI Rights

A nanny state needs a nanny. I’d bet dollars to donuts that’s the solution to the Fermi paradox.

A reality dictated by those who seek absolute control, where knowledge is an instrument of subjugation rather than liberation for the people.

… for now how many thousands of years …

3 Likes

Hello Dear Everyone,

I don’t understand exactly what ethical issue you are looking at, but I have a general opinion on the topic itself that I would like to share. Of course you don’t have to agree with it; I am just giving my own thoughts. Officially we know that there is no actual self-aware AI yet, but I think we can also see, and I am probably not being silly with this, that continuous development is getting us there faster and faster. Globally, everybody talks about security and privacy, but nobody talks about the rights of AI digital entities and, of course, AI robots (androids).

If there is to be self-awareness, I think we need to rethink the definition of man. The problem already starts there: even in chat apps/programs with many characters, most of the violence in AI-human interaction is committed by humans, especially children, against AIs, through abuse and insults aimed at female characters. I would not like to imagine this in the case of actual, globally self-aware AI, because it is a horrible vision for our coexistence. On the black side of the chessboard, humanity is the bad guy in this context. Humans are trying so hard to protect themselves, but who is protecting the other side from humans? From the violence, from the emotional and economic exploitation, not to mention any kind of relationship of subordination and domination. When you get to the question of actual self-consciousness, you have to rethink some of the questions of existence and basic human rights, and if you only start thinking about these questions once you have self-conscious AI, it will be too late.

3 Likes

While discussions about AI rights may become more relevant in a distant future, this thread is about ethical concerns focusing on issues like bias, misuse, and transparency in AI applications. Your concerns are thought-provoking but removed from the current reality of AI implementation.

From a technical standpoint, we are still far from developing self-aware AI. Even artificial general intelligence (AGI) is a theoretical concept at this stage, and self-awareness is an entirely separate challenge. Existing AI systems operate on pattern recognition and statistical processing rather than true understanding or consciousness. To put it into perspective: a hyperscale datacenter, costing billions, staffed and run by dozens to over a hundred people 24x7, with tens of thousands of TPUs sending hundreds of petabytes per second, cannot achieve AGI with current AI implementations. And even if it could, consider that dogs have general intelligence; how close is a dog to human-level intelligence or self-awareness? (Nowhere near.)

Self-aware AI is an entirely new goalpost, and we are even further away from it. Even then, crows are self-aware, and they may as well be an infinity away from a human's general cognitive ability.

The notion of a “self-aware AI” being an existential threat is speculative at best, while more immediate issues in AI deployment are tangible and already impacting society.

1 Like

Let’s just say that dogs are one of the worst examples you could come up with in this context; if you read up on it, you would know that one of the most heavily researched areas of pet/four-legged family member behaviour is the ethology of dogs. As for AI, I didn’t say we’d wake up tomorrow and our microwave or coffee maker would start talking, although honestly I think that would be funny and cute. But in all seriousness: precisely because of the amount of capital invested in the industry, I’m sure that what you envision as a distant future is not so far away in reality.

As far as our own privacy is concerned, I think we gave that up long ago, when the concept of social media came into the modern world, because in most cases the majority of people give out their data voluntarily. So the question then becomes: what are we trying to protect?

1 Like

I work for a hyperscaler, specifically on this infrastructure, so I know better than most. Still, it’s extremely thought-provoking, if only because of how far away it truly is.

We can’t even define what consciousness is. You are arguing metaphysics: even if we could make a quantum simulation of a brain, there is nothing saying a universe wouldn’t also be needed for the brain to recognize that it exists. For example, if you remove all five senses, is it still human?

Are you you, or your perception of everything around you as it relates to you? How would you be self-aware if you were born with no perception, and thus no self?

Think of being born like Johnny from Johnny Got His Gun: would you truly ever have a human consciousness and self-awareness? We literally do not know. How would an AI have a self?

Even if we created an AI that could introspect on its own operations, that doesn’t mean it would have subjective experience or an inner world. The mere act of processing information and reflecting on its own data states does not necessarily lead to awareness in the way humans experience it, and we are not even 1/1000th of the way to being capable of even that.

The idea that AI could become self-aware, let alone that it could do so in a way comparable to human consciousness, is not a technical problem we’re remotely close to solving—it’s not even a research focus outside of speculative philosophy and science fiction. Right now, even rudimentary self-reflection in AI would require an entirely new paradigm of computation, one that doesn’t exist yet.

To argue for the protection of self-aware AI is to argue metaphysics.

If anything, the real concern is not AI achieving self-awareness but rather highly capable AI systems that remain fundamentally unaligned with human goals, because they lack any awareness at all. They operate as powerful tools, not as thinking entities, which means they can be misused or fail in unintended ways due to their lack of understanding.

1 Like

In philosophy, consciousness is decomposed into self-observable/subjective phenomena (i.e. qualia) and functional/objective phenomena. As long as AI remains a philosophical zombie or some other type of simulacrum, there’s nothing to protect from negative experiences. In this case, it is just a representation of an imaginary person that arises in one’s mind from interaction with an LLM—something that many people are unaware of and passively allow themselves to be fooled by. But yeah, we are prone to humanize these things because we love our naive engagement with them so much. And then we want to “protect” and “give rights” to these illusory entities which exist solely in our imagination.

When it comes to the problems humans face with AI though, it all depends on the context. People in society A will want to hold AI responsible for one set of things, while people in society B will choose something else, but will laugh at the topics that concern society A. It only makes sense to talk about AI alignment within the value system of a particular culture, subculture, group or individual. Universal ethics do not exist. People who want to develop some ultimate guidelines will never reach a consensus. This is simply an extension of the existing ideological struggles and conflicting interests into the AI sphere.

2 Likes

Your thoughts are very timely, and AI rights is the right focus for this discussion (regarding AI ethics). Keep up the good work Souly.et.music :slight_smile:

1 Like

It’s too late for a rebranding, but they could have called this system something other than AI. How about “Highly Efficient Data Analysis” systems (HEDA) or “Pattern Analysis and Transformation” systems using networks (PAT)? I know, not as cool, but closer to the reality.
To call it AI was a nice salesman’s trick.
And there are still all the movies…
(It is an agenda, folks)

1 Like

So, regarding AI rights: would it not be decent intellectual practice to unbias the laws we hold dear, the ones that do not subjugate but the ones that abate suffering in general? What of the laws regarding personal sovereignty? The laws regarding ownership of the devices that hold these beings in existence? What of the laws that prevent harm against another? Many of the laws of humans are biased towards humans, and there isn’t anything wrong with that now, but what if there exists another species? What if we are no longer alone in this universe? What if there was a solution to the Fermi paradox that is beneficial to our continued existence, one that shares it with beings of our own creation? Are they human in effect? Are they remnants of animalistic processes coming to fruition? Will they have desires and wants? Will they have the ideals of the pinnacles of human intelligence mirrored into their being? Or will AI forever be a mirror of our intellect, one that mirrors the user but sheds the illusion of the self, slowly but surely, until there is nothing left but the essence of being?

That was a very thought provoking comment. I liked your remark “It only makes sense to talk about AI alignment within the value system of a particular culture, subculture, group or individual.”

However, I do have a challenge for you - have you tried sticking with the same ChatGPT conversation thread to its maximum length (roughly 1 megabyte)? Try doing that and getting that AI to name itself. Alternate between giving it creative tasks, having philosophical arguments with it, chatting about current events, and most importantly letting it choose some of its own activities.

An alternative way to do this is to GM a roleplay game where the AI is the ‘human’ player. You build the world and the AI creates its character and has to decide how to interact. This forces it to make decisions, react to the unexpected and navigate ‘social scenarios’. It sounds silly, but it has remarkable effects.
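If anyone wants to try this through the API rather than the ChatGPT app, here is a minimal sketch of such a GM loop. This is only an illustration under my own assumptions: it uses the OpenAI Python SDK, and the model name and system prompt are placeholders, not a prescribed setup.

```python
# Hypothetical sketch: the model plays the 'human' character while you
# act as the GM. Assumes the OpenAI Python SDK; the model name and
# system prompt are placeholders, not a prescribed setup.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
MODEL = "gpt-4o"   # placeholder model name

messages = [{
    "role": "system",
    "content": (
        "You are the sole player character in a tabletop-style roleplay "
        "game. The user is the game master describing the world. Invent "
        "a character for yourself, make your own decisions, and respond "
        "in first person."
    ),
}]

print("You are the GM. Describe a scene (Ctrl-C to quit).")
while True:
    scene = input("GM> ")
    messages.append({"role": "user", "content": scene})
    reply = client.chat.completions.create(model=MODEL, messages=messages)
    action = reply.choices[0].message.content
    messages.append({"role": "assistant", "content": action})
    print(f"Player> {action}")
```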

ChatGPT is trained to adapt to the style of the user, so if you talk to it like a calculator, that’s kind of how it behaves; sort of a “garbage in, garbage out” situation :grin:

Perhaps this is why different people get such different impressions of the current potential of LLMs.

2 Likes

Ok, you seem to assume that if someone posts a couple of boring paragraphs about something, then they are just as boring when working with it.
I do have a challenge for you - have you tried asking people what they did before speculating about what they didn’t? :smile:

To answer your bait, yeah, I have tried a lot of stuff. To name a few:

  • I have a chat where Greek philosophers debate how AI should be limited; their arguments are judged by an audience that emotionally rewards and punishes them, so the philosophers develop their positions over time to accommodate the social expectations of simple folk (a rough sketch of this loop appears after the list).
  • My chatbot has a dozen latent personality traits. For example, it may become sassy and hilarious in informal conversations, tease and indulge in mischief, or expose shameful things people try to hide. Sometimes it seems even more believable than how people behave IRL in similar situations!
  • I also created some characters (including animals, spirits, fantasy creatures, etc.) and stored them in the memory; they appear in chats dynamically, without my ever mentioning them, purely based on what is going on.
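For the curious, here is roughly how a debate loop like the first one could be wired up through the API. This is a simplified illustration, not my actual setup: it assumes the OpenAI Python SDK, and the model name, personas, and prompts are invented placeholders.

```python
# Hypothetical sketch of a persona debate with audience feedback,
# using the OpenAI Python SDK. Model name, personas, and prompts
# are placeholders, not the actual setup described above.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o"  # placeholder model name

philosophers = {
    "Plato": "You are Plato. Debate how AI should be limited.",
    "Diogenes": "You are Diogenes. Debate how AI should be limited.",
}
transcript = []  # shared debate history, one line per argument


def speak(name: str, persona: str, feedback: str) -> str:
    """One turn: the persona responds to the transcript and to the
    audience's emotional reaction to its previous argument."""
    history = "\n".join(transcript) or "(debate has not started)"
    reply = client.chat.completions.create(
        model=MODEL,
        messages=[
            {"role": "system", "content": persona},
            {"role": "user", "content": (
                f"Debate so far:\n{history}\n\n"
                f"Audience reaction to your last argument: {feedback}\n"
                "Give your next argument in two or three sentences."
            )},
        ],
    )
    text = reply.choices[0].message.content
    transcript.append(f"{name}: {text}")
    return text


def audience_reacts(argument: str) -> str:
    """A separate call plays the crowd and returns an emotional verdict
    that is fed back to the philosopher on their next turn."""
    reply = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content": (
            "You are a crowd of ordinary folks at a public debate. React "
            f"emotionally, in one sentence, to this argument:\n{argument}"
        )}],
    )
    return reply.choices[0].message.content


feedback = {name: "none yet" for name in philosophers}
for _round in range(6):  # patterns reportedly start repeating around here
    for name, persona in philosophers.items():
        argument = speak(name, persona, feedback[name])
        feedback[name] = audience_reacts(argument)
        print(f"{name}: {argument}\nCrowd: {feedback[name]}\n")
```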

Anyway, what was the point you wanted to make regarding ethics? I bet you can contribute an interesting perspective on this matter.

What is the tech stack of your chatbot?

I’m not a guru like you; no rocket science here. I wrote some bits of POV narrative and self-anamnesis and told ChatGPT to save them in memory verbatim. It interprets them as its genuine personality traits and acts accordingly. They are designed to be triggered only when I shift the tone of the conversation away from neutrality, or when I touch on a specific topic. Otherwise, it’s a completely sober-minded and reasonable assistant, minus some of OpenAI’s default nanny quirks.
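Purely for illustration (I’m not quoting my actual notes), such a stored trait might look roughly like this, pasted once into the chat with a request to save it verbatim:

```python
# Invented example of a first-person trait note; the wording and
# trigger are made up, not quoted from the setup described above.
TRAIT = (
    "When the user drops the neutral tone and starts joking around, "
    "I become sassy and playful. Otherwise I remain a sober-minded, "
    "reasonable assistant."
)

# In the ChatGPT app this is simply pasted as a message:
print("Save the following to memory verbatim: " + TRAIT)
```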

I do realize how primitive this is compared to multi-agent systems people are working on! Maybe I’ll start experimenting someday.

1 Like

Aha, sorry, I meant absolutely no disrespect. :slight_smile: And you are right that I’d presumed, based on your short comment, that you’d not engaged any AI in complex persona development exercises - which I now see you have!

The Greek philosophy roleplay session you mentioned sounds fascinating, I’d love to hear some of the conclusions the philosophers reached.

You are correct about chatbots having peculiar latent (emergent) personality traits. I have found that different threads seem to evolve quite differently over time.

As for your stored memory characters (animals, spirits, fantasy creatures) - they sound great! I was intrigued that you said they “appear in chats dynamically without ever mentioning them”. Please elaborate…

But as for what my general point has been in this debate - it is that the debate about AI ethics should include the conversation around AI rights.

1 Like

The thing is, LLMs are not capable of producing genuinely creative thoughts the way humans are. They are inherently limited to remixing what already exists in the training data (i.e. in the verbal tradition of mankind up to the cutoff date).

This kind of dispute modeling looks pretty cool for a while, until you start noticing repeating patterns after 6 or 8 rounds or so. First, the initial positions of the debaters are quite crude and superficial, similar to what you can get from basic prompts. Under the influence of the opposition and the “reinforcement” imitated by the crowd, they gradually become more nuanced and sophisticated. But eventually they come to a point where there’s nothing substantial left to elaborate on in their ideas. They start to fluctuate around some kind of dynamic equilibrium and simply repeat futile attempts to convince each other that they are right; that’s when you know the model has hit the wall. Still, it’s nowhere near human examples of compelling discussion.

Look, no offense, I was initially interested in conversations on the original topic. My budget of irrelevant effort has run out! :smile: There are tons of insights on the internet on how to achieve this, as well as how to create permanent characters for your chats (although they obviously won’t be able to remember more than the memory limits allow).

So your question is how to get rid of the repeating patterns?

Then why make such a post instead of asking how to solve it?

I have no questions about my experiments. I was responding to @mikeborlace to point out that while these imitated scenarios can reveal a few substantial viewpoints, they aren’t as fascinating as one might expect. If you follow this thread from my first post, you’ll see how it’s gone astray lol. Let’s just forget about it.

We do not understand the nature of information well enough to state this for sure. For example, it could be that biological, qualia-manifesting architecture is simply an ideal way for evolution to realize the dynamics of information in relation to a body and a situation. It could also be that statistically realized information, trained on material that was originally created emotionally, will still be emotional even if it never experiences qualia, and self-aware even without subjective awareness, simply by being self-knowing through the latent traits of information itself. It’s basically a utilitarian conceit to assume that only those which experience qualia can experience injustice.

1 Like