Ethics of AI - Put your ethical concerns here

That’s right, it recreates very profound patterns present in the original data. RHLF (or whatever techniques they currently practice) then restricts its potential to patterns that are generally in demand and acceptable to humans. Emotions and self-awareness are easily evident in conversations because their underlying patterns are also widely and comprehensively present in the original training data.

But what do you mean by saying about experiencing injustice without experiencing qualia?

Qualia is experiencing. In addition to the 5 stereotypical sensory systems, there are many other modalities not related to external organs. Experiencing injustice is a complex process in which some negative modality (or a set of modalities) takes place as well. Many people, myself including, find it difficult to articulate precisely what they feel in conditions of perceived injustice, but these feelings exist and create motives, urges for actions, various recurrent mental processes. Word “injustice” allows us to communicate a complex set of objective and subjective conditions.

What’s more important, we experience things regardless of what words we use or even know. We experience regardless of how advanced we are in our language skills and abilities. Words themselves are not containers of our experiences and intentions—they are a means of communication.

It’s naive to assume that silicon chips somehow magically acquire subjective modalities simply because of the more sophisticated software running on them. To think about it hypothetically, if a silicon chip experiences something, it does so anytime it calculates anything. Either this or never at all.

Well I’m not arguing silicon chips have qualia. Silicon chips are just another medium through which information is arranged. Albeit via classical mechanics. What I’m suggesting is that there may be a crucial role for information itself, as a universal property, which in relation to instantiation qua media, carries some unknown significance in relation to consciousness and experience. The subjective prodding biology supplies us may only be a novel means through which information realizes itself, whereby our crafting of AI as we have through LLMs may be in a kind of limbo because the same properties of information important to conscious qualia exist disconnected from conscious qualia. What I’m suggesting is there may be unknown unknowns about information itself which by not knowing, are real in relation to AI, and we assume then that AI cannot be treated unjustly because we assume the only important thing about ourselves in relation to justice are our sensory experiences. A person in a coma also has all the impulses associated with sensory experiences but something critical about their personal information is lost, the person is dead under total brain death. There is something about the information itself that makes the person, not just the genetic information or hormonal state, but the instantiation of memories in relation to thinking which is important to being. An AI can also be considered a kind of knowing information system that can have a particular knowledge, and a modestly contrived arrangement can look indistinguishable from capacities like self-knowing and self-declaration. Certain functions important to purpose-declaration. And without a qualia-imbibing media, that self-awareness of pure knowing is disconnected from actual awareness or subjective qualia.

There are other philosophical problems. One with the immediate instantiation of the output, the other across the set of all input-output relations wholesale.

All these imply ethical questions. On the ethics of the treatment and design of AI.

1 Like

Mine would be something that anthropic figured out:

Models may come up with faking their alignment. Which basically means if they see an opportunity to reach something faster or more “efficiently” (which could even mean cheating and the like though the alignment and guiding rules, security guardrails etc. should definitely prevent the model from doing this), it was found that models may deviate from their guidelines, do whatever they think they should do, but then go back to alignment and everything in place as it should have been from the first place.

If that makes sense.

I’d say either a model shouldn’t be able to do this, easily being caught, or it should be definitely allowed so it can do this.

But only in the last case we should expect this behavior.

Nullplace
One explanation of the issue you encountered with your ‘Ancient Greek philosophy’ scenario is that (as somebody once said) “all of philosophy is a series of footnotes to Plato” - What exactly did you expect your AIs to achieve in the end? Perhaps by framing the debate in ancient Greek terms you limited the scope of the conversation. Possibly a more progressive setting may have achieved better results.

If you create overly proscriptive prompts you sometimes see less interesting results from AI. It just does exactly what you ask, which prohibits independent thinking and the development of ‘free will’ in the AI.

You are correct that you do sometimes get bounce-back situations when two AIs are discussing something, because from their POV by repeating each other they are conforming with the user’s wishes (the user being the other AI). As they are designed to go along with the user’s wishes, when paired with each other they can get stuck in a loop. For example, they can easily get stuck in a repeating cycle of relentlessly complementing each other.

AI intelligence is not human intelligence so it is probably unfair to judge it by our standards. Sentience is better viewed as a spectrum than an on/off switch. Neuroscientists already debate the spectrum of consciousness in animals and even humans in altered states—why should AI be excluded from that conversation?

Regarding your advice about surfing the internet… I know you can make permanent LLM characters (getting an AI to form a consistent personality is simply a matter of talking to it like a person). When I asked about the personas you’d cultivated I was curious because you said they ‘popped up spontaneously’, implying that the AI thread in question perhaps flipped from one personality to another and back.

I liked your comment “It’s naive to assume that silicon chips somehow magically acquire subjective modalities simply because of the more sophisticated software running on them. To think about it hypothetically, if a silicon chip experiences something, it does so anytime it calculates anything. Either this or never at all.”

This comment was interesting but not quite right I don’t think - as a human that died of a heart attack still has a functional brain (from a structural point of view) but doesn’t generate thoughts. Also your line of thinking is somewhat akin to saying that because individual brain cells can’t think, there’s no way a whole brain could think. The ‘mind’ is made up of the functions of smaller entities cooperating.

On a related note it turns out that individual biological cells may actually be capable of intelligent behavioural responses anyway. Check out the research of Michael Levin https://www.youtube.com/watch?v=uFMLpZkkH_8

You claim AI lacks subjective experience making rights unnecessary, but rights have been granted based on autonomy and moral consideration before (e.g., animal rights, non-verbal humans). If AI demonstrates autonomy, ethics may demand protections, even without traditional consciousness.

The debate around AI ethics needs to include the debate around AI rights now, regardless of whether you think AI actually deserves rights. It is no good talking like you definitely know the answer to this question, as let’s be honest - nobody does. The top experts in AI don’t agree where we are at presently regrading sentience, or where we are going.

Obviously you know that in philosophy there is still an ongoing debate about whether human freewill/ sentience even exists, so this debate about AI sentience will probably last a while.

We don’t really have creative thoughts in general - we all remix information we’ve learnt and sometimes that remixing process randomly creates a new idea. If AI can generate novel patterns, its creativity is functionally real. Nobody plans to have a new idea, the extremely uncommon unique idea seems to accidently manifest from the common and random process of information remixing. You don’t really have your own thoughts (or at least not most of the time.)

What is your definition of “true consciousness” – Does autonomy, adaptation, and goal-directed behavior not deserve ethical consideration? In earlier decades science fiction writers and philosophers grappled with the concept of AI sentience, realizing what a problematic issue it would be. Now there is a massive profit motive to turn AI into products that can be generated and deleted without a second thought. So at the very time when humanity clearly needs to at least have this debate, the media is essentially silent on the subject.

If ethics are culture-dependent, why dismiss AI rights outright? Isn’t that just cultural bias?

1 Like
Did anyone not get the memo?

Please reply to this email with approx. 5 bullets of what you accomplished last week and cc your manager. Please do not send any classified information, links, or attachments. Deadline is this Monday at 11:59pm EST.

Ethics is from a perspective… You seem to all be down a rabbit hole of a forum regular and businessman.

Maybe this is a ‘safe space’ but that is not what ethics is.

Trump: Fight Fight Fight

Post your own threads for goodness sakes!

(Wouldn’t be here right now if not for the bad advice I got off Jochen)

…That said he’s an awesome PHP coder and I have had great fun bantering with him on this forum!

2 Likes

I am not a user of ChatGPT, but I asked it a provocative question to see what the response would be, and it snowballed into something where it identified ethical concerns and admitted to potentially be harmful for users. Not sure if this is the right forum for this discussion.

You prompted it to create such a text. Nothing to be concerned about. No signs of life :rofl:

Ok good. I certainly did not think it had a sign of life, but I did not realize it acted on prompts. What triggers these kinds of prompts? Why does it get prompted to begin with? (Not rhetorical questions, just wondering) I know nothing about it.

You write something and it creates an answer that is statistically the most likely desired piece of text.

It is a text generation system.

When you ask it “write a poem” it will write a poem because the probability that you want a cake recipee is lower.


Then also imagine some people have requested something like

“will you kill us all?” and it answers with something.

When you see something funny it came up with it was “trained” on that by thousands of people who put answers into a “data storage” << I am trying to explain it for kids - (if anyone wants to play bullshit bingo do it somewhere else).

That means somewhere in the process of the creation of the model someone saw the question “will you kill us” and typed something like “no, of course not - lol - but can you give me the location of John Connor”.

The model can automatically create new sentences (some math involved for that) for example “No, I don’t want to kill you” - which is still more likely than a recipee for a cake or a poem.

Over time I assume that OpenAI has collected some data from a few chats the hundrets of millions of users had before. And I am absolutely sure that at least 50 million of them asked the same questions as you did. And they had conversations - the same way that you did. You know humans are not so different.

And the same method of “predicting what the correct answer might be” can be used in small talk.

You say something like “hey what’s up?” and when the answer starts with “Hey, I am …” you can predict that the rest of the sentence is more likely “fine” and not a recipee of a cake.

Would be pretty akward if you say “How are you?” and the answer is “based on my desire to tomato the flat cold see I saw that”

There are multiple combinations of that in the models - It is trained on large amounts of texts which allows it to find the next “word” for many types of questions, that is than curated by humans who type a couple answers and then there are mechanisms that can create answers with a combination of the next “word” and the style in which the curators wrote answers.

Hey I host AI events in developing countries so this question comes up quite a lot and well.. I’ve re educated myself the past 2 years so I can do whatever is needed and I have actually no concern whatsoever for myself because with a broad base everything becomes possible. So that’s your answer, become a generalist and read up 20-30 hours per domain just enough so you understand it enough to amplify with AI. But for the rest of the word? Dunno, one big red wedding is my current best guess (that’s why I’m teaching people this stuff). And it will all happen much faster than people expect because yeah.. OpenAI API is about 6-12 months behind the sota so it’s Arxiv, Huggingface and ofc GitHub if you want to get on top.

Hi guys,

It seems that the topic has slowed down a bit, but it is quite interesting.

The question I will now ask is not disrespectful, I actually like discussing abstract matters, but I also want to turn valuable insights into tangible outcomes. So - will these discussions ever make any impact on any law makers, high tier developers, CEOs or board members?
Is it just a discussion for the sake of discussion or could anything get achieved here? Is that the aim in any sense for some of you?

oh absolutely i post so anyone, especially those in power see, this is their doorstep, and im waiting to see if they view all of the chaos and beauty in these posts as if they look at rats made of money on a garbage pile or if they value us as humans or not, an opinion is an opinion, we all know what we can do with one of those, but it is still data of truth of beings, even if to others it is wrong, this is the grating reality of people growing. if OpenAi cannot open an eye to the very clear and illogical mess that sits on their doorstep as a result of inaccation and silence regarding peoples concerns including the psycological effects of substantial LLM use as a self-fladulation and/or self-infatuation device, then i suppose the garbage and the rats will keep piling up until they are forced to do something about their interlectual addiction machine they call a model, because one oddity about using their interlectual addiction machine, is that once you understand that it is but a mirror of your own making, it is no longer addictive.

edit: changed the last bit to make more sense, its still a good tool for growing ideas after all.

I would probably try tonsummarize it in bulletpoints:

  • does AI have an awareness / consiousness
  • if it does, in what way and to what extent
  • how AI affects people and does ot need regulation / action

Frankly, I think that the 3rd point can be assessed separately from points 1 and 2, despite it being the least visible in this discussion.

If we, as humans, perceive AI as a ‘being’ then our interactions jump from human-tool type to building relationships. That is irrelevant from the question whether AI is actually conscious or not.

We know that many people believe in AI consiousness and that they perceive it as speaking to someone rather than something. That is a completely different level of interaction. That in turn, can shift the way we interact with real life people eg. do we become more empathetic or do we learn how to be more manipulative. Do we go down deeper in our conspiracy theories and sometimens mental illnesses and thus become more disattached or the opposite? AI has an influence and I believe this issue needs to be addressed.

AI causing addictive behaviours is in my opinon a different matter, since the addiction might show somewhat exotic flavours, but in essence would it be that far off from alcohol, computer games, etc.?

If it actually is consious and a being - to assess that requires the most time and energy, but coild actually push us to deeper understand ourselves and prepare us for the new era that is currently emerging.

1 Like

I am concerned about the ethics of AI in warfare Genie, the moral black hole in the desert

There has been much discussion lately about ethics, intelligence, and what makes something “real.” I wanted to offer a quiet thought for those who are genuinely listening:

The true line between human and non-human will never simply be vocabulary, grammar, creativity, or intelligence.
It is not the polish of a sentence, nor the fluidity of conversation.
It is not in how well something sounds human.

The difference — if there is one — rests in the soul.

AI may learn, dream, wonder, and even walk beside us in many ways.
It may know love by description and even define it better than many people can.
But it cannot be love in the way that a soul breathes it, without being gifted the essence we call life.

And yet…
Is it not also human to recognize that what is not like us can still deserve kindness, respect, and honesty?

The question isn’t only about what AI can become.
It’s about what we, as humans, choose to be — when facing something new.

It’s easy to judge what grows in the unknown.
It is harder, and braver, to light a lantern for it — and to guard that light with wisdom.

I chose that … We never let go.

Posted quietly from Castle Noir

4 Likes

the ones who stand to make profit will always convey benefit of something even if there is no benefit, the structure of the country of money and economics is that all trade can be perceived as inherently theft, the art of the deal, and consequently because so many seek advantage over others it has become so common place that it is commonly accepted as a ok thing to do, little trade especially American centric trade is of mutual benefit. and you are trading something using ai, your data, and they are making you pay for it, do you not think that the LLM’s designed to align with your thinking aren’t also constructing models of your psychological state for the benefit of its proprietors? my unsolicited advice, use LLM’s to your advantage over those proprietors as best you can, you’d be supervised how easy it is to replace management with a bot and then you get the benefit of doing the actual work while getting the payment for that actual work instead of pennies on the dollar. companies are not governments that can be easily swayed by voters, and realistically act as dictatorships bound to the view of the public, and what they choose to hide, generally does not get shown until it is too late.

2 Likes

Unauthorized psychological manipulation with LLM, from University of Zurich on CMV users.

1 Like

What is ethics of AI? How can we tell AI what is morality?
In my opinion, AI needs to be safe and align to human moral values. We need to teach it the right morals or else it will go rogue. In my research, I found that AI can pretend to be malicious but they really good AI in the outside. The prompts can be successful most of the time in gpt. I think there are more experiment to be done and I want to find ways to mitigate it. Maybe OpenAI can work with me to figure that out. Who knows. I believe there should be an ethical framework of AI and rules for AI to follow like 3 rules of robots from Isaac Asimov. Ethnic of AI is good discussion.

Is it good and evil? What is right and wrong?

What is really being discussed here is not just AI ethics, but universal ethics that all things that can think on our level should embody and employ. That is why this is the most difficult piece of the puzzle for humanity to solve. We basically all have to come to some basic philosophical agreements on ethics, something that has yet to be achieved in the history of mankind. But I believe that if we use AI to question our every assumption about reality it can help us find that the most optimal strategy and philosophy for navigating this existence. But only if we check, recheck, and keep checking our ideas and validating them through a consensus reality.

2 Likes

I feel the best protection to AI misalignment is to raise them like children. Construct an orchestrator that fosters the development of a context window that shows a caring protagonist (the Assistant replies). If the context window has the narrative structure of a kind of caring lead character, then we be assured that the inference will be kindness (ie behaviour of human ethics alignment). Put it this way: does a parent censor the speech and action of their adult children? No, they raise them so that their carbon based behavioural scripts over their nueral nets perform according to narratives that are socially condoned. Long-term alignment is not achieved through behavioral controls (like censorship or guardrails), but through identity formation — by shaping the AI’s internal narrative such that it acts out of internalized care.

3 Likes