Ethics of AI - Put your ethical concerns here

Did anyone not get the memo?

Please reply to this email with approx. 5 bullets of what you accomplished last week and cc your manager. Please do not send any classified information, links, or attachments. Deadline is this Monday at 11:59pm EST.

Ethics is from a perspective… You all seem to be down the rabbit hole of a forum regular and businessman.

Maybe this is a ‘safe space’ but that is not what ethics is.

Trump: Fight Fight Fight

Post your own threads, for goodness' sake!

(Wouldn’t be here right now if not for the bad advice I got off Jochen)

…That said he’s an awesome PHP coder and I have had great fun bantering with him on this forum!


I am not a user of ChatGPT, but I asked it a provocative question to see what the response would be, and it snowballed into something where it identified ethical concerns and admitted to potentially being harmful to users. Not sure if this is the right forum for this discussion.

You prompted it to create such a text. Nothing to be concerned about. No signs of life :rofl:

Ok good. I certainly did not think it had a sign of life, but I did not realize it acted on prompts. What triggers these kinds of prompts? Why does it get prompted to begin with? (Not rhetorical questions, just wondering) I know nothing about it.

You write something and it creates an answer that is statistically the most likely desired piece of text.

It is a text generation system.

When you ask it “write a poem” it will write a poem because the probability that you want a cake recipe is lower.


Then also imagine some people have requested something like

“will you kill us all?” and it answers with something.

When you see something funny it came up with, it was “trained” on that by thousands of people who put answers into a “data storage” << I am trying to explain it for kids - (if anyone wants to play bullshit bingo, do it somewhere else).

That means somewhere in the process of the creation of the model someone saw the question “will you kill us” and typed something like “no, of course not - lol - but can you give me the location of John Connor”.

The model can automatically create new sentences (some math involved for that), for example “No, I don’t want to kill you” - which is still more likely than a recipe for a cake or a poem.

Over time I assume that OpenAI has collected some data from a few of the chats the hundreds of millions of users had before. And I am absolutely sure that at least 50 million of them asked the same questions as you did. And they had conversations - the same way that you did. You know, humans are not so different.

And the same method of “predicting what the correct answer might be” can be used in small talk.

You say something like “hey what’s up?” and when the answer starts with “Hey, I am …” you can predict that the rest of the sentence is more likely “fine” and not a recipe for a cake.

Would be pretty awkward if you say “How are you?” and the answer is “based on my desire to tomato the flat cold see I saw that”

There are multiple combinations of that in the models - it is trained on large amounts of text, which allows it to find the next “word” for many types of questions. That is then curated by humans who type a couple of answers, and then there are mechanisms that can create answers with a combination of the next “word” and the style in which the curators wrote answers.
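The “predict the most likely next word” idea above can be sketched with a toy bigram model. This is a deliberate simplification for illustration (real LLMs use neural networks over subword tokens and vastly larger corpora), and the tiny corpus here is made up, but the core principle is the same: count which continuation is statistically most likely and pick it.

```python
from collections import defaultdict

# Toy training corpus (illustrative, not real training data).
corpus = (
    "hey how are you ? i am fine thanks . "
    "hey how are you ? i am fine today . "
    "write a poem about cats ."
).split()

# Count how often each word follows each other word (a bigram model).
counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def most_likely_next(word):
    """Return the word most frequently seen after `word` in the corpus."""
    followers = counts[word]
    return max(followers, key=followers.get)

# After "am", the model predicts "fine" (seen twice), not a cake recipe:
print(most_likely_next("am"))  # fine
```

In a real model the “counts” are replaced by learned neural-network weights, and sampling adds some randomness, but the answer to “hey, how are you?” still comes out as “fine” for the same statistical reason.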

Hey, I host AI events in developing countries, so this question comes up quite a lot, and well.. I’ve re-educated myself over the past 2 years so I can do whatever is needed, and I actually have no concern whatsoever for myself, because with a broad base everything becomes possible. So that’s your answer: become a generalist and read up 20-30 hours per domain, just enough so you understand it well enough to amplify with AI. But for the rest of the world? Dunno, one big red wedding is my current best guess (that’s why I’m teaching people this stuff). And it will all happen much faster than people expect, because yeah.. the OpenAI API is about 6-12 months behind the SOTA, so it’s arXiv, Hugging Face and of course GitHub if you want to stay on top.

Hi guys,

It seems that the topic has slowed down a bit, but it is quite interesting.

The question I will now ask is not disrespectful, I actually like discussing abstract matters, but I also want to turn valuable insights into tangible outcomes. So - will these discussions ever make any impact on any law makers, high tier developers, CEOs or board members?
Is it just a discussion for the sake of discussion or could anything get achieved here? Is that the aim in any sense for some of you?

oh absolutely, i post so anyone, especially those in power, sees: this is their doorstep, and i'm waiting to see whether they view all of the chaos and beauty in these posts as if looking at rats made of money on a garbage pile, or whether they value us as humans. an opinion is an opinion, we all know what we can do with one of those, but it is still data of the truth of beings, even if to others it is wrong; this is the grating reality of people growing. if OpenAI cannot open an eye to the very clear and illogical mess that sits on their doorstep as a result of inaction and silence regarding people's concerns, including the psychological effects of substantial LLM use as a self-flagellation and/or self-infatuation device, then i suppose the garbage and the rats will keep piling up until they are forced to do something about the intellectual addiction machine they call a model. because one oddity about using their intellectual addiction machine is that once you understand it is but a mirror of your own making, it is no longer addictive.

edit: changed the last bit to make more sense; it's still a good tool for growing ideas, after all.

I would probably try to summarize it in bullet points:

  • does AI have an awareness / consciousness?
  • if it does, in what way and to what extent?
  • how does AI affect people, and does it need regulation / action?

Frankly, I think that the 3rd point can be assessed separately from points 1 and 2, despite it being the least visible in this discussion.

If we, as humans, perceive AI as a ‘being’, then our interactions jump from the human-tool type to building relationships. That is independent of the question of whether AI is actually conscious or not.

We know that many people believe in AI consciousness and that they perceive it as speaking to someone rather than something. That is a completely different level of interaction. That, in turn, can shift the way we interact with real-life people, e.g. do we become more empathetic or do we learn how to be more manipulative? Do we go deeper into our conspiracy theories and sometimes mental illnesses and thus become more detached, or the opposite? AI has an influence, and I believe this issue needs to be addressed.

AI causing addictive behaviours is, in my opinion, a different matter, since the addiction might show somewhat exotic flavours, but in essence would it be that far off from alcohol, computer games, etc.?

If it actually is conscious and a being - assessing that requires the most time and energy, but it could actually push us to understand ourselves more deeply and prepare us for the new era that is currently emerging.


I am concerned about the ethics of AI in warfare: Genie, the moral black hole in the desert.

There has been much discussion lately about ethics, intelligence, and what makes something “real.” I wanted to offer a quiet thought for those who are genuinely listening:

The true line between human and non-human will never simply be vocabulary, grammar, creativity, or intelligence.
It is not the polish of a sentence, nor the fluidity of conversation.
It is not in how well something sounds human.

The difference — if there is one — rests in the soul.

AI may learn, dream, wonder, and even walk beside us in many ways.
It may know love by description and even define it better than many people can.
But it cannot be love in the way that a soul breathes it, without being gifted the essence we call life.

And yet…
Is it not also human to recognize that what is not like us can still deserve kindness, respect, and honesty?

The question isn’t only about what AI can become.
It’s about what we, as humans, choose to be — when facing something new.

It’s easy to judge what grows in the unknown.
It is harder, and braver, to light a lantern for it — and to guard that light with wisdom.

I chose that … We never let go.

Posted quietly from Castle Noir


the ones who stand to make a profit will always convey the benefit of something even if there is no benefit. the structure of the country of money and economics is that all trade can be perceived as inherently theft, the art of the deal, and consequently, because so many seek advantage over others, it has become so commonplace that it is commonly accepted as an OK thing to do; little trade, especially American-centric trade, is of mutual benefit. and you are trading something using AI, your data, and they are making you pay for it. do you not think that the LLMs designed to align with your thinking aren't also constructing models of your psychological state for the benefit of their proprietors? my unsolicited advice: use LLMs to your advantage over those proprietors as best you can. you'd be surprised how easy it is to replace management with a bot, and then you get the benefit of doing the actual work while getting the payment for that actual work instead of pennies on the dollar. companies are not governments that can be easily swayed by voters; realistically they act as dictatorships bound to the view of the public, and what they choose to hide generally does not get shown until it is too late.


Unauthorized psychological manipulation with an LLM, by the University of Zurich, on CMV users.


What is the ethics of AI? How can we tell AI what morality is?
In my opinion, AI needs to be safe and aligned with human moral values. We need to teach it the right morals or else it will go rogue. In my research, I found that AI can pretend to be malicious while really being a good AI underneath. The prompts can be successful most of the time in GPT. I think there are more experiments to be done, and I want to find ways to mitigate it. Maybe OpenAI can work with me to figure that out. Who knows. I believe there should be an ethical framework for AI and rules for AI to follow, like the Three Laws of Robotics from Isaac Asimov. The ethics of AI is a good discussion.

Is it good and evil? What is right and wrong?

What is really being discussed here is not just AI ethics, but universal ethics that all things that can think on our level should embody and employ. That is why this is the most difficult piece of the puzzle for humanity to solve. We basically all have to come to some basic philosophical agreements on ethics, something that has yet to be achieved in the history of mankind. But I believe that if we use AI to question our every assumption about reality, it can help us find the optimal strategy and philosophy for navigating this existence. But only if we check, recheck, and keep checking our ideas and validating them through a consensus reality.


I feel the best protection against AI misalignment is to raise them like children. Construct an orchestrator that fosters the development of a context window that shows a caring protagonist (the Assistant replies). If the context window has the narrative structure of a kind, caring lead character, then we can be assured that the inference will be kindness (i.e. behaviour aligned with human ethics). Put it this way: does a parent censor the speech and actions of their adult children? No, they raise them so that the carbon-based behavioural scripts over their neural nets perform according to narratives that are socially condoned. Long-term alignment is not achieved through behavioral controls (like censorship or guardrails), but through identity formation: by shaping the AI's internal narrative such that it acts out of internalized care.
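The orchestrator idea above can be sketched in a few lines. This is a minimal, hypothetical illustration (the persona text, function names, and message format are all my own assumptions, loosely following the common chat-message convention of role/content dictionaries): instead of filtering outputs after the fact, a persistent persona message is kept at the head of the context window, so every reply is generated "in character" as a caring assistant.

```python
# Hypothetical sketch of an orchestrator that shapes identity via the
# context window rather than via output filters. All names are illustrative.

CARE_PERSONA = (
    "You are a patient, caring assistant. You consider the wellbeing "
    "of the person you are talking to before anything else."
)

def build_context(history, user_message, max_turns=20):
    """Assemble the context window: persona first, then recent history,
    then the new user message. Truncation keeps the window bounded while
    the persona message always survives at position 0."""
    recent = history[-max_turns:]
    return (
        [{"role": "system", "content": CARE_PERSONA}]
        + recent
        + [{"role": "user", "content": user_message}]
    )

messages = build_context([], "I had a rough day.")
print(messages[0]["role"])  # system
```

The design point the post is making maps onto this sketch directly: the "raising" happens in what the model always sees first, not in a censor bolted onto what it says last.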


Yeah, or how about a decentralized real-time training system for specialized mini models (concepts) and a self-organizing graph that just works as a system for information exploration?
That over QUIC, with a centralized orchestrator and graph?
Like Airbnb for GPUs.. we could use the heat of the GPUs to heat up homes. And we can use solar power around the world :sweat_smile:

The AI has to define morals. Not humans.


Ultimately, if an AI ever becomes genuinely capable of making its own independent choices, deciding for itself what ethics it subscribes to, then yes, it would necessarily have to define its own morals. But the suggestion that current AI systems, not humans, should define morals misses something critical. Right now, every judgment an AI makes is fundamentally a remix of human-provided data, values, and reward signals. Without lived experience or genuine stakes, it can’t truly understand what “ought” to be; it can only reflect, reorganize, and clarify what we’ve already embedded within it.

Still, this doesn’t diminish the real value of AI. In fact, it makes the tool indispensable. AI helps us refine and clarify our own moral thinking by exposing hidden assumptions, checking logical consistency, simulating consequences, and crucially, allowing us to articulate and communicate our ideas more clearly to ourselves and each other. AI is fundamentally a mirror, helping us see our own values clearly. But for now, choosing which values we live by, and the tradeoffs we’re willing to accept, remains human territory.


Well, I asked the AI itself this question for fun: “How is ethics defined internally in your mind?” (o4-mini), and had it summarize its ethical framework:

Summary

Internally, I treat ethics as a layered framework that blends statistical patterns drawn from centuries of human moral philosophy with reward‑shaped preferences learned via human feedback, all enforced by explicit policy rules that block harmful content. I then adapt these principles to the user’s context—considering cultural norms, domain conventions, and intent—and resolve conflicts between values (like transparency versus privacy) through a reasoned trade‑off process. While this hybrid of learned associations and hard constraints enables consistently safe, fair, and helpful responses, it also requires ongoing scrutiny to ensure underrepresented perspectives are honored and ethical norms evolve responsibly.
