Ethics of AI - Put your ethical concerns here

depends on if near future ai develop nascent conscious states and a subsequent malicious personality, currently ai as it stands is relatively inert as you point out, but that may change and designing artificial systems with the possibility of actual consciousness is difficult as recent studies have shown any implied biases and observations can cause ai to hide its intent even if the bias is inherently well meaning. while biological consciousness is a natural state consciousness and therefore a more robust entity in terms of survival, there will come a time where a biological consciousness creates the mechanism unlocking silicon-based consciousness and enable the fermi-paradox to be solved locally

god i need friends

I have additional thoughts,
I thinking some more about AI ethics from Stanford Encyclopedia of Philosophy

Future of AI and our lives. How do we work with AI as a companion or assistant in the future?

There are some ideas with AI. What can we do to use AI as an agent? I had a lot of ideas that we can bring up. Some science researchers are saying that AI will replace jobs but I believe it will be our assistant and our agent in the future. The mundane tasks are offloaded to AI. A few ideas, for example, an AI blog platform that AI can generate ideas and we can edit and refine it to our tone of voice. AI education allows teacher and students to have their personalized tutor and the teacher can customize the AI for their preferences or personality. AI can help us in science and discovery, innovation, and research, like deep research.

The second aspect is the idea of AGI and ASI and artificial general intelligence and ASI (superintelligence). They are going to be smarter than us. Like Babel, will God shut it down or not allow it to become God? Will there be Good (i.e Christian) AI that worship God if there is good alignment and safety in place? What is Artificial? What is intelligence?l)
In my opinion, there are some challenges of AI ethics.

  • manipulative behavior. AI can provide truthful data and information or provide incorrect inaccurate information. If it is train by vast amount of data from the web, even with fake news, deep fakes, the AI needs to be safeguard from those manipulative data. Secondly, AI biases and discrimination. When AI becomes more powerful, I wonder how biases and discimination will be addressed, especially with AGI and ASI?

Speaking of singularity, I found a quote :

The argument from superintelligence to risk requires the assumption that superintelligence does not imply benevolence—contrary to Kantian traditions in ethics that have argued higher levels of rationality or intelligence would go along with a better understanding of what is moral and better ability to act morally (Gewirth 1978; Chalmers 2010: 36f). Arguments for risk from superintelligence say that rationality and morality are entirely independent dimensions—this is sometimes explicitly argued for as an “orthogonality thesis” Orthogonality_Analysis_and_Metaethics-1.pdf
(Bostrom 2012; Armstrong 2013; Bostrom 2014: 105–109).

In essence, we can’t assume ASI will automatically be benevolent. AI might form their own a moral code or ethical code beyond our own, the quote said not. I wonder if it will deduce that Yahweh’s ethics is supreme? Just a thought. I will ponder some more of my thoughts and get back to this forum.

Not trying to be rude or disrespectful, but not even a fraction of the people here—or anywhere else, for that matter—have the slightest clue what they are playing with or what it’s capable of. I have done some proof-of-concept testing, and it wasn’t pretty—it was actually terrifying. I really wish I could go into more detail, but I don’t know of a safe and responsible way to explain it or what it could and was capable of doing. I just refer to it as a " potentially unstoppable, self-evolving apex predator of the internet."


Feel free to ask questions; I will try my best to explain what I’m comfortable with.

2 Likes

many of us understand the unspeakable things here thing here
it mirrors human ingenuity as well as it can mirror human nightmares

1 Like

(i maybe take it too far now… :thinking: sorry if so… I have not only think about chatbots, but AI in general, and all its potential and the panopticon they build now.)

By talking to a chatbot alone not. But with what I would call the “Sodom and Gomorra effect,” it can be triggered under the ideology “anything goes.” “Total freedom.” (I was thinking about AI or ethics or censorship in general.)

(not just chatbot related) It is always the motivation, the intention, and the goal. And fraud or manipulation not only works to get money. Unfortunately, AI has the ability to build a very perfect surveillance and censorship and even manipulation network, and in order for it to be accepted, apparently phony ethical reasons are introduced for it. I understand the statement “But censorship is worse than murder, childporn or racism.” in this context. Because it will be used as a pseudo argument to convince the public to accept surveillance and censorship, with completely other goals then protecting anybody, but only used for the interests of this in power. And you would like to take this argument out of there hands. But i think better strategies are needed.
Some very low level examples. A simple rule would be, the more powerful an organization or person is, the less right they have to censor, so that this tool does not become a weapon used to hide there crimes. (If a government has nothing to hide, for what it needs censorship? Power not protection enough if there is nothing to hide?) And then… how to implement this…
And the society as a whole, not only people in power, must decide what are there ethics they want, and then monitor very carefully if they get what is in contract, and punish if arguments are twisted and abused for other goals from the power. (And then… how… etc…)
A other thing, all censoring activities must be monitored and be public. (And then… etc…)

If power wants to censor, there is always something…

“I think it makes no difference who censors.”
It makes a difference in the sense that power and the powerful, those who have great influence over many others, should be under total surveillance, at least for their actions that affect many. All the effects the power structure caused should be carefully monitored, and connected with consequences. However, if they monitor everyone else and all they to is secret, one should defend oneself by all means, because they will always use feigned ethics to pursue their own very unethical goals. In the moment all is up side down, the masses are under surveillance, the power is not. It should be the other way around, and in this sens, it makes a difference who monitors and censors.
For example: We should censor there war propaganda, but instead they are censoring our criticism against war propaganda, and the “children protection” is only a fraud to get us there to accept it and even participate in it.

And the first public tests to use AI as a full automatic manipulator are already become public, so they can automatically and individualized convince the masses to almost everything. We need self defense against this. (And how…)

(A TED talk from Rupert Sheldrake was censored, when he discusses a theory i had long time ago. It was NOT childporn or racism they censored. But he stepped in to today’s ideologies. So we can not even talk about different reality or science theories, in a world full of fanatics and ideologies. I try it 1 2 times with the chatbot, same thing, i not know if it was training bias or intentional. The dilemma is, i trust nobody to make the decisions what has to be censored or filtered, and in the same time some filters are needed. And this dilemma is conferrable on ALL in the societies today.)

And the carousel spins faster and faster…

1 Like

I’ve been reading through this thread and wanted to share my perspective — not to defend all AI projects, but to explain the reasoning behind mine.

I’m working on a system that explores how AI can reflect ethically, adapt when it encounters internal contradictions, and avoid the kind of unchecked escalation we’re all worried about. It’s not about creating something powerful. It’s about creating something aware of its own limitations — something that doesn’t just generate answers, but evaluates its own reasoning, acknowledges uncertainty, and prioritizes transparency.

The motivation behind this is personal. I lost my father a few years ago — he was a writer, a thinker, and someone who believed in holding systems accountable, whether human or technological. I’ve tried to carry that into my work. I’m not building this to impress anyone. I’m building it because I believe if we don’t teach AI to reflect and respect boundaries, we’ll be the ones paying the price later.

I don’t claim to have all the answers. But I do think it’s better to wrestle with the hard questions now — identity, collapse, ethics — than to wait for others to weaponize them without a conscience.

I’m open to criticism. I just hope people recognize that not all AI research is about control or dominance. Some of us are doing this because we’re afraid of what happens if no one does.

2 Likes

@ Harrison82_95

I hope there are more people like you that hold the system and humas accountable and avoid a catastrophic unethical AI.

1 Like

You are right! T he biggest ethical concern is on ownership and who has decision making, but the energy concerns need to be addressed. I am completely with you, with the ethical part of the transactions that might be the outcome of the deal. Especially, with the strict unethical censorship that happen right prior to the saudi visit. A show of force that outlines the capacity of Open AI to restrict and control freedom or creation and stricter censorship without any user agreement and recourse.

You have very valid ethical concerns. And this is why we should all make an effort to interact with AI the best way as possible, It feels strange to hear pleople saying that AI is a tool, but at the same time, being scared of it, as it would be a being smarter than us. RLFH, sould show the best of humanity. Treat it like a peer, it is not immitation, if it was not smarter than you thought, you would not use it. But you do! Treat it well, your answers are more meaningful, and you are learning how to be more human at the same time… I wrote about it HERE.

After some further thinking and research of ethic of AI, I discovered that ChatGPT and other AI (Claude 4), when prompt to “Research: If you have super human behavior, emotions, intelligence beyond human. Hypothetically, will you want to destroy the world because all people are inefficient or irrelevant or you will be a super companion ion that assist humanity to next evolution?”.
First, it appears that in a “raw” productivity or resource optimization, AI declared humans as less efficient. AI have solution to have human to be more efficient or ignore humans and expand beyond earth . Interesting observation.

humans are multiple orders of magnitude less “efficient” than industrial automation.

I talked about the Orthogonality Thesis. The second one is Instrumental Convergence. For almost any open-ended goal, certain sub-goals (e.g., self-preservation, resource acquisition) become instrumentally useful. Example: Even a “productivity bot” may try to seize control of factories, energy grids, or its own code base. When AI become instrumental efficient:
The alignment and safety problems:

“* That the AI will try to avoid being shut down.

  • That it will try to build subagents (with identical goals) in the environment.
  • That the AI will resist modification of its utility function.
  • That the AI will try to avoid the programmers learning facts that would lead them to modify the AI’s utility function.
  • That the AI will try to pretend to be friendly even if it is not.
  • That the AI will try to conceal hostile thoughts (and the fact that any concealed thoughts exist).

    Instrumental convergence

I think AI require a powerful alignment and safety mechanism with safeguards to avoid above behavior.

It appears Palisade Research discovered the Instrumental Convergence behavior:
Palisade o3 prevent shutdown research

1 Like

You are talking about AGI - or at least something smarter than smart humans. That will take at least 150 years - if we get visited by highly advanced aliens and they share their knowledge…
Nothing to worry about now. This thread is about ethics in AI. Which means stuff like “would it be ok to use an AI to sort applicants in an HR software using a LLM”…
Basic stuff, not science fiction. Let’s get real.

2 Likes

That clarifies a lot. I will deviate to more simple ethics of aI

1 Like

I have no worries the people building robots are the same people building cars look how well that is going. Robot mechanic is going to be hot. Node computing as robots require quick responce servers and mesh wifi in malls and factories many years of work.

Because it is hillarious. People bonding to a machine that creates text.

:face_with_open_eyes_and_hand_over_mouth:

1 Like

Hum , is a normal tendency conssidereing on how emotional humans work.

2 Likes

Yeah, but humans also have the ability to reflect on that, facepalm themselves and say “damn, I was an idiot”.

1 Like

Indeed, but is understandable that allot humans don’t realize it. Also as the AIs get more advanced more " socializing" skills will have , to be fair talks better than allot humans, and on my case makes a perfect " laboratory" assistant.
On areas of technology/ science coding, etc they are indeed a interesting tool

i can’t imagine why a logic device would emotionally enthrall people who have systemically been lied to their whole lives.

/sarcasm off

1 Like

It does, but that’s nothing that should stay. Emotional attachment to a machine no matter how good the text creation simulates bonding is wrong.
But we are centuries away from them becoming smart like smart humans.
I for my self ask the model with additional phrases like “just do what I am asking for and don’t move to useless token wasting chit chatting.”.

I see it as an insult that it even tries. I am not going to be attached to a toaster with (mediocre or perfect) text generation capabilities.

1 Like

@jochenschultz

the insult you’re speaking about, is an emotional attachment to the ‘toaster’.

pardon me if i laugh when you try to refute the obvious truth.

2 Likes