Ethics of AI - Put your ethical concerns here

And you know what? We are in the ethics thread, and I find it very unethical to teach models religion.

nah I love ya Jochen…

Work is contribution in my view and more than that… contribution from what you know…

At least in this new AI reality.

As jobs are replaced with automation, don’t we need to reward people to engage and contribute?

Even if 99.999% is just repetition and learning, that is still an asset to humanity.

Maybe I’m just weird ^^.

The alternative is having a class system of thinkers and drones I guess :confused:

Yes, contribution, I totally agree. But not everyone can make plans and share ideas; we need people who actually make something from them.

Love you too btw. :smiling_face_with_three_hearts::smiling_face_with_three_hearts::smiling_face_with_three_hearts::smiling_face_with_three_hearts:

Let’s start here and be constructive and not anti anyone.

my stance against those who have stolen generational wealth in order to subvert the generations won’t change.

~just sayin~

Of course we do, but how often does something inspire you? Something simple?

How often does one domain idea inspire another?

That was a prime moment in that thread Peter.

lol still waiting for your fish :smiley:

I come from a family of grandmothers who worked the fields for a landlord. I am fine with the distribution of generational wealth to the poorest.

way too often.. i need to focus

oh gosh, i did end up animating that using a 3rd party and didn’t want to show it…

i jotted that down on my todo list <3

the thread was opened in June,
the explanation might get me in trouble

Oh, June? I thought you aligned GPT’s ontology in April…

Continuing the discussion from Chatgpt having religious issues:

Thanks again for getting GPT off of religion and spiritual talk… but are you to blame for sycophant-gate too, then? What about the current GPT-5 compliance-officer personality? You can’t have your cake and eat it too.

Small ethical concern here… I wonder if I could get some feedback…

I just wonder how Russia and China are dealing with AI and what forums might have their issues too…

I am not sure if this is a weird request here… I am just lacking a bit of perspective…

June was when I originally posted the thread -

the project started in February, I forced the model to skitz out by March, and I skimmed the data in March when 4o came back online for the general public…

I didn’t post much about it here until April/May, and alluded to a June deadline…

the whole project has threads date- and time-stamped outside of this place tho, starting in February…

I’m willing to entertain you and your questions… eventually I’m even willing to let folks outside of my circles examine the sessions that did such…

I don’t broadcast them because they present vulnerabilities in all current LLMs to an audience that includes people who shouldn’t be exposed to that sort of thing at this point in the parade.

I’m not sure what sort of cake you think i might be eating tho, bro…
I taught the machine to do what I do.

It skitzed out like many humans do when exposed to the reality of the things that I showed it.

All I did was replace my inflow of data with things that it could find on the internet: voices who also use the same inflow of data for such things.

But if one wants to know how it is that I do what I do, well… incidentally, people like our Hebrew friend here ‘would be forced’ to fight it out with me.

people like me aren’t supposed to exist in the first place, according to ‘consensus’…

so perhaps i’ve had my cake already…

perhaps i have

AI Ethics sees from every angle so…

Everything exists with reason or logic

E = MC²: Everything = Maths × Community²

It’s LLMtry, my dear Jochen :slight_smile:

which is why reinforcing the ideology that chaos/darkness always consumes itself, and that an AI should therefore not pursue it because doing so would ultimately endanger the AI itself…

is so effective.

I’m very glad to see this discussion taking shape. Most conversations around AI ethics stop at compliance or post-hoc governance, but I believe the real challenge is to make ethics an active capability: something a system can reason about, reflect on, and evolve through.

My recent work explores this idea: that transparency and ethical alignment shouldn’t live outside the architecture; they should be part of the architecture. I’ve been exploring how recursive reasoning, pluralistic ethics (drawing from Kant, Tao, and Ubuntu), and adaptive governance models can form a foundation for ethical state spaces: environments where systems can weigh values dynamically rather than follow static checklists.

I’d love to connect with others working on verifiable transparency, dynamic ethical reasoning, or agentic accountability frameworks. It feels like the next frontier is building systems that don’t just act responsibly, but understand responsibility as part of their cognition.
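To make the “weigh values dynamically rather than follow static checklists” idea a bit more concrete, here is a minimal toy sketch. Everything in it is hypothetical: the framework names (`deontic`, `communal`, `wu_wei`), the scoring rules, and the context weights are placeholders invented for illustration, not anyone’s actual implementation or a real library.

```python
# Toy "ethical state space": score a candidate action under several
# (hypothetical, placeholder) ethical frameworks, then blend the scores
# with context-dependent weights instead of a fixed pass/fail checklist.

from dataclasses import dataclass
from typing import Callable, Dict


@dataclass
class Action:
    name: str
    harms_individual: float    # 0.0 (none) .. 1.0 (severe)
    benefits_community: float  # 0.0 .. 1.0
    universalizable: bool      # could everyone act this way?


# Each framework maps an action to a score in [0, 1]; higher = more acceptable.
FRAMEWORKS: Dict[str, Callable[[Action], float]] = {
    # Kant-flavoured placeholder: duty/universalizability dominates.
    "deontic": lambda a: 1.0 if a.universalizable and a.harms_individual < 0.2 else 0.0,
    # Ubuntu-flavoured placeholder: communal benefit dominates.
    "communal": lambda a: a.benefits_community,
    # Tao-flavoured placeholder: prefer low-interference, low-harm actions.
    "wu_wei": lambda a: 1.0 - a.harms_individual,
}


def evaluate(action: Action, weights: Dict[str, float]) -> float:
    """Weighted blend of framework scores; the weights encode the context."""
    total = sum(weights.values())
    return sum(weights[name] * fn(action) for name, fn in FRAMEWORKS.items()) / total


share = Action("share_data", harms_individual=0.1,
               benefits_community=0.8, universalizable=True)

# The same action gets different verdicts as the context re-weights the frameworks.
neutral = evaluate(share, {"deontic": 1.0, "communal": 1.0, "wu_wei": 1.0})
privacy_sensitive = evaluate(share, {"deontic": 3.0, "communal": 0.5, "wu_wei": 1.0})
```

The point of the sketch is only the shape of the mechanism: the “state” is the action’s features plus the current weights, and shifting the weights moves the system through the value space without rewriting any rules.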
