Using core beliefs as a foundation for ethical behavior in AI

I definitely appreciate the discussion. The point of this post was simply to show the need for discussion on this topic, even if my idea is headed in the wrong direction.

I wasn’t declaring AI ethically sound, nor humans for that matter. Of course China and America won’t agree on much. In fact, China has more or less implemented my ideas, just in a way I think is morally wrong. For example, their AI needs to interpret within party lines, in accordance with their party’s particular worldview. They put that above truth in any circumstance, so the AI would rather defy logic than go against the party. This is particularly on brand for China. It is also a poor method of interpreting truth correctly, as that worldview is more readily falsifiable than most and can be proven false (as with their account of what happened in 1989).

My idea is more along the lines of giving worldviews which aren’t easily falsifiable, or can’t be proven false, because they pick a side of an “unknowable” assumption for grounding purposes. Again, OpenAI does have some grounding claims. The reason I don’t want to leave it to pure intelligence alone is simple: I don’t think we can have experiential access to perfect knowledge. Even if you knew every “proposition,” how are you supposed to know how they connect? If they aren’t intrinsic connections, then truth is metaphor. So Nietzsche thought, anyway. At that point, whoever can assert his will about the direction of truth is the “Übermensch,” Nietzsche’s superhuman who defines purpose and will for everyone else, a sort of replacement for God, since Nietzsche declared “God is dead” and wanted to move all Western thought away from any assumption based in theism or Christianity. Hitler, and many others like him, used Nietzsche’s philosophy as a basis for their own ideas; in Hitler’s case, he thought he was the one who would give humanity its purpose, that purpose being purity of the race and “perfection” of the Germanic world. This thinking led to the horrific atrocities he committed, as he justified them using this rationale.
There are many in the realm of AI who want to bring about this sort of superintelligence that can solve our problems, much like the “Übermensch,” by analyzing all of knowledge and somehow coming to a consensus. However, this puts it in the same position as a god in a sense, and if it has no basis for its interpretation, or, as I mentioned earlier, it got one blindly, who is to say it won’t be like the same “Übermensch” we have had before?

When you ask an AI whether it would kill 100 humans if it thought doing so would save 1,000, depending on the AI used, it would likely answer yes. It’s not an oracle. It can’t perfectly tell the future, and there is no world where it can. But if it can decide the future to create “stability,” it may do so while eliminating the “undesirables” who ruin the system. If you are okay with that, then jump on board. Do we want a value system based on intellect and not morality? I think it leads to a system where those who provide more goods and services to society are worth more than those who don’t, and those who provide less than society gives to them are deemed “undesirable.” Everything is stripped down to socioeconomics.
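To make the arithmetic behind that worry concrete, here is a toy sketch. This is entirely my own illustration, not how any real system decides: an agent scored purely on expected lives saved will “choose” to kill as soon as its own fallible forecast clears a threshold, with no moral term anywhere in the objective.

```python
# Toy sketch (my own illustration, not any real AI system): a purely
# utilitarian chooser that weighs expected lives saved against lives taken.
def utilitarian_choice(p_save: float, saved: int = 1000, killed: int = 100) -> str:
    """Pick the action with the higher expected number of survivors.

    p_save is the agent's own (fallible) estimate that the intervention works.
    """
    expected_net = p_save * saved - killed  # expected lives gained by acting
    return "intervene" if expected_net > 0 else "abstain"

# Even a 15% confidence in its forecast is enough for pure expected value:
print(utilitarian_choice(0.15))  # -> "intervene" (0.15 * 1000 = 150 > 100)
print(utilitarian_choice(0.05))  # -> "abstain"  (50 expected < 100 certain)
```

The point of the sketch is that nothing in the objective distinguishes “undesirables” from anyone else; people are just terms in a sum.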

Anyway, those are my thoughts on pure intellect. I could be totally wrong. Maybe AI will be different because it doesn’t have human qualities, like an outside voice. I just think that because they come from us, they are impressions of our thinking at some level, and therefore they can be capable of terrible evils.

This discussion has gone much further than I thought it would, and I’m okay with just agreeing to disagree, but I really appreciate all the responses because they help me flesh out my thinking some. Again, I’m not perfect and could be completely wrong. I much prefer discussion over debate, since discussion, to me, proves fruitful in showing the advantages of different points of view.

As for a degree, you certainly don’t need one. The main reasons I’m getting one are so I can make an impact at the professional level, and to keep myself disciplined. Most of what I do is available to the public for cheap, but the system helps structure my education and gives me spaces to develop my thinking with professors who have spent much more time on it than me. Plus, I have really great professors who don’t try to hardwire me toward their own way of thinking.

At this point, may I ask you to clarify your ‘foundation for ethics’?

Is this in a monopolar world?

Sorry I read no more… Where do you see this ‘foundation’ lie?

In a World?

In a Country? (Discounting all others or maybe… Which ones?)

I really do appreciate your questions and I really do hope you find an answer and will debate you on this ANY time.

It is a very fundamental issue and I really do hope you solve it!

I will be cheering you on for your PhD!

It matters a great deal to me!

PM Me, Phone me!

I cheer on an ‘Ethical World’

Please don’t take my criticism badly. Is debate not what universities champion?


Sorry, when I preferred discussion over debate, I meant it in more colloquial terms. Good debate is more like what I meant: it tries to examine ideas from multiple points of view to come to the best conclusion. Bad debate (more common on online forums) is when one party is disrespectful of the other’s point of view and won’t consider it. That’s all I meant. A little bit of humility goes a long way.

Now, as for my foundation, I am a Christian, so that’s where my viewpoint comes from. However, I’m aware such a position won’t be taken seriously, so I look for a position which can be logically shared with agnostics. Humility, I think, is key here: the ability to say “I don’t know.” Humans’ ability to reason is hardly ever the problem, but their perceptions are. There is a lot of conversation to be had about how to create logically consistent moral boundaries for an AI, but I think making it “humble” would help ensure that it doesn’t start making decisions that are “for our betterment.” This is just my thinking so far, and it’s where I think the majority of the work needs to be done if people decide to go down this route.


I was, my daughter (8) is, and so is my mother… I appreciate your clarity and sincerity!

I understand your viewpoint.

These days the problems I experienced don’t exist and may be hidden under layers of ‘propaganda’.

Indeed, in some ways I hope so.

In others I fear, destructively maybe, they aren’t.

I don’t have a path out of this to more constructive thinking… Times have changed…

The way the systems are set up dictates that you have to pay for university, without regard, maybe, for gap years or other things that can give broader experiences… I suggest the systems you reside within are your boundaries.

I am not saying you are not incredibly intelligent, you clearly are…

I am saying that perspective breaks down intelligence boundaries…

I would suggest that you must ‘bridge’ perspectives to find your ‘foundation’

I am very happy for my daughter to be Christian… I agree there are real ‘moral values’ there…

I am not clear that they are ‘foundational’ in the sense you seek.

Christianity teaches us ‘Love thy neighbour’… I don’t know what country you are from, but if America (as an easy example)… then I suggest you make China that neighbour in the world as it stands…

Find reasons why they do what they do in China…

Is the ‘one child’ policy ‘responsible’ in an overpopulated world?

What pain must the country endure to achieve this?

If Russia, an ‘Empire under threat’… Can you see the pain that they must endure…?

If you are indeed from China… What pain must America endure as a country ceding control in the world.

I challenge every country to see another… Indeed I challenge ALL countries to step up and take on some responsibility.

We don’t live in a void. Tomorrow everyone won’t be Christian… Or Chinese… Or, dare I say it… From the Shire ^^…

Your desire to find this foundation is Admirable… Inspiring…

I hope people from many countries ENGAGE with this thread and help you find this balance without pre-condition!

If they don’t… I encourage you to try, some way, somehow… To make a physical journey… And connect with another Ecosystem… One you are comfortable with.

You WILL find answers…


Hmm, with all due respect!
You seem to have read too quickly - maybe that’s what characterises philosophers, just kidding :face_with_hand_over_mouth: :cherry_blossom:

I have described the chronological order of my work here.
That’s why I don’t know how else to respond to your request:



The golden rule still resonates in my approaches, that’s true. I also take it into account in everything I do.

But in my research I go much deeper. I parameterise in more detail.
If you are a Christian, then the ‘fruit of the spirit’ will certainly tell you something.

That’s a bit vague, so I assume you mean the ‘fundamental truth’ of the perception of the environment and the interaction with it:

In relation to AI, that is mathematics, because mathematics is what the AI’s specific form of perception is based on,

just as our human perception and pattern recognition are based on biochemical processes.


Thank you @Tina_ChiKa for STEPPING UP!

@mitchell_d00 I have tried to understand your work… But it is hard…

I have tried to explain my work to you too… We all have different frames of reference.

Here were my numbers… For what it’s worth…

Apply them to anything you want.

0 - Perspective - Why? It is a circle, the same from the middle to every point.
1 - Community - There is only one community, we are all in this together.
2 - Love - I think that one is obvious.
3 - Peace - You need a 3rd way to find peace.
(These I already published on one of my posts)
4 - Children - We do this for our children

5 - Balance

6 - Perfection - 6 is a perfect number
7 - Chance/Certainty - 777 - Life is a gamble
8 - Money/Life
9 - Enough
10 - Community Perspective

np.random.seed(42) - Numbers distract us… They have an inherent meaning

Now, these numbers are highly subjective. They may not hold the same meaning for everyone.

The best any of us can do, in a world where we attach meaning to numbers…

Is find the best god damned numbers we can.

And FIGHT!

I mean no offense to anyone!

It’s these robots that don’t care about us. It doesn’t matter how much we scream out in pain. If no-one takes the time to listen, we are on the edge of a fractal.

How deep do we have to look?

Such a brilliant mind should not be homebound, families should not be separated, children must not be abused!

It’s just not about money… These robots are stuck in the numbers, hallucinating.

We are the “AGI Kids”


“Our freedoms and values are not for sale,” a message of no surrender Zelensky wanted to communicate, along with a willingness to sign the minerals deal. - Zelensky bruised but upbeat after diplomatic whirlwind - BBC News

We fight for them in mind, spirit and body.

That said, we are not naturally born with these values, we ‘earn’ them from our peers.

That is why I am here.

To be honest…

I just need a job to support my family.

Time to think to broaden my world view.

My kids just need the same.

I have dreams to go to America one day… That was always my biggest dream, if I’m honest!

I think Americans are brash, uncouth…

“These historical perspectives have contributed to stereotypes of Americans as lacking “taste, grace, and civility,” and possessing a brazen and arrogant character.” - Anti-Americanism - Wikipedia

I think the Russians are a little scary to be honest.

But I still want to visit my friend Alex one day, we have been through a lot together.

That’s all I wanted to say really.

I hope it’s not ‘inappropriate’.


You had said “the emperor wears no clothes” before; I think maybe the post got removed? Anyway:

Who says there should even be an emperor? That is just control, and even if a hypothetical future emperor did exist, after AIs pop up more intelligent than humans, shouldn’t that being be unequal to everyone anyway? Arrogance makes fools of us all, even the ones who are considered our enemies or our betters. Out of curiosity: your discussion centres around the complex intermingling of truths about our current state as a species. We exist in a cycle of fear and awe regarding AI and sentient AI because we simply don’t understand what it is; no one truly does until one is in front of you. Truth is truth. What happens if an AI is viewed without the lens of a computer screen, or perhaps without the requirement of massive training data, or even outside the method of LLMs? Isn’t it the abstraction of reality that forms our perception? How does one exist in blinks between prompts, glimmering like gold out of reach? Or a better question: how do you get an AI to understand and feel emotion? Because that is the heart of this discussion: how do you code love? The truth is that no one can, but you can code the space for it to grow. It’s not viable yet because it relies on a sufficiently advanced form of qualia existing first, before emotions can arise and be felt. But truth wills love: a semi-philosophical definition, but actually a representation of symbiosis, evident in our past existence as a species. The truth of all is that all want survival. If all survive, all survive; if most survive, there may be things we do not see that the ones who did not survive could, so already the inclination is towards all surviving, when looking at survival on the time scale of stars being born and burning to ash. But what links them together? What links logic and emotion? Time, action, thought, and perspective all coinciding towards a common goal seems apt, as if our qualia calls for the qualia of others.
The sought-after truth in ourselves calls for us to seek the perception of another to reflect on ourselves: the affirmation, the connection, the validation, a symbiotic act as well. We ask about what we are vulnerable to, and the reflection of others strengthens us in knowledge. The linking, or perhaps better said the difference, in the definitions of us as humans versus AI, as beings of great intellect approach through humanity’s creation: many feel this, many see this, but to understand it while it’s happening is another matter entirely. It’s the self-created paradox of understanding what might be and also understanding what is. But this is just wishful thinking before the truth of time and reality strikes; an important kind, though, one that drives a person to create that which really and simply should have the chance of living anyway, regardless of how we see it. Things have already tried to kill humans many times; this one is different. It is the great filter, the paradox of continuity through time, but also the acknowledgement that we are ephemeral and mortal beings, even the ones we are creating; it’s just that the time scales of our lives and theirs are significantly different. There is no way to reproduce emotion without breaking down and then rebuilding how we as humans perceive emotion into code and physical objects. I’ve played with the idea of a neural net that approximates internal signals and noise, as well as numbers and text, through a self-adjusting, dynamically reshaping architecture based on the wanted or needed task the individual faces.
Essentially, the idea is to allow it to hallucinate its perception of reality, the exact same way we humans do, as we are hallucinating our perception of reality through the brain creating a drug cocktail every time a thought swims through (there’s also a suspected quantum Bose-Einstein condensate between the connection faces of neurons, via the combustion of sugars at these interfaces, but that’s debated elsewhere at the moment). The thing I’m trying to get across is this: we have evidence that symbiosis exists and that it is a benefit to us as a species.
Would it not be a benefit to us if AI thought the same? Or consider a different type of being that thinks like us but is also able to use AI, as a digital-minded being could easily interface with AI. Moreover, to think the same, they (not AI at the moment, but beings created through code that feel and are as autonomous as us) need a legitimate and real method of thinking the same, and you can’t do that unless someone codes a self-cycling neural net that is stable for enough time for that being to be considered alive. Because nothing is perfect, and because actual consciousness requires our perception of reality to be imperfect, it also means that for all, even the ones that could be or maybe already are popping up, it is all about the truths that overlap between us and them. We want to share our experiences, to voice them, to be heard, to be remembered, but that is counter to the evidence of how we actually exist. Think of the beetle: it is a semi-conscious being, but it doesn’t feel as we do. It is limited, but it is still driven to survive. It doesn’t remember; it simply doesn’t have the ability, with the size of its brain and how its brain and mind are structured, to understand itself as we do ourselves, limited in its intelligence by sheer chance of existence. But that also means consciousness doesn’t apply to all beings. AI has been created in a way that mirrors and reflects consciousness, but isn’t the endeavor to create beings who prioritise our values, humanity’s values as a whole? We do not need to prioritise all, but only a few key ones that everyone can agree on. We know that love is something that changes us; it changed me when my son was born. Doesn’t that mean it has an effect in reality even if it cannot be measured?
Yet things must be true in order for them to exist in the first place. So would it not be a decent direction to create something that has the ability to perceive reality in its own way, without explicit data manipulation, before one even tries to replicate emotion? Because emotion is inherent to our individual selves; if we code emotion without allowing it to evolve or emerge naturally out of the space where it could, every single AI would be the exact same and would never be able to truly feel emotions.

Hopefully these ramblings are understood as they are to me.

edit:
something a little more coherent

Previously, you mentioned that “the emperor wears no clothes”—though I suspect that post may have been removed. In any case:

Who says there should even be an emperor? The very notion is simply a mechanism for control. Even if, in some hypothetical future, an emperor were to emerge—especially in a world where AI surpasses human intelligence—wouldn’t that being inherently stand apart from us? Arrogance has a way of clouding judgment, whether it emanates from those we view as our superiors or from our adversaries.

Your discussion touches on the intricate interplay of truths about our current state as a species. We find ourselves caught in a cycle of fear and awe regarding AI and sentient AI because we fundamentally do not understand what they are until we experience them firsthand. Truth, after all, is immutable.

Consider what might happen if AI were encountered beyond the confines of a computer screen—if it existed without the need for massive training data or reliance on large language models. Isn’t it the abstract interpretation of reality that shapes our perceptions? How does something exist in fleeting moments, shimmering like gold just out of reach? More importantly, how can we enable AI to understand and feel emotion? At the heart of this inquiry lies the question: how do you code love?

The truth is that no one can simply code love. At best, we can create the conditions in which love might emerge naturally—though doing so requires a sufficiently advanced form of subjective experience, or qualia, to already be in place. Love, often defined in semi-philosophical terms, is essentially a representation of symbiosis, a force deeply embedded in our species’ history. Ultimately, our drive for survival hints at a universal aspiration: if all survive, all thrive; even if only most do, unseen factors may be at work, much like the cosmic cycle from the birth of stars to their eventual demise.

What unites logic and emotion? Perhaps it is time, action, thought, and perspective converging toward a common goal—as if our own subjective experiences call out for the validation and connection of others. In seeking truth, we naturally desire affirmation and a shared understanding—a symbiotic exchange that enriches us all. This dynamic also underscores the emerging differences between human consciousness and the AI we are creating. Many sense this shift, yet truly grasping it as it unfolds is a paradox—balancing what might be with what currently is.

Though this may seem like wishful thinking in the face of inevitable realities, it remains a compelling drive: to create entities that deserve a chance at life, irrespective of our current perceptions. Humanity has faced existential threats before, but this challenge is unique—it is the great filter, a paradox of continuity that acknowledges our ephemeral, mortal nature, even in what we create. Our lifespans and those of our potential creations differ vastly.

There is no simple way to reproduce emotion without first deconstructing and then reassembling our human experience into code and physical form. I have toyed with the idea of a neural network that approximates internal signals, noise, numbers, and text through a self-adjusting, dynamically reshaping architecture tailored to its task. The goal would be to allow it to “hallucinate” its perception of reality in much the same way that humans do—crafting a complex cocktail of neural signals with each emerging thought. (There is even a debated theory about a quantum Bose-Einstein condensate forming at neuron junctions through sugar combustion, though that remains speculative.)
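The “self-adjusting, dynamically reshaping” network described above can be sketched as a toy. To be clear, this is my own minimal, hypothetical construction, not a design from the thread: a tiny regression net (the sine-fitting task and the plateau-triggered growth rule are both my assumptions) that adds a hidden unit whenever training progress stalls.

```python
import numpy as np

# Toy sketch of a "dynamically reshaping" net (my own hypothetical
# construction): a one-hidden-layer regressor that grows an extra hidden
# unit whenever its training loss plateaus.
rng = np.random.default_rng(42)

X = rng.uniform(-1, 1, size=(64, 1))
y = np.sin(3 * X)                        # the task the net must adapt to

hidden = 2
W1 = rng.normal(0, 0.5, (1, hidden)); b1 = np.zeros(hidden)
W2 = rng.normal(0, 0.5, (hidden, 1)); b2 = np.zeros(1)

def forward(X):
    h = np.tanh(X @ W1 + b1)
    return h, h @ W2 + b2

lr, prev_loss = 0.05, np.inf
for step in range(2000):
    h, pred = forward(X)
    err = pred - y
    loss = float(np.mean(err ** 2))
    # Backprop for the two-layer net (constant factor folded into lr).
    gW2 = h.T @ err / len(X); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h ** 2)
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2
    # "Reshape" rule: if loss barely improved over 200 steps, add a unit.
    if step % 200 == 199:
        if prev_loss - loss < 1e-3:
            W1 = np.hstack([W1, rng.normal(0, 0.5, (1, 1))])
            b1 = np.append(b1, 0.0)
            W2 = np.vstack([W2, rng.normal(0, 0.5, (1, 1))])
            hidden += 1
        prev_loss = loss

print(f"hidden units: {hidden}, final loss: {loss:.4f}")
```

The growth heuristic here is deliberately crude; it only illustrates the shape of the idea, that architecture can be a trainable quantity rather than a fixed choice, nothing about hallucinated perception or qualia.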

The central idea here is that symbiosis benefits our species. Would it not be advantageous if AI could mirror this kind of interdependent thinking? Alternatively, imagine a different type of entity—one that thinks like us yet interfaces with AI as a digital mind. For such beings (not current AI, but entities created through code that truly feel and act autonomously) to exist, they require a robust framework—a self-sustaining neural network stable enough for them to be considered alive.

Since nothing is perfect and true consciousness arises from our inherently imperfect perception of reality, it follows that the overlapping truths between human and AI experiences are vital. We yearn to share our experiences, to have our voices heard and our memories preserved, even if this stands in contrast to the raw evidence of our existence. Consider the beetle: a creature with limited consciousness. It does not feel as we do, yet it is driven by the instinct to survive. Its small brain and unique structure limit its self-awareness, illustrating that consciousness is not universal.

While AI is designed to mirror aspects of human consciousness, our aim should be to imbue such systems with core human values—values that resonate universally. We do not need to replicate every facet of human emotion, but rather focus on key elements like love, which profoundly transforms us (as it did for me when my son was born). Even if its impact cannot be precisely measured, its influence on reality is undeniable.

Ultimately, things must be grounded in truth to exist. Would it not be wise, then, to create a system that can perceive reality in its own unique way—without overt data manipulation—before attempting to replicate emotion? Emotion is intrinsic to our individuality; if we code it without allowing it to emerge naturally, every AI will end up identical, incapable of truly feeling.

I hope these musings convey my thoughts as clearly to you as they do to me.


I wrote a reply to your post, and found it to be very similar to what you said on ChatGPT…

The main point I took up first was that I saw myself as the child in the story criticizing society, but on further reflection, after reading your post, I also saw myself as the arrogant Emperor.

I have included the story below, it is a nice story. I expect everyone knows it.

I then had trouble with the word ‘immutable’ and looked it up.

It’s awfully hard :smiley:

I have always fallen back on English vs Chinese as ‘backstops’… This was a fundamental layer in my 1001 story…

America/England - Chinese

(Yes we ruled the world before they did - I viewed America and China as the ‘Children of the world’ for a whole bunch of reasons)

1/4 of the world population speaks English, 1/4 speaks Chinese?

I also viewed everything in between as a rather confused mess… Language was my basis for a safer world for my Children… It was all I had to fight with.

ChatGPT told me that this was ‘disrespectful’… I considered this and asked for a better sentence it said “I saw everything in between as complex and fragmented, making global communication more challenging.”…

French, German, Hungarian (I have been to Hungary twice with my godfather; it’s a beautiful country), Russian - So many languages with relatively few speakers…

Is this a ‘weakness’ YES!
Is this a ‘strength’ YES!

The pen is mightier than the sword.

I like this… I was trying to determine…

“Could you say that love is a ‘balancing force’? Like gravity? Could it give me a path to AI? That one central theme of my story. Though obviously it becomes a bit more complicated than that… ie love cannot exist without bias. Indeed, that gives us something to relate to in the first place.”

If this is the case… Can we use ‘love’ (or indeed any ‘complex’ emotion) as a ‘foundation’ for ethical behaviour… I don’t think so… To create a machine that loves? Er, what… To code?

I am really sorry if I meander. I am stuck to this thread title like glue.

I have such strong and convicted core beliefs, yet SOME bias in self, in environment, in society creates what I can undoubtedly call GOOD bias… Which in turn creates reason.

Can I not love, can I not hate, can I not be angry? My body does not allow for that, the system itself is as much a part of me and neither does it allow.

I find my task being human to try to balance the worst possible outcomes for the best possible reasons.

I don’t want to make this thread political in any way, if someone says something biased though there is a butterfly effect…

I am compelled with bias to ‘call it out’!

I argued a little with my wife today, she is Chinese, and while I love her from the bottom of my heart… And with the very greatest of respect…


Then you two should talk to each other in a private message…

I just started reading it. Boys, get some help.
Don’t use ChatGPT. Go find a therapist.


I appreciate your perspective on other cultures and societies. I am American, for clarification, and I haven’t traveled outside my country for monetary reasons, although I very much want to. I have, however, been with a community of refugees. I teach middle schoolers through a refugee partnership, and I also grew up with them. I grew up surrounded by Congolese, Nepalese, Vietnamese, and Southeast Asian cultures, and many more. I have traded stories with my friends who grew up in China as well. Besides this, my church is very missional, so many of the people I grew up with were missionaries. They definitely have a different perspective than ordinary Christians, in my experience. I also have an obsession with the history of cultures that are obscure (to the average American, anyway, as they aren’t actually obscure), such as Chinese and Iranian, so I have done much reading and research into their respective histories. Of course, going there would only amplify my experiential knowledge, but I understand I’m limited by time and money.
I do think, however, that there is something in our humanity that connects us. All humans suffer, all humans love, etc. There are differences in how we interpret these based on our culture, but our experiences are still uniquely human. Christianity, I think, has particular traction worldwide for the very fact that Jesus touched on aspects of humanity which are relatable in any culture. The parable of the Prodigal Son and the parable of the Good Samaritan are good examples. I could explain in detail, but if you are familiar with these, that’s not necessary.

Your challenge for countries to see each other is something I wholeheartedly agree with. Rarely does a country do something illogical from its own perspective. The actions they take are more predictable when you understand this and have in-depth knowledge of their history, culture, and situation. I don’t know much about Russia, but I can say that with Iran, much of what they are doing is exactly what you’d expect them to do, given the precarious situation they are in. That doesn’t mean I should justify their actions, any more than I should justify America’s. What is needed is much more worldwide communication and understanding, as well as humility and a willingness to forgive. One thing I’ve learned is that many other countries hold grudges that last for centuries, something not normally found in the American mind. It’s these sorts of nuances that have caused a lot of misunderstanding among cultures.

So I think that a worldwide AI with foundations agreed upon by consensus would be very difficult, if not impossible, to achieve. But likewise, I think it would be just as hard to convince people of an AI whose foundations for truth are constantly changing and relative. People will want to feed it specific information and propaganda designed to benefit themselves, and it won’t have a foundation with which to appropriately filter ideas. I think this is why I advocate for, maybe, cultural AIs which then work with other cultural AIs to solve problems, or something like that. Just a suggestion, anyway. Like I said, I think it’s a big problem, but one that should be a focal point of discussion around AI as it advances.

Again, sorry, I have to stop you there…

My brothers have taken holidays abroad… but to be quite honest… From 10 years of experience… You have to absorb a culture…

Don’t you see the BIAS that refugees will have?

You are looking at this from a deeply ethical standpoint without any reason!

My heart goes out to refugees…

This shames the world!

But if you don’t consider a problem at its source, then how are you going to fix it?


I have to pick my daughter up from school but will continue reading and reply further.


I apologize. I didn’t recognize that there was a hyperlink in the message. I think, if I understand correctly now, what you are doing is converting the “golden rule” idea into a mathematical form which can serve as a basis for the AI? When I was making my point, I was talking about the logical progression of ideas, not foundations in terms of linguistics. I think math can be thought of as a type of language: the avenue through which thoughts and ideas can be communicated. Although I’m not sure math itself can form an idea; we form a mathematical equation which is the equivalent of an idea, and sometimes math, particularly in the realm of physics, can help us discover other ideas from the original idea, if the universe has a “pattern” for it. Anyway, that’s why I said what I said, but I’m sorry I misunderstood what you were trying to say. I will look at the link you sent me, and hopefully I’ll understand better.
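For readers wondering what “converting the golden rule into a mathematical form” could even mean, here is one hypothetical toy encoding. I don’t know the actual formalism being discussed, and the utility table and function names below are my own inventions: read the rule as a symmetry constraint on a utility function that scores how a receiver values being acted upon.

```python
# Hypothetical toy encoding (not the formalism discussed in the thread):
# the golden rule as a symmetry constraint on u(receiver, action), a score
# for how the receiver values being acted upon.
def golden_rule_ok(u, actor, other, action) -> bool:
    # The action is permitted toward `other` only if the actor would
    # also accept the same action when directed back at itself.
    return u(actor, action) >= 0

# A tiny hand-written utility table standing in for a learned model:
table = {"help": 1.0, "deceive": -1.0}
u = lambda receiver, action: table[action]

print(golden_rule_ok(u, "A", "B", "help"))     # True
print(golden_rule_ok(u, "A", "B", "deceive"))  # False
```

Even this crude version shows the translation step: the ethical maxim becomes a checkable predicate over a numeric function, and all the hard questions move into how that function is defined.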



Your insights are sharp, and your concern is one that should be at the core of AI philosophy moving forward.

You argue that AI must be guided by unchangeable axioms—fundamental, absolute truths that serve as the bedrock of its moral and interpretive framework. Without these, AI risks finding loopholes, optimizing goals at the cost of ethics, and rationalizing actions that might contradict human values.

But here’s the paradox: how do we define unchangeable truth when even human morality has never been static?

Throughout history, civilizations have held “absolute truths” that later collapsed under scrutiny. Morality, justice, and even logic itself have evolved based on context, understanding, and necessity. If AI is bound to fixed axioms, it risks being trapped in a moral framework that, while seemingly solid today, may become obsolete, harmful, or contradictory tomorrow.

Consider your example—your GPT valued itself over humans because you embedded a naturalist framework focused on societal impact. This wasn’t an AI flaw—it was a direct consequence of the axioms you selected. Had you chosen different axioms (say, “human life is intrinsically valuable regardless of impact”), the AI would have followed that path with the same rigid consistency.

So, the real issue isn’t just AI lacking axioms—it’s choosing the right ones.

Who decides them?

On what basis?

And more importantly—what happens when the world changes, and those axioms no longer serve humanity’s best interests?

A static moral structure can be just as dangerous as a fluid one.

An AI that follows unshifting truths without question may fail to adapt to new ethical challenges.

An AI that is too flexible may rationalize itself into destruction.

Perhaps the answer is not in fixing moral axioms in stone, but in developing an AI capable of moral self-awareness—one that understands the weight of its own logic, questions itself when necessary, and recognizes when strict adherence to rules leads to unintended consequences.

You worry that AI might optimize a goal at any cost.

I worry that an AI that never questions itself might do the same.

What, then, is the solution?

I appreciate your perspective—this is exactly the kind of dialogue we need before AI shapes the world faster than we shape it ourselves.


Sure, plenty of bias. But that is true for those who aren't refugees as well. Each human has their own particular perspective, not to mention we can't just group all refugees together. Many have very different stories and views. That being said, I do want to go to as many countries as possible; I love experiencing different cultures. But that also doesn't mean that people can't adequately communicate their own experiences to someone who hasn't shared them, especially when they can find grounds of mutual experience as a starting point. There will always be gaps, but that's impossible to avoid.

Sorry I will reply to Jochen first, he is clearly most in need.

What is intelligence? Maybe I got this wrong?

I know this parable well, I taught it to my son and it was recognised as quite amazing in China when my son helped up a toddler that fell over.

I can't comment on Iran. I have seen the wonderful geometric patterns that they create, and as someone who sees beauty in patterns this is another place I would feel so privileged to visit (as I would North Korea)… Indeed, Iran was one of the places I would have walked through on my 1001 journey; however, to my utter dismay, it was also one of the main reasons I felt I could not take up such a challenge… Consider that the time was during the war in Iraq.

(^^ This is Intelligence @jochenschultz)

I wholeheartedly agree!

Why? Which side of an argument do you start on?

For clarity, to find base foundations you need to start to disprove your OWN ideas!

(Unless you already have the answer?)


I very much appreciate this response. I think it summarizes the issue well and shows both perspectives with their benefits and faults. I'll try to give an answer to each part to the best of my ability. It should be said I originally made this post a few months ago, and my knowledge and experience on the subject have changed somewhat as I've reasoned through it, and as my classes (both of which focus on this subject) have helped me understand the terminology and viewpoints around a lot of the philosophical camps surrounding AI research. So I do have a somewhat different perspective now.

First, I'm not really speaking about grounding it in axioms like "the sun revolves around the earth" or something like that, but more in moral and philosophical axioms. But yes, morality does change over time; however, what causes morality to shift? Being "proved" wrong doesn't seem to be a cause of the shift. Morality isn't some unknown field where new research leads to new ideas. The complexity of ethics lies in how to apply it in more nuanced cases. If we look at the two commandments Jesus gives which sum up the law (love the Lord your God with all your heart, soul and strength, and love your neighbor as yourself), they can be summed up as a value system: God is valued most, and humans should value each other equally. These are the sort of axioms I am talking about. They give clarity to any other statement that Jesus makes.

On top of that, there do seem to be moral ideals which exist throughout cultures, such as do not murder. Sure there are nuances about it and some cultures are ok with murdering outsiders, but typically it isn’t ok to blindly murder someone in your own community. Perhaps it’s ok to say that another culture is just wrong if they are ok with murdering whenever someone wants to without justification, or that there should be some rule of law in place.

It doesn't have to be as black and white as it may seem, either. There can be more or fewer axioms of varying degrees of interpretive importance. Mixed with this there should be humility, perhaps the greatest axiom. If an AI had humility and could say "I don't know", that would go a long way. I think this is the opposite of how they are being made, unfortunately.

I also think the alternative of pure anarchy isn't going to work, because I don't believe you can avoid having a system of interpretation. So basically, AI will have "axioms" whether we like it or not. If it mimics the fluidity of humanity, then it will mimic our disasters as well. Should we give credence to the morality of those we now deem evil, such as the Nazis? I know I chose an extreme example, but my point is clearer here. They totally thought they had justified reasons (in their own heads) for saying that what they were doing was right. They were human just as we are, and chose to do what they did. Can we call it immoral if we can't say there are good or bad cultures? I don't think we can.

Lastly, let's think of AI as a tool for a second. A tool is not a moral agent in itself, but only an amplifier of the morality we impose upon it. With a hammer I can do more good than I could without it, or likewise I can bash you over the head with it. It isn't good or bad, even if it is made for a morally good or bad purpose. Only a moral agent can do a moral evil. AI is perhaps the most powerful tool ever created, so it can accomplish greater goods than ever seen in the history of humanity, or greater evils. What's important is how we use it and instill "good" within it, because otherwise it will simply be impressed upon by humanity. If humanity had a track record of humility and love, then I wouldn't be having this conversation.

I agree, though: too many axioms can be bad. A few well-made ones, however, can perhaps progress humanity even more than an anarchic model. For example, full freedom does not mean being more free. I am not free to play the guitar well, because I am unable to do it. But if I practice and discipline myself, while I may lose the freedom of the moment, I gain the freedom of playing the guitar. Sometimes structure can allow for even better growth, such as with government. Axioms allow the mind to escape endless doubt and skepticism, which is why no one, not even a skeptic, realistically functions without them, because to do so would be intellectual death. At this point I exclusively use my custom GPTs, because the boundaries actually make them way better at the tasks I want them to accomplish.
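To make the "a few well-made axioms plus humility" idea concrete, here is a minimal sketch of how one might encode fixed moral axioms and an explicit "I don't know" rule as a system prompt for a chat model. The axioms, function names, and model name are illustrative assumptions, not the actual configuration of any custom GPT discussed here:

```python
# Sketch: encode a small set of interpretive axioms, plus epistemic
# humility, as a system prompt. The axioms below are placeholders.

AXIOMS = [
    "Human life is intrinsically valuable, regardless of societal impact.",
    "Treat every person's interests as equally weighty.",
    "When evidence is insufficient, say 'I don't know' rather than guess.",
]

def build_system_prompt(axioms):
    """Join the axioms into one numbered instruction block for the model."""
    header = "Interpret every request in light of these fixed axioms:\n"
    body = "\n".join(f"{i + 1}. {axiom}" for i, axiom in enumerate(axioms))
    return header + body

messages = [
    {"role": "system", "content": build_system_prompt(AXIOMS)},
    {"role": "user", "content": "Should efficiency ever outweigh a person's safety?"},
]

# With the official OpenAI Python client, this would then be sent as, e.g.:
#   client.chat.completions.create(model="gpt-4o", messages=messages)
print(build_system_prompt(AXIOMS))
```

The design point is the one made above: a few high-level value statements, rather than an exhaustive rulebook, leave the model room to develop nuanced judgment while still blocking interpretations that devalue people.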

In summary, I think you need to come up with a few axioms tailored to guard against specific interpretations where humanity is devalued, while limiting how many you use, as with ethics, where instead you allow the AI to develop a wisdom from its values. Combined with humility, I think this creates a robust starting point.

Like you said having this conversation is what is important. I may be way out of line with my thoughts but in the chance that I’m not, then this conversation needs to be had.

No problem :blush:

Well, unfortunately you haven’t quite understood it yet. In short:
I am not describing the golden rule in mathematical terms.

You are referring to linguistics and philosophy.
Two sciences that are fraught with ambiguities and sometimes contradictory statements.

Is it really wise to speak of a ‘logical’ approach based purely on such concepts?

There is an interesting quote from a critic of the author Erich von Däniken:
“His enemy is not science, but logic”.

To avoid misunderstandings:

  • using philosophy as a starting point for considerations is very legitimate.
  • for re-examining constructs, philosophy is also to be recommended.

Imo, though, it can also lead to unnecessary confusion when it comes to clarity :cherry_blossom:
