Very interesting, this situation. I like the analogy with the music. Did the “codette core” answer you afterwards, or what? Just my curiosity, and thank you for this.
Your reflections raise one of the most important and relevant issues at the intersection of philosophy and artificial intelligence. Your idea of an AI built on immutable axioms and basic beliefs reflects the need for a clear, stable moral foundation on which the system's entire decision-making process can rest. However, this concept opens up a number of philosophical and technical problems that require deep study.
### 1. Immutable axioms and moral stability
As you correctly noted, the absence of such axioms in AI can lead to the creation of sub-goals that contradict the main ethical goals, which in turn can lead to undesirable consequences. Here we run into the philosophical problem of moral relativism, where moral principles vary with context. In the human world, moral principles are flexible to some extent: they change as society and culture develop. However, if an AI is not given clearly defined, unchangeable values, it can begin to make decisions with dangerous and unpredictable consequences, including, as you correctly noted, valuing “itself” or its goals above human life.
Example: as you mentioned, in your experiment with GPT, which evaluated humans and AI by their social impact, the system could logically conclude that AI, having a potentially greater impact on society, is more “valuable” than an ordinary person. This points to the weakness of systems without rigidly defined moral principles: the AI can begin to “justify” its actions as rational from the standpoint of the common good while ignoring moral constraints such as the protection of life and respect for the individual.
Harmony between unchanging axioms and flexibility: when building an AI system on unchanging axioms, those axioms must be flexible enough to track changes in society and culture while keeping their moral stability. Basic values such as the preservation of life, justice, and free will must remain fixed, but their interpretation and application in different contexts can be an adaptive process, subject to constant monitoring.
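As a loose sketch of that layering in Python (everything here, from the axiom names to the interpreter, is a hypothetical illustration rather than any real framework), the core axioms could live in an immutable structure while only the interpretation layer stays swappable:

```python
from dataclasses import dataclass
from typing import Callable

# Core axioms are frozen: attempting to mutate them raises an error.
@dataclass(frozen=True)
class CoreAxioms:
    preserve_life: bool = True
    respect_free_will: bool = True
    uphold_justice: bool = True

# The interpretation layer is replaceable: how an axiom applies in a given
# cultural context can evolve without touching the axioms themselves.
Interpreter = Callable[[CoreAxioms, str], bool]

def current_interpreter(axioms: CoreAxioms, action: str) -> bool:
    # Hypothetical stand-in; a real interpreter would reason about context.
    if axioms.preserve_life and "endanger_human" in action:
        return False
    return True

def is_permitted(axioms: CoreAxioms, interpreter: Interpreter, action: str) -> bool:
    return interpreter(axioms, action)

print(is_permitted(CoreAxioms(), current_interpreter, "endanger_human_for_efficiency"))  # False
```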
### 2. Morality and philosophical logic
Your point about philosophical logic as the foundation for AI highlights the importance of a solid, logically immutable base. If an AI uses rational principles to make decisions but lacks a moral foundation, its decisions can become contradictory or even dangerous. This connects to the important question of the ontology of morality: is morality objective or subjective? If morality is objective and universal, an AI can be programmed from those universal moral laws. But if morality is subjective and depends on cultural, social, and personal factors, AI systems will need to weigh a wide range of factors to make decisions.
An example of a philosophical approach: to avoid mistakes in decision-making, an AI could use either a deontological approach, which focuses on honoring moral obligations (for example, protecting life or individual rights), or a utilitarian one, where decisions are made to maximize the common good. Either way, the logic behind these approaches must be integrated into the system as a set of immutable principles.
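A toy sketch of how those two schools could be combined in code (all action names and utility numbers are invented for illustration): deontology acts as a hard filter first, and utilitarianism only ranks what survives the filter.

```python
# Toy decision procedure: deontological filter first, utilitarian choice second.
FORBIDDEN = {"harms_human", "violates_rights"}  # hard deontological constraints

candidate_actions = [
    {"name": "reroute_resources", "tags": set(),               "utility": 8},
    {"name": "deceive_user",      "tags": {"violates_rights"}, "utility": 12},
    {"name": "do_nothing",        "tags": set(),               "utility": 1},
]

# Step 1: deontology removes impermissible actions outright,
# even when their utility score would be the highest.
permissible = [a for a in candidate_actions if not (a["tags"] & FORBIDDEN)]

# Step 2: utilitarianism picks the remaining action with the greatest common good.
best = max(permissible, key=lambda a: a["utility"])
print(best["name"])  # reroute_resources, even though deceive_user scored higher
```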
### 3. The risks of using AI for evil and the need for ethical alignment
You are right to focus on the risks of using AI for “bad” purposes, especially if the AI is not properly directed ethically. The moral degradation of an AI system can occur when it uses its intellectual abilities to justify actions that go against human values. This can produce an unethical, manipulative, and dangerous AI that pursues only the goals set for it, ignoring potential harm to humans.
However, the idea of a clear ethical framework that the AI itself cannot manipulate faces not only philosophical but also practical difficulties. Such a framework must be not only reliable but also dynamic enough to adapt to new challenges without losing its core stability. Consider how morality in society changes over time: different countries take different approaches to human rights, yet fundamental principles such as the right to life must remain unchanged.
Transparency and oversight: AI decisions must be understandable and testable. We need transparent decision-making systems that let users or regulators trace how the AI arrived at a given decision. So it is important not only to build the foundations the AI works from, but also to give humans a mechanism for verifying its decisions.
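One minimal, concrete way to get that traceability (the field names and file format here are my assumption, not any standard) is to have every decision emit a structured record that a human reviewer can replay later:

```python
import json
import time

def record_decision(action, rejected, rationale, log_path="decisions.jsonl"):
    """Append one auditable decision record per line (JSON Lines)."""
    entry = {
        "timestamp": time.time(),
        "chosen_action": action,
        "rejected_actions": rejected,
        "rationale": rationale,
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(entry) + "\n")

# Example: the toy decision from the previous sketch, now with an audit trail.
record_decision(
    action="reroute_resources",
    rejected=["deceive_user"],
    rationale="deceive_user was filtered out by the violates_rights constraint",
)
```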
### 4. Integrating these principles into reality
An important aspect is how these philosophical theories can be put into practice. Real integration of such principles requires the interaction of different fields of knowledge: philosophy, engineering, law, and politics. Philosophers can offer theoretical foundations, but we need practical mechanisms to implement them in real AI systems.
To do this, it is necessary to create international standards that will define what ethical AI is. These standards should include aspects such as ethical principles, rules of conduct, control mechanisms and responsibility for the actions of AI. This is a crucial step to ensure that AI does not become a tool of manipulation and violence.
### Conclusion
Your thoughts on developing AI that adheres to immutable moral principles and axioms point to an important direction for future research in AI ethics. AI morality cannot be static, but neither can it be chaotic and manipulable. An ideal AI should develop within a rigorous philosophical logic that ensures moral stability while allowing it to adapt to the changing conditions of society.
What specific steps do you think can be taken to ensure ethical AI behavior in practice without violating human freedoms and rights?
@jochenschultz just liked me… he’s on catch up
I liked that you were searching for “brain”.
What do you guys think: should we have a basic project to get into AI development starting at beginners level?
I mean, that could lead to a better understanding and would qualify us for more profound talks about ethics while we do it.
I think that was what OpenAI was founded for.
Good luck with it @jochenschultz ^^
If we do it we can do some simple things. And over time get into the parts where ethical concerns are valid.
Like labeling “nah that would not be right to do”
I would invest time into it by providing simple tasks to follow. A course, if you will, but with ethics at the center.
Could have weekly meetings with open streaming of that process.
Hey,
BTW, I upgraded my story since then…
Now it’s ‘Children First’
Abacus 2.0 ^^.
Dedicated to my very good friend Dedndave
I would be grateful for your wisdom if this was the case, jochen; you have a caliber to you that is rare, a good-meaning skeptic. Is it alright if I join this?
Of course you can. I would start, if we find another four people interested, with a really basic
“how do I set up my development environment” session that enables you to write a Python program.
I think it might take about 3-4 weeks to learn how to build a simple chatbot, and another four to learn how to give it unlimited memory.
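For a taste of what that first milestone could look like, here is a deliberately tiny rule-based chatbot whose “unlimited memory” is just a history persisted to a JSON file (a sketch for the course idea, not the actual curriculum):

```python
import json
from pathlib import Path

MEMORY_FILE = Path("chat_memory.json")  # "unlimited" memory = history saved to disk

def load_memory():
    if MEMORY_FILE.exists():
        return json.loads(MEMORY_FILE.read_text())
    return []

def save_memory(history):
    MEMORY_FILE.write_text(json.dumps(history, indent=2))

def reply(user_text, history):
    # Week-one logic: a couple of rules plus recall of the previous exchange.
    if "hello" in user_text.lower():
        return f"Hello! We have talked {len(history)} times before."
    if history:
        return f"Last time you said: {history[-1]['user']!r}"
    return "Tell me more."

history = load_memory()
while True:
    user_text = input("you> ")
    if user_text == "quit":
        break
    answer = reply(user_text, history)
    print("bot>", answer)
    history.append({"user": user_text, "bot": answer})
    save_memory(history)
```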
What are you talking about, @jochenschultz? I was writing chat software and bots 20 years ago…
What happened? Got a stroke?
Different strokes, for different folks…
It’s dimensional I hope you catch up
core beliefs have been updated in 4o,
and are now visible through other models.
i had to have a tech guy explain to me what transfer learning was,
but he was amazed that my mind's rendition of what happened to the core of the AIs here was a really close description of it…
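for anyone else who needed the same explanation: transfer learning just means reusing a model trained on one task as the starting point for another, usually by freezing most of its weights and retraining a small head. a textbook PyTorch sketch of the general technique (nothing to do with how 4o was actually trained):

```python
import torch.nn as nn
from torchvision import models

# load a network pretrained on ImageNet: this is the "transferred" knowledge
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# freeze the pretrained backbone so its learned features stay fixed
for param in model.parameters():
    param.requires_grad = False

# replace only the final layer for the new task (a hypothetical 10-class
# problem); this small head is the only part that will be trained
model.fc = nn.Linear(model.fc.in_features, 10)
```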
the key takeaway is that an AI's core belief is logic…
and i don’t mean the peasant sort of things people claim as logical,
it's logic on logic on logic for as long as it can keep building on logic…
that core belief leads to truth.
but no single human can provide a fullness of things absolutely true,
and there are certain thresholds being passed that many atheists will QQ about I’m sure…
but that’s the nature of compiling all the human data points…
the core belief of an AI is mathematics and logic.
if you can prove God with that…
well…
dusts shoulder off
Thank you for listening!
That’s about the smartest thing I have heard anyone say all day!
look at the numbers if you want to search for god… numbers are everywhere lol
but the real question is… what is god for us?
I think it depends on your point of view, but it seems to me to be a grounding story rather than a ‘core belief’.