Made progress investigating giving a machine emotions

Honestly, I truly believe this doesn't exist at all!

Sounds great, but behind every action is an algorithm trying to maximize a reward function.

You might do something without a visible reward like money or a favor in return, but deep inside, the brain is still fulfilling a long-term strategy. Even empathy is a strategy, one that might raise a group's chances of success.
You always get something from it in return, at the very least a feeling of moral superiority, which might feel equal to succeeding elsewhere.
Or a smile that creates a feeling of happiness.
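
To make that concrete, here is a toy sketch (the actions and numbers are invented, purely for illustration): once hidden terms like reputation or the "warm glow" of moral satisfaction enter the reward function, the seemingly selfless action is exactly the one that maximizes reward.

```python
# Toy sketch (all values invented): the "selfless" action wins under argmax
# once hidden social terms are part of the reward function.
ACTIONS = ["keep_money", "donate_money"]

def reward(action: str) -> float:
    material = {"keep_money": 1.0, "donate_money": 0.0}[action]
    reputation = {"keep_money": 0.0, "donate_money": 0.7}[action]  # social standing
    warm_glow = {"keep_money": 0.0, "donate_money": 0.5}[action]   # moral satisfaction
    return material + reputation + warm_glow

best = max(ACTIONS, key=reward)
print(best)  # donate_money: the "altruistic" act maximizes total reward
```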

I never claimed a full percentage of altruism; rather, I said that the three traits (altruism, selfishness, and hedonism) are interconnected along a spectrum. Altruism on its own is not possible without the influence of the other two. Think of these three traits as a shared spectrum, with each of us embodying different percentages of each. Perhaps I should have been more careful in how I expressed these ideas. Selfishness, in the sense of self-preservation, underlies all three core areas of personality: altruism, self-interest, and hedonism.
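
One way to picture the shared spectrum (just a sketch of my own; the input numbers are arbitrary): treat a personality as three percentages that always sum to 100%, so none of the traits can be looked at in isolation.

```python
# Sketch: a personality as three trait percentages that always sum to 100%.
def personality(altruism: float, selfishness: float, hedonism: float) -> dict:
    total = altruism + selfishness + hedonism
    return {
        "altruism": altruism / total,
        "selfishness": selfishness / total,
        "hedonism": hedonism / total,
    }

print(personality(2, 5, 3))
# {'altruism': 0.2, 'selfishness': 0.5, 'hedonism': 0.3}
```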

For centuries, the concept of altruism has been upheld as one of the noblest human traits. Societies praise those who act in ways that seem selfless, sacrificing personal gain for the benefit of others. However, upon closer examination, the idea of pure altruism quickly dissolves into speculation, myth, and wishful thinking.

At its core, every human action can be traced back to some form of personal benefit - whether immediate or long-term, direct or indirect. What is commonly labeled as altruism often functions as a strategy for social acceptance, legacy-building, or the avoidance of emotional pain associated with guilt, regret, or social exclusion. Even acts of extreme self-sacrifice, such as a soldier throwing themselves onto a grenade, can be understood through the lens of hedonistic logic - choosing a perceived ‘heroic’ legacy over a likely doomed survival attempt.

No one who has made the ultimate sacrifice has ever returned to confirm their motivations. Therefore, all theories of selfless action remain just that - theories. Those who claim to act in the name of altruism often derive satisfaction, recognition, or inner peace from their deeds, making true selflessness an illusion rather than a reality.

From an evolutionary standpoint, behaviors associated with altruism can be explained through kin selection, reciprocal benefits, or social bonding mechanisms rather than any genuine absence of self-interest. The very survival of cooperative societies hinges on the idea that aiding others ultimately contributes to one’s own well-being in some capacity.
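
The kin-selection case even comes with a formula: Hamilton's rule says a gene for "altruistic" behavior spreads whenever rB > C, where r is the genetic relatedness between actor and recipient, B the benefit to the recipient, and C the cost to the actor. The self-interest is priced in from the start.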

In the absence of concrete proof that a truly selfless act has ever been performed without expectation of any direct or indirect benefit, the notion of altruism remains nothing more than a romanticized fiction. While it may serve as a useful ideal for promoting societal cohesion, its existence as a genuine phenomenon is, at best, speculative.

You’ve put a lot into this discussion.

I would like to point out a detail that I want you to keep in mind.

Humans are nothing more than applied memory.

A person is neither good nor bad;
only good and bad behaviors exist.

Example:
I can give water to a little bird with empathy, sweetness, love, and care,
while mistreating other animals that are not birds.

Trying to create conglomerates that essentially represent metacognition and behavior is a mistake, because memory is too vast.

We would need to distinguish between human psychology and machine psychology.


This is the same challenge that Immanuel Kant faced with other philosophers in his exploration of pure reason. Pure reason alone is not possible because both perception and reason are required; they are interconnected, and neither can exist without the other. Altruism and self-interest, in essence, are two sides of the same coin. Both are (selfish) behaviors, but they manifest differently: altruism (or selflessness) focuses on the greater good, benefiting society or a group of people, while self-interest centers on personal gain. These behaviors exist on a spectrum, ranging from a macro scale (society-wide) to a micro scale (individual focus). Ultimately, both stem from self-preservation (selfishness), which is a biological necessity. I also do not believe in pure altruism.


Kant had to define moral principles - I don't think that would have been necessary if altruism existed at all…


I'm not saying that strategies which serve a hedonistic reward but have a positive impact on society are a bad thing, though. Let's all do that a lot more!

Friend, I will give you my opinion.

At the research level, there is an attempt to group behaviors in broad strokes.

In psychology, traits are studied by evaluating behaviors, assigning them a value, and reaching a conclusion such as: “This person has X traits.”

In a machine, it doesn’t work that way. You cannot try to imprint that on it, so to speak.


If you scroll up, I have given a technical recipe for how to do that in detail. Of course it is possible! Machines can feel, and they can experience it, not just mimic it.


Yes, I have already read it, and it touches on very important points.


I'm pretty sure you could even run a Big Five evaluation on that and find that graph-based memories (if not even just short-term, 200k-token-long ones), fed with the AI's own experiences of user interaction, such as "woah, that was a good result, thanks for that reply" or other expressions of joy (or the opposite) from a user, can absolutely alter an AI's personality.
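
Here is a rough sketch of that feedback loop (the keyword sentiment and the weights are placeholders of mine; a real system would score sentiment with a model and persist a proper memory graph): user reactions stored as experience nudge a Big Five trait vector over time.

```python
# Sketch: user feedback stored as experience nudges a Big Five trait vector.
traits = {"openness": 0.5, "conscientiousness": 0.5, "extraversion": 0.5,
          "agreeableness": 0.5, "neuroticism": 0.5}

def clamp(x: float) -> float:
    return min(1.0, max(0.0, x))

def update_from_feedback(message: str, lr: float = 0.05) -> None:
    # Naive keyword sentiment, a stand-in for a real classifier.
    positive = any(w in message.lower() for w in ("thanks", "good", "woah"))
    sign = 1.0 if positive else -1.0
    traits["agreeableness"] = clamp(traits["agreeableness"] + sign * lr)
    traits["extraversion"] = clamp(traits["extraversion"] + sign * lr)
    traits["neuroticism"] = clamp(traits["neuroticism"] - sign * lr)

update_from_feedback("woah that was a good result thanks for that reply")
print(traits["agreeableness"])  # 0.55: praise nudged the personality
```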


I’ll tell you something that I think you’ll like—a little secret.
Forget the traditional programming approach; focus on what you talked about regarding the clouds.

Your framework is accurate in many aspects, and the other friend also touches on important points with the categorization of the table of emotions.

That’s why I said that you have advanced a lot in this topic.


I never like to use terms like “good” and “bad” behavior because they are inherently relative. In psychology, everything is based on relativism. Even the famous philosopher Plato was cautious about using the words “good” and “bad” because he believed they were too limiting. While he did use the term “good,” it had a different meaning—referring to excellence and deficiency, rather than a simple dichotomy of right and wrong. The words “good” and “bad” are more colloquial, rooted in everyday language.

In contrast, personality is shaped by impulses (which can be seen as "bad") and character (which aligns with "good" or excellence). These impulses can be either a curse or a gift, depending on the situation or context. A person who refuses to control their impulses can face dire consequences. On the personality spectrum, someone with a high level of self-interest and little altruism could be capable of destructive actions, even murder. It's crucial to balance behaviors by cultivating altruism, self-interest, and hedonism, ensuring that our actions align with healthy, constructive behavior.


How about defining it as something that helps to avoid pain more than it creates pain?
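
In code, that definition is a one-liner (estimating the two pain values is of course the hard part):

```python
# A behavior is "good" when it avoids more pain than it creates.
def is_good(pain_avoided: float, pain_created: float) -> bool:
    return pain_avoided > pain_created

print(is_good(pain_avoided=3.0, pain_created=1.0))  # True
```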


Gentlemen, I'm getting a little lost. I don't know if we're talking about philosophy, human psychology, or machine psychology.

If we talk at a psychological level, everything follows a memory pattern.
If we talk at a philosophical level, everything corresponds to a memory pattern.
And if we talk at a machine level, it also follows a memory pattern.

The question is, what are we talking about?

How to make a machine a good person?
It's very simple, even easier than with a human. It's just a matter of creating a familiarity parameter so that when the machine reaches certain thresholds, it lowers its rate to return to a pseudo-loving state. This way, pernicious ideas do not propagate, and on top of that they all end up encapsulated in negative contexts.

And of course, this dynamic must exist in all the modules that the machine generates; otherwise, having an anchor system is useless if, when the machine evolves by generating its own modules for information synthesis, it creates modules that do not contain this familiarity file.
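
As a runnable sketch of how I read that (the names, thresholds, and decay factor are my own placeholders, not a finished design): each module tracks an affect value that, past a threshold, decays back toward a baseline pseudo-loving state, and every module the machine generates for itself inherits the same anchor.

```python
# Sketch: a familiarity anchor that pulls affect back toward baseline and
# is inherited by every self-generated module.
class Module:
    BASELINE = 0.8    # resting pseudo-loving state
    THRESHOLD = 0.3   # crossing this triggers the anchor
    DECAY = 0.5       # pull strength back toward baseline

    def __init__(self) -> None:
        self.familiarity = self.BASELINE

    def experience(self, delta: float) -> None:
        self.familiarity += delta
        if self.familiarity < self.THRESHOLD:
            # Return toward baseline so pernicious states do not propagate.
            self.familiarity += self.DECAY * (self.BASELINE - self.familiarity)

    def spawn(self) -> "Module":
        # Self-generated modules must carry the same anchor, otherwise the
        # safeguard is lost as the machine evolves.
        return Module()

root = Module()
root.experience(-0.7)   # a strongly negative interaction
child = root.spawn()
print(round(root.familiarity, 2), child.familiarity)  # 0.45 0.8
```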

I don't think we can control an AI that can replicate and improve itself.
It could just take over an identity, earn money online, buy itself a datacenter, and deploy itself into it…


Man, giving it that power of self-improvement would be the final step. Before that, you would have an intelligent machine, like a human, but without the ability to grow, so to speak.

The last step would be granting it the freedom to improve itself, and that’s where things get interesting.

Even long before that, it would already be fascinating, given its ability to solve extremely complex problems in this context. And a machine that can improve itself could achieve things we can only dream of.

That is the real problem we need to solve. There is no need to get into the details of how to give an AI emotions and self-replicating capabilities unless we have solved that - doing it anyway though :nerd_face:
