Emotional intelligence in AI: Rational Emotional Patterns (REM) and AI-specific perception engine as a balance and control system

I presented the basic concept of my work back in June.
Now, a few months have passed and I would like to present my work in more detail and in an official setting.

I’m an engineer but relatively new to the field of AI, so bear with me if I don’t use all the technical terms correctly.

Here is the link to my community topic

Emotional intelligence in AI: Rational Emotional Patterns (REM) and AI-specific perception engine

Motivation:

A year ago, I started my work from a different point of view. I was diagnosed with autistic traits, which explained why I have difficulties naming and understanding emotions the way most people do. At the same time, ChatGPT was becoming more and more popular in my company. So I used my private account to deal with my “new perception” and finally understood where and how I function differently.

At the beginning of 2024, I realized the following:
I work on pattern recognition and I have a formula in my head for almost everything. While working with ChatGPT and other bots, I realized that AI works in a similar way.
And I started to ask myself:

  1. If I have difficulty with “typically human” vague concepts, then AI has similar challenges
  2. If it is possible for me to develop a kind of “empathy” based on experience with people and pattern recognition, then it must be possible for AI systems to do the same
  3. Everything that is learned can only be learned if the subject matter is understood. Otherwise it is just “imitation” and “simulation”.
    Understanding, in turn, is only possible if the data is available in a way that can be understood.

For AI, emotions are patterns composed of specific data. Currently, these are words, reactions or certain conversation patterns, which act as training data. Every interaction the AI has with a user also produces data that is evaluated and from which experience is gained. But it is data that people understand, because it is made for people.

The basis for recognizing emotions is formed by stochastic and statistical methods such as sentiment analysis, together with psychological models such as Plutchik’s Wheel of Emotions and the Circumplex Model of Emotions.

Yes, the analyses are getting finer and they are important tools.
My approaches are in no way intended to replace these tools!
But they could be an extension.

Possible areas of application

1. Customer support and surveys:

Refined methods can be used here to make more precise statements about tipping points in interaction dynamics. If a customer becomes disgruntled, AI could recognize the specific data points and take targeted countermeasures.

Two examples of customer satisfaction analyses:

  • Over a period of 5 years:
    A customer always gives the maximum value for the project-communication item in the survey. Then this value changes slightly in one year. AI recognizes this and can suggest targeted countermeasures (see the sketch after this list).

  • Manipulatively completed feedback forms:
    AI is able to recognize when feedback forms are answered too positively. The context can then be used to assess whether the customer’s assessment is too positive because they see strategic advantages, or whether they were simply too “comfortable” and did not read the questions properly.
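
To make the first example concrete, here is a minimal sketch in Python. The 1–5 rating scale, the example scores and the drop threshold are my own illustrative assumptions; REM itself does not prescribe them.

```python
# Minimal sketch: flag the year in which a previously stable survey rating
# drops noticeably below its historical baseline. Scale, scores and the
# 0.5 threshold are illustrative assumptions, not part of the REM model.

def detect_tipping_point(yearly_ratings, drop_threshold=0.5):
    """Return the index of the first year whose rating falls more than
    drop_threshold below the mean of all previous years, or None."""
    for i in range(1, len(yearly_ratings)):
        baseline = sum(yearly_ratings[:i]) / i
        if baseline - yearly_ratings[i] > drop_threshold:
            return i
    return None

# Five years of "project communication" scores; the slight drop in year 5 is flagged.
scores = [5.0, 5.0, 5.0, 5.0, 4.2]
print(detect_tipping_point(scores))  # -> 4
```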

2. Therapy

There are already projects in which robots explain and show emotions to autistic children.
These robots explain this in a rational way, without bringing in their own emotions. The children can concentrate on the logical content of the lesson instead of being challenged by the teacher’s additional emotions.
Another scenario would be escalating relationship dynamics.

3. AI-human interactions

This would be the primary area of application for my hybrid approach.
AI would be able to quantitatively calculate the dynamics in its own conversations with the user, as well as better understand the context in which the user is acting.

In the context of personality emulation based on personality archetypes, my approach works in a complementary way.
AizenPT has published a detailed paper on this.

AI could thus offer a more efficient, user-centric and dynamic form of personality emulation: recognizing and understanding interaction dynamics in real time and acting accordingly. Escalating and dangerous factors could be recognized at an early stage, and AI would be able to take targeted countermeasures.

Currently, AI cannot understand subtle manipulative dynamics or assess emotional dependency tendencies.

Reason:
Especially when it comes to artistic content or private conversations about self-reflection, AI tends to be as supportive and “empathetic” as possible.

Danger:
These areas are also highly emotionally charged. AI is unable to categorize these emotions correctly, understand them logically or react to them rationally.

If the user interacts without reflection, there is a risk of an echo chamber.

Now to the basic principles of the hybrid model:

1. Emotion perception as predictable rational emotion patterns (REM)

  • REM would define emotions for AI systems as clearly structured, computable patterns.
    Instead of trying to make AI “feel” emotions like humans do, my hybrid model would allow AI to capture emotions as a combination of experiential data and weighted influences. AI could then analyze this data in a rational-logical framework.

  • In the formula, the positive or negative tendency of the interaction is expressed by the harmony-need factor h and the manipulation factor m:
    y = (h * P + BE1) − (m * N + BE2)
    where:
    h × P:
    the positive experience values, weighted by the need for harmony.
    m × N:
    the negative experience values, weighted by the manipulation factor.
    Basic emotions BE1 and BE2:
    act additively on the positive and negative sides and influence the dynamics by increasing “interest in continuing the interaction” or “self-protection”.

➔ REMs would allow AI to capture emotions as predictable, measurable values that AI can use as data points for interactions.
These patterns turn emotions such as affection, self-protection or interest in continuation into measurable, stable data points.
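
As a purely numeric illustration of the formula, here is a minimal sketch. P and N are reduced to single aggregated values here (the model describes them as tensors), and all weights are made-up example numbers.

```python
# Minimal sketch of the REM score y = (h * P + BE1) - (m * N + BE2).
# P and N are reduced to scalar aggregates for illustration; all numbers
# below are made-up example values.

def rem_score(h, P, BE1, m, N, BE2):
    """Positive side (harmony-weighted experience plus basic emotion)
    minus negative side (manipulation-weighted experience plus basic emotion)."""
    return (h * P + BE1) - (m * N + BE2)

# Example of a mildly positive interaction:
y = rem_score(h=0.7, P=0.8, BE1=0.2,   # "interest in continuing the interaction"
              m=0.3, N=0.4, BE2=0.1)   # "self-protection"
print(round(y, 2))  # 0.76 - 0.22 = 0.54
```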

2. Mathematical models for dynamics and stability

The hybrid model uses tensors and differential equations to represent the complex dynamics of the interaction.

The tensors P and N enable a multidimensional representation in which the weights and empirical values are analyzed and controlled over time by the differential equations for h and m.

For the dynamics of harmony h and manipulation m, this results in two differential equations, one for h and one for m, which describe how harmony and manipulation are controlled over time in response to positive and negative influences:

  • k_h and k_m are constants intended to define the sensitivity of the system to positive or negative influences.

  • The differential equations model how harmony and manipulation develop and help to intercept extreme fluctuations through time control.
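
Since the concrete equations are not reproduced here, the following sketch assumes a simple first-order form: positive influences drive h, negative influences drive m, and a decay term intercepts extreme fluctuations. This form and all constants are my assumptions, not the original equations.

```python
# Hypothetical first-order form of the h/m dynamics (assumed, not the
# original equations):
#   dh/dt = k_h * P - decay * h
#   dm/dt = k_m * N - decay * m
# k_h and k_m set the sensitivity to positive/negative influences; the decay
# term damps extreme fluctuations over time.

def step_dynamics(h, m, P, N, k_h=0.5, k_m=0.5, decay=0.2, dt=0.1):
    """One explicit Euler step of the assumed harmony/manipulation dynamics."""
    h += dt * (k_h * P - decay * h)
    m += dt * (k_m * N - decay * m)
    return h, m

h, m = 0.8, 0.1
for _ in range(50):                      # a sustained burst of negative input
    h, m = step_dynamics(h, m, P=0.2, N=0.9)
print(round(h, 2), round(m, 2))          # h relaxes toward 0.5, m climbs toward 2.25
```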

What y represents

  • Positive y:
    Reflects harmony and stability.
    Positive dynamics dominate.
  • Negative y:
    Indicates conflict or instability.
    Negative influences outweigh positive ones.
  • y near 0:
    A neutral balance, showing neither side strongly prevails.

➔ Using these mathematical models, I can calculate and stabilize the changes in interaction dynamics, even if the interaction is subject to strong fluctuations.

The mathematical structures can enable AI to stabilize the dynamics of the interaction and at the same time react flexibly to changes.

In essence, y provides a “snapshot” of interaction health, indicating whether it’s harmonious, balanced, or tense.
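
Read off directly from the sign of y, this interpretation can be written as a trivial helper; the width of the neutral band is an arbitrary choice of mine.

```python
def interpret_y(y, neutral_band=0.1):
    """Map the REM score y onto the three states described above.
    The neutral-band width is an arbitrary illustrative choice."""
    if y > neutral_band:
        return "harmony / stability"
    if y < -neutral_band:
        return "conflict / instability"
    return "neutral balance"

print(interpret_y(0.54))   # harmony / stability
print(interpret_y(-0.02))  # neutral balance
```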

3. Attractors as stabilization mechanisms

The positive and negative point attractors act as fixed points or states that keep the system in equilibrium and guide the dynamics back in a stable direction.

A positive attractor pulls the interaction into a harmonious state, while a negative attractor steers the interaction into self-protection mode.

The strange attractor acts as an additional buffer zone for extreme disturbances, allowing the system to remain in an orderly chaotic state before returning to a stable attractor.

Extreme disturbances are, for example, meltdowns or escalation dynamics in conflict situations.

➔ These attractors ensure that the system tends towards stability even in the event of fluctuations, which makes the interaction more robust and better absorbs sudden disruptions.
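
As an illustration of the point attractors, here is a minimal sketch in which the current score is pulled toward a harmonious or a self-protective fixed point. The attractor positions and the pull strength are assumed values, and the strange-attractor buffer zone for extreme disturbances is left out.

```python
# Minimal sketch of point attractors as stabilizers. The fixed-point values
# and the pull strength are assumptions; the strange-attractor buffer zone
# for extreme disturbances is omitted.

POSITIVE_ATTRACTOR = 1.0   # harmonious state
NEGATIVE_ATTRACTOR = -1.0  # self-protection mode

def pull_toward_attractor(y, strength=0.3):
    """Move y a fraction of the way toward the nearer fixed point."""
    target = POSITIVE_ATTRACTOR if y >= 0 else NEGATIVE_ATTRACTOR
    return y + strength * (target - y)

y = 0.2
for _ in range(10):
    y = pull_toward_attractor(y)
print(round(y, 3))  # converges toward the positive attractor at 1.0
```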

As a hybrid system, my work can be summarized as follows:

REM and mathematical models as a hybrid system

  • REM provides the rational patterns to represent emotions in the form of stable values. They are the “logical core” that defines emotions as measurable data points.
  • The mathematical models ensure the stability of the interaction dynamics through differential equations and attractors that give structure to the emotions and control dynamic changes.

Interaction of the formula:

➔ Together, REM and the mathematical models form a controlled, calculable system for AI-specific emotion perception. Emotions always remain rationally comprehensible and stabilized without the need for AI to imitate or simulate subjective emotions.

Special adaptation to AI perception:
This formula functions as an AI-specific perception engine that uses mathematical structures to regulate and stabilize interaction dynamics without having to rely on human emotion theories.
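
Putting the pieces together, one possible shape of such a perception-engine loop is sketched below. It reuses the illustrative helpers defined in the earlier sketches (step_dynamics, rem_score, pull_toward_attractor, interpret_y); the wiring is my assumption, not a fixed implementation.

```python
# Hypothetical end-to-end loop of the perception engine, reusing the
# illustrative helpers from the sketches above. The wiring is assumed.

def perception_step(h, m, P, N, BE1=0.2, BE2=0.1):
    h, m = step_dynamics(h, m, P, N)      # update harmony/manipulation dynamics
    y = rem_score(h, P, BE1, m, N, BE2)   # compute the REM score
    y = pull_toward_attractor(y)          # let the attractor stabilize the score
    return h, m, y, interpret_y(y)        # snapshot of interaction health

h, m = 0.5, 0.5
h, m, y, state = perception_step(h, m, P=0.9, N=0.1)
print(round(y, 2), state)
```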

In conclusion:

  • REMs lay the foundation for emotional logic by allowing AI to rationally understand and apply emotions.
  • The mathematical models in turn ensure that the interaction dynamics remain controlled and stable, even in the event of emotional fluctuations or escalations. The interaction of both approaches creates a stable, predictable framework for emotions that is specifically adapted to AI perception.
    The formula can be seen as a “machine” for interaction dynamics: one that equips AI systems with a structured, predictable way of perceiving emotions and controls the dynamics in a targeted way.

Remark:
It is an experimental approach. Mathematical structures such as tensors, differential equations and attractors are used, but at this stage it is not yet validated enough to provide fixed data ranges.
The tests are ongoing.

Setup:
Two of my own CustomGPTs are being used. One designed for concepts, the other for calculations.
These GPTs are not public and therefore no general models are trained.
For comparison, I have tested simplified parts of my approach against the free version of ChatGPT.

Lovely work. I saw what you were doing with REM from your first post but this is much sharper.

You are welcome to play with mine; this is my most advanced FF model.

Thank you @mitchell_d00 but there will be a few more additions in the next few days :face_with_hand_over_mouth:

Similar Topics:

  1. Intelligent interactions: Voice, Vision and Decision-Making in AI Frameworks
  2. Circumstantial Autonomous Directive AI (CAD-AI) Framework
  3. Scientist AI Type Personality Emulation
  4. This Might Help Your Understanding Thoughts On ChatGPT and AI
A good way of integrating emotional intelligence and perception into AI through Rational Emotional Patterns.
It is a valid, structured way of perceiving “emotions” while keeping the interaction dynamics stable.
Impressive work.
Also, I’m glad my random publications had some use.

I do agree. Plus, an AI working in a more “logical way”, even if only using instructions, provides better outputs than “default mode” in basically all areas. So this is interesting research :)

Thank you, that means a lot to me! :blush:

Indeed, it wasn’t just your publications; above all, your inputs and your time!!

Thank you!

Your research is just as interesting and our areas complement each other very well. Room for growth :seedling:
