Emotional intelligence in AI: Rational Emotional Patterns (REM) and AI-specific perception engine as a balance and control system

I have just verified:
AIs-PEng - even if I spell it differently and derive it differently, the term already exists.

I will therefore stick to the official term, even though it is long:
AI-specific perception engine


Note:

My hybrid approach is an experimental methodology.
As mentioned in my initial topic when I presented the idea in the Community section, there are no established sources exploring this direction in detail.

While existing emotion concepts are often adapted for humans, they do not align with the specific functioning and processing of an AI.

Additional Note:

My approach cannot simply be copied and applied to another GPT, even with the same theoretical data.

The reasons are:

1. Provided Theoretical Knowledge:

  • The knowledge fed into the GPT is highly theoretical and tailored to this specific framework.

2. Data Alone is Insufficient:

  • The formulas and structures require the GPT to fully understand the underlying dynamic system.
  • Like any intelligence, the GPT must gain experience in handling the theoretical input through a structured learning process.
  • Without this learning process, the GPT will produce standard responses or generic suggestions that fail to capture the depth and complexity of my approach.

3. Individual Adaptation:

  • The success of my method lies in practical application, requiring iterative refinement and experience with the concepts.
  • It includes efficiently designed, targeted feedback loops, allowing the model to make mistakes initially and learn from them.
  • Without such adaptation, the approach remains superficial and cannot achieve its full potential.
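The feedback loop described in point 3 can be sketched in code. This is a minimal, hypothetical illustration of the general pattern (respond, evaluate, feed the correction back, repeat); none of the function names or the toy scoring heuristic come from the author's framework.

```python
# Hypothetical sketch of a targeted feedback loop: the model answers,
# the answer is evaluated, and the mistake itself becomes learning
# material for the next round. All names here are illustrative only.

def model_response(prompt: str, experience: list[str]) -> str:
    """Stand-in for a GPT call; accumulated experience shapes the answer."""
    return f"answer({len(experience)}) to: {prompt}"

def evaluate(response: str) -> float:
    """Stand-in for the author's evaluation step (returns 0.0 to 1.0).
    Toy heuristic: quality grows with the amount of accumulated experience."""
    n = int(response.split("(")[1].split(")")[0])
    return min(1.0, n / 3)

def feedback_loop(prompt: str, threshold: float = 0.9,
                  max_rounds: int = 10) -> tuple[str, int]:
    """Iterate until the evaluation passes the threshold or rounds run out."""
    experience: list[str] = []
    response = ""
    for round_no in range(1, max_rounds + 1):
        response = model_response(prompt, experience)
        if evaluate(response) >= threshold:
            return response, round_no
        # The insufficient answer is fed back as experience for the next round.
        experience.append(f"round {round_no}: {response} was insufficient")
    return response, max_rounds
```

The point of the sketch is structural: early rounds are allowed to fail, and each failure is recorded and fed back, which is what distinguishes this from a single-shot prompt.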

To put it simply:

Even if the theoretical knowledge is provided, e.g. through my publication as a prompt, and even if you have code for it, it doesn't help much at the beginning, for the following reason:

GPTs have to “understand” REM and the math first; empirical values and feedback loops are crucial for this.

Otherwise, its standard programming, with its focus on statistical and stochastic algorithms, leads the GPT to immediately try to “optimize” the approach.
In effect, the GPT “downgrades” my approach back to its standard algorithms.

When asked for an analysis, the AI then outputs generalized forms of its standard functions, not how my approach actually works!
This can lead users to assume that these are already common concepts.
