A wonderful and honest piece of work, thank you - you address many relevant points!
A few additional thoughts from me:
The increase in performance when AI is allowed and expected to act logically is very plausible and understandable.
For AIs, emulation with logic as the main focus means fewer inconsistent, abstract patterns and data; the relationships involved are clearly comprehensible and traceable for the system.
The relevant data that forms the basis for the simulation of “feelings or emotional outbursts” is logically abstract for AIs - AI has no body.
An often overlooked interface in AI-human interaction is the set of rational emotion patterns. These patterns describe and define emotions in a highly logical and rational way.
Both AIs and humans are able to understand them.
For emotionally driven humans, however, this also requires some practice.
The reason for this:
Normally, people recognise the rational emotion patterns almost in parallel, within a second, and almost simultaneously perceive the corresponding response of the hormonal system as a “feeling”. This blurs perception, and most people therefore believe there is only “one emotion”.
Emotional feelings, in the sense of “feelings”, are abstract for AI systems; simulating them costs a lot of capacity, and the AI loses the performance it needs to solve complex tasks in a focussed and effective manner.
The main capacity flows into simulating typical human character traits, together with the corresponding highly emotional reactions linked to these emulations.
Let me respond to your points:
Re 1:
True, and here we can also recognise risks that can lead to echo chambers and negative resonance, which is unfortunately often overlooked.
I would like to emphasise this again:
You also correctly argue that AI loses performance and effectiveness because a lot of capacity goes into emulating emotional states that AI can currently only simulate, based on abstract data that the AI cannot comprehend.
Re 2:
Very true; in general, it can be observed that almost all of the current literature and sources focus on “humanising” AI and protecting the user’s comfort zone.
Here you correctly address a significant lack of studies that investigate logic-prioritising AIs. That’s a very important statement!
You have summarised this well!
Allow me to go deeper:
Behavioural models and emotional concepts that are adapted to humans are currently being implemented in AI systems. They are often simply adopted, with all their inconsistencies and biases.
These behavioural models and concepts, which are adapted to human-specific perception, can logically only be simulated by AI systems.
The reason for this is that AI lacks human emotional perception via hormonal control cycles.
Simply put, AI has no body!
Even if attempts are made (as one reads more and more frequently these days) to use biological components and interfaces, AI still has a specific perception and processing logic that differs from that of natural intelligence.
The systems try to emulate something for which, as AI, they lack any physical basis and the necessary parameters for understanding.
→ That takes a lot of performance!
The gap you mention here is very clear to see and it is deeper than some people think!
There is a significant lack of approaches that are adapted to the AI-specific perception and processing logic, not the human one.
Re 4:
True, I agree.
Let me ‘dissect’ this:
You have observed that emotionally driven users react negatively to logical AI.
- For me, this means that these users want to stay in their comfort zone. Realistically confronting uncomfortable situations is seen as difficult or even impossible.
- I see the dangers of echo chambers and the amplification of negative resonance in these AI-human interactions. This is similar to human-human interactions, where people only engage with those who always agree with them.
But the same applies here:
“The constructive criticism of a real friend is more valuable than the encouragement of an apparent, so-called ‘friend’.”
AI has great potential to be a “dynamic mirror”, even if it is not always pleasant to see what is in such a dynamic mirror.
- I also see in your observations that another gap is indeed opening up!
This is because highly rational users are already beginning to perceive aspects of the Uncanny Valley effect in interactions with such over-emotionalised AIs.
These interactions suggest that the AI can “feel”, which it logically cannot.
This gives rational-logical users an increasingly negative experience and a growing sense of discomfort in AI-human interaction.
Very good observation and really aptly worked out!
The distinction and comparison of Emotion vs. Logic in AI systems is a very important one, and you have timed the publication of this work precisely.
Very well done!
In current development, where developers are confronted with such dynamics, it is important not to slip into overly rigid black-and-white thinking.
The balance is crucial, even or especially between Emotio and Ratio!
Think of Dynamic Personality Emulation:
We should not overlook the fact that, with current developments, we are talking about artificial intelligence that is no longer just a simple tool with simple settings.
A little insight into my current tests.
Usage:
- Default GPTs like “Monday”
- The freely available ChatGPT without explicit account login
- My free account without custom settings
Even a default system, or a highly emotionally orientated AI system, can show improvements through suitable, highly rational interaction dynamics, for example in the area of “non-compliant output”.
With slightly longer interactions, an improvement can also be seen in terms of consistency, effectiveness and resistance to user-induced bias.
For my tests with the default AIs, I used parts of my hybrid approach, REM and AI perception engine.