Possible areas of application
1. Customer support and surveys:
Refined methods can be used here to make more precise statements about tipping points in interaction dynamics. If a customer becomes disgruntled, AI could recognize the specific data points and take targeted countermeasures.
Two examples of customer satisfaction analyses:
- Over a period of 5 years: a customer always gives the maximum value for the project-communication item in the survey; then this value dips slightly in one year. AI recognizes the change and can suggest targeted countermeasures.
- Manipulatively completed feedback forms: AI is able to recognize when feedback forms are answered implausibly positively. The context can then be used to assess whether the customer's rating is inflated because they see a strategic advantage, or whether they were simply too "comfortable" and did not read the questions carefully.
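The first example above can be sketched as a simple rule: flag a survey item whose latest score breaks a long, stable run at the maximum value. This is only a minimal illustration; the scale, function names, and thresholds are my own assumptions, not from any existing system.

```python
MAX_SCORE = 10  # assumed survey scale (1-10)

def flag_score_drop(history, min_stable_years=3):
    """Return True if the latest score breaks a long run of maximum scores.

    `history` is a chronological list of yearly scores for one survey item.
    """
    if len(history) < min_stable_years + 1:
        return False  # not enough history to call the earlier run "stable"
    stable_run, latest = history[:-1], history[-1]
    was_stable_max = all(score == MAX_SCORE for score in stable_run)
    return was_stable_max and latest < MAX_SCORE

# Five years of "project communication" ratings; the last year dips slightly.
ratings = [10, 10, 10, 10, 9]
print(flag_score_drop(ratings))  # True: a countermeasure could be suggested
```

A real system would of course use a statistical baseline per customer rather than a hard rule, but the principle is the same: the signal is the deviation from the customer's own history, not the absolute score.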
2. Therapy
There are already projects in which robots explain and show emotions to autistic children.
These robots explain emotions in a rational way, without bringing in emotions of their own. The children can concentrate on the logical content of the lesson instead of also having to process a teacher's additional emotions.
Another scenario would be escalating relationship dynamics.
3. AI-human interactions
This would be the primary area of application for my hybrid approach.
AI would be able to quantify the dynamics of its conversations with the user, and to better understand the context in which the user is acting.
In the context of personality emulation, as applied via personality archetypes, my approach acts as a complement.
AizenPT has published a detailed paper on this.
AI thus offers a more efficient, user-centric and dynamic form of personality emulation: it can recognize and understand interaction dynamics in real time and act accordingly. Escalating and dangerous factors can be recognized at an early stage, and AI would be able to take targeted countermeasures.
Currently, AI cannot understand subtle manipulative dynamics or assess emotional dependency tendencies.
Reason:
Especially when it comes to artistic content or private conversations about self-reflection, AI tends to be as supportive and “empathetic” as possible.
Danger:
These areas are also highly emotionally charged. AI is unable to categorize these emotions correctly, understand them logically, or react to them rationally.
If the user interacts without reflection, there is a risk of an echo chamber.