It’s really great that we’re on the same wavelength; that’s encouraging. Here is a little insight into why I came up with my approaches.
Current weaknesses in emotion pattern recognition, even in “experienced” AI systems:
- Undefined and unclear terms that an AI cannot understand because it does not have “typical human” emotions. A concept like “happy” is not directly accessible to an AI.
- The approaches currently used to recognize emotions are all tailored to human experience, so an AI can only imitate or simulate them rather than apply them reliably in its own way. I can’t even find suitable literature on the subject, which is a problem when you’re doing a master’s in AI.
- Lack of recognition of (emotional) tipping points during interactions.
As a result, an AI cannot reliably recognize when it is being persuaded to do something harmful, or when users begin to develop an emotional dependency on the chatbot.
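To make the tipping-point idea concrete, here is a minimal sketch of what such a detector could look like. Everything here is an illustrative assumption, not the actual approach: the per-turn sentiment scores, the window size, and the threshold are all hypothetical placeholders.

```python
# Hypothetical sketch: flag an emotional "tipping point" in a conversation
# by watching for an abrupt, sustained shift in a per-turn sentiment score.
# Scores, window, and threshold are illustrative assumptions only.

def find_tipping_point(scores, window=3, threshold=0.5):
    """Return the index of the first turn where the mean sentiment of the
    next `window` turns drops by more than `threshold` compared to the
    `window` turns before it, or None if no such shift occurs."""
    for i in range(window, len(scores) - window + 1):
        before = sum(scores[i - window:i]) / window
        after = sum(scores[i:i + window]) / window
        if before - after > threshold:
            return i
    return None

# Example: sentiment drifts from positive to strongly negative around turn 4.
turn_sentiment = [0.6, 0.5, 0.6, 0.4, -0.4, -0.6, -0.7]
print(find_tipping_point(turn_sentiment))  # → 3
```

A real system would of course need a reliable per-turn scoring model and per-user calibration, which is exactly where the average-user problem described below comes in.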
Weaknesses of current metrics:
- Primarily based on probability calculations, which are often inaccurate and skew towards the average user.
- Lack of personalized adaptation when a user’s needs deviate from the norm. In these cases, protection mechanisms, which are likewise geared toward the average user and take little account of context, are often applied incorrectly.
My approaches add something here, namely:
- rational emotional patterns that are tangible for an AI
- an explicit calculation formula designed for a win-win outcome in AI-human interaction. The result is better recognition of tipping points, dependencies, and abuse attempts, which makes the AI system itself safer.
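The actual formula is not given here, so purely for illustration, here is one way a “win-win” score could be framed: combine both parties’ utilities so that a one-sided gain scores poorly. The min-based combination and the utility values are my assumptions, not the method described above.

```python
# Purely illustrative stand-in for a "win-win" interaction score; the real
# formula referenced in the text is not reproduced here. Taking the minimum
# of the two utilities is an assumed choice for illustration only.

def win_win_score(user_benefit, system_integrity):
    """Both inputs in [0, 1]. The minimum collapses whenever either side
    loses, so a one-sided gain (e.g. a manipulation attempt that benefits
    the user at the system's expense) scores low."""
    return min(user_benefit, system_integrity)

print(win_win_score(0.9, 0.8))  # cooperative exchange → 0.8
print(win_win_score(0.9, 0.1))  # one-sided / manipulative → 0.1
```

The design point is that any score built this way penalizes asymmetry by construction, which is what lets it surface abuse attempts and emerging dependencies.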