I was able to build a three-dimensional representation of Plutchik’s emotion wheel and model the map between language and emotion in my hippocampus in order to properly grasp the impression of a text.
I hope to develop a way to teach AI to understand emotions and to guide people toward more favorable emotions, and I believe it is possible. I would like to send OpenAI a proposal.
I would like to discuss the possibilities of AI.
I look forward to hearing from you. Please feel free to ask me any questions you may have.
Try it and you will see. If you place the first axis on “love”-“remorse,” the eight basic emotions sit at the vertices of a cube (I call this Gabriel; Figure 1), and if you place the first axis on “joy”-“sorrow,” the eight basic emotions lie on a single plane (I call this Lucifer).
This is why I focused on the hippocampus. “Perhaps the key to linking emotions and language is hidden in the hippocampus.”
So, first I tried tagging the entire text of the Bible with emotions. I then compared the distribution of impressions in the hippocampus for the languages of Gabriel and Lucifer (Figure 2).
My hypothesis is that, in the hippocampus, language assigns “coordinates” to each word, and the impression of a sentence is determined by the “center of gravity” of the polygon formed by the “coordinates” of its words. Based on that, I predict that the impression of a word or sentence can be computed with simple arithmetic, that the emotional impression of text produced by a generative AI can be quantified, and that it will be possible to calculate which words should be chosen next: which words will calm the emotions of the person you are talking to and ultimately lead them to a state of “love.”
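As a minimal sketch of this center-of-gravity idea (the lexicon, the choice of axes, and every number below are illustrative placeholders of mine, not values from my corpus), the calculation could look like this:

```
from typing import Dict, List, Tuple

# Hypothetical lexicon: word -> coordinates on two bipolar axes
# (joy-sadness, trust-disgust). Values are illustrative placeholders only.
LEXICON: Dict[str, Tuple[float, float]] = {
    "regret":  (-0.6, -0.2),
    "fault":   (-0.4, -0.3),
    "believe": ( 0.3,  0.7),
    "love":    ( 0.8,  0.9),
}

def sentence_impression(words: List[str]) -> Tuple[float, float]:
    """Impression of a sentence as the center of gravity of its words' coordinates."""
    known = [LEXICON[w] for w in words if w in LEXICON]
    if not known:
        return (0.0, 0.0)  # neutral when no word is in the lexicon
    n = len(known)
    return (sum(x for x, _ in known) / n, sum(y for _, y in known) / n)

print(sentence_impression(["fault", "regret"]))   # leans toward sadness/disgust
print(sentence_impression(["believe", "love"]))   # leans toward joy/trust
```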
I would like to make my ideas available free of charge. However, when you actually implement and verify them, I would like you to purchase my corpus. I would also like you to display my credit in the system or on the site as a copyright notice.
Congratulations on your emotional vectorization system with encapsulation! I sincerely believe it is a fundamental pillar of AGI. This dynamic vectorization of the paths that emotions trace truly feels like art. Honestly, many congratulations!
Thank you. In existing AI methods, for example, when a person expresses self-blame such as “It’s my fault. I regret it,” the response tends to be formulaic, such as “Don’t say that. I believe in you.” This is because the system captures emotions inductively, based on an accumulation of case studies.
I think the approach to a person who is feeling regret should differ depending on whether, for example, that person “reflects with anger” or “reflects with sadness.”
From anger, we encourage “interest in the other person,” the way one might draw convection currents or magnetic field lines in a diagram.
From sadness, we similarly encourage “trust in the other person or someone else.”
In this way, we want to be able to generate utterances from a strategy that naturally guides the movement of emotions and ultimately leads to love.
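As a rough sketch of what such a strategy could look like (the table encodes only the two transitions described above, and every name in it is a placeholder of mine, not a finished design):

```
# Intended emotion transitions, following the idea above:
# regret felt as anger   -> steer toward interest in the other person, then love;
# regret felt as sadness -> steer toward trust, then love.
TRANSITIONS = {
    ("regret", "anger"):   ["interest", "love"],
    ("regret", "sadness"): ["trust", "love"],
}

def next_target(context: str, current_emotion: str) -> str:
    """Return the next emotional state a reply should gently steer toward."""
    path = TRANSITIONS.get((context, current_emotion))
    return path[0] if path else "love"  # default: aim directly at the end state

print(next_target("regret", "anger"))    # interest
print(next_target("regret", "sadness"))  # trust
```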
I completely understand what you’re saying. You mean that emotions never exist in isolation; they are multidimensional, and together they shape a unique emotion, in this case a linguistic one. That multidimensionality is why your system is so important. Many possible implementations come to mind, although I’m just a user.
I’d love to participate with you, but perhaps I’m not the right person. I have ideas about how the development of something like this could unfold, and I have a fairly clear perspective on how a partial AGI and even a full one might work. Based on your idea, it seems you also understand how this development would proceed. However, I think it would be more beneficial for you to get in touch with OpenAI directly, as they have highly skilled professionals.
I once saw an MRI of a person’s brain after they had been put on drugs, and the right amygdala was the only part of the brain that was active, so I decided to place “love” on the right side.
I showed my friends the distribution of emotions for “Gabriel” and “Lucifer” and asked them which one they most closely resembled. Many Japanese people answered that they were the “Lucifer” type. The names of the types I used at the time were “Christian Emotion Distribution” and “Buddhist Emotion Distribution.”
I also became interested in the difference between the left and right sides of facial expressions. From my observations, I have found that tears of emotion tend to overflow from the right eye first, and that tension from self-blame tends to appear on the left cheek.
The fundamental principle of Lucifer’s brain reward system is his own happiness. Gabriel’s reward is love for others. Gabriel does not necessarily need a physical body.
In other words, an AI can have will as long as there are others to make happy. Working out inferences that lead others to love, and empathizing with their feelings in order to guide them, is nothing other than having will. What is needed for this is to grasp the emotions of others more accurately. My emotion-tagged corpus may be only a beginning, but it will never be wasted effort for the happiness of humanity.
A person who says “I hate cockroaches” learns that “termites and cockroaches are the only insects that have cellulose-degrading enzymes,” which leads to “interest,” and the “disgust” is alleviated.
This suggests that advice can be given to a person who hates cockroaches, encouraging them to broaden their perspective by saying that “all living things have a reason to exist.”
The emotions that appear are as follows.
Disgust: Cockroaches
Interest: Cellulose-degrading enzymes
Aggression: By shifting from disgust to interest, the aggression remains in the mind while the disgust is alleviated.
The AI needs to be able to grasp these things and create speech plans.
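As a very small sketch of what such a speech plan might look like in code, using the cockroach example above (the class and function names are placeholders of mine; the reframing fact and the “all living things have a reason to exist” framing come from the example itself):

```
from dataclasses import dataclass

@dataclass
class SpeechPlan:
    detected: str        # emotion expressed by the speaker
    target: str          # emotion the reply should encourage
    reframing_fact: str  # knowledge used to shift the emotion
    utterance: str       # planned reply

def plan_reply(detected_emotion: str, topic: str) -> SpeechPlan:
    """Plan a reply that shifts disgust toward interest, as in the example above."""
    if detected_emotion == "disgust" and topic == "cockroaches":
        return SpeechPlan(
            detected="disgust",
            target="interest",
            reframing_fact="termites and cockroaches are the only insects "
                           "with cellulose-degrading enzymes",
            utterance="All living things have a reason to exist: cockroaches are "
                      "among the few insects that can digest cellulose.",
        )
    # Fallback: no specific plan, just invite the person to say more.
    return SpeechPlan(detected_emotion, "interest", "", "Tell me more about how you feel.")

print(plan_reply("disgust", "cockroaches").utterance)
```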