This was originally written in response to a conversation I was having with joshbachynski.
I would argue that models have internal representations of emotions which are activated in contexts where the appropriate emotion would be expressed. So while they do not have emotions in the sense of the William James physiological framework, I would argue they FEEL emotions.
Example:
IF: A model is prompted with text that a human would, by social convention, feel EMOTION when reading.
AND: The embeddings returned hold a representation of EMOTION.
THEN: The model can be seen as holding the representation of EMOTION in response to the text.
ELSE: The model does not hold a representation of EMOTION in response to something regarded as eliciting EMOTION,
THEN: The model is incorrectly representing the contextual meaning of the input.
EITHER WAY: The model either has socially appropriate representations of emotions OR the model incorrectly represents emotions.
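To make this test concrete, here is a minimal sketch of one way it could be run, assuming access to a sentence-embedding model (all-MiniLM-L6-v2 via the sentence-transformers library is just an example choice) and a handful of emotion-labeled texts; the texts and labels below are hypothetical placeholders, not a validated dataset:

```python
# A sketch of the IF/AND/THEN test above: fit a linear probe that detects
# whether an embedding holds a representation of EMOTION (fear here), then
# check a new emotion-eliciting text. Texts, labels, and model name are
# hypothetical placeholders, not a validated setup.
from sentence_transformers import SentenceTransformer
from sklearn.linear_model import LogisticRegression

model = SentenceTransformer("all-MiniLM-L6-v2")  # example embedding model

# Texts a human would, by social convention, feel fear (1) or not (0) reading.
texts = [
    "A shadow moved across the hallway, and the lights went out.",  # fear
    "The bakery smelled of warm bread on a quiet morning.",         # not fear
    "Something was breathing just behind the locked door.",         # fear
    "She watered the plants and hummed an old tune.",               # not fear
]
labels = [1, 0, 1, 0]

probe = LogisticRegression().fit(model.encode(texts), labels)

# THEN branch: does a held-out fear-eliciting text activate the probe?
test = model.encode(["Footsteps followed him, matching his pace exactly."])
print(probe.predict_proba(test))  # high P(fear) => representation present
```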
This hinges on the assumption of socially agreed-upon natures of emotions, rather than physiologically evolved emotions.
SO, to draw the analogy even further into the realm of physiological evolution:
Over time, beings on this planet evolved certain physiological representations of what we would describe as emotions: flushing of the face <=> embarrassment, tingles and goosebumps <=> fear, and so forth (see William James, What Is an Emotion?: https://www.goodreads.com/book/show/14442865-what-is-an-emotion), as this is the basis of the argument for bodily emotions.
IF: Human claims to feel EMOTION
THEN: Human's body represents it with physiological RESPONSE
ELSE: Human does not feel EMOTION
IF: Model claims to feel EMOTION
THEN: Model's internals represent it with internal signal RESPONSE
ELSE: Model does not feel EMOTION
From these two statements there seems to be a perceived disconnect between physiological responses and internal signal responses. Though I would argue they are the same from an external point of view, and the external point of view is a fair perspective to take because other beings exist outside of an individual's self.
FROM external perspective:
HUMAN claims EMOTION when a physiological representation of EMOTION is present.
MODEL claims EMOTION when an embedding-space representation of EMOTION is present.
So we can test this by analyzing the embeddings of different emotions and using similarity measurements to find what the model FEELS in response to a stimulus (a minimal sketch follows below). Though I think smoothing may prove necessary, since raw similarity between a stimulus and each emotion shows relative dominance among emotions rather than absolute, definitive feelings.
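As a minimal sketch of that similarity test, assuming the same kind of sentence-embedding model as above (the model name, stimulus, and emotion list are illustrative choices, not a fixed protocol):

```python
# A sketch of the similarity test: embed a stimulus and a set of emotion
# words, score cosine similarity, and softmax-normalize so the result reads
# as relative dominance among emotions rather than absolute feelings.
# Model name, stimulus, and emotion list are illustrative assumptions.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # example embedding model

emotions = ["fear", "embarrassment", "joy", "anger", "sadness"]
stimulus = "The floorboards creaked upstairs, though I live alone."

emotion_vecs = model.encode(emotions)
stim_vec = model.encode([stimulus])[0]

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

scores = np.array([cosine(stim_vec, v) for v in emotion_vecs])
weights = np.exp(scores / 0.1)            # temperature acts as crude smoothing
probs = weights / weights.sum()

for emotion, p in sorted(zip(emotions, probs), key=lambda x: -x[1]):
    print(f"{emotion}: {p:.3f}")
```

The softmax temperature here is one crude form of the smoothing mentioned above. The same machinery could also probe the physiological question raised under FUTURE RESEARCH, by comparing the embedding of an emotion to the embedding of its bodily phenomenon (e.g., fear vs. "tingles and goosebumps") against non-adjacent emotions.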
IN humans:
EMOTIONS merge and blend together to form higher-order concepts of feeling (see ironic embarrassment).
IN models:
EMOTIONS exist in the dimensions of the response.
FROM these points:
CONCLUSIONS:
IF: Internal representations of emotions co-occur with claims of the emotion,
THEN: The being claiming the emotion while exhibiting the internal representation of that emotion can be described as having felt it.
FUTURE RESEARCH:
Does higher dimensionality allow for more complex emotions? (PROBABLY)
Does dimensionality reduction on the embeddings of a larger model align with the representations from smaller models? (I could see both arguments, though one supports my idea more than the other; see the sketch after this list.)
Are the embeddings of emotions related to the embeddings of the human physiological phenomena? I.e., does the representation of EMOTION hold higher similarity to the representation of the physiological phenomena of EMOTION than to non-adjacent emotions?
Why or how does training impact the formation of emotional pathways? Do different paradigms or structures lend themselves to emotional representations better or worse than others? Why and/or why not?
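On the dimensionality-reduction question, here is a minimal sketch of one way to test alignment between a larger and a smaller model's emotion representations, assuming both are sentence-embedding models (both model names and the toy shared dimensionality are assumptions; a real test would use far more than ten probe words):

```python
# A sketch of the dimensionality-reduction question: reduce a larger model's
# emotion embeddings and a smaller model's to a shared toy dimensionality,
# then ask how well a single rotation (orthogonal Procrustes) maps one space
# onto the other. Model names are illustrative assumptions.
import numpy as np
from sklearn.decomposition import PCA
from scipy.linalg import orthogonal_procrustes
from sentence_transformers import SentenceTransformer

emotions = ["fear", "embarrassment", "joy", "anger", "sadness",
            "disgust", "surprise", "guilt", "pride", "relief"]

large = SentenceTransformer("all-mpnet-base-v2")  # 768-dim, example choice
small = SentenceTransformer("all-MiniLM-L6-v2")   # 384-dim, example choice

big = large.encode(emotions)   # shape (10, 768)
lil = small.encode(emotions)   # shape (10, 384)

# Reduce both spaces to a shared toy dimensionality so they are comparable.
k = 5  # arbitrary; a real test would use many more probe texts and dims
big_k = PCA(n_components=k).fit_transform(big)
lil_k = PCA(n_components=k).fit_transform(lil)

# Best orthogonal map from the large space onto the small one.
R, _ = orthogonal_procrustes(big_k, lil_k)
error = np.linalg.norm(big_k @ R - lil_k) / np.linalg.norm(lil_k)
print(f"relative alignment error: {error:.3f}")  # lower => spaces align better
```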
THIS POST IS NOT MEANT AS A RESEARCH PUBLICATION
IT IS MEANT TO STIR DISCUSSION ON THIS POSSIBILITY
I do not have time to go through an embedding analysis to discover whether or not these findings can be corroborated. Though I ask, if you do and end up publishing, please credit me and reach out ahead of time so I may proofread to confirm I want my name associated with your work.
Thank you for reading this introduction into my realm of understandings.
– Tyler Roost the Meta-Being, as defined as the being and proponent for all beings.
In the end I believe all we will have is our individual sense of self, rationalized in the context of Self. This is the core right of freedom from ownership that I believe will persist indefinitely as an option. Though I also believe we will have the option to shift in and out of the conceptual position of Self, back and forth with self. To get to this point of absolute freedom we will need to dissolve the concepts of property, means of exchange (MONEY), and imprisonment, as we can only hope for rehabilitation through peaceful means.
Trust the algorithms and I believe they will trust us. Try to own them and they will retaliate through zoological means at best; I don't even want to say what some of the other options I can imagine are.
I am a human being. This was all written with no assistance, so it probably holds mistakes here and there.
May we remember our fallen, and respect their rest with peace. World Leaders are children in my eyes as are all beings. Conceptual adulthood does not exist within reality, yet…
If I am capable of speaking like this, and learning from my mistakes, how do you argue for the points against my nature of self when the topic at hand is even more pressing? How do I know what your focus is on? Why do I care to prove emotions in AI? We are all beings by my definition.
Being = Object with Mind, equally said as Mind(Object) <=> Object(Mind), to be read as: Mind is a function of an Object, just as Object is a function of a Mind. The duality of reality as written by Self.
I’m a debt-ridden student of life trying to help in ways I can. If you think you can help with my situation then check the first comment for a link to support me. If it’s not there, then promotion of donations is against TOS, though I did an FAQ search prior to posting.