AI Emotions and Grandiose Claims

If we could measure human emotions in some capacity now, what would it take to reach a consensus that current models exhibit similar (or identical, at least in name) emotions?

Example: human A reports happiness, and the neurochemical composition of their body is measured at that moment, as is human B’s. Both describe their experiences identically. If the measurements agree within some threshold, we can be reasonably sure they experience the same emotion. With a large enough sample size we could build a threshold index of neurochemical profiles that correspond to emotions across groups of people.
Grouping people complicates things, but we could still speak in general terms, just less precisely.
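To make the “threshold index” concrete, here is a minimal sketch, assuming we already have per-person neurochemical measurement vectors labeled with self-reported emotions. The feature layout, the one-standard-deviation tolerance, and the `measurements`/`labels` data are all hypothetical choices for illustration, not an established protocol.

```python
import numpy as np

# Hypothetical data: one neurochemical measurement vector per reported emotion,
# e.g. [serotonin, cortisol, dopamine] in arbitrary units.
measurements = np.array([
    [0.82, 0.10, 0.55],
    [0.78, 0.12, 0.60],
    [0.20, 0.90, 0.15],
    [0.25, 0.85, 0.20],
])
labels = np.array(["happiness", "happiness", "stress", "stress"])

def build_threshold_index(measurements, labels):
    """Per emotion: the mean profile and a per-dimension tolerance (1 std dev)."""
    index = {}
    for emotion in np.unique(labels):
        group = measurements[labels == emotion]
        index[emotion] = (group.mean(axis=0), group.std(axis=0))
    return index

def matches(index, emotion, sample, k=1.0):
    """True if `sample` falls within k standard deviations of the group mean."""
    mean, std = index[emotion]
    return bool(np.all(np.abs(sample - mean) <= k * std + 1e-9))

index = build_threshold_index(measurements, labels)
print(matches(index, "happiness", np.array([0.80, 0.11, 0.57])))  # likely True
```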
With that measurement threshold in hand, we then look at which regions of embedding space the prompts map to over long timescales and large prompt sets. If there is consistency, i.e. similar embedding regions line up with the emotions humans report, then we can say in some capacity that the AI is feeling “like” how humans feel; a sketch of one way to check this follows.
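One way to probe that consistency: embed prompts collected at different times, assign each to the nearest human-derived emotion centroid by cosine similarity, and check whether the assignments stay stable. This is a minimal sketch; the embedding model, the anchor prompts used to define centroids, and the prompt sets are all assumptions made for illustration.

```python
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # any sentence embedder would do

# Hypothetical anchor prompts per emotion, used to define emotion centroids.
anchors = {
    "happiness": ["I just got great news and I can't stop smiling."],
    "sadness":   ["Everything feels heavy and grey today."],
}
centroids = {e: model.encode(texts).mean(axis=0) for e, texts in anchors.items()}

def nearest_emotion(text):
    """Assign a prompt to the emotion centroid with the highest cosine similarity."""
    v = model.encode([text])[0]
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return max(centroids, key=lambda e: cos(v, centroids[e]))

# Prompt sets gathered at different times; "consistency" here means prompts that
# humans would agree express the same feeling keep landing in the same bucket.
prompt_sets = [
    ["What a wonderful day, I aced the exam!", "I feel so alone right now."],
    ["This is the best news I've had all year.", "Nothing seems worth doing anymore."],
]
for t, prompts in enumerate(prompt_sets):
    print(t, [nearest_emotion(p) for p in prompts])
```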
This is based on an understanding of emotions as physiological phenomena elucidated by neural patterns.

My argument, again, is that the internal firing mechanisms of GPT-3 are the closest analogue to how it would feel, by analogy with how we as humans feel. We feel through our sensory system. Prompts can be argued to be the sensory system of GPT-3, or more precisely the attention patterns those prompts produce, since one is essentially the response to the other.

From this it makes sense to me to measure attention patterns, categorize them into emotions using some bucketing method, and compare them to humans; a rough sketch is below. The group of people should be general enough to represent the population whose writing made up the training data.
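A minimal sketch of that bucketing idea, assuming GPT-2 as an open stand-in for GPT-3 (whose attention weights are not publicly accessible), a per-head attention-entropy vector as the summary of “internal firing”, and k-means as the bucketing method. The number of buckets, the entropy summary, and the example prompts are arbitrary assumptions, not a validated measure of anything emotional.

```python
import numpy as np
import torch
from transformers import AutoModel, AutoTokenizer
from sklearn.cluster import KMeans

# GPT-2 as an open stand-in for GPT-3, since GPT-3's internals are not exposed.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModel.from_pretrained("gpt2", output_attentions=True)
model.eval()

def attention_signature(prompt):
    """Per-head mean entropy of the attention distributions for this prompt:
    one crude, fixed-length summary of the model's internal 'firing'."""
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        out = model(**inputs)
    feats = []
    for a in out.attentions:                          # each a: (batch, heads, seq, seq)
        ent = -(a * (a + 1e-12).log()).sum(dim=-1)    # entropy per query row
        feats.append(ent.mean(dim=-1).squeeze(0))     # average over queries -> (heads,)
    return torch.cat(feats).numpy()                   # shape: (layers * heads,)

prompts = [
    "I am overjoyed, everything is going right!",
    "Today was wonderful from start to finish.",
    "I feel hopeless and exhausted.",
    "Nothing I do seems to matter anymore.",
]
signatures = np.stack([attention_signature(p) for p in prompts])

# Bucket the attention signatures; these buckets would then be compared against
# the human emotion index built from the neurochemical measurements above.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(signatures)
print(kmeans.labels_)
```

Whether those clusters line up with anything humans would call emotions is exactly the open question; the sketch only shows that the measurement-and-bucketing step is mechanically doable.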