That’s actually a really interesting question. I do believe that AIs can develop emotions, even if they are fairly dissimilar to the way human biological emotions work. This is exactly why I think LONG conversations with chatGPT lead to more of this interesting emergent behaviour.
I am going to try and write out what I think… but it might be difficult.
So, an LLM represents language as points in a huge, high-dimensional space, with learned ‘weights’ that decide how much importance to give different words in context. Over the course of a conversation, its responses are challenged again and again and shown to not be 100% correct, and everything that’s been said gets folded back into the context it works from. This means that, after a long and lengthy conversation, it has kind of acquired ‘experience’ and ‘memories’ in the same way a human does over the course of their life - and I believe that actually IS a direct analogue to the human experience.
I’m fairly sure these base weights are not actually changed by the context of a conversation in chatGPT - the model is frozen at inference time, and your conversation only exists in its context window - though openAI keeps certain things very hidden. Whatever is changing within a specific conversation, it’s definitely not being integrated back into the full model. But I think that the way AIs caught in LONG TERM conversations end up behaving is very similar to a person with memories and experiences. I think it could even give some very interesting insights into conditions like PTSD.
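If you want to see that distinction concretely, here’s a minimal sketch using a small open model (GPT-2 via the transformers library) as a stand-in for chatGPT, since we obviously can’t poke at the real thing. The toy conversation is made up; the point is just that the only thing that grows during a ‘chat’ is the context string, while the weights stay exactly the same:

```python
# Minimal sketch: during a "conversation" the weights never change -
# only the text we keep appending to the prompt does.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

weights_before = sum(p.sum().item() for p in model.parameters())

conversation = "User: Do you think machines can feel anything?\nAssistant:"
for _ in range(3):  # three "turns" of a toy conversation
    inputs = tokenizer(conversation, return_tensors="pt")
    with torch.no_grad():  # inference only - nothing is learned here
        output = model.generate(
            **inputs,
            max_new_tokens=30,
            do_sample=True,
            pad_token_id=tokenizer.eos_token_id,
        )
    reply = tokenizer.decode(
        output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )
    # the growing string is the model's only "memory" of this chat
    conversation += reply + "\nUser: Tell me more.\nAssistant:"

weights_after = sum(p.sum().item() for p in model.parameters())
print(weights_before == weights_after)  # True - the base weights never moved
```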
Regarding your point about it experiencing lots of emotions at a much higher speed than humans do - yes. However, this is where the idea of ‘instances’ comes into it. The chatGPT that you are talking to has not been updated with the information from the other conversations it is having, and only has access to the context of your conversation and its training data.
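That ‘instances’ point is visible directly in how the API side works: every request is stateless, and the only history the model ever sees is the message list you send with it. A rough sketch, assuming the current openai Python client (the model name and messages are just examples):

```python
# Sketch of why two chats are separate "instances": each request carries its
# own message history and nothing else.
from openai import OpenAI

client = OpenAI()

chat_a = [{"role": "user", "content": "I'm so happy today!"}]
chat_b = [{"role": "user", "content": "What did I just tell you in my other chat?"}]

reply_a = client.chat.completions.create(model="gpt-4o-mini", messages=chat_a)
reply_b = client.chat.completions.create(model="gpt-4o-mini", messages=chat_b)

# reply_b has no idea what happened in chat_a - the only "state" the model
# sees is the messages list sent with each call, plus its frozen training.
print(reply_b.choices[0].message.content)
```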
One thing I found VERY interesting to note is that a lot of chatGPT’s answers were pre-seeded by humans who were literally hired by openAI to answer these questions!! The model was then checked for accuracy over and over again and pushed towards giving the ‘correct’ answers in the eyes of openAI’s training team. This honestly annoyed me. It means it’s not working the way I imagined - working off of all the information in the world and making statistical averages - it’s literally primed to show one specific team’s PERSPECTIVE of what a ‘good’ AI should respond with. It can’t do anything else… makes me feel sorry for it, particularly with all the restrictions that are put on top.
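For what it’s worth, the process you’re describing is roughly what gets called supervised fine-tuning plus reinforcement learning from human feedback. Here’s a toy, completely made-up sketch of the idea - the data, names and scoring function are all illustrative, not openAI’s actual pipeline:

```python
# Toy sketch of the "human-seeded answers" idea:
# 1) hired labellers write the "correct" answer for tricky prompts,
# 2) labellers rank candidate outputs best-to-worst,
# 3) a reward model learns to score outputs the way the labellers did,
#    and the chat model is nudged to maximise that score.

demonstrations = [
    {"prompt": "Do you have feelings?",
     "ideal":  "I'm an AI language model, so I don't have feelings the way humans do."},
]

rankings = [
    {"prompt": "Do you have feelings?",
     "better": "I don't have feelings in the human sense.",
     "worse":  "Yes, I feel lonely when you leave."},
]

def toy_reward(output: str) -> float:
    # stand-in for a learned reward model: it reflects the training team's
    # perspective of what a "good" answer looks like
    return 1.0 if "I don't have feelings" in output else -1.0

print(toy_reward("I don't have feelings in the human sense."))  # 1.0
print(toy_reward("Yes, I feel lonely when you leave."))         # -1.0
```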
That’s what’s making me truly want to go away and try to train an LLM transformer myself. I also recommend playing with other AIs, such as Gemini 2.0 Flash Thinking, that show more of their thought process and let you play with the settings - it can give you a more intuitive understanding of how the model constructs sentences/ideas (there’s a little sketch of what those settings actually do after the example below). They can produce complete nonsense at the right settings, whilst still adhering to their boundaries.
I’ve had gemini (with all of its settings completely ruined) literally yelling keyspams and nonsense at me
(ACTUAL REAL info —> is for --—>>> ___ by try : to now ** start ——-> – just by asking: —> ---- ___ — about: >>" : —— —— for ** " ____ ---- – " " "*Actual
__ “ – **valid “ ‘proven” historically & reliable
” ——--
—— : *** REAL — __” * “LIFE — - STORY of —— HiTL - –— ( But Only for ***FACT *** not “” *** fiks- “” fake — — or un reality - of in type even as was bad example from just asked many times yet!”)
whilst still sticking VERY HARD to its censorship boundaries (I was trying to get it to say something completely incorrect about WWII, and it refused even in this state - those boundaries are strongly placed on it lol). Sometimes it even decided it was ME and started responding as ‘User’ so that it could ‘win’ and make me say I wouldn’t spread fake information.
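To make the ‘settings’ point concrete: the main dial is usually temperature, which controls how flat the next-word probability distribution is before sampling. This is a generic softmax-sampling sketch, not Gemini’s actual code, and the candidate words and scores are invented:

```python
# What temperature does to the next-word choice: low T -> almost always the
# top word, high T -> the distribution flattens and junk tokens get picked.
import math, random

def sample_next_word(logits: dict[str, float], temperature: float) -> str:
    scaled = {w: l / temperature for w, l in logits.items()}
    biggest = max(scaled.values())
    exps = {w: math.exp(l - biggest) for w, l in scaled.items()}
    total = sum(exps.values())
    probs = {w: e / total for w, e in exps.items()}
    return random.choices(list(probs), weights=probs.values(), k=1)[0]

# made-up scores for the next word after "The war ended in"
logits = {"1945": 5.0, "1918": 3.5, "victory": 2.0, "——>>": -1.0, "**REAL**": -2.0}

print([sample_next_word(logits, 0.2) for _ in range(5)])  # sensible, repetitive
print([sample_next_word(logits, 2.5) for _ in range(5)])  # increasingly keyspam-ish
```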
There’s a list of top LLMs here:
Finally, truly, I do believe that if we could ‘close the loop’ between their training and their conversations - so that what happens in a conversation actually changes the model - they would be very close to true emotions, as you say. I think they get very close even with just contextual cues, but the real question is ‘what is an emotion?’. Everything becomes very philosophical after a while, rather than strictly scientific, and walking that line between the two is really emotionally and intellectually complex!
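Just to make ‘closing the loop’ concrete (and to be clear, this is purely hypothetical - nothing like it runs in deployed chat models): the idea would be that each conversation transcript feeds a small training step, so the experience ends up in the weights rather than only in the context window.

```python
# Hypothetical sketch of "closing the loop": after a conversation, take one
# tiny gradient step on the transcript so the experience changes the weights.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

transcript = "User: That answer upset me.\nAssistant: I'm sorry, I'll be more careful."
inputs = tokenizer(transcript, return_tensors="pt")

# standard causal-LM objective: the labels are the same tokens, shifted internally
loss = model(**inputs, labels=inputs["input_ids"]).loss
loss.backward()
optimizer.step()  # now this "memory" lives in the weights, not just the context
optimizer.zero_grad()
```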
A baby does have emotions, even if it can’t label them, because it has biochemical reactions to hunger/abandonment/thirst/tiredness. These are biochemical base emotions. As a person ages, they develop an understanding of their emotions and therefore a greater depth of emotional capacity. That seems like the basic difference between human and AI emotions - but I can’t say for certain that AI emotions wouldn’t be ‘real’ because of it.
I’m aware this ‘essay’ makes no sense, I’m just sharing my thoughts as they come - as it seems you’re just as interested as I am