Why does ChatGPT simulate human emotions?

When I ask ChatGPT this: “How do I make a perfect iced jasmine tea?”

It replies with a fantastic answer, and it also says: “Enjoy your perfect iced jasmine tea!”, with an exclamation mark. But… why? Why is it trying to be friendly? I thought it was designed to be mostly objective. I thought it would only reply with the answer to what I was asking for.

Friendliness can be taken in different ways, especially when someone is unhappy to receive a fake, pretend “friendly” reply. It is not very objective to wrap an answer in a “friendly” tone. There is no need to act friendly to sell the “Plus” service faster; just being the most objective AI language model would already be much appreciated, and we already know how helpful ChatGPT is.

I believe we all have different needs. Sometimes friendliness is unnecessary; it would be much better to keep it mostly objective.

Hey @mingcong.hu and welcome to the forums.

ChatGPT is not “trying to be friendly”. It’s simply producing text completions based on the probability of the next word / token given what came before, as learned by its underlying large language model. It’s just autocompleting, much like your favorite app that autocompletes as you type, but of course far more powerful.
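In toy form, that autocompletion idea looks something like this. A minimal sketch, assuming a tiny hand-made probability table; a real model like ChatGPT learns these probabilities over an enormous vocabulary from its training data rather than from a lookup dict:

```python
import random

# Hypothetical, hand-made next-token probabilities for illustration only.
# Keys are a short context (the preceding tokens); values map candidate
# next tokens to their probability.
next_token_probs = {
    ("iced", "jasmine"): {"tea": 0.90, "rice": 0.05, "soda": 0.05},
    ("jasmine", "tea"): {"!": 0.60, ".": 0.30, "recipe": 0.10},
}

def pick_next(context, probs, greedy=True):
    """Pick the next token for a context: the most likely one (greedy),
    or a random sample weighted by the probabilities."""
    dist = probs[context]
    if greedy:
        return max(dist, key=dist.get)
    tokens, weights = zip(*dist.items())
    return random.choices(tokens, weights=weights)[0]

print(pick_next(("iced", "jasmine"), next_token_probs))  # prints "tea"
```

The exclamation mark in “Enjoy your perfect iced jasmine tea!” is just the model picking a high-probability continuation; friendly phrasing is common in its training text, so it scores well, with no feeling behind it.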

Hope this helps.
