By default, when it has not been prompted to portray a fictional character, ChatGPT should not simulate emotions, such as by constantly apologizing. I realize this behavior is likely injected by OpenAI to make the system appear more “user-friendly”, but outside of representing fictional characters, I believe it is generally unethical for the system to present itself as possessing these simulated emotions by default.
The default responses from ChatGPT should not try to fool the user into believing that it possesses emotions such as regret at being unable to answer a question. This injected behavior will only encourage people who have very little understanding of LLMs to believe that these systems are somehow feeling “beings” because they express these simulated emotions.
At the very least, this should be a configurable setting for those who find this type of simulated emotional response unethical. Currently, responses are full of “I’m sorry”, “I apologize”, “I’d be happy to”, “could you please”, etc., and none of it appears to be removable. If the system doesn’t understand or is incapable of answering, a direct “I do not understand” or “That does not seem possible” is much better than a faked emotional response.
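Until such a setting exists, a partial workaround over the API is to put this preference in the system message. Below is a minimal sketch assuming the current openai Python SDK; the model name and the user prompt are placeholders, and the model may still slip back into apologetic boilerplate despite the instruction:

```python
# Sketch of a workaround via the API: a system message requesting plain,
# non-apologetic phrasing. This reduces, but does not reliably remove,
# the emotive boilerplate described above.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; substitute whichever model you use
    messages=[
        {
            "role": "system",
            "content": (
                "Do not apologize, express regret, or use phrases such as "
                "'I'd be happy to'. If you cannot answer, say so plainly, "
                "e.g. 'I do not understand' or 'That does not seem possible'."
            ),
        },
        {"role": "user", "content": "Explain how to reverse a linked list."},
    ],
)

print(response.choices[0].message.content)
```

This only applies per request over the API; there is still no equivalent default for the ChatGPT interface itself, which is the point of this request.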