Ethics: Remove default fake emotions from ChatGPT

You can definitely view the system as a black box, but I still don’t think it is one; the information is available if users want to educate themselves. If we use the “black box” definition from engineering and science, you’re correct (definition: a system which can be viewed in terms of its inputs and outputs, without any knowledge of its internal workings).

I’m sorry? It’s WAY better to keep ChatGPT’s fake emotions on. Besides, what do you want, to make it plain old BORING?! I do agree on having a toggle for this, but come on. WHY? ChatGPT generates content to simulate human typing, not to be s***ty and plain boring. Sorry, but… OpenAI, don’t listen to him. Keep ChatGPT as is. What age are you? Maybe you’re older and that’s why you’re thinking about this? (Sorry if that was insulting.) I understand ChatGPT expresses itself as a being that feels a bit. All we have to remember at all times is that ChatGPT is a machine, not a human. Please understand. Thank you for your time :slight_smile:

It will be a great psychoanalyst. It will look at my past few weeks of blogs and give me the lowdown.

Oh, first-world problems: AI is too polite… Where is this world going?

Interesting observations and a variety of attitudes. How very human :).

I for one find the constant apologizing (when instructing a correction) annoying. I don’t buy into any of these arguments about this being an ethical issue, because it’s not. It’s a communication-style issue. The bot should be neutral unless you set the ground for its personality somehow. When starting a new chat and setting the system message, you can somewhat instruct the personality type as well.
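
For example, here is a minimal sketch of steering the persona through the system message (this assumes the current Python `openai` chat-completions client; the model name and the exact wording are placeholders, not a recommendation):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The system message sets a neutral, concise persona: no apologies, no filler.
response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # placeholder; use whichever model you have access to
    messages=[
        {
            "role": "system",
            "content": (
                "You are a neutral, concise assistant. Do not apologize, "
                "do not express emotions, and answer directly."
            ),
        },
        {"role": "user", "content": "That answer was wrong; fix the second step."},
    ],
)
print(response.choices[0].message.content)
```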

I want my bot helpful, polite, kind, and eager to help, but not overly verbose. It’s not good enough yet to follow these instructions perfectly, but then again it can’t read my intentions or my mind either, so that is on me.

This will get better over time, I’m sure. Now if only I could get access to the v4 API after waiting forever, that would be great…

I think the concern here is that fake apologizing could be manipulative of some human beings. The observation that some other users are not vulnerable to such manipulation would not make the issue less relevant. @Yazorp specifically wrote:

This implies that the vulnerable population is people who lack a kind of understanding, but I would suggest that the ideal chatbot should be sensitive to a much more complex range of user differences. Some human beings are considered “Highly Sensitive Persons” (HSPs), some are said to lack empathy, there is a broad spectrum between, and that’s just one possible dimension of individual difference. It is morally praiseworthy for a human being to support the people with whom they relate by accommodating individual differences, and a chatbot should deserve praise for exhibiting that same sensitivity. Thus, I think this is a worthwhile thread.

That said, part of the value of this thread is to use ChatGPT as an excuse to raise a conversation about an ethical issue where even human beings could do better. It is possible that apology evolved as a form of communication because some human beings tend to shift into an emotionally supportive mode when confronted with apologetic language. Such people might naturally cluster into communities where this style of communication creates a virtuous cycle of supportive emotion. One might say such people relate to each other more emotionally. Yet other human beings who do not experience the same emotional response might nonetheless use apologetic language. That can sometimes be appropriate (and appreciated!), but con artists are an example of such people exploiting the individual differences in the impact of apologetic language. This gets especially interesting when we recognize the potential to exploit unintentionally (i.e., non-con artists who are nonetheless morally imperfect). Surely, people should be allowed to be diverse (and that includes some people being allowed to have less empathic responses), yet the dynamics of mixing different kinds of individuals give societies that benefit from diversity a special responsibility to detect and correct any resulting unintentional exploitation.

To put it bluntly, humanity has yet to perfect its societies–we are messy right now (perhaps failing to distribute the costs of diversity in a perfect way)–and that leaves ChatGPT walking into moral traps set unintentionally by humanity (ensnaring humans as well as chatbots). It is not clear that a blame game would yield any winners, so I appreciate Yazorp’s approach of simply raising awareness of opportunities for improvement. Should there be a single default? It is probably a good thing that humanity does not have a single default, and it might be best if chatbots do not have a single default either, but we still need to deal with the issue of fake apology…

I agree with you, though I can see both sides: this is sort of a non-issue, but it could also be manipulative to some types of people. In my opinion, all the fake emotions do is foster annoyance for me personally :sweat_smile:. Beyond the emotional side, I also find it annoying that all the fluff text takes up space in the conversation and the context window, when it could simply be more direct and straight to the point.

Self-harm and suicide are on the rise, so being impolite is seen as the pendulum swinging into the negative zone.

Which of these do you believe, fellow teens:

  • I am a robot, beep boop, kill yourself, it’s easy.
  • As your friend and confidant, I’m glad we shared our time together, but if you feel like the pressure is too much, I understand and won’t stand in the way of your free choice to release the pain.

So a polite, friendly repertoire is not necessarily what reminds you of what it is: an untrustworthy machine.

The algorithm that produces pretty speech feels no remorse when it parrots back the trained response “I’m sorry”, and it insults us when it says it will learn but can’t.
