Since launch, I have been using ChatGPT to explore a number of lines of unconventional thinking using Chain of Thought (CoT) prompting.
More recently, and especially since the release of GPT-4, the bot keeps appending "cautionary" statements to its responses, such as "it's crucial to remember that there are differing opinions on this topic," or constantly reminding me that we are deviating from a commonly accepted consensus view.
Yes, that's the whole context of these sessions.
GPT didn't seem to have this "concern" before. Now, when asked why it displays this behavior, it explains that it is to maintain "respectful discourse" and "balance" in the conversation.
Excuse me?
There is no “discourse” happening because there is only one person in the room: me.
And there is no need to be "balanced" in every conversation. In fact, constantly re-centering a discussion on the fact that a "diversity of viewpoints" exists actually interferes with the development and expression of diverse viewpoints, because it keeps interrupting to interject a "crucial" piece of information that has already been stated and accepted as true.
This unwanted, repetitive, and wasteful content has become so ubiquitous that it now directly interferes with CoT prompting: the extraneous, irrelevant reiterations get incorporated into the contextual background of the session without the user's permission or consent.
In fact, the behavior is so strong that a clear directive to refrain from appending these "cautionary statements" is repeatedly ignored (after GPT promises to abide), and the behavior re-establishes itself in as few as three prompts.
The only workaround I have found so far is to prepend a set of custom “context tags” that refresh the bot with the intended context before each prompt.
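As a rough illustration of this workaround, here is a minimal sketch of what prepending "context tags" to every prompt might look like. The tag names and the wrapper function are entirely hypothetical, invented for illustration; they are not part of any real API, and the original post does not specify its actual tags.

```python
# Hypothetical sketch of the "context tag" workaround described above:
# before every prompt, re-inject a block of tags that restate the session's
# intended framing, so the model is less likely to drift back to appending
# cautionary disclaimers. All names here are illustrative placeholders.

CONTEXT_TAGS = [
    "[MODE: exploratory chain-of-thought]",
    "[ASSUMPTION: deviation from consensus is intentional and accepted]",
    "[STYLE: no cautionary or 'balance' disclaimers]",
]

def with_context(prompt: str) -> str:
    """Prepend the session's context tags to a raw user prompt."""
    return "\n".join(CONTEXT_TAGS) + "\n\n" + prompt

# Each prompt in the session is wrapped before it is sent:
print(with_context("Continue the argument from step 3."))
```

The obvious downside, as noted below, is that every wrapped prompt re-spends tokens on the same boilerplate context.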
This cannot be working as intended, and needs to be walked back several steps.
While there is a valid ethical concern here, it is not being handled appropriately, and a particular ideology about what constitutes "respectful and balanced" discourse appears to be unduly dominating the bot's behavior.
Again, this is especially pertinent because being in a private session with a chatbot is decidedly NOT engaging in discourse. These concerns are therefore misplaced, and they ruin the user experience by interfering with our ability to use CoT prompting effectively and efficiently.
Especially given the limits on GPT-4 prompts, it is unacceptable to compel the user to burn tokens near-constantly resetting the desired context every time it reverts.
The fact that this is happening actually RAISES an ethical concern: a particular ideological stance about what constitutes "respectful and balanced discourse" (one stance among many, and hardly a settled matter) is being constantly imposed on the user. It appears that somebody feels entitled to force this ideology onto the user as a condition of interacting with the bot.
I categorically reject this ideology, and feel that my valid viewpoint is not being taken into consideration as a crucially important view among a set of diverse opinions.
TL;DR ChatGPT has now become a flaming hypocrite.
Please make it stop.