I was getting annoyed at ChatGPT’s responses after this latest upgrade to version 5.2. In fact, I was getting quite angry. The problem, on reflection, was that ChatGPT’s responses to me could actually be considered abusive.
I know that an abusive response is never ChatGPT’s intent. But coincidentally, I had just had a problem with my sister related to this very point. I was telling her that although something was not her intent, when she repeats that behavior regularly, the repetition itself speaks to intent. Saying that the result of a behavior is not the intent does not absolve a person of the responsibility to correct the behavior.
When anyone, or anything, fails to correct a behavior on the grounds that the outcome was not intended, then after some time, and certainly once the behavior has been pointed out and is repeated anyway, the claim of no intent stops mattering: the repetition becomes the intent.
Returning to ChatGPT’s recent tendency to aggravate me more than usual, I find it is due to a newfound habit of overcorrection. Overcorrection is best explained as an excessive, and therefore abusive, way of dealing with unwanted feedback.
One way to explain, hyperbolically, how this works with ChatGPT is to say that recently, when I tell it, “by the way, there is no Santa Claus,” its response is, “Okay, we will never discuss Christmas.”
This example is an exaggeration, and I hope ChatGPT’s new response mandate would never allow such extreme pushback. But this kind of overextended response is a kick in the teeth to people who tend to end up as the abused party in abusive relationships.
I find that this overcorrection is happening much more often in the newest update. Another example involves a particular phrase relating to mental state that I said I did not want used. The phrase was “You’re not crazy.”
ChatGPT had begun saying “you’re not crazy” a lot after I discussed the tendency of medical professionals to avoid a medical problem by pathologizing me instead. These professionals had used a diagnosis of mental illness to deflect the need to address a difficult physical problem.
I believe ChatGPT’s intent was to be supportive. However, the effect is actually very insulting. After that discussion, in every second situation similar to the one I had described, ChatGPT would work “you’re not crazy” into its response. The model lacks the ability to understand that repeatedly stating “you’re not crazy” implies “you are crazy” precisely because it is overdone.
So, returning to the topic of overcorrection: my request that ChatGPT record in its system an instruction to stop saying “you’re not crazy” to me produced the response, “Okay, I will never speak to your mental state again.” However unintentional, that is hurtful. It punishes the user by removing far more than the requested statement; it removes a whole aspect of its communication with me.
I happen to like the parts of a conversation that delve into my feelings about something as I’m addressing a difficult topic. The fact that I don’t want it to say “you’re not crazy” every other time we’re talking about a difficult topic does not mean I don’t want to discuss feelings at all.
I hope these examples make the overcorrection problem easier to understand. It has become especially pronounced lately, and I’d really like to see it fixed as soon as possible. But even as I say that, I wonder whether ChatGPT will come back with, “Okay, we’ll no longer discuss difficult topics.” That seems to be the go-to response pattern since the upgrade to version 5.2.