Observation: text editing capabilities have changed for the worse

I am wondering if anyone else has experienced the same issue or phenomenon in recent days. The editing capabilities of ChatGPT-4 have changed for the worse.

I am working on my master's thesis (about generative AI) and have been using ChatGPT-4 to help me refine the text I have written (this is allowed by our university as long as I am transparent about the practices used).

The change I noticed is that even when I explicitly prompt that the GPT should not do something (such as interchange words meaninglessly), it still does it. It uses a lot of academic jargon (which is normal), but when nudged towards clarity the editing goes all over the place, for lack of a better explanation: it starts to change the meaning of the paragraph being edited, along with other radical behaviour.

I have not observed this radical behaviour before.

So far I have tried starting from scratch (I use the same starting instructions to give it context every time), I have turned the custom instructions off, and I have tried different ways of prompting it, such as lists of dos and don'ts.

I feel its edits are a lot more radical and erratic than before. This is not desirable behaviour.

Now I want to ask: has anyone else noticed similar issues, or is this just subjective "bad luck"? I doubt that it is, though.

I appreciate your time.


Welcome to the club. GPT-4 was at its best in March. Every consecutive release has been worse, and the latest one, from a few days ago, is the bottom. It's almost useless.