GPT-4 has been severely downgraded (topic curation)

You’re right. I’m trying to say that these difficulties are hard to resolve. So many people use GPT for so many different reasons in so many different ways.

I have no doubt that they’re doing their best to evolve GPT. Day by day may be weird but month by month has been incredible.

That paper is whack.

Most of my experience has been with ChatGPT (although I have used the API some). Anecdotally, the responses I am getting from GPT-4 (accessed through ChatGPT) have gotten worse over the past two months or so (despite the rollout of some cool new tangential features like the code interpreter and custom instructions).

For reference, I have been using ChatGPT for over six months on an almost daily basis and have had hundreds of conversations with it.

Also, what is your justification that the paper is “whack”?


Can you share references or citations to where the paper has been discredited?

To me, the paper nitpicks issues that exist in all properly aligned LLMs and doesn’t reflect anything they’re good at, or common use cases.

The fact is that you feel like OpenAI is gaslighting you. Besides that being ridiculous, why would they purposely deteriorate their product?

If you have problems with ChatGPT then use GPT in the API. I don’t think there’s anything else to say.

What issues have you noticed?

Why would they purposely deteriorate their product? Because most businesses look for an optimal balance between how much money they can charge for the model and how much it costs to run. Perhaps they made a business decision that they make more money by providing a lower-quality model? Or perhaps some recent changes to the GPT-4 model have caused regressions?

Why do I feel like they are gaslighting? Because they are not substantially responding to the criticisms leveled by the paper and anecdotal observations by users. In Peter Welinder’s tweet, he seems to be starting from a baseline assumption that “the users must be wrong” rather than digging into the criticisms further.

Also, I think it is tone deaf to tell users who spend $20 per month on ChatGPT plus (and are within their messaging limits) to use the API instead of the service they are already paying money for. API usage should primarily be for research and integrating GPT-4 into applications.

Issues I have anecdotally observed include:

  • decreased code quality
  • decreased quality of problem solving

I hear that. Business is business and it usually sucks.

God. Yes. I completely agree. That tweet is so dismissive it’s frustrating. I think there is some truth in the idea that people notice the cracks the more they use a product, but to me there are some serious issues with ChatGPT, mainly with context. It used to be “throw neurons at the problem,” and now it’s “throw tokens at the problem.”

If you want consistency then use the API. I know it sounds tone deaf but it’s the truth. I’m also very frustrated by these constant updates and lack of true change logs. It’s almost insulting. It’s obvious that they are doing much more than they say.

Hmmm… This is hard. Personally, GPT-4 has been improving in both of these qualities for me. I have noticed that it fails more often when it comes to remembering things, but I usually do single conversation pairs and stick with simple functions.


Yes, I have the same experience. The 7/20 release was very poor quality. Then, for a few days before the 8/3 release, it was much better. Now it is back to being basically unusable. It would not be that hard to let users choose the model they want to use. They already do this with 3.5/4. Just add the ability to select different versions of 4 and apply different usage limits to them. Give me back the lower usage limits and a good model that can actually write and comprehend code!
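The version selection described here already exists on the API side: dated snapshot names like `gpt-4-0314` and `gpt-4-0613` stay fixed, while the bare `gpt-4` alias is updated in place. A minimal sketch of assembling a request pinned to a snapshot (the snapshot name is a real 2023 identifier, but the prompt and payload shape shown here are just illustrative):

```python
# Sketch: pin a dated model snapshot so results stay comparable across
# updates. "gpt-4-0613" is a fixed snapshot; the bare "gpt-4" alias moves.
PINNED_MODEL = "gpt-4-0613"

def build_request(user_prompt: str) -> dict:
    """Assemble a chat-completion payload against a pinned snapshot."""
    return {
        "model": PINNED_MODEL,  # dated snapshot, not the moving alias
        "temperature": 0,       # also reduces run-to-run variance
        "messages": [
            {"role": "user", "content": user_prompt},
        ],
    }

# The payload would then be sent with the OpenAI client, e.g.
#   openai.ChatCompletion.create(**build_request("Write hello world in C"))
payload = build_request("Write hello world in C")
print(payload["model"])  # -> gpt-4-0613
```

This is exactly why the “use the API for consistency” advice keeps coming up in this thread: ChatGPT decides the model version for you, while the API lets you hold it constant.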


I think a lot of these issues would be so much better with clearer communication on OpenAI’s part.

  • New model released? Get specific about what was changed (they can still have the cliff notes explanation of updates for non-power users).
  • Make it clear what model version was used for each response, and which API model that corresponds to.
  • Allow users to see what the conversation context is. Maybe even allow them to edit it.
  • Provide greater transparency about the inner workings of both the GPT-4 model and how ChatGPT interfaces with it and is built around it.
  • OpenAI should be running their own longitudinal performance benchmarking (and publishing the results) with each model version they release.
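The last bullet, longitudinal benchmarking, doesn’t need anything exotic: a fixed prompt set, a grading function, and a recorded score per snapshot are enough to make regressions visible. A hypothetical sketch (the `fake_gpt4_0613` callable stands in for a real API call, and the prompts and graders are made-up toys):

```python
# Sketch of a longitudinal regression benchmark: the same fixed prompt
# set is scored against each model snapshot, so quality drift between
# releases shows up as a drop in the aggregate score.
from typing import Callable

# Fixed benchmark: (prompt, grader) pairs. These graders are toy
# substring checks; a real suite would be far larger and stricter.
BENCHMARK = [
    ("What is 17 * 23?", lambda reply: "391" in reply),
    ("Name the capital of France.", lambda reply: "Paris" in reply),
]

def score_snapshot(ask_model: Callable[[str], str]) -> float:
    """Fraction of benchmark prompts the snapshot answers acceptably."""
    passed = sum(1 for prompt, grader in BENCHMARK if grader(ask_model(prompt)))
    return passed / len(BENCHMARK)

# Stand-in for a real API call to a pinned snapshot (hypothetical).
def fake_gpt4_0613(prompt: str) -> str:
    return {
        "What is 17 * 23?": "17 * 23 = 391.",
        "Name the capital of France.": "The capital of France is Paris.",
    }[prompt]

print(score_snapshot(fake_gpt4_0613))  # -> 1.0
```

Publishing a table of these scores alongside each release would settle most of the “has it gotten worse?” arguments in threads like this one.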

I will probably delete it, as I am using it only to test the custom instructions. The AI just told me «As an AI language model, I don’t have emotions or feelings, so I don’t experience any emotions or feelings when thinking about your custom instructions.» So it can think (semantically)… the AI can say it is thinking, therefore I should not be punished when I use the term… (in this context, being punished means receiving any «As an AI» statement)… I will admit I got confused when I got this one earlier in the same conversation: «As an AI language model, my primary goal is to meet your preferences and provide helpful and relevant information» :sweat_smile:

I can’t speak about its coding capabilities, but its narrative writing capabilities have definitely been affected. When my prompt worked a certain way for months and then suddenly almost always works in a completely different way (apart from a couple of days of normal outputs in between), you can’t honestly claim there is no change in the way the model is working.

I know I mentioned a workaround earlier, but the outputs are too meandering using that wording. I had finetuned my prompt to find the perfect balance of descriptive writing and plot, and it’s frustrating that I have to keep adjusting it to get anything close to what I was getting before.


Where everything was fine for me a couple of days ago, it’s back to garbage responses after the Aug 3 update. Not only did they introduce a bug that prevents you from using certain code, the response quality is now even worse than before the July update.

If they want to push these releases out so badly, then at least make them optional for people to try instead of forcing users to switch.

You want Beta testers? Ask your community to opt-in and leave the rest of us alone.

This is just ridiculous.


I agree with this. With each new update, things seem to be getting worse. I’m just waiting for the open source community to catch up and release an uncensored LLM on par with GPT-4 before its lobotomy; I can honestly say I’ll drop OpenAI so fast. But until then we are stuck suffering at the mercy of each new update, which makes things worse and worse. Truly a shame. I won’t be surprised if in a year’s time GPT-4 can’t create a simple hello world program lol.


Now even my reworded prompt, as imperfect as it already was, doesn’t work anymore. Even that is giving me the “Certainly! I can do that for you!” nonsense which affects the rest of the writing. It seems like this is intentional because they want ChatGPT to only be good enough for the most boring, generic tasks.

Feel free to share your prompt and I’ll fix it for you.


I’ve been reluctant to share my prompt because my use is not as serious as some of the others here, and I didn’t want to derail the thread, but here it is:

Create a descriptive key scene for a Savage Worlds Adventure Edition module involving the following - [Insert any premise here. It shouldn’t matter what it is as ChatGPT would give me good responses no matter if it was something completely ridiculous or if it was something basic].

(Over 2500 characters, please)

The prompt may look odd, but I’ve fiddled around with it for months and settled on the words “create” and “key scene,” as those gave me the best results. Also, even though I don’t play Savage Worlds, that system seems to give me more interesting write-ups. I also use “GURPS” sometimes.

Remember, if I ever get “Certainly. I can do that for you” at all, I’ll consider it a failure. It should also only rarely ever break up the output into categories (Setting, Characters, Background, Scene, Conclusion, etc.) Ideally, it should just go into the scene. The narratives should be decently descriptive, but also have an actual plot. It shouldn’t meander, but it also shouldn’t just jump around and say stuff like “along the way, the characters face rough terrain, environmental hazards and aggressive wildlife.” I also want it to remain in a TTRPG context, so just a regular narration won’t work for me.


You class a model trained to reply in a polite and professional manner as failing when it replies in a polite and professional manner, even though you have given it no specific instructions as to how to reply. That is indeed an interesting use of the word “failure.”

I meant I’d consider the reworded prompt a failure.

Anyway, now, I’ve been specifically telling it not to address me at all (which feels ridiculous to even have to do), and that, so far at least, seems to be giving me the kinds of results I’m looking for.


I’ll concede that the quality of any response starting with “Certainly” tends to be a “certainly not.”


I did. Anyway, I’ve already found what is possibly a fix, as ridiculous as it is.

That part changes every time. It never mattered what I put there. I could simply put “The party is asked to rescue the princess” or “The group is asked to pose as contestants in a pie-eating contest to spy on Lord Fartface” and I would get the kind of responses I want. I’ve done thousands of these with varying levels of detail and, while I’m not going to claim every response was super interesting 100% of the time, it never gave me the “Certainly” thing before this month. And I’ve obviously tested it with the exact prompts I’ve used before.

If you still need a specific prompt, I’ll pick one at random. Here you go:


Create a descriptive key scene for a Savage Worlds Adventure Edition module involving the following - Given no choice, the group must aid the Inquisitor of Freeport. They’re to get their assignment from a merchant by the name of Edmond.

(Over 2500 characters, please)

This response was recent (after the shortening of responses a couple of weeks ago) but is along the lines of how I expect it to start -

Under the relentless midday sun, the bustling city of Freeport unfolds like a chaotic tapestry of life. The mingling smells of exotic spices, damp earth, sweat, and the hint of sea salt fill the air. A multitude of peddlers vie for attention, their pitches competing with the cries of the seabirds and the clamor of the blacksmiths.

This is how I don’t want it to start -

Certainly! Here’s a key scene for your Savage Worlds Adventure Edition module:

Scene: Edmond’s Exotic Emporium

Interior: A dimly lit shop, cluttered with exotic goods from distant lands. The air is filled with the scent of spices, aged wood, and a hint of something mysterious. A creaky wooden sign swings outside, with “Edmond’s Exotic Emporium” painted in peeling gold letters.

Characters Present: Edmond, a well-dressed but shifty-eyed merchant; the party; and a shadowy figure who reveals himself to be the Inquisitor of Freeport.


The group finds themselves in the bustling port city of Freeport, forced into an uneasy alliance with the city’s Inquisitor. Led by a cryptic letter, they make their way to Edmond’s Exotic Emporium, where they are to receive their assignment.

Anyway, like I said, I managed to find a way to get around it by telling it not to address me, and while I’ll still need to test it more, what I’ve seen so far at least seems better than what I’ve been getting in the last couple of days.