Back to the Worse, Shorter Responses Already

In the last couple of days, I was getting some of the best outputs out of ChatGPT-4 since I started using it last year. I even made a thread about it, which was a rare positive one from me. Just by doubling the output length (which was actually the length it used to be back in the first few months of its release), I was getting much more detailed and creative outputs, and it followed my instructions a lot more closely.

Of course, I always had a feeling it wouldn’t last, because they’ve done things like this before, but I thought it would last at least a little while longer. OpenAI probably realized most people didn’t even notice the change, but for those of us who specifically ask for long, descriptive responses, it made a huge difference. Allowing longer (and, therefore, better) responses for a couple of days has only served to make it clearer how much worse ChatGPT-4 has been since late last summer.

I have no idea why any company would want people to be exposed to a worse version of their product. They’re charging a relatively high amount of money for a watered-down version with frequent errors, and it often won’t even let us get to the promised 40 generations every 3 hours. It’s been over a year, and there’s been no net improvement. It’s always one step forward, two steps back. The “new, experimental technology” excuse no longer holds water.

And I know there are some people who like to show up in these threads and dismiss any claims of ChatGPT becoming worse, but it’s an undeniable fact that a 2048-token response is going to describe things in more detail than a 1024-token response.


I came here and made an account just to gripe about this. I prompt it specifically for as verbose a book report as possible for even the simplest questions, and they keep breaking it! It’s maddening! I strongly prefer it not to give me half-assed, one-paragraph answers for anything. I pay for this service. Let me prompt it how I want.


It seems they only care that it’s good enough for 1-2 line answers that you can Google, and for generic customer service crap.


I too have just finished an analytical report that I have done time and time again for the better part of a year, and today it was literally just making up sources, giving me broken links, and it couldn’t even be bothered to meet a word-count minimum.

It got stuck trying to rewrite a 120-word passage and then locked me out for three hours after it rewrote the same paragraph several times. What the actual EFF


It looks like the longer responses may have just been a test for ChatGPT-4o, as the writing seems similar.

It’s good that they’ve made it an option so people can choose which model suits them best, but it would be great if we were able to choose the model we want for custom GPTs as well. I’m not sure which model is actually “smarter,” but the longer responses are useful for my specific GPT, and I’ll have to test it more to see if it’s lacking in any particular area of knowledge in comparison.

In the meantime, I’ll be using a prompt similar to my GPT’s instructions with the default 4o chat, even though it seemed to work a lot better when my custom GPT was running the model.

Edit: I managed to reword my GPT for use on the default chat that gives me decent enough results, but I still hope we’ll be able to use it with custom GPTs soon.