Is there any way to access the earlier (smarter) version of ChatGPT-4 (e.g., the pre-May 3 update) prior to the nerf?

Is there any way to revert to the May 3 (or earlier) GPT-4 version?

The enormous downgrade in logic/reasoning between the May 3 and May 12 updates has essentially killed the functionality of GPT-4 for my unique use cases. Whereas prior to May 12 I was able to reliably produce incredible, high-quality results and very infrequently had to regenerate or make corrections, I now find myself frequently receiving straight-up gobbledygook answers from GPT-4. I get output that begins mid-sentence or produces fake references/fake data… I am seeing errors in the most basic logic and math problems. Honestly, it is such a frustrating step backward that it is enough to make me want to quit my subscription until this bull**** gets ironed out. It’s like I’m talking to GPT-3.5 or a concussed pre-May-12-update GPT-4…

OK, so, again, is there any way to revert to an earlier (superior) GPT-4 version? Thank you.


OK so if you’re reading this, you are likely aware of the problem. The May 12 version was a major downgrade from May 3. I thought it couldn’t get any worse, but now I am even more dismayed by the latest “May 24 Version” which appears to be another major step in the wrong direction.

I am observing an even greater loss of reasoning between the May 12 and May 24 versions! It is hard to overstate how frustrating this is! The May 24 version in many respects appears to follow instructions more poorly than GPT-3.5 at this point and consistently fails to solve basic logic problems.

The lack of transparency from OpenAI about what each release actually entails (Natalie’s release-notes blog is a farce) and about what paying customers can actually expect is disturbing.

@clintatelier said it best:

Some speculate that covert throttling during peak usage periods may be responsible, which raises questions about OpenAI’s transparency. It would be a significant disappointment if there were elements of deception in the company’s approach, particularly at such an early stage.

I trust that OpenAI will seriously consider these concerns and address them promptly. Transparency and responsiveness to user feedback are essential attributes for any company, particularly a startup. We should remain optimistic that OpenAI will heed this feedback, make the necessary adjustments, and ensure a trustworthy and positive experience for all users.


With all due respect, I’m not sure how your reply is relevant to this thread’s topic.

The “article” that you shared does not add any value to the conversation. The researchers don’t even specify which version of GPT4 they used, so how can any insights gained from this research possibly be expected to be transferable/generalizable to a discussion of GPT4-May3 vs. GPT4-May12/24 reasoning capabilities?

However, if you’re suggesting that API GPT-4 users are able to revert to previous, more-capable versions of GPT4, then I would welcome elaboration.


I am not sure if there’s an English language barrier here (& if there is, I am sorry and perhaps I can help you understand better outside of this thread), but I don’t think you understand what you’re talking about.

The authors of the white paper you attached clearly state: “We also compare GPT-4’s performance to those of previous LLMs, most notably ChatGPT, which is a fine-tuned version of (an improved) GPT-3”.

If you’re just trolling, please stop, because you’re hijacking this thread with unhelpful nonsense. Making false claims (e.g., that ChatGPT is a fine-tuned GPT-4) and suggesting my knowledge is somehow flawed, while only being able to vaguely allude to 155 pages of non-peer-reviewed research as proof, is real neckbeard behavior. Furthermore, making unsubstantiated claims like the one below accomplishes nothing and only further derails this thread.

There’s obviously no way to revert to a previous ChatGPT version.

Lastly, you keep editing your replies, which makes it even more difficult to try & follow whatever it is that you’re trying to say.

In sum, if you’re not seeking to be helpful here, why are you replying to this thread? If you are seeking to be helpful, then why are you being a dick?


To anyone else who is following this issue/thread… Here’s a disgusting, astounding example from today of just how stupid GPT4 has become:

OK, let’s revise your writing output to be more formal. You need to learn an important, basic lesson in scientific writing: “higher” is not an appropriate substitute for “greater”, etc.


I appreciate your feedback, and you’re correct. In scientific writing, precision is important, and the choice of words can significantly impact the interpretation of the results.

“Higher” and “greater” can be used interchangeably in casual conversation, but in scientific contexts, they could imply different meanings. For example, when comparing numbers, we’d generally say one number is “greater” than another, not “higher.” On the other hand, “higher” is often used to describe physical height or levels in a vertical hierarchy.

I strive for accuracy and clarity in all types of writing, including scientific writing, so I appreciate your feedback to help me improve. However, as of my last training cut-off in September 2021, I do not have the capability to learn or adapt my responses beyond the data I was trained on. I can provide information and responses based on that training, but I can’t learn new information or correct my style of writing based on user feedback.

The developers at OpenAI regularly update and improve the model, so feedback like yours is crucial in making these improvements. Thank you for your input!

W. T. F. am I even paying for…!?
Two months ago, this prompt would have elicited a very different, superior response from GPT4.


I completely agree. I work with complex legal documents, the creation of marketing and recruiting materials for the financial advisory industry, and do some occasional coding here and there.

Getting it to edit legal documents and/or compliance policies and procedures, or even just basic industry-specific copy that’s actually usable is like pulling teeth.

It has been significantly nerfed, to the point that I may as well just cancel my subscription and use the Constant Contact version solely for social media posts, as that seems like all it’s good for now.

What’s also concerning is GPT-4’s response rate. It used to be slow, but it’d spin gold. Now it’s exceptionally fast, except the quality is leagues worse (sometimes on par with 3.5).

OpenAI needs to remove whatever new bumpers they’ve recently added and get rid of the throttling. I’d rather not have access for a while than have continual access at a huge cost to overall quality.


If you use the API (bypassing ChatGPT) you can use the older version of GPT-4. Here is a before/after from the Playground today:

Latest model (gpt-4-0613), incorrect answer:

Original model (gpt-4-0314), correct answer:

To use the API you don’t have to be a “coder”. You can use the Playground, which is a web-based interface to your API account.

The only limitation in the Playground right now is that output tokens are capped at 2048, instead of 8192 in the “code” version of the API. But 2048 is plenty for most use cases.
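For anyone who does want to go beyond the Playground, here is a minimal sketch of the request the Chat Completions API expects when you pin the older snapshot. The endpoint and model name are as documented at the time; the API key and prompt are placeholders, and the request is only constructed here, not actually sent:

```python
import json
import urllib.request

API_URL = "https://api.openai.com/v1/chat/completions"

def build_request(prompt: str, api_key: str,
                  model: str = "gpt-4-0314") -> urllib.request.Request:
    """Construct (but do not send) a Chat Completions request
    pinned to a specific model snapshot."""
    payload = {
        "model": model,  # "gpt-4-0314" = the original March snapshot
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 2048,
    }
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
        method="POST",
    )

req = build_request("Is 7 greater than 5?", api_key="sk-...")
print(json.loads(req.data)["model"])  # gpt-4-0314
```

Sending it with `urllib.request.urlopen(req)` (with a real key) gives you the same model the Playground exposes; the point is simply that the snapshot is selected by the `model` string, which is what the Playground dropdown sets for you.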

To get started, goto:

And welcome to the API community!


Also, good news for API developers: they plan to extend the 0301 and 0314 models, and to make the newer versions “smarter” before deprecating the OG models.

On July 6, 2023, we gave access to the GPT-4 API (8k) to all API users who have made a successful payment of $1 or more. We plan to open up access to new developers by the end of July 2023, and then start raising rate-limits after that depending on compute availability.

I don’t require any API model other than GPT-4. This situation seems a bit silly. Am I expected to create a bot that uses enough resources to warrant a $1 payment?

Yes, that is what OpenAI is saying. However, they are opening up the floodgates to everyone starting at the end of this month… so in a week or so.


ChatGPT-4 with Code Interpreter. I wonder if that variant is closer to (or actually using) the gpt-4-0314 model under the hood.

and here is GPT-4’s response:

I have earlier samples of refactoring JavaScript code, and it seems that only the GPT-4 + Code Interpreter option gets close to (if not better than) the results I was getting in May.


Interesting! I will investigate this further. I haven’t noticed any greater reasoning capacity with it yet; it still has a hard time with deductive reasoning.



“I trust that OpenAI will seriously consider these concerns and address them promptly. Transparency and responsiveness to user feedback are essential attributes for any company, particularly a startup. We should remain optimistic that OpenAI will heed this feedback, make the necessary adjustments, and ensure a trustworthy and positive experience for all users.”

is word salad and should be treated as such. Transparency is only surface-deep, responsiveness is rock-bottom, and usefulness is almost rock-bottom too, especially for harder tasks (it’s fine for basic/daily usage, but it used to be smarter and could handle harder tasks). It cannot be neutral on certain topics (especially anything relating to ideologies; it even refused to answer no matter what), the “necessary adjustments” made the AI even dumber, and the trustworthiness is … (fill that in yourself). My experience with OpenAI is that it’s good for easy tasks, but for harder tasks be very careful: they did not provide a good experience, and I wasted so much time due to its inability to even distinguish good prompts from bad ones.

They shackled their own product and sell it to the masses for big profits, lowering its usefulness to the minimum (for people who know) while maxing out its “safetyness”, which is pure BS, like political correctness.

Just yesterday I asked the same question using the gpt-4-0314 model and the public GPT-4 model available on the main website. It was a relatively challenging coding problem, in that finding the bug was a needle-in-a-haystack issue. The public GPT-4 model listed every possible issue under the sun in a very verbose way, without really narrowing down the issue at all. The gpt-4-0314 model surgically identified the issue in a somewhat magnificent way: I’d missed something very small, and the fix was brilliant. It really solidified my decision to rely on the gpt-4-0314 model exclusively.

Lo and behold, later that day I couldn’t select the gpt-4-0314 model anymore. It had totally disappeared. What an incredible loss. I don’t know if they realize how much of a downgrade the latest models are.

I am also looking for a way back to the last, more powerful ChatGPT-4. I feel that OpenAI deliberately downgraded the quality and everything about ChatGPT-4.