ChatGPT issues After Aug 3rd Update

It appears that ChatGPT has difficulty processing larger pieces of code after the Aug 3rd update.
When inputs exceed roughly 100 lines of code, ChatGPT gives up with the error:

Something went wrong. If this issue persists, please contact us through our help center at

There have been numerous discussions about this issue across various platforms, including a thread on this forum.

Similarly, a Reddit post delves into the same matter.


Hi, this looks like a temporary issue; I've seen reports of problems with code a few times over the past 24 hours.


I’m seeing the same issue with any prompt that contains code; even a 10-line prompt containing code gets the same error. It’s essentially unusable right now if your primary use case is development.

Happens to us too, and it has gotten worse over the last two days. Our primary use is code (Python specifically). It is currently unusable for all intents and purposes. We may have to look at alternatives that come close to GPT-3/4 in terms of accuracy, which has plummeted anyway.

On a side note, but definitely an issue following the recent updates: in the mobile browser web interface, the suggested topic starters cover the area where a user would normally select plugins or enter the plugin store. On devices with a smaller screen, it is not possible to select or deselect any plugins other than the first two, because the topic starters cover that part of the UI.

On a more related note: the code interpreter now needs specific prompting to read a whole file; otherwise it stops after about 500 characters and informs the user that more information is needed. This is quite annoying, because the whole file ultimately gets read anyway, but I need to make extra requests to the model to do so.
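One workaround I've had some luck with (an assumption based on my own prompting, not an official feature) is to ask the interpreter to execute an explicit full read of the uploaded file rather than relying on its truncated preview. Something like this, where the `/mnt/data/upload.py` path is a hypothetical upload location:

```python
from pathlib import Path

# Hypothetical path of an uploaded file in the interpreter sandbox;
# substitute the name of your actual upload.
path = Path("/mnt/data/upload.py")

if path.exists():
    contents = path.read_text()
    # Reading and printing the whole file in one pass avoids the
    # interpreter stopping after its default preview.
    print(f"Read {len(contents)} characters")
    print(contents)
```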

Similar issue for me. I have routinely given GPT-4, via the browser app, what feels like more than ~2,200 tokens (I just ran a check on the Tokenizer). It should handle ~8k, right? I also tried logging out and in again, as I sometimes forget that old trick. To no avail.
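For a quick sanity check of prompt size without opening the web Tokenizer, the common rule of thumb of roughly 4 characters per token for English text can be sketched in a few lines. This is only a heuristic of mine, not an exact count; for exact numbers against the model's context window you'd want a real tokenizer such as tiktoken:

```python
def estimate_tokens(text: str) -> int:
    """Very rough token estimate: ~4 characters per token for English.

    Heuristic only; use an actual tokenizer (e.g. tiktoken) for exact
    counts against the model's context window.
    """
    return max(1, len(text) // 4)

prompt = "def add(a, b):\n    return a + b\n" * 100
print(estimate_tokens(prompt))  # well under GPT-4's ~8k context
```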

It has become much worse since August 3rd, it’s hardly usable for reasoning about coding beyond a few lines. I use it mainly for Python, PostgreSQL, and JavaScript.

But even for other things, I notice a clear decline. It’s like being thrown back a year in time.


This issue has been happening for days; literally immediately after the Aug 3rd update you see a spike in people posting here and on Reddit about code prompts being completely broken. I confirmed this myself with a friend’s account. What would be the odds of us both being randomly affected by this?

I wouldn’t be surprised if most people are affected by this, but given that OpenAI is one of those companies that doesn’t allow its paying customers to contact it, there is no way for them to hear directly from users that they are having problems. Only some of us bother coming to the forums to understand what’s going on, and even fewer of us bother posting when it’s clear there is no official OpenAI presence even here.

TL;DR: The company completely broke its own product and is doubly hurt by its choice to give users no way to report issues. This will keep happening because of the plugged-ears customer service strategy.


Unfortunately ChatGPT became unusable for coding for me too.

It doesn’t seem to depend on the size of the message I send. I thought maybe some special symbols cause it (e.g. similar to how coding with LangChain in JS makes you worry about using curly braces in the wrong places), but I had no luck confirming that.
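On the curly-brace suspicion: templating layers like LangChain's prompt templates use format-style substitution, so literal braces in embedded code have to be escaped by doubling them. A minimal sketch of that underlying mechanism using plain `str.format` (my illustration, not LangChain's API):

```python
# str.format treats single braces as placeholders, so literal braces
# in embedded code must be doubled to survive rendering.
template = "Explain this JS snippet:\nif (x) {{ doThing(); }}"
rendered = template.format()
print(rendered)  # braces come out single: if (x) { doThing(); }

# An unescaped brace raises instead of rendering:
try:
    "if (x) { doThing(); }".format()
except (KeyError, ValueError, IndexError) as exc:
    print("format error:", exc)
```

Whether this is actually what trips up ChatGPT's own pipeline is speculation, but it is why curly braces are a reasonable first suspect.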

I haven’t noticed a decline in output quality, though. The tool is awesome, and moments like this one show what a great product GPT is; it fundamentally changed the way I work.

Hopefully it gets fixed soon. Cheers!


Yes, it is bad, and it’s getting harder to justify continuing to pay for it. Rule #1 for an upgrade: make sure it is actually an upgrade. Or at least let users choose the version of GPT-4 they want to use; that would give OpenAI an A/B-test signal about their model quality. The last two updates (7/20 and 8/3) have been very bad. I use it primarily for Python coding.


I would suggest Copilot with the Chat functionality enabled. It costs $10 per month, but it is really much better than the GPT-4 web UI.

I found the same issue with pasting code into the standard prompt, but you can upload a bunch of code files in Code Interpreter and it handles them just fine.

Using Firefox fixes this issue for me on macOS. Not sure why it doesn’t play well with Chromium anymore.

In my case it forgets the code. At the beginning of the chat I post the code so it can understand it, and then I ask it to make some changes. Sometimes it writes “find this portion of code and replace it with” and that code doesn’t even exist in my original code. Other times it completely disregards functions and variables that already exist: either it creates similar ones with different names (which causes tons of errors), or it uses my variables and functions completely wrong, also causing lots of errors and breaking the code.
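One defensive habit against the “find this portion of code and replace it with …” failure mode is to check, before editing, that the quoted portion actually exists in your file. A small sketch of the idea (`safe_replace` is a hypothetical helper name of mine, not anything the model provides):

```python
def safe_replace(source: str, old: str, new: str) -> str:
    """Replace `old` with `new`, but refuse if `old` is absent.

    This catches the case where the model quotes code that was
    never in the original file.
    """
    if old not in source:
        raise ValueError("suggested snippet not found in original code")
    return source.replace(old, new)

code = "def greet():\n    print('hi')\n"
code = safe_replace(code, "print('hi')", "print('hello')")
print(code)
```

A failed lookup is then a loud error rather than a silently mangled file, which also flags the renamed-variable problem early.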


I also use Python, and those two updates (7/20 and 8/3) turned out to be really bad…

It is true that large code just doesn’t work well: the more complex it is, the worse it gets.

Sticking with short snippets seems like the best way to handle this, along with being persistent about what you want and asking detailed questions about the code being offered.

There seems to be a point of no return where the offered code starts breaking everything and the bot totally loses it; the best thing to do then is start from scratch.

Bing has become useless, so I only use OpenAI now. It never gives up working with you to get your code running, as long as you are persistent with your questions and the details of your ask.