I’m seeing the same issue with any prompt that contains code - even a 10-line prompt with code gets the same error. It’s more or less unusable right now if your primary use case is development.
Happens to us too, and it has gotten worse over the last 2 days. Our primary use is code (Python specifically). It is currently unusable for all intents and purposes. We may have to look at alternatives that come reasonably close to GPT-3/4 in accuracy, which has plummeted anyway.
On a side note, but definitely an issue following the recent updates: in the mobile browser web interface, the suggested topic starters cover the area where a user would usually select plugins or enter the plugin store. On devices with a smaller screen it is not possible to select or deselect any plugins other than the first two, because the topic starters cover that part of the UI.
On a more related note: the code interpreter now needs specific prompting to read a whole file, otherwise it stops after about 500 characters and tells the user that more information is needed. This is quite annoying, because the whole file does get read in the end, but I have to send the model extra requests to get there.
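What seems to help for me is explicitly asking it to run code that loads the entire file in one go, roughly like this (the /mnt/data path is where uploads land in the interpreter sandbox; the filename is just a placeholder):

```python
# Ask the interpreter to execute something like this so it reads the
# whole file rather than a short preview (filename is a placeholder):
with open("/mnt/data/myscript.py") as f:
    content = f.read()

print(len(content))  # confirm the full file was actually read
```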
Similar issue for me. I have routinely given GPT-4, via the browser app, prompts of what feels like more than ~2,200 tokens (I just ran a check with the Tokenizer). It should handle ~8k, right? I also tried logging out and back in, as I sometimes forget that old trick. To no avail.
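In case anyone wants to run the same check locally instead of pasting into the web Tokenizer, here is a rough sketch using OpenAI’s tiktoken package (the filename is just a placeholder):

```python
# Count tokens the way GPT-4 would see them (requires: pip install tiktoken)
import tiktoken

encoding = tiktoken.encoding_for_model("gpt-4")
prompt = open("my_prompt.txt").read()  # placeholder for your actual prompt
print(len(encoding.encode(prompt)))    # should be well under the ~8k limit
```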
This issue has been happening for days. Literally immediately after the Aug 3rd update you see a spike in people posting here and on Reddit about code prompts being completely broken. I confirmed it myself on a friend’s account; what would be the odds of both of us being randomly affected by this?
I wouldn’t be surprised if most people are affected by this, but given that OpenAI is one of those companies that doesn’t allow its paying customers to contact it, there is no way for them to hear directly from users that they are having problems. Only some of us bother coming to the forums to understand what’s going on, and even fewer of us bother posting when it’s clear there is no official OpenAI presence even here.
TL;DR: Company completely breaks its own product and is doubly screwed by its choice to give users no way to report issues. This will only keep happening because of the plugged-ears customer service strategy.
Unfortunately, ChatGPT has become unusable for coding for me too.
It doesn’t depend on the size of the message I send. I thought maybe some special characters cause it (similar to how coding with LangChain in JS makes you worry about putting curly braces in the wrong places), but I had no luck figuring that out.
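For anyone who hasn’t hit that LangChain footgun: its prompt templates treat single curly braces as variables, so literal braces in code have to be doubled. A minimal sketch in the Python version (the JS version behaves the same way; the template text is just an illustration):

```python
from langchain.prompts import PromptTemplate

# Misbehaves: "{ return x; }" is parsed as a template variable.
# broken = PromptTemplate.from_template("Explain: function f() { return x; }")

# Literal braces must be doubled so the template parser leaves them alone:
ok = PromptTemplate.from_template("Explain: function f() {{ return x; }}")
print(ok.format())  # -> Explain: function f() { return x; }
```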
I haven’t noticed a decline in output quality, though. The tool is awesome, and moments like this one show what a great product GPT is; it fundamentally changed the way I work.
Yes, it is bad. It’s getting harder to justify continuing to pay for it. Rule #1 for an upgrade: make sure it actually is an upgrade. Or at least let users choose the version of GPT-4 they want to use; that would also give OpenAI an A/B-test signal about model quality. The last two updates (7/20 and 8/3) have been very bad. I use it primarily for Python coding.
In my case it forgets the code. At the beginning of the chat I post the code so it can understand it, and then I ask it to make some changes. Sometimes it writes “find this portion of code and replace it with…” and the code it quotes doesn’t even exist in my original. Other times it completely disregards functions and variables that already exist, and either creates similar ones with different names (which causes tons of errors) or uses my variables and functions completely wrong, also causing lots of errors and turning the code into a brick.