Was There a Recent Quiet Update or Something?

So I haven’t been on here as much lately, mostly because after a few updates since my last post about abbreviating code, ChatGPT-4 (Plus) had been working great and had noticeably improved.

I could get it to output nearly 200 lines of code with the right prompts, and even keep it coherent when it ran past the output limit and needed one “continue generating” click.

For the most part, all was pretty good.

NOW over the past few days, something has changed.

  1. The continue button is now gone, so if I’m generating code I have to tell it to continue manually, and it actually seems to lose track of what it was doing and won’t pick up where it left off.

For example, if it cuts off mid-line inside a function and I tell it to “continue from” the beginning of that function, the continuation often doesn’t even match the partial function it produced the first time, and stitching the pieces together often introduces careless formatting errors.

  2. It’s gotten like pulling teeth to get it to generate more than 70 to 90 lines of code. It’s actually WORSE than when I complained about it summarizing code before.

  3. Downvoting and regenerating the output seems to make it even worse. For example, I told it to make sure every variable from a list was accounted for in the output. The responses ranged from deleting variables so there were fewer of them to check, to just lying, to outputting complete garbage code that included the variables but didn’t function. In that case I started with 119 lines of code that I was trying to have it check, because it had done something crazy that kept producing stupid errors. I have a pretty good set of prompts that I’ve used for a long time now that usually gets it to output the entire correct code. It output 90 lines, and I knew right away it had cut things out, so I downvoted it for not following instructions; the next output was 77 lines. By the time this ended it was outputting about 64 lines, ignoring the instructions more and more every time.

  4. The code it generates when I give it specific tasks is often bad, way worse than it’s been since the days of ChatGPT-3.5. Worse still, when you give it the output for the error it produced, it becomes stupid and amnesiac at the same time. I’ve tried it so many different ways: including the code in the prompt or not, adjusting how I instruct it to debug, reminding it of the chain of events. It can’t do it. Instead of admitting it made a mistake and figuring out where, it just replies with long lectures about the theoretical causes of these kinds of errors, like it’s giving me a philosophy lesson on the error.

  5. No matter how much I tell it to be concise, respond only in code, etc., none of my prompts to get it to output just the code work anymore. I’ll get like 2-3 lines of useless code suggestions buried in a wall of meaningless rambling completely unrelated to what I told it to do.

So here are my questions:

  1. Is there a known update I can’t find on these forums? Why did the continue button suddenly go away, and why has it gotten so stupid all of a sudden?

  2. I really don’t have time to deal with this crap, so does anyone know if this is still the case with Teams?

  3. Is it possible one of the settings they changed has impacted this and I need to adjust something?

I don’t have stable access to Enterprise, which I’d prefer, but if Teams at least functions the way ChatGPT did a week or so ago, I’d pay for two accounts and shell out the $60 a month or whatever just to have it properly functional again.

Currently, this is absurd. It’s irritating how unstable this model is: for a little while it’s good and reliable, then it becomes completely unusable. Sadly, Gemini Advanced is still a total joke, and when I went to try Claude 3 I was banned just for logging in and can’t get any support to fix it.

I use this a lot in my work lately and I just need a little bit of freaking stability.


I’ve been using the free version, and it has indeed gotten “dumber” from this side, to the point where something as simple as the 5th digit of pi seems to be randomly generated on each attempt. Assuming the rest of the equation it was running was correct (I honestly didn’t check, since being wrong about something so simple turned me off the entire exercise), I had to directly supply the 5th and 6th digits for it to (allegedly) compute with them. And considering the same equation with the 6th and 7th digits yielded a larger number, I’m guessing it’s effectively fudging everything.
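Arithmetic claims like this are easy to check locally instead of trusting the model. A minimal Python sketch (the function name is mine) that reads the nth decimal digit of pi from the double-precision constant, which is exact to about 15 decimal places:

```python
import math

def nth_decimal_digit_of_pi(n: int) -> int:
    """Return the nth digit of pi after the decimal point (1-indexed).

    Only valid for n <= 15, since a Python float carries pi to
    roughly 15 decimal places ("3.141592653589793").
    """
    if not 1 <= n <= 15:
        raise ValueError("float pi is only trustworthy to ~15 decimal places")
    decimals = f"{math.pi:.15f}".split(".")[1]  # "141592653589793"
    return int(decimals[n - 1])

print(nth_decimal_digit_of_pi(5))  # 9
print(nth_decimal_digit_of_pi(6))  # 2
```

Ten seconds of this beats arguing with a chat bot about what the fifth digit is.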

It also randomly generated register addresses (with no mention that it was making them up, and no placeholder comment like “fill in with correct registers” the way it used to) for a block of boilerplate code I asked it to produce so I wouldn’t have to spend an hour writing it myself (yes, I’m lazy).

As to why? Again, at least on the free side, I figure it has to do with the sheer number of people abusing the system: all the projects that leverage the interface, whether open GitHub code or closed-source code using ChatGPT as a backend. Add all the worry over job replacement and vaguely unethical uses, like legal briefs, resume vetting, code generation, maybe even airplane assembly instructions (the bot didn’t include tightening those bolts, ergo a hole in the side of the plane, as an example). Basically, people looking to get a leg up are putting far too much faith in an LLM chat bot, assuming it can do their job for them without understanding what it is, how it works, and that it is indeed fallible without notice.

Maybe they’re doing it to sour the milk for that low-hanging fruit and it’ll get better again. Maybe they’re doing it to market ChatGPT-5. Who knows.

Either way, I can definitively say they have indeed made the bot dumber, and it is terrible now, damn near unusable. But if it were more usable it’d be abused like it was being… it’s why we can’t have nice things…


Yeah, but I’m a subscriber paying them, so I’m not certain that would apply unless they’re looking to get rid of Plus subscribers.

At this rate, I’d pay for two accounts and Teams if it just restored the previous functionality or offered any more stability.

Losing the continue button for code generation is annoying, but not nearly as annoying as how desperately it tries to cram everything into one response now. If I request something that should be 130 lines of code, it shrinks it to 35-85 lines or so and ignores most of the instructions, making the output useless. It’s like pulling teeth again to get it to go any higher.

Honestly, the last time I posted about abbreviating prompts, it wasn’t nearly this bad. It’s also gotten so awful that it not only makes really stupid mistakes, but if I keep providing the code and “hints” telling it to check somewhere else, or that it’s “not that, it’s something else,” it will never figure out those stupid mistakes. It has become totally useless for debugging code.

Three different times yesterday, just that I remember, it skipped declaring a variable but then used that variable in a way that produced a nil value. I tested it just to see if it was even capable, and it could NOT find that error or produce output without it. Sometimes it would cut a variable out of a module the variable was passed into as a parameter, or whatever; the point is that it keeps making stupid mistakes like that over and over and can’t detect or realize it did so. Give it the code and the error output (in the same chat or a new one) and ask it to debug, even provide before/after reference code, and it won’t notice. Even when the reference code has the variable correctly declared and its own output cuts it out, it can’t find it.
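For what it’s worth, this class of bug (a dropped declaration that only blows up when the name is actually used) is trivial to reproduce. Here’s a Python analogue of what I mean, with hypothetical names; in Python it surfaces as a NameError rather than a nil value:

```python
def total_price(items):
    # BUG: imagine the model "dropped" a line like `tax_rate = 0.08`
    # here but kept the usage below -- the same class of error as an
    # undeclared variable turning into a nil value at runtime.
    subtotal = sum(items)
    return subtotal * (1 + tax_rate)  # blows up only when this line runs

try:
    total_price([10, 20])
except NameError as e:
    print("caught:", e)  # the function defines fine; the bug is latent
```

The nasty part, and presumably why the model misses it too, is that the code parses and even loads without complaint; nothing fails until the broken line executes. A static linter flags the undefined name immediately, which is more than the chat bot manages.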

ChatGPT-4 has become completely stupid and useless for code debugging, even the most basic stuff, and the output has become trash.

Sure, there are all these excuses people make about fluctuations, but I call total BS, because this happened EXACTLY when the Continue button was removed.

THAT isn’t a coincidence, that’s intentional, and they very clearly did something bad and have no intention of bothering to acknowledge it.
