Where is the “Continue generating” button?!

Yesterday this button was there; today it’s gone. Give it back, I need it. I pay for GPT-4. Where is my button?! I tried different browsers, and it is nowhere to be found. How am I supposed to generate long texts now?

9 Likes

Welcome to the community!

Have you tried just telling the model to continue?

“That’s a great start! Please continue :)”

OpenAI keeps tweaking their ChatGPT UI, creating and sunsetting features all over the place.

I know that some models have a core issue that can sometimes cause unexpected behavior (the model repeating itself) when continuing a generation, so it’s possible they removed the button because of that.

It’s also possible that the button only shows up when the model gets cut off by a token limit. OpenAI seems to be tuning its models to end generation on their own before the 4k token limit is reached, so you may be running into that: the continue button never appears because, due to the model’s “laziness,” the limit is never hit.
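If you want to see that distinction under the hood, the API reports it explicitly. Here’s a minimal sketch, assuming the OpenAI Python SDK (v1.x); the model name and parameters are just illustrative:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

resp = client.chat.completions.create(
    model="gpt-4",  # illustrative model name
    messages=[{"role": "user", "content": "Write a long tutorial on sorting algorithms."}],
    max_tokens=256,  # deliberately small to force a cutoff
)

choice = resp.choices[0]
if choice.finish_reason == "length":
    # Hard cutoff at the token limit -- the case where a
    # "Continue generating" button would make sense.
    print("Truncated by the token limit; a continuation is needed.")
else:
    # "stop" means the model chose to end on its own ("laziness"),
    # so there is nothing to continue and no button would appear.
    print(f"Model finished on its own (finish_reason={choice.finish_reason}).")
```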

@Diet That is an option, but it’s very clunky. Continue generating gives a single-block response to copy/paste, and it feels smooth to simply click it while I’m reading and thinking about the first part of the response.

I’ve also noticed that if the response gets too long, each successive “continue” may lead to hallucinations or random repetitions, despite prompting it to “only continue if there is more information.”

4 Likes

I see the problem too. I don’t understand why OpenAI is doing this.

2 Likes

Same problem for me. It is far more exhausting to type “continue” again and again than to just press a button, and the hallucinations are a big problem. I hope they fix it quickly.

2 Likes

As usual, lazy implementation from OpenAI to cut corners here. Nothing to see here.

1 Like

Same here. I just have a translation that is longer than normal, and, well… it is stalling.

Same, super annoying! I’ve really started to consider trying other services…

1 Like

Same issue here; it’s been happening for the last two days.

I am feeling frustrated about the issue. I hope that it will be resolved soon.

1 Like

Enjoy OpenAI being “open” and inclusive of the “Continue generating” button, lol. It’s gone. Maybe ChatGPT 5 will bring it back as one of its “features.”

The coding ability seems to be much worse now too. Something has happened in the last 3-5 days, since those elevated API issues and the fix that took two days to fully roll out. The Continue generating button disappeared, coding is now 10x more difficult because it keeps hallucinating and randomly dropping code from its replies, and its general reasoning seems worse than before. Has anyone else noticed this too?

2 Likes

YES! I use Plus pretty much daily for work and mostly generate code. I would say code generation is probably 90%+ of what I do with ChatGPT.

Within that same window, the last 3-5 days, the continue button vanished for me too.

At the same time, other changes started to happen.

  1. It tries very hard to force the output to fit into one response; it does everything possible, even ignoring instructions, to fit it in neatly.

  2. If I ask it to generate something that would normally spill into multiple responses (and trigger the continue button), it condenses the output to fit the token limit. That means the more text it adds describing what it is doing, the less code comes with it.

  3. IF you can force it into a situation where it goes into multiple responses, which I’ve tested, there’s no consistency with the “continue”: it will not pick up where it cut off. I’ve had it cut off in the middle of a line of code inside a function, so I told it to continue from that point and provided the line of code that defines the function. When it output the function again, even that function didn’t match the code it produced before the cutoff. (A possible workaround is sketched after this list.)

  4. Even using data analysis, it can’t find even the most basic errors anymore. For example, it might output code containing an undeclared variable. No matter how many times I regenerate, it repeats that error over and over. If I modify my prompt with a snippet of the bad code and tell it not to do that, it still does it. If I take the output and tell it “hey, you made this error, you need to define this variable you seem to have made up,” or include the error messages, it apologizes and then outputs the same mistake again, seemingly unable to grasp the error or fix it at all.
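For what it’s worth, a workaround for the mismatch in point 3 is to feed the tail of the truncated output back as an anchor, ask the model to resume exactly after it, and then stitch the pieces together while dropping any overlap. A rough sketch in Python; the prompt wording and helper names are illustrative assumptions, not anything official:

```python
def continuation_prompt(truncated: str, anchor_lines: int = 5) -> str:
    """Build a follow-up prompt anchored to the exact cutoff point."""
    anchor = "\n".join(truncated.splitlines()[-anchor_lines:])
    return (
        "Your previous reply was cut off. Continue EXACTLY from the point "
        "after these lines, without repeating them and without restarting:\n\n"
        + anchor
    )

def stitch(first_part: str, continuation: str) -> str:
    """Join two fragments, dropping any overlap the model repeats anyway."""
    for k in range(min(len(first_part), len(continuation)), 0, -1):
        if first_part.endswith(continuation[:k]):
            return first_part + continuation[k:]
    return first_part + continuation
```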

For the last several months, ChatGPT had gotten better and I’d begun using it a lot more. It finally reached a point where I considered it reliable enough that paying for it was justified. Now, suddenly, out of nowhere, it has become virtually useless. It went from helping me increase my productivity by generating and organizing swaths of code, where maybe I’d fix a little here and there but it was ultimately providing utility, to being defiant, ridiculously dumb, and producing code on a level closer to 3.5 than any version of GPT-4 I’ve ever used.

I really hope they undo whatever they have done. I get that there are fluctuations and that OpenAI has to manage resources and demand, but geez, this is too much. Do they not understand that, on a bandwidth and resource level, if you have to generate 20 responses to get something you should have gotten in one or two, they aren’t conserving resources at all? They’re just pissing people off until they give up and stop using it.

I took a break the last time this happened; eventually they fixed it and it even improved. But this one-step-forward, two-steps-back crap makes it so inconsistent that I don’t know how anyone can reliably integrate it professionally and trust it.

Give me the option to pay more and have something stable, I’d do it. Give me like a professional-grade version without the instability and I’d pay $100 instead of $20. It’s not about the money, it’s about using something that I can trust. The fact that they make changes like this and have ZERO transparency about it tells me I can’t trust this company. If I felt like there was a viable alternative that was consistent, I’d leave, but I’m aware there really isn’t yet. Still, knowing there’s not currently a viable alternative that people can move on to is a REALLY crappy reason to be so underhanded to your customer base. Honestly, OpenAI’s business practices lately make me feel like I’m dealing with the AI equivalent of Comcast.

I usually know it’s time to give ChatGPT a break when it gets so frustrating that my prompts become abusive and I’m worried I’ll get in moderation trouble over becoming belligerent with it.

I think one of my last prompts yesterday was something along the lines of:
“Okay, at this point I don’t want your assistance anymore, so you can stop generating code or outputting what you think are solutions. Instead, I would prefer you explain to me why you believe you’ve become such a useless f***ing idiot, why you follow instructions like a…”

Anyway, I’m sure you get the point. It has become so infuriating that I don’t have to worry about reaching my prompt limits, because I have to step away from it pissed off so often I almost never get rate limited (I think it’s maybe happened twice in the last month or so).

3 Likes

There are quite a few tools better at code generation these days.

Have you tried any of the Custom GPTs that help with code? I’ve not personally, but I think I remember one or two.

Exactly. It’s a huge downgrade compared to before, on top of the missing Continue generating button. A real shame; I hope they fix it soon! Custom GPTs for coding don’t improve this issue at all either: it’s system-wide, and with the way it produces code now, I’m not able to use it. I’m hoping enough people complain for them to revert whatever changes they made. This is by far the weakest version of ChatGPT 4 I’ve seen since it came out.

2 Likes

You should’ve seen how GPT-2 behaved! Crazy days! :wink:

Yeah, it’s not perfect, but it’s good value for the money in my opinion.

There are a lot of specialized front-ends that you can use with the API too. As a writer, I’ve had to code my own toolset, but it’s way better than trying to get a chatbot (i.e., ChatGPT) to write a novel for me. I imagine it’s the same for coding.
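As an example, here’s a minimal sketch of the kind of auto-continue loop such a toolset can be built around, assuming the OpenAI Python SDK (v1.x); the model name and round limit are illustrative assumptions:

```python
from openai import OpenAI

client = OpenAI()

def generate_long(prompt: str, model: str = "gpt-4", max_rounds: int = 5) -> str:
    """Keep requesting continuations until the model stops on its own."""
    messages = [{"role": "user", "content": prompt}]
    parts = []
    for _ in range(max_rounds):
        resp = client.chat.completions.create(model=model, messages=messages)
        choice = resp.choices[0]
        parts.append(choice.message.content)
        if choice.finish_reason != "length":
            break  # finished naturally; nothing left to continue
        # Echo the partial answer back and ask the model to resume.
        messages.append({"role": "assistant", "content": choice.message.content})
        messages.append({"role": "user", "content": "Continue exactly where you stopped."})
    return "".join(parts)
```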

Have you looked at Copilot?

1 Like

I have, and they are generally garbage. I remember in my thread about these problems last time, the day they added custom GPTs, someone came in advertising their coding GPT. I tried them, and I’ve even written my own API model fine-tuned for coding.

With the API models, I’ve not seen, used, or been able to make one that, algorithmically speaking, performs like ChatGPT-4, which I suspect has a very good way of condensing relevant context in your prompts.

The custom GPTs fluctuate just the same as the base model. Keep in mind that once you’ve set custom instructions and role modifications, the only things beyond that you can do are fine-tuning or, in certain situations, prompt refinement. You can’t actually alter things like the GPU time allowed to process tasks or the server-side tuning for output and token limits.

When OpenAI makes these adjustments, they’re largely system-wide. All the custom GPTs go with it, and to a degree it seems to impact the API too, though slightly differently, because the API has its own tier and limit guidelines to give more consistent customers a more stable experience.

These changes are a refinement, most likely part of resource allocation, since they’ve added new services that are undoubtedly popular and using a lot of resources. Their systems and resources, while constantly growing, are finite, and when problems occur they tend to overreact and overcompensate at the expense of model stability. After all, better that the model gets more stupid at the complex tasks only power users are doing than have system-wide outages.

I understand why some of this stuff happens; I just wish they still engaged with the community and were transparent. There was a time when they cared what the community said and engaged. Now they are mostly silent outside of PR crap.

Whatever they dialed back, I’m sure they will eventually realize they went too far and overcorrected. When they physically get more resources in place and make some fine-tuning adjustments, I’m sure they’ll fix this.

I just wish I knew how long to expect it to suck for, or had a way to pay more for something more stable. It reminds me how much the AI space needs competition, which sadly isn’t there. Gemini Advanced blows hot chunks and has garbage token limits, and I can’t even try Claude 3 since I got banned just for logging in. If this is the competition OpenAI has, I suppose they really have no reason to feel like they should be honest or transparent with us; they know our “options.”

This problem isn’t really something that has to do with training or fine-tuning. It’s quite obviously a resource allocation issue. I tried the GPT-4 128k API, which I customized in my own app, and even that wasn’t performing as well as ChatGPT-4 Plus was a week or so ago. I tried my API setup earlier today and it has also been underwhelming. That could be impacted by my mid-tier rating, since I’m not a reseller and prefer ChatGPT-4 Plus to the API, when it works right.

Obviously, they can choose how they allocate resources, which means not all things have to be impacted the same. I doubt they nerfed enterprise clients.

That’s why I was asking about Teams: if I could pay $60/month and get the ChatGPT-4 of two weeks ago in quality, I’d do it in a heartbeat.

I’ve tried so many different playgrounds and paid AI APIs or multi-model systems. I’ve yet to be able to use one on par with ChatGPT-4 Plus running at good thresholds. It feels like right now the throttle is choking off about 60-70% of the model’s capability.

Copilot is a different kind of animal. You can’t just prompt it the way you can ChatGPT-4; it’s more of an in-line code assist. It’s good and I use it, but it’s a much different use case.

None of the options I’ve tried compare to ChatGPT-4 without the throttling choking it out. That’s why those of us who have grown to depend on it professionally come out of the woodwork and complain when they beat it too hard with the nerf bat.

I remember GPT-2; it was a different animal at a different time. I wouldn’t have paid for that.

IMO, when ChatGPT-4 is running without the choker on, no other model comes close. Someone told me to check out Claude 3, so I tried: I logged in, read the information on their Pro plan, went to log in to their app to give it a test run, and I was banned before my first prompt. So it’s not a suitable alternative for me.

I’ve done both code and writing; they aren’t comparable. Code is GPU-intensive, so when they take a fire hose to model performance, coders see the impact before anyone else. For natural writing, outside the confines of censorship and propaganda, I’d say Gemini is probably the more natural writer, but it’s also unstable due to insane moderation.

Sadly, when they do this to ChatGPT-4, for now at least, the only option for me is to take a break and wait for them to restore it.

I wouldn’t compare ChatGPT-4 to GPT-2, but I think the poster above is partially right.

This is the weakest tuning of ChatGPT-4 I’ve seen, but it still has better context than the older versions. I don’t even think 3.5 had the debugging problems this is having right now. It’s so stupid right now that it can’t even detect or fix something as basic as an undeclared variable, and THAT is pretty dumb.

1 Like

I’m using Claude 3 Opus at the moment. It’s a bit different and not as good as ChatGPT 4 was a week ago, but it’s still better than ChatGPT 4 currently is for coding, so I’ll be using it until they fix whatever they’ve done.

1 Like

I tried, but before I could send a single prompt I was banned by their garbage automated system. The thing is, I’m not using a proxy, I’m in the United States, and I was on Wi-Fi on my phone, which is just Xfinity Wi-Fi on the default Xfinity router, so there’s no strange security crap or unusual circumstances involved.


My phone is a non-rooted, fully updated Samsung Galaxy, and I was using Chrome with no special plugins. Even if it had used my phone’s data, I’m on T-Mobile, so it’s not like I’m on some off-the-wall carrier routing my traffic through China or something. I’m legitimately less than a two-hour drive from Anthropic’s HQ.

My “recent activities” consisted of logging in, looking at the info on Claude 3 Pro, reading about Opus (which I wanted to try), clicking their prompt to try their app (which made me log in again), logging into the app, and discovering I was banned.

That was over two weeks ago. There’s been no response to my appeal, and emailing the suggested address resulted in an automated reply telling me to use the appeal form I had already used over two weeks ago.

Sometimes OpenAI irritates the crap out of me with its model fluctuations and lack of transparency, but I will say, at least I didn’t get banned just for logging in. A company that auto-bans me like that and then ignores me for two-plus weeks is not a company I’m giving my money to. No, thank you.

2 Likes