GPT-4 - Incomplete and Partial Responses

Same issue here. I started using GPT-4 on day one and was blown away by its answers. I started noticing this issue about a week ago, and today it got so bad that it never outputs more than 300 tokens. The GPT-4 API works fine at the same time, so it’s more likely a capacity constraint than a problem inherent in the model.
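
For anyone doing the same comparison against the API: below is a minimal sketch, assuming the openai Python package’s ChatCompletion interface and an illustrative prompt/key, of checking finish_reason to see whether a reply was actually truncated by the token limit or stopped on its own.

```python
import openai

openai.api_key = "sk-..."  # placeholder key, just for illustration

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Summarize these notes: ..."}],  # illustrative prompt
    max_tokens=1024,  # raise this if replies keep getting cut short
)

choice = response["choices"][0]
print(choice["message"]["content"])

# "length" means the reply hit max_tokens and was truncated;
# "stop" means the model finished on its own.
print("finish_reason:", choice["finish_reason"])
```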

Registered for Plus today and started trying the GPT-4 model, but quickly found that the answers were not complete. I need to prompt ‘Please continue after XXX. The answer is not complete’ to get the rest of the answer.

Is there an update from OpenAI staff on this issue? Having the same issue with GPT-4 and the Plus membership. The answers are just cutting out mid-sentence for no reason. This is becoming a persistent issue.


Been using ChatGPT daily since it was made public.
Bought ChatGPT Plus the moment it was available.

Switched to GPT-4 the moment it was available.

For the past 2 days I have also experienced severely cut-off responses from ChatGPT when using the GPT-4 model.
You’ll also notice that OpenAI is being very quiet about this issue. Their communication sucks.
During GPT-3.5 usage, I found a bug with the timestamps in the messages returned. I emailed their support to report the issue. I did not get a response or acknowledgement of the email… however, a few days later, when I asked ChatGPT the same questions about its message timestamps, the bug had been fixed. Still to this day, no response.

After discussing it with some colleagues, we suspect it’s another throttling approach (to force us to “use up” our tokens) without reducing the 25-messages-per-3-hours limit even further (because of the backlash when they started doing that already).

I’m confident it’ll ease off soon (GPT-3 via ChatGPT was slow to begin with) and return to normal operation.

This has definitely become a problem in the last few days. I now quite often have to tell GPT to finish what it was saying, multiple times.

They have enormous scaling issues. The amazingly fast growth and the complexity of the model are pushing the infrastructure to the brink of failure. Efficiency is their priority, and they’re doing amazing work coding-wise, but the Azure datacenters need to adapt really quickly, and I think it’s just not humanly possible to scale faster than what’s happening now. No one expected this kind of growth, and it’s amazing they manage to power Bing, ChatGPT, and still run the API, which powers 99% of all AI apps out there.

I have the same issue.

I had used ChatGPT, then subscribed ASAP, and it was great. Now responses are always short and I hit the 25-message cap faster, because I have to write “continue”. I would prefer 6 messages every 3 hours if they were complete. I typically use it to summarize and organize notes.

Same issue here. Does not look like the “25 thousand word reply” that was advertised…

Hi @f.camara1109,

Can you share where you’ve seen the 25k-reply information?

I believe it can handle 25k words as input (see the thread “Urgent: GPT-4 Fails to Analyze Promised 25,000 Words - Solutions Needed!”; however, this doesn’t work either at the moment).

Hmm, I run into this issue as well. But I just ask it to “continue where you left off” and it continues.

Can be annoying when generating code though.

Same problem here; it bothers me as I just bought it.

I hope it gets fixed soon, otherwise I will stop paying.


Same issue as described above. Short replies, cut off in the middle, for the past few days. This didn’t occur before. Please fix it.

I was both happy and sad to see there is an entire thread about this. Today I noticed every response being cut off. I can go back through my chat history and see this happening only once before.

What is going on?

Responses Being Cut Off:

  1. 214 words, 1491 characters
  2. 221 words, 1569 characters
  3. 230 words, 1646 characters

Try copying the whole response and sending it back directly through the text box; GPT-4 will then continue the answer. I think the incomplete answers are due to a restricted response time.
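
If you are hitting the same cutoff through the API, that “send it back and let it continue” step can be automated. A rough sketch, assuming the openai Python package; the function name and the continuation prompt are just illustrative:

```python
import openai

def ask_with_continuation(prompt, max_rounds=3):
    """Ask GPT-4, re-prompting 'continue' while replies are cut off midway."""
    messages = [{"role": "user", "content": prompt}]
    full_reply = ""

    for _ in range(max_rounds):
        response = openai.ChatCompletion.create(model="gpt-4", messages=messages)
        choice = response["choices"][0]
        part = choice["message"]["content"]
        full_reply += part

        # "length" means the reply hit the token limit and was cut off.
        if choice["finish_reason"] != "length":
            break

        # Feed the partial answer back and ask it to pick up where it left off.
        messages.append({"role": "assistant", "content": part})
        messages.append({"role": "user", "content": "Please continue where you left off."})

    return full_reply
```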

I’ve been having a similar problem. Most of my output is unfinished, across a wide variety of prompts and tasks. (Using ChatGPT with GPT-4.) I haven’t yet compared the frequency of stalled output with previous versions.

I’ve tried a few similar prompts in 3.5, and the output is unfinished as well.

It helps to write “finish output”. I’ve also had some success with “give me the entire code output with each function in a separate code output window”. It’s still not perfect but a bit better.

Yeah, that does help. But sometimes the finished code isn’t included within the code box. I provided this feedback within ChatGPT itself. It’s getting better…

On top of returning a chopped-off answer, I’m being charged for that incomplete answer. That seems a bit unfair and should not be billed against me. This is not usable if I cannot rely on the output.
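
For those seeing this through the API rather than Plus, one way to at least sanity-check what a cut-off reply cost is to count its tokens. A small sketch assuming the tiktoken package, with a placeholder string standing in for the truncated reply:

```python
import tiktoken

# Placeholder: paste the reply that was cut off mid-sentence here.
truncated_reply = "..."

# Count its tokens and compare against the completion_tokens figure
# reported in the API response's "usage" field.
enc = tiktoken.encoding_for_model("gpt-4")
print(len(enc.encode(truncated_reply)), "completion tokens in the partial answer")
```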