Output seems to stop abruptly--why is that?

Hi all. I’m a developer who is very new to GPT. I’m using the Playground in chat mode to generate code (JavaScript in my case). So far, the prompts have generated complete code, i.e. code with no syntax errors. But in two cases now, the output was incomplete code, as if it just stopped partway through.
See screenshots here and here. Why is that occurring and how can I prevent it?

In the first case I assumed I had somehow interrupted the output. But in the second case, I was careful not to disturb anything.

In other cases, the Playground has given me warnings about exceeding certain limits. But in these two cases, there was no such warning.

Not sure if my question belongs in this forum, or the ChatGPT forum. If mods need to move it, no worries :slight_smile:

GPT has a response limit. If you’re using the Playground, you can try saying “continue”. Otherwise, ask it for smaller things (like one function at a time).
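
If you’re calling the API, here’s a minimal sketch of the same “continue” trick (assuming Node 18+ with global `fetch` and an `OPENAI_API_KEY` environment variable): append the partial assistant reply to the message history and send “continue” as the next user turn.

```js
// Minimal sketch: resume a reply that was cut off by the token limit.
// Assumes Node 18+ (global fetch) and OPENAI_API_KEY in the environment.
async function continueReply(messages, partialReply) {
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({
      model: "gpt-3.5-turbo",
      messages: [
        ...messages,
        { role: "assistant", content: partialReply }, // the truncated reply so far
        { role: "user", content: "continue" },        // ask it to pick up where it left off
      ],
    }),
  });
  const data = await res.json();
  return data.choices[0].message.content;
}
```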

OK thanks very much for that. I’ll try ‘continue.’ But for the future, what exactly is the response limit? Is it possible to know its value and units (e.g. 512 characters or something)? Is there a way to increase it? Most importantly, is there a way to know when this limit has been reached in a response? It might get reached between sentences, in which case I might not realize the limit was hit.

I don’t think the Playground will tell you when the token limit has been reached.

OK thanks for that. In these cases, I’m convinced that some limit is being hit. But I can say that in at least one other case, after I submitted a prompt, GPT returned an error message of the form:

The sum of <some field I don't remember> <X units> and Maximum Length <Y units> has been exceeded. Please reduce that sum to continue

X and Y were actual values that were printed in the message. I can’t remember the units (if any?). I can’t remember the name of the first field.

So at the end of every response, how can I know whether the response is complete or was truncated by this limit? Specifically, in cases where the response does not stop mid-sentence, how can I tell whether it is complete or whether the limit was reached right at the end of a sentence?

Not trying to be snarky or anything. I’ve already hit this limit a few times, but in each case it was easy to identify because the limit was reached mid-sentence. Just curious whether, in the future, there is a way I can be sure.

In the API, the response reports “prompt tokens,” “completion tokens,” and “total tokens.”
Each model has a different limit on total tokens; gpt-3.5-turbo stops at 4,096 tokens.
So, look at these fields in the API response and you can tell whether it hit the limit or not.
Btw, all the previous context (the chat thread) counts towards the limit, so if you find that you need more space, summarizing or even removing previous context can help. (Again, in the API – the chat app does its own thing here.)
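
Here’s a minimal sketch of checking those fields (assuming Node 18+ with global `fetch` and an `OPENAI_API_KEY` environment variable). The response’s `finish_reason` is also worth checking: it is `"length"` rather than `"stop"` when the reply was cut off by the limit, which answers the “how can I be sure?” question directly.

```js
// Minimal sketch: inspect token usage and detect truncation.
// Assumes Node 18+ (global fetch) and OPENAI_API_KEY in the environment.
async function askAndCheck(prompt) {
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({
      model: "gpt-3.5-turbo",
      messages: [{ role: "user", content: prompt }],
    }),
  });
  const data = await res.json();

  // Token accounting for this call.
  const { prompt_tokens, completion_tokens, total_tokens } = data.usage;
  console.log({ prompt_tokens, completion_tokens, total_tokens });

  // "length" means the model hit the token limit; "stop" means it
  // finished on its own.
  if (data.choices[0].finish_reason === "length") {
    console.log("Response was truncated.");
  }
  return data.choices[0].message.content;
}
```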

If you’re using the chat app only, end your prompt with something like “end your response with the stop sign emoji” and then look for that stop sign in the answer. If it’s not there, you might have gotten cut off.
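
If you want to apply the sentinel idea programmatically, here’s a minimal sketch of the check (the 🛑 character and the prompt wording are arbitrary choices):

```js
// Minimal sketch: append a sentinel instruction to the prompt, then test
// whether the answer actually ends with the sentinel.
const SENTINEL = "🛑";

function makePrompt(question) {
  return `${question}\n\nEnd your response with ${SENTINEL}.`;
}

// `answer` is whatever text came back from the model.
function looksComplete(answer) {
  return answer.trimEnd().endsWith(SENTINEL);
}
```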

OK thanks for all that.

Ah yes, smart idea. Will do!