Bug in API response - "finish_reason" field

Previously, when making an API call, the response included a “finish_reason” field with the value “stop”. Now the API returns “length” for that field, even though the output looks the same as before, and nothing in my code has changed.

This matters because my code uses this flag to decide whether to keep requesting or stop.

Here is the response for reference. The behavior seems erratic, because I’ve also received some “stop” reasons:

```
response: {
  id: 'chatcmpl-7YaSuTmKnGmQKBWqsgSNtk6qZDzGR',
  object: 'chat.completion',
  created: 1688477680,
  model: 'gpt-4-0613',
  choices: [ { index: 0, message: [Object], finish_reason: 'length' } ],
  usage: { prompt_tokens: 1261, completion_tokens: 500, total_tokens: 1761 }
}
```
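To illustrate, the branch my code makes on that flag looks roughly like this (the field names follow the response above; the comments describe my intended logic, not final code):

```js
// Decide whether to keep requesting or stop, based on finish_reason.
const finishReason = response.choices[0].finish_reason;
if (finishReason === "stop") {
  // The model completed its answer; stop requesting.
} else if (finishReason === "length") {
  // The reply was cut off by max_tokens; request a continuation.
}
```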

Welcome to the forum!

What was the max_tokens setting for that request?


500 tokens


OK, so you set the limit to 500 tokens and the reply contained (at least) 500 tokens; a finish_reason of “length” seems correct to me.

After that, when I request the next response with all the context, the response is the same, so I get an infinite loop. From your point of view, what’s the exit condition for retrieving the whole response?


The typical way to handle such a response is to include the previous response in a new prompt with the instruction “Truncated, please continue” appended to it. At that point you may even wish to increase the request’s token limit to reduce the number of times you have to repeat the process.

But the model will need to see its own output as part of the new prompt for it to be able to continue.
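As a rough illustration, a continuation loop along those lines might look like this (a minimal sketch assuming the v4-style `openai` Node client; `getFullAnswer` and the exact continue wording are illustrative, not a prescribed API):

```js
import OpenAI from "openai";

const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment

// Keep requesting until the model finishes on its own ("stop"),
// feeding it its previous output so it can continue where it left off.
async function getFullAnswer(messages, maxTokens = 500) {
  let full = "";
  for (;;) {
    const response = await openai.chat.completions.create({
      model: "gpt-4-0613",
      messages,
      max_tokens: maxTokens,
    });
    const choice = response.choices[0];
    full += choice.message.content;
    if (choice.finish_reason !== "length") break; // "stop" is the exit condition
    // Append the model's own partial output plus the continue instruction.
    messages = [
      ...messages,
      { role: "assistant", content: choice.message.content },
      { role: "user", content: "Truncated, please continue" },
    ];
  }
  return full;
}
```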

Is there a reason you aren’t increasing max_tokens? Depending on the request, it’s not as though GPT has a full response thought out and knows how to finish it. So asking it to continue may cause it to “think” it needs to come up with an additional, longer response, not just finish the previous one.

If you ask it something really short (like “Hello”), do you see the stop finish reason?
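For example, a quick check along these lines should come back with “stop” (again a sketch assuming the v4-style `openai` Node client):

```js
import OpenAI from "openai";

const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment

// A one-word prompt should finish well under the 500-token limit.
const test = await openai.chat.completions.create({
  model: "gpt-4-0613",
  messages: [{ role: "user", content: "Hello" }],
  max_tokens: 500,
});
console.log(test.choices[0].finish_reason); // expected: "stop"
```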

It’s strange, because when I include the previous response (without “Truncated, please continue”), gpt-4 answers with the exact same response. I will try it with this instruction appended.

No particular reason. I’m testing how to iterate and how to retrieve the whole answer. I wouldn’t like to have to revisit this code in 2 weeks, xDD

Thanks for all your help!