We are receiving some responses from the API with finish_reason of “stop” when the model is calling a function.
Has anyone else experienced this?
Nope, over and over I get this finish reason:
"choices": [
{
"index": 0,
"message": {
"role": "assistant",
"content": null,
"function_call": {
"name": "submit_iso_datetime",
"arguments": "{\n \"iso8601_datetime\": \"2023-02-06T16:33:00Z\"\n}"
}
},
"finish_reason": "function_call"
}
I made plenty of trials, also trying to stop any of three models from reliably producing multi-line JSON via function descriptions, without success (except once at temperature 2.0, which can also produce copious server 500 errors from malformed function output), and in no case was a “stop” finish reason produced.
You can use top_p = 0.4 to reduce the chance of the wrong “finish” output special token being produced.
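In case it helps to see where that parameter goes, here is a minimal sketch with the Node SDK, purely as an illustration (the model, prompt, and function schema are placeholders patterned after the example above):

import OpenAI from "openai";

const openai = new OpenAI();

// Lowering top_p narrows sampling to the most probable tokens, making an
// unlikely (wrong) end-of-message special token less likely to be picked.
const response = await openai.chat.completions.create({
  model: "gpt-3.5-turbo", // placeholder model
  messages: [{ role: "user", content: "It is 4:33 PM UTC on 2023-02-06." }],
  functions: [
    {
      name: "submit_iso_datetime",
      description: "Record a datetime as an ISO 8601 string",
      parameters: {
        type: "object",
        properties: { iso8601_datetime: { type: "string" } },
        required: ["iso8601_datetime"],
      },
    },
  ],
  top_p: 0.4,
});

console.log(response.choices[0].finish_reason); // expecting "function_call"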
Hi,
Can you post a copy of the prompt that produced this finish reason?
I can corroborate this.
Just happened to me several times with gpt-3.5 and gpt-4o.
One piece of important information: it seems to happen when you say a tool choice is required, i.e.:
"tool_choice": "required"
That’s really not good though.
In that circumstance it should still have a ‘tool_calls’ finish_reason.
If, however, I force a specific function, the issue is not present.
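To make the comparison concrete, here is a rough sketch of the two request shapes with the Node SDK (the get_weather tool is just a placeholder); the finish reasons in the comments are what is being reported in this topic, not guaranteed behavior:

import OpenAI from "openai";

const openai = new OpenAI();

const baseParams = {
  model: "gpt-4o", // placeholder model
  messages: [{ role: "user" as const, content: "What's the weather in Berlin?" }],
  tools: [
    {
      type: "function" as const,
      function: {
        name: "get_weather", // hypothetical function
        description: "Look up the current weather for a city",
        parameters: {
          type: "object",
          properties: { city: { type: "string" } },
          required: ["city"],
        },
      },
    },
  ],
};

// Case 1: any tool may be chosen, but a tool call is required.
const required = await openai.chat.completions.create({
  ...baseParams,
  tool_choice: "required",
});
console.log(required.choices[0].finish_reason); // reported here as "stop"

// Case 2: force one specific function by name.
const forced = await openai.chat.completions.create({
  ...baseParams,
  tool_choice: { type: "function", function: { name: "get_weather" } },
});
console.log(forced.choices[0].finish_reason); // reported here as "tool_calls"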
This looks like a bug!
It is certainly not desirable and definitely inconsistent and needs to be handled with messy code!
I’m going to have to put a horrible workaround in code for the time being, i.e. something like this:
(saving you from messy destructuring code!)
if ['stop', 'length'].include?(finish_reason) && tools_calls.nil?
and something like:
elsif finish_reason == 'tool_calls' || !tools_calls.nil?
instead of just checking the finish reason - yuk!
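In case it’s useful to anyone on the Node SDK, here is a rough TypeScript equivalent of that workaround (with trimmed-down types rather than the SDK’s own): trust the presence of tool_calls over the finish_reason label.

// Minimal shapes for the parts of the response we need.
type ToolCall = {
  id: string;
  type: "function";
  function: { name: string; arguments: string };
};

type Choice = {
  finish_reason: string | null;
  message: { content: string | null; tool_calls?: ToolCall[] };
};

// Classify defensively: a mislabelled finish_reason of "stop" should not
// hide a response that actually contains tool calls.
function classifyChoice(choice: Choice) {
  const toolCalls = choice.message.tool_calls ?? [];

  if (toolCalls.length > 0 || choice.finish_reason === "tool_calls") {
    return { kind: "tool_calls" as const, toolCalls };
  }

  return { kind: "message" as const, content: choice.message.content ?? "" };
}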
@brianz-oai another couple of reports of New API feature: forcing function calling via `tool_choice: "required"` - #13 by brianz-oai
FYI, I would definitely be happy to update my code to get rid of this kludge!
Exact same issue with me. Seems to be a recent regression - I don’t have proof, but IIRC I could get finish_reason=tool_calls even with tool_choice=required a few days ago [EDIT: apparently not]. At any rate, reverting to tool_choice=auto isn’t a good fix either (for me) - the model often ends up not returning tool calls (despite the prompt instructing it to) or just adds a tool_uses JSON object as a string in the content field of the message (with some undocumented keys like recipient_name). Let me know if you’ve found a solution!
What is the latest on this, is it fixed?
This is covered in the function calling docs:
By default, the model is configured to automatically select which functions to call, as determined by the tool_choice: “auto” setting.
We offer three ways to customize the default behavior:
- To force the model to always call one or more functions, you can set tool_choice: “required”. The model will then always select one or more function(s) to call. This is useful for example if you want the model to pick between multiple actions to perform next.
- To force the model to call a specific function, you can set tool_choice: {“type”: “function”, “function”: {“name”: “my_function”}}.
- To disable function calling and force the model to only generate a user-facing message, you can either provide no tools, or set tool_choice: “none”.
Note that if you do either 1 or 2 (i.e. force the model to call a function) then the subsequent finish_reason will be “stop” instead of being “tool_calls”.
Yes, but just because it is documented doesn’t mean it’s good
Please recall what staff said:
(FYI I would have posted there but the Topic was Closed)
So I guess “when we release the next API version” is the key part of this.
Agree. I meant that in both cases, where the function call is required or forced, the response is certainly going to be a function call.
This is not helpful, did you even read the Topic and understand the issue, or just copy and paste from ChatGPT? text-davinci-003 is even deprecated!
Definitely started happening this week, after months of working fine. I call the API like this:
const params: OpenAI.Chat.ChatCompletionCreateParams = {
  model: "gpt-4-turbo-preview",
  messages: [
    {
      role: "user",
      content: text,
    },
  ],
  max_tokens: 4096,
  tools: [
    {
      type: "function",
      function: {
        name: "functionNameHere",
        description: "PromptHere",
        parameters: SomeJsonHere,
      },
    },
  ],
};
Disagree @sebastian.wi, but welcome to the community.
It’s been happening for a long time in the scenario I outline above.
But glad you’ve witnessed it.
This should be cleaned up in the API asap.
Same behavior here. Definitely not the best user experience to have different finish reasons for different options of tool_choice.
FYI, this just happened to me in a program that has been sending the same daily chat completion for more than a year.
Appears this is occurring again. With one of my functions, it calls with reason stop, with the rest of my functions it calls with reason function_call. Same function has been calling for months with no issue until today.
Can you report on the AI model that has been stealth-damaged?
Technically, either terminating a function output or terminating a response to a user involves the AI emitting an internal stop sequence token. Without a stop sequence closing the message, the output would continue forever.
However, of the two options, it is the model writing the wrong token number that likely causes the anomaly, which is then reported directly by the API. In cl100k_base, that is the difference between token 100260 and 100265.
A "strict":true
structured function call schema should improve the situation, enforcing what can be output after a function being sent is closed. Or rather, just evaluating what has been finally collected as tool_call when a finish reason streaming chunk is reached, regardless of reason.
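As a sketch of that last idea with the Node SDK streaming API (placeholder model and tool schema): accumulate tool call deltas as they arrive, and when a chunk carries a finish_reason, act on whatever was collected rather than on the reason string.

import OpenAI from "openai";

const openai = new OpenAI();

const stream = await openai.chat.completions.create({
  model: "gpt-4o", // placeholder model
  stream: true,
  messages: [{ role: "user", content: "Log 2023-02-06T16:33:00Z" }],
  tools: [
    {
      type: "function",
      function: {
        name: "submit_iso_datetime",
        strict: true, // opt in to structured outputs for this schema
        description: "Record a datetime as an ISO 8601 string",
        parameters: {
          type: "object",
          properties: { iso8601_datetime: { type: "string" } },
          required: ["iso8601_datetime"],
          additionalProperties: false,
        },
      },
    },
  ],
});

// Accumulate tool call fragments by index as the deltas stream in.
const toolCalls: { name?: string; arguments: string }[] = [];

for await (const chunk of stream) {
  const choice = chunk.choices[0];
  if (!choice) continue;

  for (const delta of choice.delta.tool_calls ?? []) {
    const entry = (toolCalls[delta.index] ??= { arguments: "" });
    if (delta.function?.name) entry.name = delta.function.name;
    entry.arguments += delta.function?.arguments ?? "";
  }

  // Decide based on what was actually collected, not the reason string.
  if (choice.finish_reason) {
    if (toolCalls.length > 0) {
      console.log("treat as tool call(s):", toolCalls);
    } else {
      console.log("treat as a normal assistant message");
    }
  }
}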
Also reporting seeing a finish reason of stop out of nowhere within the past couple of hours. Using gpt-4o with the Node SDK. I’m passing tools and not specifying a tool_choice (as has always worked properly for me).
I’d also like to report this issue. Function calling is now giving me a finish_reason of stop instead of tool_calls.
We also started to get finish_reason=stop instead of tool_calls in some cases. I found that removing the “Answer as concisely as possible” system message fixes the issue.