Function call with finish_reason of stop

We are receiving some responses from the API with a finish_reason of “stop” when the model is calling a function.
Has anyone else experienced this?


Nope; over and over, I get this finish reason:

  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": null,
        "function_call": {
          "name": "submit_iso_datetime",
          "arguments": "{\n  \"iso8601_datetime\": \"2023-02-06T16:33:00Z\"\n}"
        }
      },
      "finish_reason": "function_call"
    }
  ]

I made plenty of trials, also trying to keep any of three models from near-certainly producing the multi-line JSON via function descriptions, without success (except once at temperature 2.0, which can also produce copious server 500 errors from malformed function output); in no case was a “stop” produced.

You can use top_p = 0.4 to reduce the chance of the wrong “finish” output special token being produced.
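
For instance, a minimal sketch with the Node SDK (reusing the submit_iso_datetime function from the response above; the model and prompt are placeholders):

    import OpenAI from "openai";

    const openai = new OpenAI();

    const completion = await openai.chat.completions.create({
        model: "gpt-4-turbo-preview",
        messages: [{ role: "user", content: "What time is it in UTC?" }],
        tools: [{
            type: "function",
            function: {
                name: "submit_iso_datetime",
                parameters: {
                    type: "object",
                    properties: { iso8601_datetime: { type: "string" } },
                },
            },
        }],
        // Narrower nucleus sampling: less chance of the wrong
        // "finish" special token being sampled.
        top_p: 0.4,
    });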

Hi,

Can you post a copy of the prompt that produced this finish reason?

I can corroborate this.

Just happened to me several times with gpt-3.5 and gpt-4o.

One important piece of information: it seems to happen when you say a tool choice is required, i.e.:

"tool_choice" : "required"

That’s really not good though.

In that circumstance, it should still have a ‘tool_calls’ finish_reason.

If, however, I force a specific function, the issue is not present.
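
For reference, a minimal sketch of the two settings being compared, using the Node SDK types (the function name my_function is just a placeholder):

    import OpenAI from "openai";

    // The model must call one or more of the provided tools, its choice.
    // This is the setting that has been coming back with finish_reason: "stop".
    const required: OpenAI.Chat.ChatCompletionToolChoiceOption = "required";

    // The model must call this one specific function; with this form
    // the issue is not present for me.
    const forced: OpenAI.Chat.ChatCompletionToolChoiceOption = {
        type: "function",
        function: { name: "my_function" },
    };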

This looks like a bug! :beetle:

It is certainly not desirable, definitely inconsistent, and needs to be handled with messy code!

I’m going to have to put a horrible workaround in code for the time being :cry:

i.e. something like this:

(saving you from messy destructuring code!)

if ['stop', 'length'].include?(finish_reason) && tool_calls.nil?

and something like:

elsif finish_reason == 'tool_calls' || !tool_calls.nil?

instead of just checking the finish reason - yuk!
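
Roughly the same defensive check in the Node SDK, for anyone on TypeScript (a sketch only, trusting message.tool_calls over finish_reason):

    import OpenAI from "openai";

    function handleChoice(choice: OpenAI.Chat.ChatCompletion.Choice) {
        const toolCalls = choice.message.tool_calls;

        if ((choice.finish_reason === "stop" || choice.finish_reason === "length") && !toolCalls) {
            // ... handle a plain assistant message (or truncated output)
        } else if (choice.finish_reason === "tool_calls" || toolCalls) {
            // ... dispatch the tool calls, whatever finish_reason claimed
        }
    }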

@brianz-oai, here are another couple of reports of this: New API feature: forcing function calling via `tool_choice: "required"` - #13 by brianz-oai

FYI, I would definitely be happy to update my code to get rid of this kludge!

Exact same issue with me. It seems to be a recent regression - I don’t have proof, but IIRC I could get finish_reason=tool_calls even with tool_choice=required a few days ago [EDIT: apparently not]. At any rate, reverting to tool_choice=auto isn’t a good fix either (for me): the model often ends up not returning tool calls (despite the prompt instructing it to), or just adds a tool_uses JSON object in the content field of the message as a string (with some undocumented keys like recipient_name). Let me know if you’ve found a solution!

What is the latest on this? Is it fixed?

This is covered in the function calling docs:

By default, the model is configured to automatically select which functions to call, as determined by the tool_choice: “auto” setting.

We offer three ways to customize the default behavior:

  1. To force the model to always call one or more functions, you can set tool_choice: “required”. The model will then always select one or more function(s) to call. This is useful for example if you want the model to pick between multiple actions to perform next.
  2. To force the model to call a specific function, you can set tool_choice: {“type”: “function”, “function”: {“name”: “my_function”}}.
  3. To disable function calling and force the model to only generate a user-facing message, you can either provide no tools, or set tool_choice: “none”.

  Note that if you do either 1 or 2 (i.e. force the model to call a function) then the subsequent finish_reason will be “stop” instead of being “tool_calls”.
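
A quick sketch of what that note means in practice, assuming the Node SDK (my_function and its schema are placeholders):

    import OpenAI from "openai";

    const openai = new OpenAI();

    const completion = await openai.chat.completions.create({
        model: "gpt-4-turbo-preview",
        messages: [{ role: "user", content: "Pick an action." }],
        tools: [{
            type: "function",
            function: {
                name: "my_function",
                parameters: { type: "object", properties: {} },
            },
        }],
        // Case 2 from the list above: force this specific function.
        tool_choice: { type: "function", function: { name: "my_function" } },
    });

    // Per the quoted docs, this prints "stop" rather than "tool_calls",
    // even though message.tool_calls is populated.
    console.log(completion.choices[0].finish_reason);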

Yes, but just because it is documented doesn’t mean it’s good :slight_smile:

Please recall what staff said:

(FYI I would have posted there but the Topic was Closed)

So I guess “when we release the next API version” is the key part of this.


Agreed. I meant that in both cases, whether the function call is required or forced, the response is certainly going to be a function call.


This is not helpful. Did you even read the topic and understand the issue, or just copy and paste from ChatGPT? text-davinci-003 is even deprecated!

Definitely started happening this week, after months of working fine. I call the API like this:

    const params: OpenAI.Chat.ChatCompletionCreateParams = {
        model: "gpt-4-turbo-preview",
        messages: [
            {
                role: "user",
                content: text,
            },
        ],
        max_tokens: 4096,
        tools: [
            {
                type: "function",
                function: {
                    name: "functionNameHere",
                    description: "PromptHere",
                    parameters: SomeJsonHere,
                },
            },
        ],
    };