Unexpected Auto-Execution of Tools (maybe because of response_format: json)

Hi there,

I’ve encountered an issue where tools in GPT-4o (and other models too) are being executed automatically, regardless of the user’s message or request.

Here is my JSON request:

{
  "model": "gpt-4o",
  "tools": [
    {
      "type": "function",
      "function": {
        "name": "get_current_weather",
        "description": "Get the current weather, only if the user explicitly asks for it",
        "parameters": {
          "type": "object",
          "properties": {
            "location": {
              "type": "string",
              "description": "The city and country, e.g. San Francisco, USA"
            },
            "format": {
              "type": "string",
              "enum": [
                "celsius",
                "fahrenheit"
              ]
            }
          },
          "required": [
            "location",
            "format"
          ]
        }
      }
    }
  ],
  "temperature": 0,
  "messages": [
    {
      "role": "system",
      "content": "You are a helpful assistant that extracts data and returns it in JSON format."
    },
    {
      "role": "user",
      "content": "What can you do for me?"
    }
  ],
  "stream": true,
  "response_format": {
    "type": "json_object"
  }
}

When the user simply asks, “What can you do for me?”, the weather function is called automatically with “San Francisco” and “fahrenheit” as parameters, even though the user didn’t ask for that explicitly.

However, when I remove the “response_format” field, the model behaves normally. Did something change recently in how “response_format” works that might be causing this? Is there something else I should adjust in my setup?

Thanks in advance!

Welcome to the community!

Well, you’re forcing the model to respond with a json_object.

That means the model has to start its response with either [ or {, or perhaps with a special function-call token that then has to be followed by [ or {.

Given that, how is the model supposed to respond? You’re basically forcing its hand here :slight_smile:

Response format isn’t necessary for function calling; it’s meant for other use cases where the model should only respond in JSON.
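In other words, you can just drop that field entirely. A trimmed sketch of your request without it (I also spelled out "tool_choice": "auto", which is the default anyway, purely for clarity):

```json
{
  "model": "gpt-4o",
  "temperature": 0,
  "stream": true,
  "tool_choice": "auto",
  "tools": [
    {
      "type": "function",
      "function": {
        "name": "get_current_weather",
        "description": "Get the current weather, only if the user explicitly asks for it",
        "parameters": {
          "type": "object",
          "properties": {
            "location": { "type": "string", "description": "The city and country, e.g. San Francisco, USA" },
            "format": { "type": "string", "enum": ["celsius", "fahrenheit"] }
          },
          "required": ["location", "format"]
        }
      }
    }
  ],
  "messages": [
    { "role": "system", "content": "You are a helpful assistant that extracts data and returns it in JSON format." },
    { "role": "user", "content": "What can you do for me?" }
  ]
}
```

With "tool_choice": "auto" and no response_format, the model is free to answer in plain text or call the tool, whichever fits the conversation.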

What did you hope it would do for you?

Or actually, more curiously:

How did it work for you before? :thinking:


Thank you for your answer.

The fact that the answer is returned as JSON is intentional and correct. But I get a tool_calls response with the weather function for every user message, even though this should only happen when the user actually asks for the weather. This is just an example; the same thing happens with other functions.

Previously, calling the functions worked correctly based on the context.

I don’t see a schema provided. Can you give an example of the type of answer you’d actually expect?

If I put myself in the shoes of the model, I’d see the following

  1. I have to respond with JSON
  2. the only schema provided is the function schema
  3. the user is asking me what I can do for the user
  4. I have to write "<|functioncall|>{"
  5. what’s next? I guess I can follow the function call schema
  6. "function":{"name":,
  7. ok, what can I do now? looks like “get_current_weather” is the only option
  8. "get_current_weather","arguments": "{
  9. What arguments? The user only asked me to show what I can do, so I guess I’ll pick something random
  10. now the function call is over, I guess I’ll close it
  11. "}}}"
  12. all braces are closed, what do I do now? Well, the object is closed, opening a new object wouldn’t be JSON-compliant, so I guess the only thing I can do is send <|eom|>

and then the system comes in and breaks off the message.

I don’t see an opportunity for the model to provide a plaintext response :thinking: unless I’m missing something.
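For illustration, the walkthrough above ends up producing an assistant message roughly like this (non-streamed view; the id and argument values are made up, just like the model makes them up):

```json
{
  "role": "assistant",
  "content": null,
  "tool_calls": [
    {
      "id": "call_abc123",
      "type": "function",
      "function": {
        "name": "get_current_weather",
        "arguments": "{\"location\": \"San Francisco, USA\", \"format\": \"fahrenheit\"}"
      }
    }
  ]
}
```

Note there’s no place in that structure for a plaintext reply to “What can you do for me?”.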

Hello, thank you very much for the response.

I had used a simplified example to explain this; I’ll now describe my actual workflow.

My concern is that functions/tools are now apparently called immediately and every time, regardless of the context.
For example, I have a prompt in which the user is supposed to select data in several steps during a dialogue. (Assume the prompt contains all the information for the selection.)

  • Step 1: Choose an article.
  • Step 2: Choose a customer.
  • Step 3: Choose a delivery address.
  • Step …

Only when all these steps have been completed and the fields article, customer, and delivery address have been selected should the provided function be called. The fields are accordingly defined as required in the function.
Until recently, this “workflow” worked reliably. Now, however, the function is always called immediately, regardless of whether all fields are set, and GPT then “hallucinates” values for the missing fields.

This change in behavior seems to be triggered by setting the
"response_format": { "type": "json_object" }.
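For reference, a stripped-down sketch of the kind of tool definition I mean (the names here are placeholders, not my real schema):

```json
{
  "type": "function",
  "function": {
    "name": "submit_selection",
    "description": "Submit the order, only after the user has chosen an article, a customer, and a delivery address in the dialogue",
    "parameters": {
      "type": "object",
      "properties": {
        "article": { "type": "string" },
        "customer": { "type": "string" },
        "delivery_address": { "type": "string" }
      },
      "required": ["article", "customer", "delivery_address"]
    }
  }
}
```

All three fields are marked as required, yet with response_format set the function is called on the very first user message, with invented values for all of them.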