Assistant content for tool call response

Is there a way to get the model to provide a reason why it is calling a function?

I have tried using a system prompt that includes something like this: “Always include a description explaining why you want a function to be called.”

In the sample JSON response below, the model wants the get_current_weather function to be called. The content property always seems to be null; what I would like is a way to have the model add content that explains its reasoning for calling a function. One use for this would be to present that reasoning to the user so they could approve the function call.

{
  "id": "chatcmpl-abc123",
  "object": "chat.completion",
  "created": 1699896916,
  "model": "gpt-3.5-turbo-0125",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": null,
        "tool_calls": [
          {
            "id": "call_abc123",
            "type": "function",
            "function": {
              "name": "get_current_weather",
              "arguments": "{\n\"location\": \"Boston, MA\"\n}"
            }
          }
        ]
      },
      "logprobs": null,
      "finish_reason": "tool_calls"
    }
  ],
  "usage": {
    "prompt_tokens": 82,
    "completion_tokens": 17,
    "total_tokens": 99
  }
}
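
For reference, here is a minimal sketch of the kind of request that produces the response above (the user message and the tool schema are illustrative placeholders, not my exact code; only the system prompt instruction is the one quoted earlier), showing that content still comes back as None:

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Illustrative reconstruction of the request; the user message and tool schema
# are placeholders, only the system prompt instruction is the one quoted above.
response = client.chat.completions.create(
    model="gpt-3.5-turbo-0125",
    messages=[
        {
            "role": "system",
            "content": "Always include a description explaining why you want a function to be called.",
        },
        {"role": "user", "content": "What's the weather like in Boston?"},
    ],
    tools=[
        {
            "type": "function",
            "function": {
                "name": "get_current_weather",
                "description": "Get the current weather for a given location",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "location": {
                            "type": "string",
                            "description": "City and state, e.g. Boston, MA",
                        }
                    },
                    "required": ["location"],
                },
            },
        }
    ],
)

message = response.choices[0].message
print(message.content)     # None, despite the system prompt instruction
print(message.tool_calls)  # [... get_current_weather with {"location": "Boston, MA"} ...]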

I sort of do this for a send-email function. I require approval from the user before the 4-turbo model sends any email, so I ended up using a session marker: when the marker isn't “on”, the function call tells the model to ask the user whether they want to send the email and flips the marker, so the next request to the email function actually sends it (if the user says yes). It works fine, and I also require the model to offer a draft of the email for approval.
So the first time around you have the model output the reason it wants to use the function along with the request to use it, and the second time around it actually executes the function. A minimal sketch of the idea follows.
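
In code, the marker approach looks roughly like this (a minimal sketch; the function names, the marker variable, and the send_email stub are placeholders, not my production code):

# Minimal sketch of the session-marker approval flow described above.
approval_pending = False  # per-session marker


def send_email(to: str, subject: str, body: str) -> None:
    # Stand-in for whatever actually delivers the email (SMTP, an API, etc.)
    print(f"Sending email to {to}: {subject}")


def handle_send_email_call(arguments: dict) -> str:
    """Runs whenever the model invokes the send_email tool; the return value
    goes back to the model as the tool result."""
    global approval_pending
    if not approval_pending:
        # First pass: don't send. Flip the marker and tell the model to show
        # a draft, explain why it wants to send, and ask the user to approve.
        approval_pending = True
        return (
            "Email NOT sent. Show the user a draft of this email, explain why "
            "you want to send it, and ask for approval before calling send_email again."
        )
    # Second pass: the model is calling again after the user said yes.
    approval_pending = False
    send_email(arguments["to"], arguments["subject"], arguments["body"])
    return "Email sent."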


The AI “decides” in most cases before it generates any other language. The very first token the model predicts determines whether a tool call is invoked at all.

The AI doesn’t have any awareness of why a special token reached a certainty threshold above a normal language token. It is all from pretraining and the context provided.

In fact, the model can emit a tool call that serves no purpose at all for the user's input, still being obliged to write out some function - which is why I give it my “tools_called_in_error” function as an escape hatch for when the AI is being a big dummy, since you have no bias control over its prediction of tools.
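
Something along these lines serves as the escape hatch (the description and parameter schema here are illustrative, not my literal definition):

# Illustrative escape-hatch tool definition; the wording of the description
# and the parameter schema are guesses, not the exact one in use.
tools_called_in_error = {
    "type": "function",
    "function": {
        "name": "tools_called_in_error",
        "description": (
            "Call this if you have started emitting a tool call but no other "
            "tool is actually appropriate for the user's input."
        ),
        "parameters": {
            "type": "object",
            "properties": {
                "reason": {
                    "type": "string",
                    "description": "Brief note on why no real tool applied.",
                }
            },
            "required": ["reason"],
        },
    },
}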

Asking “why” after the fact is like asking why DALL-E returned “no images due to issues” - the AI can't know; it has no internal state of knowing beyond the past sequence of context tokens, and will only make wild guesses.

OpenAI even went out of their way to block special tokens from being reported, by giving you a filtered softmax as logprobs and shutting off logprobs entirely after a tool invocation - so you can't even do your own debugging.