Function Calling Help - Model Doesn't Seem To Accept Function Prompt?

I am pretty sure I am just not understanding this properly. I am trying to emulate the weather example given in the documentation. Here is the raw request going to gpt-4-1106-preview:

{
  "model": "gpt-4-1106-preview",
  "messages": [
    {"role": "user", "content": "What is the weather in Dallas, TX"},
    {"role": "function", "name": "GetCurrentWeather", "content": "The current weather in Dallas, TX is Mostly Cloudy at 57 F 13.9 C degrees."}
  ]
}

The response is always:

"choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "As an AI, I don't have live data access. To get the current weather for Dallas, TX, you would need to check a weather service like the National Weather Service, a weather app, or a website that provides real-time weather information. Please check one of those services for the most up-to-date weather conditions."
      },
      "finish_reason": "stop"
    }
  ],

Am I not supplying it the weather data properly?

It gets weirder. If I change the text of the first prompt to: "Write me a story about the current weather in Dallas, TX", it actually does write me a story using the temperature data given in the function prompt… I am at a loss here…

The "function" role is only for returning a function's response back to the AI, and you should now migrate to tools, which requires you to pass a matching ID in the assistant tool-call output and in the tool function return.

A "function" message alone doesn't let the AI infer what function that text came from, and with no specification in the request, the function-calling training never comes into play.
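As a minimal sketch of that pairing (the ID here is made up), the tools flow matches the assistant's tool call to your return value like this:

{"role": "user", "content": "What is the weather in Dallas, TX"},
{"role": "assistant", "content": null, "tool_calls": [
  {"id": "call_abc123", "type": "function", "function": {
    "name": "GetCurrentWeather", "arguments": "{\"location\": \"Dallas, TX\"}"}}]},
{"role": "tool", "tool_call_id": "call_abc123", "content": "The current weather in Dallas, TX is Mostly Cloudy at 57 F 13.9 C degrees."}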

For passing the specification, I whipped up a bit of code with demo parsing of the API response. It uses with_raw_response, so headers and other request-object info are also available.

from openai import OpenAI
client = OpenAI()

tools=[{
        "type": "function",
        "function": {
            "name": "get_weather_forecast",
            "description": "Get weather forecast",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "The city and state.",
                    },
                    "format": {
                        "type": "string",
                        "enum": ["celsius", "fahrenheit"],
                        "description": "The temperature unit to use.",
                    },
                    "time_period": {
                        "type": "number",
                        "description": "length in days or portions of day",
                    }
                },
                "required": ["location", "format", "num_days"]
            },
        }
    }]

params = {
  "model": "gpt-3.5-turbo",
  "messages": [
    {"role": "system", "content": "You are a helpful AI assistant."},
    {"role": "user", "content": "How hot will it be today in Seattle?"}
  ],
  "tools": tools
}
  
c = client.chat.completions.with_raw_response.create(**params)
response = c.parse()

# print any response content
if response.choices[0].message.content:
    print(response.choices[0].message.content)

# print the first function invoked, if the model emitted tool calls
if response.choices[0].finish_reason == "tool_calls":
    print(response.choices[0].message.tool_calls[0].function)
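Continuing from the code above, if you want to actually execute what came back, a rough sketch would be (the local function body here is invented; arguments arrive as a JSON string you must parse):

import json

# Invented stand-in for a real weather API call
def get_weather_forecast(location, format, time_period):
    return f"{location}: high 62 {format}, partly cloudy for {time_period} day(s)"

message = response.choices[0].message
if message.tool_calls:
    call = message.tool_calls[0]
    args = json.loads(call.function.arguments)  # JSON string -> dict
    print(get_weather_forecast(**args))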

This didn't seem to change anything for me. I still get the same response. It seems that tools just allows for multiple function calls; when I reconfigured everything to use tools, I still had to pass the function results back to the model in the end, and the same thing happened.

Hey Jeff, which endpoint are you using in your code: client.beta.threads.runs.create, or this one, client.chat.completions.create?

I am calling this endpoint: https://api.openai.com/v1/chat/completions

Without seeing the rest of the code it's impossible to say exactly what is going on. But you can try defining an actual get_current_weather function, if you don't have one, that returns your string "The current weather in Dallas, TX is Mostly Cloudy at 57 F 13.9 C degrees.", and then changing the content to function_response. Like this:

def get_current_weather(city, state):
    return "The current weather in Dallas, TX is Mostly Cloudy at 57 F 13.9 C degrees."

function_response = get_current_weather("Dallas", "TX")

{
  "model": "gpt-4-1106-preview",
  "messages": [
    {"role": "user", "content": "What is the weather in Dallas, TX"},
    {"role": "function", "name": "GetCurrentWeather", "content": function_response}
  ]
}

The problem is you are not including the function call part where the AI decided to use the function. You are only sending the function output.
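A sketch of the corrected messages array (the arguments string here is illustrative; normally you echo back exactly what the model sent):

{"role": "user", "content": "What is the weather in Dallas, TX"},
{"role": "assistant", "content": null, "function_call": {"name": "GetCurrentWeather", "arguments": "{\"location\": \"Dallas, TX\"}"}},
{"role": "function", "name": "GetCurrentWeather", "content": "The current weather in Dallas, TX is Mostly Cloudy at 57 F 13.9 C degrees."}

And the full round trip in Python, reduced to a minimal sketch with a simulated weather lookup (treat it as illustrative rather than definitive):

import json
from openai import OpenAI

client = OpenAI()

functions = [{
    "name": "GetCurrentWeather",
    "description": "Get the real-time weather in a given location",
    "parameters": {
        "type": "object",
        "properties": {"location": {"type": "string", "description": "The city and state, e.g. San Francisco, CA"}},
        "required": ["location"],
    },
}]

def get_current_weather(location):  # simulated lookup
    return f"The current weather in {location} is Mostly Cloudy at 57 F 13.9 C degrees."

messages = [{"role": "user", "content": "What is the weather in Dallas, TX"}]
response = client.chat.completions.create(
    model="gpt-4-1106-preview", messages=messages, functions=functions)
msg = response.choices[0].message

if response.choices[0].finish_reason == "function_call":
    args = json.loads(msg.function_call.arguments)
    # Echo the assistant's decision to call the function, then the result
    messages.append({"role": "assistant", "content": None,
                     "function_call": {"name": msg.function_call.name,
                                       "arguments": msg.function_call.arguments}})
    messages.append({"role": "function", "name": msg.function_call.name,
                     "content": get_current_weather(**args)})
    # Second call: finish_reason is now "stop" and the content is the answer
    response = client.chat.completions.create(
        model="gpt-4-1106-preview", messages=messages, functions=functions)
    print(response.choices[0].message.content)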

If I include the function call when I return the results from the original function call, it just returns that I should call it again. That's why I remove it on the subsequent call, which, as I understand it, is what you're supposed to do or you get caught in a function loop:

{
  "model": "gpt-4-1106-preview",
  "functions": [{
    "name": "CurrentWeather",
    "description": "Get the real-time weather in a given location",
    "parameters": {
      "type": "object",
      "properties": {
        "location": {"type": "string", "description": "The city and state, e.g. San Francisco, CA"}
      },
      "required": ["location"]
    }
  }],
  "messages": [
    {"role": "user", "content": "Get weather in Dallas, TX."},
    {"role": "function", "name": "GetCurrentWeather", "content": "The current weather in Dallas, TX is Mostly Cloudy at 57 F 13.9 C degrees."}
  ]
}
{
  "id": "chatcmpl-xxxx",
  "object": "chat.completion",
  "created": 1702597814,
  "model": "gpt-4-1106-preview",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": null,
        "function_call": {
          "name": "CurrentWeather",
          "arguments": "{\"location\":\"Dallas, TX\"}"
        }
      },
      "finish_reason": "function_call"
    }
  ],
  "usage": {
    "prompt_tokens": 98,
    "completion_tokens": 16,
    "total_tokens": 114
  },
  "system_fingerprint": "fp_xxxxx"
}

I still don't understand why my second example actually works just by changing the wording.

Maybe this will illustrate it more using cURL. Get your API token and plug it in here:

curl --location --request POST 'https://api.openai.com/v1/chat/completions' \
--header 'Authorization: Bearer [TOKEN]' \
--header 'Content-Type: application/json' \
--data-raw '{
  "model": "gpt-4-1106-preview",
  "messages": [
    {"role": "user", "content": "Get weather in Dallas, TX."},
    {"role": "function", "name": "GetCurrentWeather", "content": "The current weather in Dallas, TX is Mostly Cloudy at 57 F 13.9 C degrees."}
  ]
}'

Now try this:

curl --location --request POST 'https://api.openai.com/v1/chat/completions' \
--header 'Authorization: Bearer [TOKEN]' \
--header 'Content-Type: application/json' \
--data-raw '{
  "model": "gpt-4-1106-preview",
  "messages": [
    {"role": "user", "content": "Write me a story about the current weather in Dallas, TX"},
    {"role": "function", "name": "GetCurrentWeather", "content": "The current weather in Dallas, TX is Mostly Cloudy at 57 F 13.9 C degrees."}
  ]
}'

The second example works in that it generally uses the data from the function prompt…

I decided to make the mega-demo. To go beyond this, I might as well just write a chatbot with classes for handling simulated functions and show how to rewrite them against a real API…


# imports and set up the OpenAI client object with a shorter timeout
from openai import OpenAI
import json
client = OpenAI(timeout=30)
# Here we'll make a tool specification, more flexible by adding one at a time
toolspec=[]
toolspec.extend([{
        "type": "function",
        "function": {
            "name": "get_weather_forecast",
            "description": "Get weather forecast. AI can make multiple tool calls in one response.",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "The city and state.",
                    },
                    "format": {
                        "type": "string",
                        "enum": ["celsius", "fahrenheit"],
                        "description": "The temperature unit to use.",
                    },
                    "time_period": {
                        "type": "number",
                        "description": "length in days or portions of day",
                    }
                },
                "required": ["location", "format", "num_days"]
            },
        }
    }]
)
# Then we'll form the basis of our call to the API, with the user input
# Note I ask the model for two answers at once
params = {
  "model": "gpt-3.5-turbo-1106",
  "tools": toolspec,
  "messages": [
    {
        "role": "system", "content": "You are a helpful AI assistant."
    },
    {
        "role": "user", "content": ("How hot will it be today in Seattle? And in Miami?"
                                    " Use multi-tool to get both at the same time")
        
    },
    ],
}
# Now after we see that the AI emits functions, we can add the multi-tool function return
# Rename "xparams" to "params" to add assistant/tools
# Then run again to get your AI answer

xparams = {"messages": []}  # dump disabled messages here

# Show the AI what it previously emitted to us
xparams['messages'].extend(
[
  {
    "role": "assistant",
    "content": "Let me look up the weather in those cities for you...",
    "tool_calls": [
        {
          "id": "call_rygjilssMBx8JQGUgEo7QqeY",
          "type": "function",
          "function": {
            "name": "get_weather_forecast",
            "arguments": "{\"location\": \"Seattle\", \"format\": \"fahrenheit\", \"time_period\": 1}"
          }
        },
        {
          "id": "call_pI6vxWtSMU5puVBHNm5nJhw3",
          "type": "function",
          "function": {
            "name": "get_weather_forecast",
            "arguments": "{\"location\": \"Miami\", \"format\": \"fahrenheit\", \"time_period\": 1}"
          }
        }
    ]
  }
]
)
# Return values: what the tool_call with multiple functions gives
# rename xparams to params here also for the 2nd run
xparams['messages'].extend(
[
  {
    "role": "tool", "tool_call_id": "call_rygjilssMBx8JQGUgEo7QqeY", "content":
        "Seattle 2022-12-15 forecast: high 62, low 42, partly cloudy\n"  
  },
  {
    "role": "tool", "tool_call_id": "call_pI6vxWtSMU5puVBHNm5nJhw3", "content":
        "Miami 2022-12-15 forecast: high 77, low 66, sunny\n"  
  }
]
)
# Make API call to OpenAI
c = None
try:
    c = client.chat.completions.with_raw_response.create(**params)
except Exception as e:
    print(f"Error: {e}")

# If we got the response, load a whole bunch of demo variables
# This is different because of the 'with raw response' for obtaining headers
if c:
    headers_dict = dict(c.headers)
    for key, value in headers_dict.items():
        variable_name = f'headers_{key.replace("-", "_")}'
        globals()[variable_name] = value
    remains = headers_x_ratelimit_remaining_tokens  # show we set variables
    
    api_return_dict = json.loads(c.content.decode())
    api_finish_str = api_return_dict.get('choices')[0].get('finish_reason')
    usage_dict = api_return_dict.get('usage')
    api_message_dict = api_return_dict.get('choices')[0].get('message')
    api_message_str = api_return_dict.get('choices')[0].get('message').get('content')
    api_tools_list = api_return_dict.get('choices')[0].get('message').get('tool_calls')
    # print any response always
    if api_message_str:
        print(api_message_str)

    # print all tool functions pretty
    if api_tools_list:
        for tool_item in api_tools_list:
            print(json.dumps(tool_item, indent=2))

"""
AI says to us:
Here are the weather forecasts for today:
- Seattle: High 62°C, Low 42°C, Partly Cloudy
- Miami: High 77°C, Low 66°C, Sunny
"""

I really appreciate your help on this. I tried several permutations of this with no luck. I can't quite figure out what is going on here. The odd thing is, this was working not that long ago and no code has changed on my end. I will keep tinkering.

Perhaps the missing component is the quality of the conversation history about functions and tools.

You should give the AI an essentially unconstrained chat history for storing the prior assistant tool calls and their tool return values. Only if the history exceeds the context length, or displaces the question being asked, should it be cut off.

A full conversation history allows the AI to perform iterative tasks while seeing the results or errors it got before.

The function definition must also remain constantly present; it is part of how the AI understands those earlier tool calls.
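A bare-bones sketch of what I mean: the history list only grows, assistant turns are kept with their tool_calls, and the tool spec rides along on every request. run_tool here is a placeholder for your own dispatcher.

from openai import OpenAI

client = OpenAI()
history = [{"role": "system", "content": "You are a helpful AI assistant."}]

def run_tool(call):  # placeholder dispatcher: wire this to your real functions
    return f"(simulated result for {call.function.name})"

def chat_turn(user_text, tools):
    history.append({"role": "user", "content": user_text})
    while True:
        response = client.chat.completions.create(
            model="gpt-3.5-turbo-1106", messages=history, tools=tools)
        choice = response.choices[0]
        history.append(choice.message)  # keep the assistant turn, tool_calls included
        if choice.finish_reason != "tool_calls":
            return choice.message.content
        for call in choice.message.tool_calls:
            history.append({"role": "tool", "tool_call_id": call.id,
                            "content": run_tool(call)})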


I knew it was possible; that works perfectly, thank you!

I tried a variation of the example provided here and it worked:

Navigate to: "Send the response back to the model to summarize"

Here is their example:

curl https://api.openai.com/v1/chat/completions -u :$OPENAI_API_KEY -H 'Content-Type: application/json' -d '{
  "model": "gpt-3.5-turbo-0613",
  "messages": [
    {"role": "user", "content": "What is the weather like in Boston?"},
    {"role": "assistant", "content": null, "function_call": {"name": "get_current_weather", "arguments": "{ \"location\": \"Boston, MA\"}"}},
    {"role": "function", "name": "get_current_weather", "content": "{\"temperature\": "22", \"unit\": \"celsius\", \"description\": \"Sunny\"}"}
  ],
  "functions": [
    {
      "name": "get_current_weather",
      "description": "Get the current weather in a given location",
      "parameters": {
        "type": "object",
        "properties": {
          "location": {
            "type": "string",
            "description": "The city and state, e.g. San Francisco, CA"
          },
          "unit": {
            "type": "string",
            "enum": ["celsius", "fahrenheit"]
          }
        },
        "required": ["location"]
      }
    }
  ]
}'

I used gpt-3.5-turbo-0125; in my example, I specifically added a system message giving the assistant a role and instructing it to output valid JSON.

Note that the assistant message that gives GPT the context of the previous response has "content": null, and it has a function_call property that provides both the function's name and arguments; I used the same name provided in the function spec, and gave it a json.dumps() of the response I had received from the previous function.
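In Python terms, building those context messages looks roughly like this (following the docs example above; the result dict is just sample data standing in for the real function response):

import json

# Sample data standing in for the previous function's real response
function_result = {"temperature": "22", "unit": "celsius", "description": "Sunny"}

messages = [
    {"role": "user", "content": "What is the weather like in Boston?"},
    {"role": "assistant", "content": None,
     "function_call": {"name": "get_current_weather",
                       "arguments": json.dumps({"location": "Boston, MA"})}},
    {"role": "function", "name": "get_current_weather",
     "content": json.dumps(function_result)},
]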