Chat completion with function calling

Hello,

I want to run through a set of questions covering many different topics. Some of these questions are about the current weather.

My goal is to give the assistant the choice of either replying with a normal chat completion or with a function call.

tools = [
    {
        "type": "function",
        "function": {
            "name": "get_current_weather",
            "description": "Get the current weather",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "The city and state, e.g. San Francisco, CA",
                    },
                    "format": {
                        "type": "string",
                        "enum": ["celsius", "fahrenheit"],
                        "description": "The temperature unit to use. Infer this from the users location.",
                    },
                },
                "required": ["location", "format"],
            },
        }
    }
]

import time

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Loop through your dataset
for i in range(len(df)):
    # Your prompt or input from the dataset
    user_input = df.loc[i, 'Question']  # Adjust column name as needed

    for j in range(num_iterations):
        # Start timer for measuring inference time
        start = time.time()

        # Call OpenAI API for inference
        response = client.chat.completions.create(
            model=gpt_model,
            messages=[
                {"role": "system", "content": "You are a helpful assistant. Use functions if appropriate."},
                {"role": "user", "content": user_input}
            ],
            functions=[tools[0]['function']],  # legacy functions parameter
            function_call="auto"
        )
        
        # Extract the assistant's response
        result = response.choices[0].message.content
        
        # Store the response in the DataFrame
        df.at[i, f'Result_{j}'] = result
        
        # End timer
        end = time.time()
        
        # Calculate duration and store it
        duration = end - start
        df.at[i, f'Inference_Time_{j}'] = duration
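As an aside: in newer versions of the OpenAI Python SDK (v1+), the same request can be expressed with the `tools` / `tool_choice` parameters instead of the deprecated `functions` / `function_call` pair. A minimal sketch (the schema here just mirrors the definition above; the model name is a placeholder):

```python
# Newer-style request parameters (OpenAI Python SDK v1 interface).
# The full tools list is passed as-is, not tools[0]['function'].
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_current_weather",
            "description": "Get the current weather",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {"type": "string"},
                    "format": {"type": "string", "enum": ["celsius", "fahrenheit"]},
                },
                "required": ["location", "format"],
            },
        },
    }
]

request_kwargs = {
    "model": "gpt-3.5-turbo",  # placeholder; any chat model with tool support
    "messages": [
        {"role": "system", "content": "You are a helpful assistant. Use functions if appropriate."},
        {"role": "user", "content": "how warm is it in Kuala Lumpur right now?"},
    ],
    "tools": tools,
    "tool_choice": "auto",  # let the model decide whether to call a tool
}
# response = client.chat.completions.create(**request_kwargs)
```

Note that with `tools`, a chosen call shows up under `response.choices[0].message.tool_calls` rather than `message.function_call`.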

The code is running all right, but as this is a first, I would like to get your feedback if this is the correct approach for my goal?

Sample question and answer I ask and would expect to get:

  • how warm is it in Kuala Lumpur right now?, `{"type": "function", "function": {"name": "get_current_weather", "parameters": {"location": "Kuala Lumpur, Malaysia", "format": "celsius"}}}`
  • Has Beijing been hotter than usual this spring?, "Accurate and detailed historical weather data, such as temperature extremes or precipitation levels, can be accessed through specialized weather databases or meteorological agencies."

Background:
I am comparing different models on their ability to detect intent and on the accuracy of their tool use. gpt-3.5 and gpt-4 versions will serve as baselines. In my thesis I will use open models (Llama 2 and Mistral) and examine how fine-tuning can (possibly) improve their function-calling performance.

What does the “get_current_weather” function look like? Do you have any contingencies in place for when that function doesn’t return an expected result? Have you considered filtering the initial user prompt with the API before giving it the option to use a function? From what I see, the model will most likely hallucinate a response to your historical weather question unless you give it another function for that kind of work.

Hey,

Sorry, I’m not sure I understood the bottom-line question exactly :sweat_smile:

In general, with the ‘auto’ tool_choice the behavior is supposed to be exactly as you describe: the function is only called when relevant; otherwise the assistant replies normally.

To improve function calling for the second case you provided (I tried it out myself and it didn’t call the function as expected), you can describe the intended behavior in a system message. Here’s a very simple example I’ve tried that worked:
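A sketch of what such a system message could look like (the wording below is illustrative, not the original example):

```python
# Illustrative system message nudging the model toward the weather function.
messages = [
    {
        "role": "system",
        "content": (
            "You are a helpful assistant. Whenever the user asks about "
            "current or historical weather conditions, respond by calling "
            "the get_current_weather function rather than answering in text."
        ),
    },
    {"role": "user", "content": "Has Beijing been hotter than usual this spring?"},
]
```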

Thanks! This made it work :grinning:

At first I thought it wasn’t working, because I kept getting ‘None’ as a result when it should have been the JSON output. That was because I was extracting the wrong part of the response.

So, instead of capturing this

response.choices[0].message.content

I had to save this

response.choices[0].message.function_call.arguments
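For anyone following along, here is a small helper that branches on which part of the message is populated (a sketch assuming the legacy `functions` response shape, where `message.function_call` is set only when the model chooses the function):

```python
import json
import types  # only used for the stand-in demo objects below


def extract_result(message):
    """Return parsed function-call arguments if present, else the text reply."""
    fc = getattr(message, "function_call", None)
    if fc is not None:
        # Arguments arrive as a JSON-encoded string; parse before use.
        return json.loads(fc.arguments)
    return message.content


# Demo with stand-in objects (real ones come from response.choices[0].message):
call_msg = types.SimpleNamespace(
    function_call=types.SimpleNamespace(
        arguments='{"location": "Kuala Lumpur, Malaysia", "format": "celsius"}'
    ),
    content=None,
)
text_msg = types.SimpleNamespace(function_call=None, content="It depends on the season.")
```

This avoids storing `None` in the DataFrame when the model opted for a function call.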