Multifunction calling - unable to generate both function calls

Hi,

I am fairly new to coding and have struggled to find a way to get this working. I am trying to get this code to call both functions, but so far have only been able to return one. Would anyone be able to point out where I am going wrong?

In essence, I have two functions, get_current_weather and get_current_traffic. If I ask for the weather and traffic in any given location, it always returns only the first function.

Thanks in advance.

Code extract:
```python
def get_current_weather(location, unit="fahrenheit"):
    ...

def get_current_traffic(location):
    ...

available_functions = {
    "get_current_weather": get_current_weather,
    "get_current_traffic": get_current_traffic,
}

tools = [
    {
        "name": "get_current_weather",
        "description": "Get the current weather in a given location",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {
                    "type": "string",
                    "description": "The city and state, e.g. San Francisco, CA",
                },
                "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
            },
            "required": ["location"],
        },
    },
    {
        "name": "get_current_traffic",
        "description": "Get the current traffic in a given location",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {
                    "type": "string",
                    "description": "The city and state, e.g. San Francisco, CA",
                },
            },
            "required": ["location"],
        },
    },
]

response = client.chat.completions.create(
    model="gpt-4-1106-preview",
    messages=[{"role": "user", "content": "what is the weather in san francisco, also could you give me the traffic?"}],
    temperature=0,
    functions=tools,
    function_call="auto",
)
response_message = response.choices[0].message.function_call
print(response_message)
```

It'd be great to have a look at the full response dump. In most cases you iterate over the `tool_calls` array in the output; for each entry you invoke the corresponding function and append the result to a list. After the loop, that list is passed back to the API, even if the model only requested one function.
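That loop can be sketched roughly as follows. This is a minimal illustration, not the poster's actual code: the tool calls are represented as plain dicts (the SDK returns objects with the same shape), and the two function bodies are invented stand-ins.

```python
import json

# Hypothetical stand-ins for the two functions in the original post.
def get_current_weather(location, unit="fahrenheit"):
    return json.dumps({"location": location, "temperature": "15", "unit": unit})

def get_current_traffic(location):
    return json.dumps({"location": location, "traffic": "heavy"})

available_functions = {
    "get_current_weather": get_current_weather,
    "get_current_traffic": get_current_traffic,
}

def run_tool_calls(tool_calls):
    """Execute every tool call the model requested and collect the
    'tool'-role messages to send back, even if there is only one."""
    tool_messages = []
    for call in tool_calls:
        fn = available_functions[call["function"]["name"]]
        args = json.loads(call["function"]["arguments"])
        result = fn(**args)
        tool_messages.append({
            "role": "tool",
            "tool_call_id": call["id"],
            "content": result,
        })
    return tool_messages
```

The returned list is then appended to the conversation and sent back in a follow-up `chat.completions.create` call so the model can compose its final answer.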

I tested your functions and both are invoked at the same time.

Messages

```js
[
  {
    role: 'system',
    content: 'You are a helpful assistant.\n' +
      'When the user wants to know the current weather in a given location, call get_current_weather function.\n' +
      'When the user wants to know the current traffic situation in a given location, call get_current_traffic function.\n' +
      'Today is Fri Dec 08 2023 09:01:51 GMT+0900 (Japan Standard Time).'
  },
  {
    role: 'user',
    content: 'What is the current weather and traffic in Tokyo?'
  }
]
```

Output

```js
[
  {
    id: 'call_Hm5WMfSmF46vwGFr1O0A1NVg',
    type: 'function',
    function: {
      name: 'get_current_weather',
      arguments: '{"location": "Tokyo", "unit": "celsius"}'
    }
  },
  {
    id: 'call_pi4VDljDSlauwIp9CRtf6W1l',
    type: 'function',
    function: { name: 'get_current_traffic', arguments: '{"location": "Tokyo"}' }
  }
]
```

Hi,

The response is as follows:

```
ChatCompletionMessage(content=None, role='assistant', function_call=FunctionCall(arguments='{"location":"San Francisco, CA","unit":"celsius"}', name='get_current_weather'), tool_calls=None)
```

Looking at your output, what model are you using? If you need parallel function calling, you have to use 1106 models (e.g. gpt-4-1106-preview, gpt-3.5-turbo-1106).

Hi,

Thanks for coming back to me. Do you mind sharing your completions code? I am using gpt-4-1106-preview. I’ve tried updating the prompt to the same as yours, but no luck.

Is the issue in the way I am calling the LLM? Do I need to loop the response through it to get multiple functions? I had a go at doing this, but the result remained unchanged. The relevant code extract is further below.

For some further context, I had this working properly with the older openai library version 0.28.1, where the completion was created with `openai.ChatCompletion.create(...)`; that had some success in pulling multiple functions. With the latest version of the openai library, I just can't seem to get it right.

```python
response = client.chat.completions.create(
    model="gpt-4-1106-preview",
    messages=prompt,
    temperature=0,
    functions=tools,
    function_call="auto",
)
response_message = response
print(response_message)
```

The only problem in your code is that you are using `functions` and `function_call`. Change them to `tools` and `tool_choice`.

```js
const response = await openai.chat.completions.create({
    model: "gpt-3.5-turbo-1106",
    messages: messages,
    tools: tools,
    tool_choice: "auto", // auto is default, but we'll be explicit
  });
```

See the [docs page](https://platform.openai.com/docs/guides/function-calling).

Thanks, this has resolved the matter, I appreciate your assistance!

I did have to update my tools variable to ensure it was compatible with the tools parameter. Putting the final code here so that anyone else with a similar problem can find it.
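For anyone migrating a longer list of function schemas, the update is mechanical: each old `functions`-style schema gets wrapped in `{"type": "function", "function": ...}`. A small helper could do it (a sketch, with a hypothetical name):

```python
def functions_to_tools(function_schemas):
    """Wrap legacy `functions`-style schemas in the newer `tools` format,
    i.e. [{"type": "function", "function": <schema>}, ...]."""
    return [{"type": "function", "function": schema} for schema in function_schemas]
```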

```python
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_current_weather",
            "description": "Get the current weather in a given location",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "The city and state, e.g. San Francisco, CA",
                    },
                },
                "required": ["location"],
            },
        },
    },
    {
        "type": "function",
        "function": {
            "name": "get_current_traffic",
            "description": "Get the current traffic in a given location",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "The city and state, e.g. San Francisco, CA",
                    },
                },
                "required": ["location"],
            },
        },
    },
]

prompt = [
    {
        "role": "system",
        "content": "You are a helpful assistant. When the user wants to know the current weather in a given location, call get_current_weather function. When the user wants to know the current traffic situation in a given location, call get_current_traffic function.",
    },
    {
        "role": "user",
        "content": "What is the current weather and traffic in Tokyo?",
    },
]

response = client.chat.completions.create(
    model="gpt-4-1106-preview",
    messages=prompt,
    temperature=0,
    tools=tools,
    tool_choice="auto",
)
response_message = response
print(response_message)
```
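If you want to print just the requested calls rather than the whole response object, with the `tools`/`tool_choice` parameters the calls arrive in `message.tool_calls` of the first choice. A small sketch; the `SimpleNamespace` object below is only a stand-in shaped like the SDK's response, for illustration:

```python
from types import SimpleNamespace

def extract_calls(response):
    """Return (name, arguments) pairs for every tool call in the first choice."""
    message = response.choices[0].message
    return [(c.function.name, c.function.arguments)
            for c in (message.tool_calls or [])]

# Stand-in object shaped like the SDK's ChatCompletion response:
fake_response = SimpleNamespace(choices=[SimpleNamespace(message=SimpleNamespace(tool_calls=[
    SimpleNamespace(function=SimpleNamespace(name="get_current_weather",
                                             arguments='{"location": "Tokyo"}')),
    SimpleNamespace(function=SimpleNamespace(name="get_current_traffic",
                                             arguments='{"location": "Tokyo"}')),
]))])
```

With a real response in place of `fake_response`, this yields one pair per function the model decided to call.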

You can use markdown to format your code, wrapping it in ``` fences to display it better and make it easy to copy.