Function Calling Accuracy - Parallel Function Calling with Actions

I have noticed function calling accuracy dropping since the new API was introduced. With up to 10 possible functions, the model used to pick the right one every time; since the upgrade, it more often picks the wrong function.

I understand I must upgrade to the two-step parallel function calling arrangement. My only problem is that some of my functions are "action" functions: they do not return a response but instead complete an action, e.g. generating an image or sending an email. Calling such a function can't return a string or text back to the API to help it formulate a message to the user.

So in essence: how do I handle parallel function calls where actions are involved, rather than JSON objects that can be returned to the API?
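One common pattern is to have the action function return a short status payload anyway, so the model has something to confirm back to the user. Here is a minimal sketch of that idea, assuming the Chat Completions tool-message format; the `send_email` handler and the `run_action_tool_call` helper are illustrative names, not part of the API.

```python
import json

# Hypothetical side-effecting action handler (illustrative, not from the API).
def send_email(to: str) -> None:
    pass  # perform the action; nothing meaningful to return

def run_action_tool_call(tool_call: dict) -> dict:
    """Execute an action-style tool call and build the tool message.

    Even though the action itself returns nothing, we send a short JSON
    status string back so the model can formulate a reply to the user.
    """
    name = tool_call["function"]["name"]
    args = json.loads(tool_call["function"]["arguments"])
    actions = {"send_email": send_email}  # dispatch table of actions
    try:
        actions[name](**args)
        result = {"status": "success", "detail": f"{name} completed"}
    except Exception as exc:
        result = {"status": "error", "detail": str(exc)}
    # Tool message shape expected in the follow-up API call.
    return {
        "role": "tool",
        "tool_call_id": tool_call["id"],
        "name": name,
        "content": json.dumps(result),
    }
```

You would append one such message per tool call before making the second API request, exactly as you would for data-returning functions.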


I have also noticed a decline when using function calling with gpt-4-1106-preview (also with about 10 functions); gpt-4 does not seem to suffer from this. I get a response like:

response:

ChatCompletion(id='...', choices=[Choice(finish_reason='function_call', index=0, message=ChatCompletionMessage(content=None, role='assistant', function_call=FunctionCall(arguments='{"destinationName":"Hans"}', name='*/functions.transfer'), tool_calls=None))], created=1701333279, model='gpt-4-1106-preview', object='chat.completion', system_fingerprint='fp_a24b4d720c', usage=CompletionUsage(completion_tokens=15, prompt_tokens=543, total_tokens=558))

where it says name='*/functions.transfer'
instead of name='transfer',

so I would have to repair the response on my side now…
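A minimal client-side workaround, assuming the stray prefix always ends in a dot as in the example above (the helper name is illustrative):

```python
def repair_function_name(name: str) -> str:
    """Strip prefixes like '*/functions.' that the model sometimes
    prepends, keeping only the bare function name after the last dot."""
    return name.rsplit(".", 1)[-1]
```

This leaves well-formed names untouched, so it is safe to apply to every response before dispatching.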