Prevent Parallel Function Calling

Is there a way to prevent parallel function calls from occurring in the chat API?

The reason I want to call them one-by-one is that GPT-4 often needs context from one function before calling another, so if it calls them in parallel it can get things wrong.

Would I need to add something to the system message to prevent this, or is there a better way I haven’t seen?

If one call depends on the results of another, you should be able to prompt for that, no?

Share some more details?

It doesn’t necessarily depend on it, but it certainly helps!

For example, let’s say we have a function that executes code. Sometimes the model will make two code calls in parallel, when, if they were made one after the other, the second call would have the output from the first piece of code as context.
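For concreteness, here is a minimal sketch of what that problematic turn looks like, assuming the current openai Python SDK; run_code is a hypothetical stand-in for the code-execution function, and the model name is just a placeholder:

```python
from openai import OpenAI

client = OpenAI()

code_tool = {
    "type": "function",
    "function": {
        "name": "run_code",  # hypothetical code-execution function
        "description": "Execute a snippet of Python and return its output.",
        "parameters": {
            "type": "object",
            "properties": {"code": {"type": "string"}},
            "required": ["code"],
        },
    },
}

response = client.chat.completions.create(
    model="gpt-4-turbo",
    messages=[{"role": "user", "content": "Load data.csv, then plot its first column."}],
    tools=[code_tool],
)

# With parallel function calling, this can contain two run_code calls at once,
# so the plotting call is written without ever seeing the loading call's output.
for call in response.choices[0].message.tool_calls or []:
    print(call.function.name, call.function.arguments)
```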

I do think that both the instructions in the description of the function and the assistant prompt could be written so that the model would not call both functions if it knows they should be called sequentially; see the sketch below for one way to word the function description.
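For instance, a sketch of a tool definition that bakes the sequencing rule into the description itself (assuming the Chat Completions tools format; run_code and its wording are hypothetical):

```python
tools = [{
    "type": "function",
    "function": {
        "name": "run_code",  # hypothetical code-execution function
        "description": (
            "Execute one snippet of Python and return its output. "
            "Only request one call at a time: wait for the output of the "
            "previous call before requesting another."
        ),
        "parameters": {
            "type": "object",
            "properties": {
                "code": {"type": "string", "description": "Python source to run."}
            },
            "required": ["code"],
        },
    },
}]
```

Whether the model respects that wording is not guaranteed, so it may need to be combined with the system-prompt and client-side approaches discussed elsewhere in this thread.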

I tried adding a pretty explicit statement in the system prompt, but it’s still running them in parallel.

This is on the Chat Completions endpoint, not the Assistants API, btw.

Hey,

Interesting case! Can you provide some more concrete info about it?

Here are some initial thoughts (I don’t have all the information, so sorry in advance if some of these aren’t relevant):

  • I’d try a very thorough explanation of the logic behind the function calling in the system message (something more concrete than “don’t call in parallel”).
  • Maybe try to refine the function description to refer to the cases when it shouldn’t be called?
  • Maybe it’s possible to add your own logic for when to actually execute each of the returned calls, according to your needs (and the provided response), handling them one at a time; see the sketch after this list.
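A minimal sketch of that last idea, assuming the current openai Python SDK: if the model proposes several tool calls in one turn, keep only the first, execute it, feed its output back, and let the model re-plan with that output in context. The run_code function, the tool schema, and the model name are hypothetical stand-ins for the code-execution setup described earlier in the thread:

```python
import json

from openai import OpenAI

client = OpenAI()

tools = [{
    "type": "function",
    "function": {
        "name": "run_code",  # hypothetical code-execution function
        "description": "Execute a snippet of Python and return its output.",
        "parameters": {
            "type": "object",
            "properties": {"code": {"type": "string"}},
            "required": ["code"],
        },
    },
}]

def run_code(code: str) -> str:
    # Placeholder executor; swap in your sandboxed runner.
    return "<output of the executed code>"

messages = [
    {"role": "system", "content": "Run code one step at a time and wait for the "
                                  "output of each step before deciding the next one."},
    {"role": "user", "content": "Load data.csv, then plot its first column."},
]

while True:
    response = client.chat.completions.create(
        model="gpt-4-turbo", messages=messages, tools=tools
    )
    msg = response.choices[0].message
    if not msg.tool_calls:
        print(msg.content)
        break

    # Even if the model proposed several calls in parallel, execute only the
    # first one and discard the rest; the model re-plans on the next turn
    # with the first call's output already in context.
    call = msg.tool_calls[0]
    messages.append({
        "role": "assistant",
        "content": msg.content,
        "tool_calls": [call.model_dump()],  # keep just the executed call
    })
    result = run_code(**json.loads(call.function.arguments))
    messages.append({"role": "tool", "tool_call_id": call.id, "content": result})
```

One design note: the assistant message appended to the history is rewritten to contain only the tool call that was actually executed, so the API is not left expecting results for calls that were skipped. Separately, if your API version supports it, the Chat Completions endpoint also has a parallel_tool_calls request parameter that can be set to false to disable parallel calls outright.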