Handling Function Calls when using the Assistants API

Hello community!

Is there any prompt I could use to instruct an Assistant that Function Calls need to be made one by one?

That is to say, if the user prompt requires the Assistant to perform, say, two functions, it should perform the first one, wait for the Function Output, then perform the second one, and so on.

Thanks in advance! :slight_smile:


Out of curiosity, why can’t you do this in the back-end?

Is the workflow like this?

User query → Function Call #1 → Response → Next user query → Function Call #2 w/ data from #1


Hey Ronald!

I could handle this in the back-end indeed, but I’m looking for a quicker solution first, hence my question in this post.

The expected workflow would be more like this:

User query (where two function calls are required) → Function Call 1 → Response → Function Call 2 → Response → Run completed and message sent to user

In other words, I want the assistant to perform every required Function Call on a one-by-one basis.

Yes, that is possible. This is how to manage the awaiting required actions one by one:

[code screenshot]

This is the vector of functions loaded dynamically above:

[image]
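(In case the screenshots don’t load: a minimal sketch of the idea, assuming the tool-call shape the Assistants API reports under `required_action` and hypothetical handler functions of your own.)

```python
import json

def handle_required_action(tool_calls, handlers):
    """Execute the required tool calls strictly one by one, in order,
    and collect outputs in the shape submit_tool_outputs() expects.

    tool_calls: dicts with "id" and "function" ({"name", "arguments"})
    handlers:   mapping from function name to a Python callable
    """
    outputs = []
    for call in tool_calls:
        name = call["function"]["name"]
        args = json.loads(call["function"]["arguments"])
        result = handlers[name](**args)  # wait for this call before the next
        outputs.append({"tool_call_id": call["id"],
                        "output": json.dumps(result)})
    return outputs
```

The returned list would then be passed to `client.beta.threads.runs.submit_tool_outputs(...)` as `tool_outputs` (naming per the openai Python SDK; check your SDK version).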

Another reason why assistants and parallel tools “stink”

Right in the language of the new “multitool” (another specification you aren’t told about) is language telling the AI to “call the functions in parallel with this even if the function says otherwise”. A real middle finger to the API developer, just like the requirement for matching tool IDs, and the matching assistant output and tool return being added only one way to the conversation.

The new models were recently tuned to favor parallel tool calls, in addition to calling functions even when not needed: “Use this function to run multiple tools simultaneously, but only if they can operate in parallel. Do this even if the prompt suggests using the tools sequentially.”

So, because classic functions as a method are not exposed, you’ll have to break the AI of this yourself with instructions and specifications. Specify your own functions as “this function cannot run in parallel”… “This function can only be used if xxx function was used immediately before and returned a value successfully”…
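For example, a sketch of what that looks like in a tools payload (the function names and parameter schemas here are made up; only the description wording matters):

```python
# Hypothetical tool definitions that push back against parallel calling
# by stating ordering constraints in each function's description.
tools = [
    {"type": "function",
     "function": {
         "name": "create_order",
         "description": ("Create a new order. This function cannot run in "
                         "parallel with any other function. Call it alone "
                         "and wait for its result."),
         "parameters": {"type": "object",
                        "properties": {"item": {"type": "string"}},
                        "required": ["item"]}}},
    {"type": "function",
     "function": {
         "name": "send_confirmation",
         "description": ("Send an order confirmation. This function can only "
                         "be used if create_order was used immediately before "
                         "and returned a value successfully. Never call it in "
                         "the same step as create_order."),
         "parameters": {"type": "object",
                        "properties": {"order_id": {"type": "string"}},
                        "required": ["order_id"]}}},
]
```

Depending on your SDK version, newer chat completions and runs endpoints may also expose a `parallel_tool_calls=False` option that disables the multitool outright; check the current API reference.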

The multitool injection into your AI assistant and chat completions:

```typescript
## multi_tool_use

// This tool serves as a wrapper for utilizing multiple tools. Each tool that can be used must be specified in the tool sections. Only tools in the functions namespace are permitted.
// Ensure that the parameters provided to each tool are valid according to that tool's specification.
namespace multi_tool_use {

// Use this function to run multiple tools simultaneously, but only if they can operate in parallel. Do this even if the prompt suggests using the tools sequentially.
type parallel = (_: {
  // The tools to be executed in parallel. NOTE: only functions tools are permitted
  tool_uses: {
    // The name of the tool to use. The format should either be just the name of the tool, or in the format namespace.function_name for plugin and function tools.
    recipient_name: string,
    // The parameters to pass to the tool. Ensure these are valid according to the tool's own specifications.
    parameters: object,
  }[],
}) => any;

} // namespace multi_tool_use
```

I think that is really something to be prompted. Both in the Assistant instructions and in the description of each function, you can spell out what you want done and how. I have functions ‘check_if_exist’ and ‘add_new’, for example, and they are logically never to be called together; ‘add_new’ only comes after ‘check_if_exist’.
What problem are you having with defining this in the instructions? Share more details!
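A sketch of how that could look for those two functions (the parameter schemas and exact wording are assumptions; the point is the sequencing language in both the instructions and the descriptions):

```python
# Hypothetical Assistant definition: the sequencing rule is stated both
# in the instructions and in each function's description.
instructions = (
    "When asked to add a record, always call check_if_exist first and wait "
    "for its output. Only call add_new if check_if_exist reports the record "
    "is missing. Never call both functions in the same step."
)

tools = [
    {"type": "function",
     "function": {
         "name": "check_if_exist",
         "description": ("Check whether a record already exists. Call this "
                         "alone and wait for its result before any other call."),
         "parameters": {"type": "object",
                        "properties": {"key": {"type": "string"}},
                        "required": ["key"]}}},
    {"type": "function",
     "function": {
         "name": "add_new",
         "description": ("Add a new record. Only call this after "
                         "check_if_exist has returned and found nothing."),
         "parameters": {"type": "object",
                        "properties": {"key": {"type": "string"},
                                       "value": {"type": "string"}},
                        "required": ["key", "value"]}}},
]
```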

Thanks all for your comments!

@jlvanhulst, I have instructed the model to “call function X first, then call function Y second”, but it seems to just ignore that instruction and adds both function calls as required actions to the Run.

I mean, it’s a nice thing from my point of view, as it allows tasks to be performed in parallel when possible (say, tasks whose results are not co-dependent). But for cases where functions need to be performed sequentially, it can be messy to handle.

I’ve been thinking about this, and I’m now also aware that the Assistants API is just not designed (or at least that’s my impression) to work in a way that:

  • A thread runs and determines that two function calls need to be performed.
  • It falls into “requires_action” status with the first function call.
  • It receives a function output for that function call.
  • It continues running.
  • It falls into “requires_action” status with the second function call.
  • It receives a function output.
  • It continues running.
  • It falls into “completed” status.

The above flow is what I would expect, either by giving the proper prompts to the model (which is what I was asking in the first instance :sweat_smile:) or just by having the Thread Run behave like that.

Just to clarify, I understand that this can be handled at the back-end level (as @mouimet suggested, for example); however, I’m just looking for a way to avoid implementing such a workaround.
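For reference, the flow above can be driven by an ordinary polling loop. This is a sketch, assuming the run and required_action object shapes of the openai Python SDK; the client and a tool-executing callable are passed in, so nothing here is tied to one set of functions:

```python
import json
import time

def drive_run(client, thread_id, run_id, execute_tool, poll_seconds=1):
    """Poll a run; whenever it requires action, execute the requested
    tool call(s) and submit the outputs, until the run finishes."""
    while True:
        run = client.beta.threads.runs.retrieve(
            thread_id=thread_id, run_id=run_id)
        if run.status == "requires_action":
            outputs = []
            for call in run.required_action.submit_tool_outputs.tool_calls:
                result = execute_tool(call.function.name,
                                      json.loads(call.function.arguments))
                outputs.append({"tool_call_id": call.id,
                                "output": json.dumps(result)})
            client.beta.threads.runs.submit_tool_outputs(
                thread_id=thread_id, run_id=run_id, tool_outputs=outputs)
        elif run.status in ("completed", "failed", "cancelled", "expired"):
            return run
        else:
            time.sleep(poll_seconds)
```

If the model cooperates and emits one function call per requires_action, this loop naturally produces the call → output → call → output sequence described above.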

Christian,
That is exactly how it can work. I have several Assistants that work that way. Feel free to share more details on the functions and the prompts!


I still don’t understand. This doesn’t seem to be the model’s responsibility.

If Function A depends on the results of Function B then you want to build a dependency graph and manage this in the back-end.

In fact, if Function A ALWAYS requires Function B, then you only need to specify a single function.
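As a sketch of what that back-end dependency management can look like (the function names here are placeholders), a small dependency table plus a recursive resolver is often enough; no cycle handling here, so this assumes the graph is acyclic:

```python
# Minimal back-end sequencing: run each function only after the
# functions it depends on have produced their results.
DEPENDS_ON = {
    "function_b": [],              # B has no prerequisites
    "function_a": ["function_b"],  # A always needs B's result first
}

def run_in_order(requested, handlers, depends_on=DEPENDS_ON):
    """Resolve dependencies depth-first and call each handler once,
    passing it the results of its prerequisites."""
    results = {}
    def run(name):
        if name in results:
            return results[name]
        prereqs = {dep: run(dep) for dep in depends_on.get(name, [])}
        results[name] = handlers[name](prereqs)
        return results[name]
    for name in requested:
        run(name)
    return results
```

With this in place, the model only ever has to request the top-level function; the back-end fills in the rest in the right order.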

This is not a workaround. This is the solution. A workaround, by definition, is trying to deviate GPT from its intended path.

It could be that I have simply never dealt with such a situation (I do have functions that depend on others, however). Could you clarify your use-case a bit more? You have said that it’s possible to do this entirely in the back-end, but prefer to spend extra tokens and time having GPT handle it?


Hey guys, thanks again for your interest in this topic.

After doing some tests, and by following @jlvanhulst’s advice, I’ve managed to get the Assistant to call the functions one by one!

@RonaldGRuckus, sorry that I didn’t give enough details. I say “workaround” because the back-end (which in this specific case is actually middleware) is a chatbot-building SaaS platform with Call API functions available, which I’m using to exchange messages with the Assistants API to create responses to users’ utterances. That’s why I refer to this as a workaround, even though I understand that the actual solution would be to handle it with some code of my own.

Have a great weekend guys :slight_smile:


Cool stuff and glad you got it working!

One of the things I am still playing around with is the ‘tying’ of Assistants. So far, the way it’s playing out for me is that Assistants get called sequentially, i.e. when a run completes, that triggers a follow-up task (and that is orchestrated in code, not in prompts).
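For what it’s worth, that code-level orchestration can be very small. A sketch, where `run_assistant` is a hypothetical callable standing in for whatever creates a run, waits for it to complete, and returns its result:

```python
def chain_assistants(steps, run_assistant):
    """Run a list of (assistant_id, task) pairs strictly in sequence,
    feeding each completed run's result into the next task."""
    result = None
    for assistant_id, task in steps:
        # each run only starts once the previous one has completed
        result = run_assistant(assistant_id, task, previous=result)
    return result
```

The sequencing lives entirely in this loop, so no prompt has to convince the model to wait for anything.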