Is there any prompt I could use to instruct an Assistant that function calls need to be made one by one?
That is to say, if the user prompt requires the Assistant to perform, let’s say, two functions, it should perform the first one, wait for the function output, then perform the second one, and so on.
Another reason why Assistants and parallel tool calling “stink”
Right in the language of the new “multitool” (another specification you aren’t told about) is wording telling the AI to “call the functions in parallel with this even if the function says otherwise”. A real middle finger to the API developer, just like the requirement for matching tool IDs and matching assistant output and tool return, added only one way to the conversation.
The new models were recently tuned to favor parallel tool calls, on top of calling functions even when not needed: “Use this function to run multiple tools simultaneously, but only if they can operate in parallel. Do this even if the prompt suggests using the tools sequentially.”
So, because classic functions as a method are no longer exposed, you’ll have to rein the AI back in yourself with instructions and specifications. Specify your own functions with language like “this function cannot run in parallel” or “this function can only be used if xxx function was used immediately before and returned a value successfully”.
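For illustration, a specification along those lines might look like the following; the function names, parameters, and wording are made up, not taken from anyone’s real setup:

```python
# Hypothetical tool definitions: the sequencing constraints exist only as
# natural language in the descriptions, which is all the model gets to see.
tools = [
    {
        "type": "function",
        "function": {
            "name": "fetch_record",
            "description": (
                "Fetch a record by id. This function cannot run in parallel "
                "with any other tool; call it on its own and wait for its output."
            ),
            "parameters": {
                "type": "object",
                "properties": {"record_id": {"type": "string"}},
                "required": ["record_id"],
            },
        },
    },
    {
        "type": "function",
        "function": {
            "name": "update_record",
            "description": (
                "Update a record. This function can only be used if fetch_record "
                "was called immediately before and returned a value successfully."
            ),
            "parameters": {
                "type": "object",
                "properties": {
                    "record_id": {"type": "string"},
                    "payload": {"type": "object"},
                },
                "required": ["record_id", "payload"],
            },
        },
    },
]
```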
The multi_tool_use injection into your AI assistant and chat completions:
## multi_tool_use
// This tool serves as a wrapper for utilizing multiple tools. Each tool that can be used must be specified in the tool sections. Only tools in the functions namespace are permitted.
// Ensure that the parameters provided to each tool are valid according to that tool's specification.
namespace multi_tool_use {
// Use this function to run multiple tools simultaneously, but only if they can operate in parallel. Do this even if the prompt suggests using the tools sequentially.
type parallel = (_: {
// The tools to be executed in parallel. NOTE: only functions tools are permitted
tool_uses: {
// The name of the tool to use. The format should either be just the name of the tool, or in the format namespace.function_name for plugin and function tools.
recipient_name: string,
// The parameters to pass to the tool. Ensure these are valid according to the tool's own specifications.
parameters: object,
}[],
}) => any;
} // namespace multi_tool_use
I think that is really something to be prompted. Both in the Assistant instructions and in the description of the function you can spell out what you want done and how. I have functions ‘check_if_exist’ and ‘add_new’, for example, and they are logically never to be called together; ‘add_new’ only ever comes after ‘check_if_exist’.
What problem are you having with defining this in the instructions? Share more details!
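To give a rough sketch of where those two pieces of prompting can live; the instruction wording and model name here are illustrative, not something guaranteed to work:

```python
from openai import OpenAI

client = OpenAI()

# Hypothetical assistant: the ordering rules are stated both in the
# assistant-level instructions and in each function's own description.
assistant = client.beta.assistants.create(
    name="sequential-caller",
    model="gpt-4o",
    instructions=(
        "Call tools one at a time and wait for each tool's output before "
        "deciding on the next call. Never call check_if_exist and add_new "
        "in the same step; add_new may only be called after check_if_exist "
        "has returned successfully."
    ),
    tools=[
        {"type": "function", "function": {
            "name": "check_if_exist",
            "description": "Check whether a record exists. Cannot run in parallel with other tools.",
            "parameters": {"type": "object", "properties": {"record_id": {"type": "string"}}, "required": ["record_id"]},
        }},
        {"type": "function", "function": {
            "name": "add_new",
            "description": "Add a new record. Only call this after check_if_exist has returned.",
            "parameters": {"type": "object", "properties": {"payload": {"type": "object"}}, "required": ["payload"]},
        }},
    ],
)
```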
@jlvanhulst, I have instructed the model to “call function X first, then call function Y second”, but it seems to just ignore that instruction and adds two “required actions” to a Run.
I mean, it’s a nice thing from my point of view, as it allows tasks to be performed in parallel when possible (say, tasks whose results are not co-dependent). But for functions that need to be performed sequentially, it can be messy to handle.
I’ve been thinking on this and I’m now also aware that the Assistants API is just not designed (or at least that’s my impression) to work in a way that:

1. A thread runs and determines that two function calls need to be performed.
2. Falls into “requires_action” status with the first function call.
3. Receives a function output for that function call.
4. Continues running.
5. Falls into “requires_action” status with the second function call.
6. Receives a function output.
7. Continues running.
8. Falls into “completed” status.
The above flow is what I would expect either by giving the proper prompts to the model (which is what I was asking about in the first instance), or just by having the Thread Run behave like that.
Just to clarify, I understand that this can be handled at the back-end level (as @mouimet suggested, for example), however I’m just looking for a way to avoid implementing such a workaround.
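For reference, this is roughly the loop shape I have in mind, even if I’d rather not build it into the middleware; the handler mapping and names are placeholders, and it is only meant to make the expected sequence concrete:

```python
import json
import time

from openai import OpenAI

client = OpenAI()

def run_sequentially(thread_id: str, assistant_id: str, handlers: dict):
    """Poll a run and answer each requires_action as it comes, one at a time."""
    run = client.beta.threads.runs.create(thread_id=thread_id, assistant_id=assistant_id)
    while True:
        run = client.beta.threads.runs.retrieve(thread_id=thread_id, run_id=run.id)
        if run.status == "requires_action":
            tool_outputs = []
            for call in run.required_action.submit_tool_outputs.tool_calls:
                fn = handlers[call.function.name]  # placeholder: your own callables
                result = fn(**json.loads(call.function.arguments))
                tool_outputs.append({"tool_call_id": call.id, "output": json.dumps(result)})
            # Submitting the outputs moves the run back to in_progress; if the
            # model then decides another call is needed, requires_action comes around again.
            client.beta.threads.runs.submit_tool_outputs(
                thread_id=thread_id, run_id=run.id, tool_outputs=tool_outputs
            )
        elif run.status in ("completed", "failed", "cancelled", "expired"):
            return run
        else:
            time.sleep(1)  # queued / in_progress
```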
Christian,
That is exactly how it can work. I have several Assistants that work that way. Feel free to share more details on the functions and the prompts!
I still don’t understand. This doesn’t seem to be the model’s responsibility.
If Function A depends on the results of Function B then you want to build a dependency graph and manage this in the back-end.
In fact, if Function A ALWAYS requires Function B, then you only need to specify a single function.
This is not a workaround. This is the solution. A workaround, by definition, is trying to deviate GPT from its intended path.
It could be that I have just never dealt with such a situation (I do have functions that depend on others, however). Could you clarify your use case a bit more? You have said that it’s possible to do this entirely in the back-end, but you prefer to spend extra tokens and time to have GPT handle it?
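To make that “single function” point concrete, this is the sort of wrapper I mean; the names and bodies are hypothetical:

```python
# Hypothetical backend: the Assistant is only given "upsert_record", while the
# existence check and the insert are chained in ordinary code, so sequencing
# never depends on the model.
def check_if_exist(record_id: str) -> bool:
    ...  # look the record up in your own store

def add_new(record_id: str, payload: dict) -> dict:
    ...  # insert the record

def upsert_record(record_id: str, payload: dict) -> dict:
    """Single tool exposed to the model; ordering is enforced here, not by prompts."""
    if check_if_exist(record_id):
        return {"status": "already_exists", "record_id": record_id}
    return add_new(record_id, payload)
```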
Hey guys, thanks again for your interest in this topic.
After doing some tests, and by following @jlvanhulst’s advice, I’ve managed to get the Assistant to call the functions one by one!
@anon10827405, sorry that I didn’t give enough details. I call it a workaround because the back-end (which in this specific case is actually a middleware) is a chatbot-building SaaS platform with ‘Call API’ functions available, which I’m using to exchange messages with the Assistants API to create responses to users’ utterances. That’s why I refer to this as a workaround, even though I understand that the proper solution would be to handle it with a bit of code.
One of the things I am still playing around with is the ‘tying’ of Assistants. So far the way it’s playing out for me is that assistants get called sequentially, i.e. when a run completes, that triggers a follow-up task (and that is orchestrated in code, not in prompts), as in the sketch below.
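A minimal sketch of what that code-level orchestration can look like, assuming a recent openai Python SDK (which has create_and_poll), placeholder assistant IDs, and assistants that don’t need tool outputs submitted mid-run:

```python
from openai import OpenAI

client = OpenAI()

thread = client.beta.threads.create()
client.beta.threads.messages.create(thread_id=thread.id, role="user", content="...")

# The first assistant runs to completion; only then is the follow-up task triggered.
first = client.beta.threads.runs.create_and_poll(thread_id=thread.id, assistant_id="ASSISTANT_A_ID")
if first.status == "completed":
    second = client.beta.threads.runs.create_and_poll(thread_id=thread.id, assistant_id="ASSISTANT_B_ID")
```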
I do have the same problem. Using the prompting strategy works in that the assistant does indeed call the first function, waits for the results, and only then tries to call the second function. However, when this happens, the run gets stuck, probably in the “requires_action” state. The second function does not get called and no new messages can be added to that thread. Any help would be greatly appreciated.
Here is a short example use case:
You > Call the list events function to list my events on Friday. Then you MUST wait for the results and list my event. Only after you receive the results and listed my Events, call the delete event function to delete my event.
run_I6NXVUvOg9d1SZcTEeOJgtl2
requires action
Function called: list_calendar_events
Arguments: {"time_min":"2024-05-31T00:00:00Z","time_max":"2024-06-01T00:00:00Z","calendar_id":"primary"}
Assistant > Okay, Mr. Lutz. Here’s your event for Friday:
Test Event at 19:00 (ID: pq62k2kngpopa6s6ci1tgoq3v0_20240531T170000Z)
Perfect, right? Now, let’s go ahead and obliterate it.
One moment please.
// here is where the assistant/run gets stuck. If I try to add another message to the thread, the following happens:
You > Did you delete it?
Traceback (most recent call last):
openai.BadRequestError: Error code: 400 - {'error': {'message': "Can't add messages to thread_aLCdY9GYjAfKd9c0DDJ8NwVj while a run run_I6NXVUvOg9d1SZcTEeOJgtl2 is active.", 'type': 'invalid_request_error', 'param': None, 'code': None}}
It sounds to me like you don’t actually handle the second requires_action properly. It clearly enters that state (hence the 400 error). Check your loop? Or share more code.
Thank you very much for your swift reply! Yes, I also figured there must be something wrong with the requires_action handling. I tested it in the playground and it worked there. It’s probably something in my event handler… I just could not find out what exactly the problem is. I’m still very new to programming in Python.
Here is my current setup for the event handler:
```python
def on_event(self, event):
    logging.debug(f"Event received: {event.event}")
    if event.event == 'thread.run.requires_action':
        run_id = event.data.id  # Retrieve the run ID from the event data
        logging.debug(f"Handling requires action for run_id: {run_id}")
        self.handle_requires_action(event.data, run_id)
    elif event.event == 'thread.run.text_delta':
        self.handle_text_delta(event.data)
    elif event.event == 'thread.run.completed':
        self.response_complete.set()
        self.handle_response_complete()
    elif event.event == 'thread.run.failed':
        logging.error(f"Run failed with data: {event.data}")

def handle_requires_action(self, data, run_id):
    logging.debug(f"Requires action data: {data}")
    tool_outputs = []
    for tool in data.required_action.submit_tool_outputs.tool_calls:
        function_name = tool.function.name
        logging.debug(f"Calling function: {function_name} with arguments: {tool.function.arguments}")
        if function_name in self.function_handlers:
            handler = self.function_handlers[function_name]
            try:
                tool_output = handler(tool.function.arguments, tool.id)
                logging.debug(f"Tool output: {tool_output}")
                tool_outputs.append(tool_output)
            except Exception as e:
                logging.error(f"Error handling function {function_name}: {e}")
                tool_outputs.append({
                    "tool_call_id": tool.id,
                    "output": f"Error: {str(e)}"
                })
        else:
            logging.error(f"Unsupported function: {function_name}")
            tool_outputs.append({
                "tool_call_id": tool.id,
                "output": f"Function {function_name} is not supported."
            })
    # Submit all tool_outputs at the same time
    logging.debug(f"Submitting tool outputs: {tool_outputs}")
    self.submit_tool_outputs(tool_outputs, run_id)
    logging.debug("Submitted tool outputs, checking for next actions.")

def submit_tool_outputs(self, tool_outputs, run_id):
    logging.debug(f"Submitting tool outputs for run_id: {run_id}")
    with self.client.beta.threads.runs.submit_tool_outputs_stream(
        thread_id=self.thread_id,
        run_id=run_id,
        tool_outputs=tool_outputs,
        event_handler=self,
    ) as stream:
        for text in stream.text_deltas:
            logging.debug(f"Stream text delta: {text}")
            print(text, end="", flush=True)
            with self.text_lock:
                self.accumulated_text += text  # Treat text as a string for TTS
    logging.debug("Completed streaming tool outputs.")
    logging.debug(f"Accumulated text: {self.accumulated_text}")
    print()
```
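One thing worth double-checking here, offered as a guess rather than a confirmed diagnosis: `submit_tool_outputs_stream` is passed the same handler instance (`event_handler=self`), and as far as I can tell the SDK’s streaming event handlers are meant to be single-use, so reusing one for the nested stream may be what leaves the run stuck on the second tool call. A minimal sketch of giving the nested stream its own handler instead; `make_event_handler()` is a hypothetical helper standing in for however you construct yours:

```python
def submit_tool_outputs(self, tool_outputs, run_id):
    logging.debug(f"Submitting tool outputs for run_id: {run_id}")
    # Hypothetical: build a fresh handler the same way the original one was
    # built, instead of reusing `self`, so the nested stream gets its own
    # single-use event handler.
    fresh_handler = self.make_event_handler()
    with self.client.beta.threads.runs.submit_tool_outputs_stream(
        thread_id=self.thread_id,
        run_id=run_id,
        tool_outputs=tool_outputs,
        event_handler=fresh_handler,
    ) as stream:
        # Let the stream run to completion; any further requires_action events
        # are then delivered to the fresh handler.
        stream.until_done()
    logging.debug("Completed streaming tool outputs.")
```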