Hi OpenAI Community,
I’m currently developing an app that uses OpenAI assistants, and I have a question regarding how assistants can call tools in a smart and dynamic manner.
In my app, I have a Location assistant that uses multiple tools. The assistant is designed to perform tasks like:
- Web search, using an API to gather location details.
- The MapKit framework, to provide address links or directions based on the results of a given search query (rough tool definitions are sketched below).
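For context, here is roughly how I register the two tools with the assistant. This is a minimal Python sketch against the Assistants API; the tool names (`web_search`, `generate_map_link`), the parameter schemas, and the model string are placeholders for my actual setup (the real map link generation happens client-side with MapKit):

```python
from openai import OpenAI

client = OpenAI()

# Tool names, schemas, and the model string below are placeholders for my setup.
assistant = client.beta.assistants.create(
    model="gpt-4o",
    instructions=(
        "You are a location assistant. Use web_search to find details about a place, "
        "and generate_map_link once you have an address to link to."
    ),
    tools=[
        {
            "type": "function",
            "function": {
                "name": "web_search",
                "description": "Search the web for details about a location.",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "query": {"type": "string", "description": "Search query text"}
                    },
                    "required": ["query"],
                },
            },
        },
        {
            "type": "function",
            "function": {
                "name": "generate_map_link",
                "description": "Generate a map link or directions for a known address.",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "address": {"type": "string", "description": "Full street address"}
                    },
                    "required": ["address"],
                },
            },
        },
    ],
)
```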
I want to know if it’s possible for the assistant to call tools in parallel when it’s efficient (for example, fetching web data while also preparing map resources), but also call them sequentially when needed (like waiting for web search results before generating the map link).
More specifically:
- Can the assistant intelligently decide the order and timing of tool calls based on the task? For example, it could:
  - First perform a web search for a location.
  - Then, after getting results, use the MapKit tool to generate a map link for the found location.
- Is there a best practice for managing this kind of tool orchestration? Is there a way for the assistant to execute tasks in parallel when no dependencies exist, but sequentially when one tool relies on another's output? (My current run loop is sketched below for reference.)
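For reference, here is roughly how I handle the run loop today, continuing from the setup sketch above (`client` and `assistant` come from that snippet, and `execute_tool` is a placeholder dispatcher for my real web search and MapKit code). My understanding is that when the model treats two calls as independent, they arrive together in a single `requires_action` step and can be executed concurrently before the outputs are submitted in one batch, whereas a dependent call like the map link simply shows up on a later turn after the model has read the search results. Please correct me if that mental model is wrong:

```python
import json

# Placeholder conversation; in the app this comes from the user.
thread = client.beta.threads.create(
    messages=[{"role": "user", "content": "Find the Louvre and give me a map link."}]
)

run = client.beta.threads.runs.create_and_poll(
    thread_id=thread.id,
    assistant_id=assistant.id,  # assistant from the setup sketch above
)

while run.status == "requires_action":
    tool_calls = run.required_action.submit_tool_outputs.tool_calls

    # If the model considers calls independent, several can appear here at once;
    # they could be executed concurrently and their outputs submitted together.
    outputs = []
    for call in tool_calls:
        args = json.loads(call.function.arguments)
        result = execute_tool(call.function.name, args)  # placeholder dispatcher
        outputs.append({"tool_call_id": call.id, "output": json.dumps(result)})

    run = client.beta.threads.runs.submit_tool_outputs_and_poll(
        thread_id=thread.id, run_id=run.id, tool_outputs=outputs
    )

# A dependent call (generate_map_link needing the web_search result) arrives on a
# later requires_action turn, after the model has seen the earlier tool output.
print(run.status)
```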
I would appreciate any insights or advice on how to approach this tool execution strategy. Ideally, I'd like the assistant to decide dynamically whether it needs to wait for one tool's output before calling another.
Thank you in advance!