We have an assistant with access to several functions. Some of these return data to the assistant, and, depending on which function is called, I want to provide additional instructions for how to deal with that data. You can provide additional instructions when creating a run, but I don’t see a similar parameter for the submit_tool_outputs method.
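For context, this is roughly the asymmetry as I understand it in the Python SDK (a sketch against the beta Assistants API; the IDs are placeholders and the polling loop is omitted):

```python
import json
from openai import OpenAI

client = OpenAI()
THREAD_ID = "thread_abc123"   # placeholder IDs
ASSISTANT_ID = "asst_abc123"

# Creating a run accepts per-run instructions...
run = client.beta.threads.runs.create(
    thread_id=THREAD_ID,
    assistant_id=ASSISTANT_ID,
    instructions="Extra guidance that applies to this run only.",
)

# ...but once the run pauses with status "requires_action" (polling
# omitted for brevity), submit_tool_outputs only accepts the outputs
# themselves - there is no instructions-style parameter here.
tool_call = run.required_action.submit_tool_outputs.tool_calls[0]
run = client.beta.threads.runs.submit_tool_outputs(
    thread_id=THREAD_ID,
    run_id=run.id,
    tool_outputs=[
        {"tool_call_id": tool_call.id, "output": json.dumps({"status": "ok"})},
    ],
)
```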
I have tried loading instructions for all the use cases into the base assistant instructions, e.g.:
You are a helpful assistant. These are the most common user interactions and specific instructions for each.
User interaction 1: Use the data returned by function X to …
User interaction 2: Use the data returned by function Y to …
But it is not reliable: sometimes it follows the instructions, sometimes not. My next idea is to inject more detailed instructions directly at the function call, but I’m unclear on the best way to achieve that.
My current brainstorm:
1. Append an instructions field to the tool output before submitting it back to the assistant (sketched after this list). This is probably what I will try next, but I expect it to be similarly unreliable, since the assistant will likely treat the field as data rather than instructions.
2. I found this thread asking something similar, but I don’t love the idea of building a new assistant for each function: Overriding instructions with Function Calling?
3. Some sort of routing layer that catches the last user message, predetermines the function call, and injects the correct custom instruction before creating a new run. Feels overly complicated.
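To make idea 1 concrete, this is roughly what I have in mind (FUNCTION_INSTRUCTIONS and build_tool_output are hypothetical names of mine, not SDK features):

```python
import json

# Hypothetical lookup from function name to per-function guidance.
FUNCTION_INSTRUCTIONS = {
    "function_x": "Use the data returned by function X to …",
    "function_y": "Use the data returned by function Y to …",
}

def build_tool_output(tool_call, result):
    """Wrap the raw function result together with per-function
    instructions before passing it to submit_tool_outputs."""
    payload = {
        "data": result,
        "instructions": FUNCTION_INSTRUCTIONS.get(tool_call.function.name, ""),
    }
    return {"tool_call_id": tool_call.id, "output": json.dumps(payload)}
```

Whether the model actually honors a field like that is exactly the open question, of course.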
Other ideas? Am I missing something obvious?
Have you tried expanding the instructions INSIDE the function? Like the description field? I use that extensively to describe how to use the functions and what to do with the parameters. I also use the description for certain parameter fields where it makes sense.
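For example, something like this (a sketch; the function name and the guidance text are placeholders, only the tool-definition schema itself is standard):

```python
# A sketch of a tool definition that pushes the "what to do with the
# result" guidance into the description fields. Names are placeholders.
tool = {
    "type": "function",
    "function": {
        "name": "get_order_status",
        "description": (
            "Look up the status of a customer's order. "
            "After receiving the result, summarize the status in one "
            "sentence and, if the order is delayed, apologize and offer "
            "to escalate."
        ),
        "parameters": {
            "type": "object",
            "properties": {
                "order_id": {
                    "type": "string",
                    "description": "The order ID exactly as the user "
                                   "provided it; never guess or invent one.",
                },
            },
            "required": ["order_id"],
        },
    },
}
```

In my experience the model reads those descriptions every time it decides how to call the tool, so guidance placed there tends to stick better than a paragraph buried in the system prompt.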
Furthermore, if anything, I’d recommend looking very critically at your assistant prompt. It is probably not detailed and clear enough, and I’m saying that based on my own experience doing the same thing you are doing. I have several assistants running that have 10 functions and a prompt that is more than two pages long, and I still keep updating the prompt every week.
If you had to explain to a 10-year-old how to do what you want the assistant to do, would he or she be able to do it? Would I understand your prompt if I read it without knowing anything else? I have come to believe we’re at best mediocre at giving clear instructions.
That’s why I asked if I was missing something obvious - I totally overlooked the description field in the function definition. Will start there.
The instructions I provided above are not real - the real ones are definitely more detailed. GPT-4 does a better job of following very detailed base instructions, but even that fails sometimes in my (limited) experience.
For example, experimenting with custom GPTs before moving to the assistant, I had an action to query a vector database. The GPT was instructed to always augment the user’s query with related terms before submitting it to the API, but it never did. It’s a bit of a different use case than interpreting returned data, and that particular problem would be relatively easy to solve in code when working with the Assistants API.
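There I’d just do the augmentation in application code before the query ever reaches the vector store, something like this (the synonym table and function name are made up):

```python
# Hypothetical synonym table; in practice this could be a cheap
# completion call or an embeddings-based lookup instead.
RELATED_TERMS = {
    "invoice": ["billing", "receipt", "payment"],
}

def augment_query(query: str) -> str:
    """Expand the user's query with related terms in code, so the
    model never gets a chance to skip the augmentation step."""
    extras = [
        term
        for word in query.lower().split()
        for term in RELATED_TERMS.get(word, [])
    ]
    return " ".join([query, *extras])

print(augment_query("where is my invoice"))
# -> "where is my invoice billing receipt payment"
```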
Anyway, thank you!!
Always happy to look at prompts - and yes, I keep switching between GPT-4 and GPT-4-1106 - it feels like they both have specific strengths, or moods of the day.