If my organization has multiple Assistants, do they share context with each other (only within my organization), or is each Assistant aware only of the messages in its own threads? If they don't, it would be nice to make this a configurable option: Assistant A could share a thread ID with Assistant B, so Assistant B also has that context when asked about something different.
The root of this question is that an Assistant can currently have only one json_schema defined for its responses. This is limiting me, because there are times when, depending on context, I want a different json_schema for different responses. A workaround for me would be having another Assistant with a different schema.
Think of the assistant as the container for instructions, your content (file_search) and tools (code_interpreter, etc.). Think of a thread as the container for message history.
In our case, we have one assistant (and corresponding vector store) for each of our customers. Each of their users gets his or her own thread. And we send messages from a user to the corresponding thread.
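A minimal sketch of that layout with the Python SDK; the assistant name, instructions, model, and vector store ID are placeholders, not anything from your setup:

```python
from openai import OpenAI

client = OpenAI()

# One assistant per customer, backed by that customer's vector store
# ("customer_vector_store_id" is a placeholder for an existing vector store).
assistant = client.beta.assistants.create(
    name="Acme Corp Assistant",
    model="gpt-4o",
    instructions="Answer using only Acme Corp's documents.",
    tools=[{"type": "file_search"}],
    tool_resources={"file_search": {"vector_store_ids": ["customer_vector_store_id"]}},
)

# One thread per user; the thread is the container for that user's message history.
thread = client.beta.threads.create()

# Each incoming user message is appended to that user's thread.
client.beta.threads.messages.create(
    thread_id=thread.id,
    role="user",
    content="What does our latest contract say about renewal terms?",
)
```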
You can override things on each run (each time you run the thread after adding a message), such as specific instructions and tool settings. But keeping this 30,000-foot view in mind will help you understand the model.
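For example, a run can carry its own instructions and tools for that turn without changing the assistant's stored configuration. A rough sketch, continuing from the objects above (the override values themselves are made up):

```python
# Per-run overrides apply to this run only; the assistant itself is unchanged.
run = client.beta.threads.runs.create_and_poll(
    thread_id=thread.id,
    assistant_id=assistant.id,
    instructions="For this turn, answer in one short paragraph.",  # per-run instruction override
    tools=[{"type": "file_search"}],                               # per-run tool override
)

if run.status == "completed":
    # The newest message in the thread is the assistant's reply for this run.
    messages = client.beta.threads.messages.list(thread_id=thread.id)
    print(messages.data[0].content[0].text.value)
```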
Right. For me there are two use cases where I want different structures in the Assistant's response. First, the customer is “donating” specific data to the Assistant so it can learn about the customer; the response to that might use one schema. Second, the customer may ask the Assistant for insight into the data they have shared over time, which needs a different schema in the response.
I’m thinking of it as two different endpoints on a REST API, basically: one for donating, one for asking. It seems like this may not be possible with (or within the intended design of) the Assistants API, though.
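As far as I can tell, response_format is one of the things the Runs API lets you override per run (worth verifying against the current docs for your model), so you may not need a second Assistant at all: keep one Assistant, and have each of your “endpoints” pass its own json_schema when it creates the run. A sketch under that assumption, with hypothetical donate/ask schemas:

```python
# Hypothetical schema for the "donate" endpoint: structured capture of shared data.
DONATION_SCHEMA = {
    "type": "json_schema",
    "json_schema": {
        "name": "donation_record",
        "strict": True,
        "schema": {
            "type": "object",
            "properties": {"facts": {"type": "array", "items": {"type": "string"}}},
            "required": ["facts"],
            "additionalProperties": False,
        },
    },
}

# Hypothetical schema for the "ask" endpoint: structured insight over shared data.
INSIGHT_SCHEMA = {
    "type": "json_schema",
    "json_schema": {
        "name": "insight_report",
        "strict": True,
        "schema": {
            "type": "object",
            "properties": {
                "summary": {"type": "string"},
                "highlights": {"type": "array", "items": {"type": "string"}},
            },
            "required": ["summary", "highlights"],
            "additionalProperties": False,
        },
    },
}

def run_with_schema(thread_id: str, assistant_id: str, schema: dict):
    """Create a run that overrides the assistant's response format for this turn only."""
    return client.beta.threads.runs.create_and_poll(
        thread_id=thread_id,
        assistant_id=assistant_id,
        response_format=schema,  # assumed per-run override; check the docs for your model
    )

# "Donate" endpoint: structured capture of what the customer shared.
donate_run = run_with_schema(thread.id, assistant.id, DONATION_SCHEMA)

# "Ask" endpoint: structured insight over what has been shared so far.
insight_run = run_with_schema(thread.id, assistant.id, INSIGHT_SCHEMA)
```

Both runs write to the same thread, so the Assistant keeps one continuous history while each call gets the response structure that endpoint expects.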