I have an app where users can create bots with instructions and function tools that the bot can use.
To implement this, for each bot a user creates, I create an assistant on the OpenAI platform configured with that bot's instructions and tools. It works! The only drawback is keeping the bot in sync with its OpenAI assistant: whenever a bot is created/updated/deleted, I have to create/update/delete the corresponding assistant.
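Roughly, my current setup looks like this (a sketch; the bot dict shape and model name are just placeholders, and I'm using the Python SDK's beta Assistants namespace):

```python
from openai import OpenAI

client = OpenAI()

def create_assistant_for_bot(bot):
    # One OpenAI assistant per bot; must be kept in sync on every bot change.
    assistant = client.beta.assistants.create(
        name=bot["name"],
        instructions=bot["instructions"],
        tools=bot["tools"],  # e.g. [{"type": "function", "function": {...}}]
        model="gpt-4o",
    )
    return assistant.id

def update_assistant_for_bot(bot):
    client.beta.assistants.update(
        bot["assistant_id"],
        instructions=bot["instructions"],
        tools=bot["tools"],
    )

def delete_assistant_for_bot(bot):
    client.beta.assistants.delete(bot["assistant_id"])
```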
But I recently noticed that I can also pass instructions and tools when creating the thread and run. Now I am rethinking the strategy: create a single assistant and reuse it for different bots by passing each bot's instructions and tools at thread/run creation instead.
I am wondering: are there any drawbacks to reusing the same assistant this way?
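For context, the reuse I have in mind is roughly this (a sketch; `SHARED_ASSISTANT_ID` is a placeholder, and in my understanding the overrides are applied via `create_and_run`, which creates the thread and the run together):

```python
from openai import OpenAI

client = OpenAI()

SHARED_ASSISTANT_ID = "asst_..."  # the one assistant reused for every bot

def run_bot(bot, user_message):
    # Per-run overrides: the shared assistant's instructions/tools are
    # replaced by this bot's configuration for this thread + run only.
    return client.beta.threads.create_and_run(
        assistant_id=SHARED_ASSISTANT_ID,
        instructions=bot["instructions"],
        tools=bot["tools"],
        thread={"messages": [{"role": "user", "content": user_message}]},
    )
```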
You may consider passing the client's instructions in a run's `additional_instructions` parameter.
Then use an assistant-level instruction that frames those additional instructions with a GPT-style prefix, so the AI knows the limits and the level of trust to give the user-placed instructions; add code that wraps the additional instructions in a container, and code that strips any container-escaping jailbreak attempts from the additional-instruction input (see the sketch below).
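Something like this (a minimal sketch; the container tag names are just an example, and the assistant-level instructions would need to explain that text inside the container comes from the end customer and cannot override your policies):

```python
import re

DELIM_OPEN = "<client_instructions>"
DELIM_CLOSE = "</client_instructions>"

def sanitize_client_instructions(text: str) -> str:
    # Strip anything resembling the container tags so user-supplied
    # instructions cannot "escape" the container.
    return re.sub(r"</?\s*client_instructions\s*>", "", text, flags=re.IGNORECASE)

def build_additional_instructions(client_text: str) -> str:
    # Wrap the sanitized, lower-trust instructions in a clearly labeled container.
    return f"{DELIM_OPEN}\n{sanitize_client_instructions(client_text)}\n{DELIM_CLOSE}"

# Then pass it per run, e.g.:
# client.beta.threads.runs.create(
#     thread_id=thread_id,
#     assistant_id=assistant_id,
#     additional_instructions=build_additional_instructions(bot_instructions),
# )
```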
There is no limit on running many concurrent threads against a single assistant, other than the fairly low number of API calls per minute you can make to the endpoint before getting rate-limited (60-300?); an assistant is not a model or a reserved resource, just stored context that gets sent along with each run.
Assistants is a hard product to resell. Its cost is high and unpredictable before a run, and a smart consumer will use your value-add only if you are, by miscalculation, less expensive for them than their own direct use of the API.
ChatGPT Plus is essentially Assistants, at a monthly price roughly equal to a heavy user's daily API usage cost. So evaluate the competition.