From my research on here, it looks like if you want to use Assistants you need to create the assistant, with your instructions, every time, then feed it a thread and poll the thread for responses.
I see there is a way to retrieve assistants, but not how to connect an existing one (especially one built on the web platform) to a thread. And as for the new GPTs, I see nothing about them in the API.
So, that's fine; I just want to make sure I've got it right. Is the idea to test your assistant on the web, then copy your instructions into the API to make a new agent every time?
Edit: OR is the idea that these WILL be hooked up to the API once y'all are done making the engine better?
Thank you. OK, so once I call `listAssistants`, is the idea that I then feed that information back into `client.beta.assistants.create` (model, name, instructions, etc.)?
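In other words, something like this? A sketch of what I'm imagining, assuming the openai v1 Python SDK's `client.beta.assistants.list()` call; `find_assistant_id` is just a helper name I made up, not anything from the docs:

```python
def find_assistant_id(client, name):
    """Look up an existing assistant (e.g. one built on the web platform)
    by its display name and return its id, so it can be reused rather than
    re-created with assistants.create every time."""
    page = client.beta.assistants.list(limit=100)
    for assistant in page.data:
        if assistant.name == name:
            return assistant.id
    return None  # not found in the first page of results
```

The id that comes back would then be what I hand to a run, rather than copying the listed fields into a fresh `create` call — is that right?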
First: have enough programming experience to have been able to put together everything an assistant does yourself, such as a Python sandbox callable by a function, an embeddings database that injects knowledge similar to the user's query, a database of knowledge the AI can call on by function, and management of multiple user sessions, accounts, and conversations in a database.
Then, after having that experience, to work with Assistants you'll need to start again and apply a slightly greater amount of programming effort to interact with assistant objects, threads, instructions, messages, runs, function calls, attachments, modifications, and annotations.
So I don't recommend it. But yes, once an assistant is "built", you'd interact with API functions for creating a thread, attaching messages to the thread, placing the thread into a run, monitoring the run for function calls back to you or for errors, and polling repeatedly to find out when the non-streaming answer to the user has been completed by an indeterminate number of model run steps.
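Roughly, that loop looks like the sketch below, written against the openai v1 Python SDK's beta thread/run surface as I recall it (polling interval and the exact set of status strings are assumptions on my part; check them against the current reference before relying on this):

```python
import time

def run_and_poll(client, assistant_id, user_text, interval=1.0):
    """Create a thread, attach the user's message, start a run against an
    existing assistant, and poll until the run leaves its active states."""
    thread = client.beta.threads.create()
    client.beta.threads.messages.create(
        thread_id=thread.id, role="user", content=user_text
    )
    run = client.beta.threads.runs.create(
        thread_id=thread.id, assistant_id=assistant_id
    )
    # Poll: the run may take an indeterminate number of model steps.
    while run.status in ("queued", "in_progress"):
        time.sleep(interval)
        run = client.beta.threads.runs.retrieve(
            thread_id=thread.id, run_id=run.id
        )
    # Caller still has to branch on run.status: "completed" means the reply
    # is waiting in the thread's messages; "requires_action" means a function
    # call back to you; "failed"/"expired" are the error cases.
    return thread, run
```

And every one of those polling round-trips, plus every model step inside the run, is on your bill.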
Sound preposterous?
You haven't paid the bill for a run yet, to find out what preposterous really is.