What is the difference between chat with a prompt and chat through the Assistants API with the same instructions? I can't understand the difference in the output.
Chat with a prompt means using the Chat Completions API: the generative LLM completes your chat based only on the data the model was trained on. Chat through the Assistants API, on the other hand, means the model can also access additional data present in files you have stored in OpenAI storage.
So, as I understand it, Assistants are the same as GPTs (chats with embedded data), and the Assistants API is just an easy way to store data in vector stores?
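Something roughly like this, I assume (a minimal sketch using the openai Python SDK's beta vector store endpoints; the store name, file name, and model are placeholders):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Create a vector store and upload a file; chunking/embedding happens server-side.
vector_store = client.beta.vector_stores.create(name="my-docs")
with open("report.pdf", "rb") as f:
    client.beta.vector_stores.file_batches.upload_and_poll(
        vector_store_id=vector_store.id,
        files=[f],
    )

# Attach the store to an assistant so its file_search tool can retrieve from it.
assistant = client.beta.assistants.create(
    model="gpt-4o",
    tools=[{"type": "file_search"}],
    tool_resources={"file_search": {"vector_store_ids": [vector_store.id]}},
)
```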
Chat Completions is: send input (including past messages), get a response. The response can also invoke a tool that you have specified and coded.
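A minimal sketch of that, assuming the openai Python SDK and a placeholder model name. Note it is stateless: you resend the whole conversation on every call.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# You maintain the conversation yourself and send it all on each request.
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is the capital of France?"},
]
response = client.chat.completions.create(
    model="gpt-4o",
    messages=messages,
)
print(response.choices[0].message.content)

# To continue, append the reply and the next user turn before the next call.
messages.append({"role": "assistant", "content": response.choices[0].message.content})
messages.append({"role": "user", "content": "And its population?"})
```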
Assistants is an agent. Server-side, you give it permanent instructions. You upload files, from which text can be extracted across many documents. You create a thread: a place to store a conversation and its tool-call history. You put a user message in the thread, then invoke a run, specifying the assistant ID and the thread ID. The software then calls the model, loading what it wants into the input, and has several internal tools the AI can call on repeatedly without you ever seeing the internal AI responses. You then poll to check whether an answer is ready for you.
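A minimal sketch of that flow, again assuming the openai Python SDK's beta Assistants endpoints; the instructions, message text, and model are placeholders:

```python
from openai import OpenAI

client = OpenAI()

# 1. Create an assistant with permanent instructions (stored server-side).
assistant = client.beta.assistants.create(
    model="gpt-4o",
    instructions="Answer questions using the attached documents.",
    tools=[{"type": "file_search"}],
)

# 2. Create a thread: server-side storage for the conversation.
thread = client.beta.threads.create()

# 3. Put a user message in the thread.
client.beta.threads.messages.create(
    thread_id=thread.id,
    role="user",
    content="Summarize the uploaded report.",
)

# 4. Invoke a run; this helper polls until the server-side work finishes.
run = client.beta.threads.runs.create_and_poll(
    thread_id=thread.id,
    assistant_id=assistant.id,
)

# 5. Read the assistant's reply back out of the thread (newest message first).
if run.status == "completed":
    msgs = client.beta.threads.messages.list(thread_id=thread.id)
    print(msgs.data[0].content[0].text.value)
```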
What do you think about the drawbacks of the Assistants API compared to the Chat Completions API? One I can find is that fine-tuned models can't be used with the Assistants API. And I guess it runs more slowly.