Choosing between the Assistants API and the Chat Completions API: which one is better?

Hello everyone.
First of all, I would like to thank those who respond to developers’ questions in this Community.

We have a meditation app whose core feature is an AI-based chatbot.

We are trying to work out whether it is better to migrate to the Assistants API or to keep our current structure, which uses the Chat Completions API.

Our needs:
We have 5 prompts/instructions. A user starts their conversation with the AI based on one of these 5 prompts, and then has a conversation of about 100-200 messages.

1 - Are there any restrictions/limitations on creating “Threads” via the API/SDK for the Assistants API?
2 - Does attaching practical meditation-related files to an Assistant help improve its responses during a “Thread” with a user?
3 - If the number of Threads grows (but they are not active), will the cost escalate?
4 - Apart from the $0.03 we have to pay for creating each Thread, what other costs does the Assistants API have that the Chat Completions API does not?
5 - Can an Assistant’s Instructions be updated over time? Or do we need to create a new Assistant with updated Instructions?

And finally, for a chatbot-based meditation app, which structure do you recommend?
Chat Completions or the Assistants API?

Any suggestion, help, idea?

As with most things, it’s a tradeoff.

Assistants is a beta endpoint, and it is still buggy and changes frequently, so if you value your sanity I would suggest going with Completions for now.

The main draw of Assistants is that they bundle their own prompts and vector stores, and they are decoupled from threads. That means you can swap assistants in and out, or modify them between runs, which opens the door to multi-agent workflows.
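To make the decoupling concrete, here is a minimal sketch of running different assistants against one long-lived thread, assuming the official `openai` Python SDK (with a `client = OpenAI(...)` created elsewhere). The assistant IDs and helper names are placeholders, not anything from the original post:

```python
def start_thread(client):
    """Create a persistent thread; the conversation history lives server-side."""
    return client.beta.threads.create()

def run_with_assistant(client, thread_id, assistant_id, user_text):
    """Append the user's message, then run whichever assistant you pick.

    Because the thread is separate from the assistant, the same call works
    with a different assistant_id on the next turn (the multi-agent angle).
    """
    client.beta.threads.messages.create(
        thread_id=thread_id, role="user", content=user_text
    )
    # create_and_poll is an SDK helper that blocks until the run finishes
    return client.beta.threads.runs.create_and_poll(
        thread_id=thread_id, assistant_id=assistant_id
    )
```

In practice you would create one assistant per prompt style (your 5 prompts map naturally onto 5 assistants) and pick the `assistant_id` per run.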

An assistant can have one or more vector stores attached, and so can a thread, meaning file search can draw on the assistant's own vector store plus the thread's at the same time.
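A sketch of both attachment points, again assuming the `openai` Python SDK; the model choice, instructions text, and vector store IDs are illustrative placeholders:

```python
def make_assistant(client, shared_store_id):
    """Assistant-level store: shared meditation content every user can search."""
    return client.beta.assistants.create(
        model="gpt-4o",
        instructions="You are a calm, supportive meditation guide.",
        tools=[{"type": "file_search"}],
        tool_resources={"file_search": {"vector_store_ids": [shared_store_id]}},
    )

def make_thread(client, user_store_id):
    """Thread-level store: per-user files; file_search then sees both stores."""
    return client.beta.threads.create(
        tool_resources={"file_search": {"vector_store_ids": [user_store_id]}},
    )
```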

If you use Completions, you will have to come up with your own vector DB solution if you want RAG; otherwise you are doing your own context management on the chat endpoint.