Assistants API vs GPTs: what's the main difference?

Hi everyone,
as I understand it, the Assistants API retrieves information from the documents provided, as if doing cosine similarity against our context, while GPTs are models trained directly on our data. Is that right?


No, they both perform the same kind of vector retrieval on the provided data.
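For readers unfamiliar with what this kind of vector retrieval means: both products embed the document chunks and the query, then rank chunks by similarity. A minimal sketch of the cosine-similarity scoring step, in plain Python with no OpenAI calls (the function names here are mine, purely illustrative):

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def top_k(query_vec, doc_vecs, k=2):
    """Return indices of the k document chunks most similar to the query."""
    ranked = sorted(range(len(doc_vecs)),
                    key=lambda i: cosine_similarity(query_vec, doc_vecs[i]),
                    reverse=True)
    return ranked[:k]
```

The top-ranked chunks are then pasted into the model's context; neither product fine-tunes a model on your files.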


So, what’s the benefit of using GPTs instead of Assistants? With Assistants I’m already able to work with a pay-as-you-go account, while with GPTs I need a Plus subscription. There’s something I’m missing :roll_eyes:

Assistants are used through the API and are pay as you go; GPTs are limited to 50 messages per 3 hours but require the Plus subscription.

GPTs are just the easy-access version of Assistants, from what I understand. They’re the same thing, just for a different market (non-technical or entry-level users).


If it’s as you say, that’s very good news :wink:


But, as we’ve seen in other topics, there is a difference between the behavior of GPTs and Assistants. I don’t have a Plus subscription to check this myself, though.


One thing is clear: the OpenAI documentation is very, very cryptic :slight_smile:


I’m sure clever people who already know python can navigate it easily, unfortunately I’m not one of them lol

Besides, GPTs and the Assistants API are targeted at different people (non-technical vs. technical users).
The Assistants API does not have the browsing capability and does not integrate DALL·E.
However, you can integrate web browsing and DALL·E via function calling. Here are seven awesome Assistants API demos on GitHub: davideuler/awesome-assistant-api
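As a concrete illustration of the function-calling route: a tool is declared as a JSON schema when the assistant is created, and your own code supplies the actual behavior. A sketch, where the `web_search` name and its schema are my invention, not anything the API ships with:

```python
# Hypothetical browsing tool declared via function calling.
# The Assistants API only carries the schema; your code does the searching.
web_search_tool = {
    "type": "function",
    "function": {
        "name": "web_search",
        "description": "Search the web and return the top result snippets.",
        "parameters": {
            "type": "object",
            "properties": {
                "query": {"type": "string", "description": "Search terms"},
            },
            "required": ["query"],
        },
    },
}
```

You would pass `tools=[web_search_tool]` when creating the assistant; when a run enters the `requires_action` state, your code executes the search and returns the result via `submit_tool_outputs`. The same pattern works for wrapping an image-generation endpoint.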


I am working on integrating assistants into an application. I have not yet built a GPT, because, as I understand it, GPTs are not accessible through the API, only through the OpenAI UI. But given the similarities, I’ve been wondering this myself. Are GPTs just assistants on the backend with all the tedium of building the thread and the polling routine pre-built and exposed through a canned interface? It very much appears that way.


Are GPTs just assistants on the backend with all the tedium of building the thread and the polling routine pre-built and exposed through a canned interface? It very much appears that way.

Essentially, yes!
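For anyone curious what that "thread and polling routine" tedium looks like, here is a minimal sketch. The `poll_run` helper is my own, and the commented-out lines show typical wiring with the official `openai` v1 Python SDK (`ASSISTANT_ID` is a placeholder):

```python
import time

def poll_run(fetch_status, interval=1.0, timeout=60.0):
    """Poll a run until it leaves the in-progress states.

    `fetch_status` is any zero-argument callable returning the run's
    status string, e.g. a lambda wrapping client.beta.threads.runs.retrieve.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = fetch_status()
        if status not in ("queued", "in_progress"):
            return status  # e.g. "completed", "requires_action", "failed"
        time.sleep(interval)
    raise TimeoutError("run did not finish before the timeout")

# Typical wiring (network calls, not executed here):
# thread = client.beta.threads.create()
# client.beta.threads.messages.create(thread_id=thread.id,
#                                     role="user", content="Hello")
# run = client.beta.threads.runs.create(thread_id=thread.id,
#                                       assistant_id=ASSISTANT_ID)
# poll_run(lambda: client.beta.threads.runs.retrieve(
#     thread_id=thread.id, run_id=run.id).status)
```

ChatGPT's GPT interface appears to hide exactly this loop behind the chat window.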


I created a GPT via ChatGPT Plus and had it reference a JSON file and discuss the contents of it and the results were FANTASTIC.

I then pasted the same instructions and uploaded the same file to the playground while creating an assistant and its responses didn’t take the whole context of the JSON file into account and just in general weren’t as intelligent.

I hope they close any gaps there.


The GPTs seem to still use GPT-4 instead of GPT-4 Turbo; at least that’s what the token limits suggest.

If that is the case, maybe selecting gpt-4 instead of the preview model in the assistants page would help?


I think it depends on which model you use. Try selecting different GPT-4 models.

I have created a GPT and an Assistant (via the API Playground) with the same documents available for retrieval. In this single case, I have found that the Assistants API does a better job of searching the documents and finding specific references that I ask it to find. The Assistants API also provides specific citations that, in the Playground, I can hover over to see the exact source of the information. This citation behavior does not exist in the GPTs interface as far as I can tell.


Yes, I have noticed the same difference; the Assistants retrieval is much better. But I don’t know why. Does anyone know what is going on?

Interesting, and I was hoping that was the case.

Quick question: were you using GPT-4 or GPT-4 Turbo?

Asking because other people in the thread seem to think GPTs work better, and it might depend on the model chosen.


I use gpt-4-1106-preview :wink: