Custom GPT vs Assistants API

I created a custom GPT and an assistant using the Assistants API as a Q&A bot for a company. Both have the same instructions, access to the exact same set of documents, and the exact same model. However, the custom GPT consistently delivers more reliable and comprehensive results than the Assistants API. What could be the reason for this discrepancy?


Interesting - do you know if custom GPTs have unlimited thread context, like the Assistants API?

I don’t think it has to do with thread context. The Assistants API doesn’t seem to be thorough in searching the documents I’ve uploaded… at least that’s what I’ve noticed.


I’m guessing that you have web browsing turned on for the GPT, while the assistant can’t access real-time data from the web, which gives the GPT far more comprehensive responses.


OT, but is it expected that an assistant I create via the API does not show up under ‘My GPTs’?

I have some difficulty understanding the purpose and goal of the Assistants API. Am I correct in understanding that it is essentially a tool similar to GPTs, with the ability to ground a specific model in one’s own data and integrate it into one’s own web application outside the OpenAI platform?


As far as I understand, you’re right. GPTs are part of the ChatGPT ecosystem, while the Assistants API is a tool for using OpenAI’s models outside that ecosystem, for example as a chatbot on your own website.
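For concreteness, the “chatbot on your own website” path starts with creating an assistant over your uploaded files. This is a minimal sketch against the openai Python library’s beta surface as it existed when this thread was active (retrieval tool, `file_ids`); the name, instructions, and helper function here are placeholders, not anything from the thread:

```python
def build_support_assistant(client, file_ids):
    """Create a Q&A assistant over already-uploaded files.

    `client` is an openai.OpenAI() instance; `file_ids` are the IDs
    returned by client.files.create(...) for your documents.
    """
    return client.beta.assistants.create(
        name="Site Help Bot",  # placeholder name
        instructions="Answer only from the attached documents.",
        model="gpt-4-1106-preview",
        tools=[{"type": "retrieval"}],  # built-in document search
        file_ids=file_ids,
    )
```

The returned assistant’s `id` is what your backend stores and reuses for every visitor conversation; nothing about it appears in ChatGPT’s “My GPTs”.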


I encountered the same problem. It seems the implementation differs between GPTs and Assistants. Benzri, have you made any progress on this issue?

I have noticed the exact same thing; I guess there is a reason the Assistants API is in beta. Using the same set of docs and instructions, the “My GPTs” version comes up with much better results than the Assistants API. Even when tweaking the instructions to be more specific, the assistant seems to miss the mark most of the time.

Hopefully it will improve; otherwise, what could have been one of the best API features would turn out to be the least useful in real life :frowning:


Can you create a function that uses the API to call the GPT to run the query?


Hi and welcome to the Developer Forum!

You cannot call a GPT.
You can call an Assistant.
GPTs can call assistants if properly configured.
Assistants cannot call GPTs.
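The “you can call an Assistant” path looks roughly like this. A minimal sketch against the openai Python library’s beta Assistants surface (threads, messages, runs); the helper function name and the simple polling loop are my own illustration, not an official pattern:

```python
import time

def ask_assistant(client, assistant_id, question):
    """Send one question to an existing Assistant and return its reply text.

    `client` is an openai.OpenAI() instance; `assistant_id` comes from
    client.beta.assistants.create(...) or the dashboard.
    """
    # Each conversation lives in a thread.
    thread = client.beta.threads.create()
    client.beta.threads.messages.create(
        thread_id=thread.id, role="user", content=question
    )
    # A run executes the assistant (including retrieval) against the thread.
    run = client.beta.threads.runs.create(
        thread_id=thread.id, assistant_id=assistant_id
    )
    # Runs are asynchronous: poll until the run leaves an active state.
    while run.status in ("queued", "in_progress"):
        time.sleep(1)
        run = client.beta.threads.runs.retrieve(
            thread_id=thread.id, run_id=run.id
        )
    # Messages are listed newest first; the reply is the first entry.
    messages = client.beta.threads.messages.list(thread_id=thread.id)
    return messages.data[0].content[0].text.value
```

There is no equivalent endpoint for a GPT: nothing in the API takes a GPT’s identifier, which is why the comparison in this thread matters for anyone shipping outside ChatGPT.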


Exactly my thoughts as well. What’s the point when we already have custom GPTs?

Not being forced to build a product that exists solely to drive ChatGPT Plus subscriptions to OpenAI’s own site in order to use your custom AI function?


Same for me. The custom GPT I built as a help assistant vastly outperforms the Assistants API with the exact same settings and documents. Web browsing is not a variable, because the vast majority of our site is behind a login. The gap is significant: the Assistants API generates incorrect information about 20% of the time, whereas the GPT is consistently solid. Until OpenAI closes the gap, I don’t think the Assistants API is ready for prime time. (The API would let us integrate help functionality into our site, which is what most businesses need.)


I think the Assistants feature will disappear, or be merged somehow into the GPTs feature, allowing GPTs to be called from an API.

They are mostly the same, and that’s a little confusing. This is a whole new product; a lot of things will change, quickly.


Looking at it strictly from a retrieval perspective (web browsing is not the issue), my identical custom GPT vastly outperforms the Assistant. I’m not sure of the reason (perhaps to incentivize custom GPT use?), but it is a noticeable difference.

I wish I could call the custom GPT via the API instead; for now, it works so much better.


I completely agree. I originally built my GPT, and it works exactly how I want, giving exactly the right answers based on the supporting documents and data provided. I have since built an assistant using gpt-4-1106-preview, and I’m getting OK answers, but they’re nowhere near the level my GPT is giving me.

I copied the instructions from the GPT to the Assistant to make them as close as possible, but I’m still not getting the same level of response. It’s almost like GPTs are using GPT-5 and Assistants are using gpt-4-1106-preview, as mentioned.

If only OpenAI would let us port a GPT to an Assistant so we could make it available outside the OpenAI website, or enable API access to the GPT with metered pricing.