Custom GPT vs Assistant API

I created a custom GPT and an assistant using the Assistant API as a Q&A bot for a company. Both have the same instructions, access to the exact same set of documents, and use the exact same model. However, the custom GPT consistently delivers more reliable and comprehensive results than the Assistant API. What could be the reason for this discrepancy?


Interesting - do you know if custom GPTs have infinite thread context, like the Assistant API?

I don’t think it has to do with thread context. The Assistant API doesn’t seem to be thorough in searching the documents I’ve uploaded… at least, that’s what I’ve noticed.


I’m guessing you have the web browsing capability turned on for the GPT, whereas the assistant can’t access real-time data from the web, which gives the GPT far more comprehensive responses.


Off-topic, but is it possible that an assistant created via the API does not show up in ‘My GPTs’?

I have some difficulty understanding the purpose and goal of the Assistant API. Am I correct in understanding that it is essentially a tool similar to GPTs, with the ability to ground a model in one’s own data and integrate it into one’s web application outside the OpenAI platform?


As far as I understand, you’re right. GPTs are part of the ChatGPT ecosystem, while the Assistant API is a tool for using OpenAI’s models outside of that ecosystem, for example, as a chatbot on your own website.


I encountered the same problem. It seems the implementation differs between GPTs and Assistants. Benzri, have you made any progress on this issue?

I have noticed the exact same thing; I guess there is a reason the Assistant API is in beta. Using the same set of docs and instructions, the “My GPTs” version comes up with much better results than the Assistant API. Even when tweaking the instructions to be more specific, the assistant seems to miss the mark most of the time.

Hopefully it will make progress; otherwise, what could have been one of the best API features will turn out to be the least useful in real life.


Can you create a function that uses the API to call the GPT to run the query?


Hi and welcome to the Developer Forum!

You cannot call a GPT.
You can call an Assistant.
GPTs can call assistants if properly configured.
Assistants cannot call GPTs.


Exactly my thoughts as well. What’s the point when we already have Custom GPTs?

Not needing to produce a product solely for the benefit of driving ChatGPT Plus subscriptions to OpenAI’s own site in order to use your custom AI function?


Same for me. The custom GPT I built as a help assistant vastly outperforms the Assistants API with the exact same settings and documents. Web browsing is not a variable, because the vast majority of our site is behind a login. The gap is significant: the Assistants API generates incorrect information about 20% of the time, whereas the GPT is consistently solid. Until OAI closes the gap, I don’t think the Assistants API is ready for prime time. (The API would enable us to integrate help functionality into our site, which is what most businesses need.)


I think the Assistants feature will disappear, or be merged somehow into the GPTs feature, allowing GPTs to be called from an API.

They are mostly the same, and they are a little confusing. This is a whole new product; a lot of things will change, quickly.


Looking at it strictly from a retrieval perspective (web browsing not being the issue), my identical Custom GPT vastly outperforms the Assistant. I’m not sure what the reason is (perhaps to incentivize Custom GPT use?), but it is a noticeable difference.

I wish I could call the Custom GPT via the API instead. For now, it works so much better.


I completely agree. I originally built my GPT, and it is working exactly how I want, giving exactly the right answers based on the supporting documents and data provided. I then built an Assistant using gpt-4-1106-preview, and I’m getting OK answers, but they’re nowhere near the level my GPT is giving me.

I copied the instructions from the GPT to the Assistant to make them as close as possible, but I’m still not getting the same level of response. It’s almost like GPTs are using GPT-5 and Assistants are using gpt-4-1106-preview, as mentioned.

If only OpenAI would let us port a GPT to an Assistant so we could make it available outside of the OpenAI website, or enable API access to the GPT with a metered pricing model.


I’ve been deep-diving into programming with these systems, so forgive my ignorance of some of the terminology. I was hoping to create a way to access features of GPT-4, including the ability to create DALL·E 3 images within the chat, like ChatGPT currently allows. From my understanding, it’s all in a wrapper that executes these functions in context and then brings everything back to the user somewhat transparently.

I was hoping the Assistants API would allow this functionality to be embedded in our websites, but it doesn’t seem to be that simple. Am I misunderstanding the capabilities of assistants? They seem to work differently from custom GPTs, and now I’m back to square one. I would hate to program an entire custom system only to have them open up API access to GPTs in a month, making all that work go to waste. Maybe that’s just the name of the game right now; I’m playing catch-up trying to understand this whole thing.

Thanks for everyone here helping out, you guys rock.


I’m in the same boat, and this is really frustrating: why give people access to this and then completely limit the functionality at the API level? It’s really silly. I’m willing to pay more for the ability to “break the sandbox” with GPTs, but it seems to require doing everything from scratch with an inferior product.


I’m getting the same level of discrepancy. The GPT built using the same knowledge document and instructions is significantly more accurate than the assistant. The GPT is usable, but the Assistant is not. We would love to build this assistant into our app so clients can use it without heading to ChatGPT. Until that gap is closed, it’s not even usable by our support staff.