How to make an API call to a custom GPT model?

Big +1 for this. We want to integrate our custom GPT into our application and use it internally for data processing.

+++++1 for this one!
GPTs are useless for third-party apps if they're stuck in the OpenAI UI.
Don't you guys wanna make more $$$ from tokens and requests?

Hopefully in the future we can duplicate a GPT as an Assistant (assuming they use the same API; who knows, Assistants don't even have GPT-4V or DALL·E yet).

But if you are expecting GPTs to be freely available via the API, well, don't hold your breath. You should have been building an Assistant and paying for the service.

Not at all. You can attract and introduce customers who are already Plus subscribers to your services using GPTs and Actions. Users can switch between commonly used GPTs for their daily routines instead of juggling three different apps.

It's free for you to create and distribute a GPT on the platform (and you get revenue sharing) because it's expected to stay in the platform and attract Plus subscribers to the GPT ecosystem.

1 Like

I'm not sure you got my point.
If I'm building bots that can be beneficial for our company and employees, and I would like to white-label them, GPTs are exactly the personalized and polished models I'd love to use inside my company. But how can I do that if our employees have to use OpenAI's GUI? GPTs stuck inside the OpenAI platform are of no use for the case I'm trying to build. I believe I can fine-tune a model there and use it, but… do you understand my point? It would be much easier to just be able to share GPTs through the API, and of course we'd pay for it, as we already do for tokens.

4 Likes

Assistants are GPTs through the API. You have to pay for the service.
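To make that concrete, here is a minimal sketch of the Assistants API flow that plays the role a custom GPT plays in ChatGPT, using the `openai` Python SDK (v1.x). The assistant name, instructions, and model string below are placeholder assumptions, not anything from this thread:

```python
# Sketch (not a definitive recipe): building the payload for an Assistant
# that mirrors a custom GPT. Name, instructions, and model are placeholders.

def build_assistant_config(name: str, instructions: str,
                           model: str = "gpt-4-1106-preview") -> dict:
    """Payload you would pass to client.beta.assistants.create(**config)."""
    return {
        "name": name,
        "instructions": instructions,  # plays the role of a GPT's "Instructions" box
        "model": model,
        "tools": [{"type": "code_interpreter"}],  # optional, mirrors GPT capabilities
    }

config = build_assistant_config(
    "Data Helper",
    "You answer questions about our internal data-processing conventions.",
)

# With the openai Python SDK (v1.x) the full round trip is roughly:
#   client = OpenAI()                                   # reads OPENAI_API_KEY
#   assistant = client.beta.assistants.create(**config)
#   thread = client.beta.threads.create()
#   client.beta.threads.messages.create(thread.id, role="user", content="Hi")
#   run = client.beta.threads.runs.create(thread_id=thread.id,
#                                         assistant_id=assistant.id)
# then poll run.status until it is "completed" and read the thread's messages.
```

Unlike a GPT, every run here is billed per token against your API account, which is what "pay for the service" means in practice.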

  • For the same requirement: I have created a custom GPT using the interface, and it's public. Now I want to access it through the API, mostly from Android/iOS apps. Please keep mobile developers in mind when writing the API reference docs.
1 Like

What’s up with the implication that we aren’t “paying for the service” when building a custom GPT?

Custom GPTs and the GPT Store are not available to non-Plus users.

1 Like

It’s not an implication, you’re also missing the point.

If you want your GPT to be used on your website, you need to create an Assistant and pay for it (for other people to use it).

Of course you need to pay for a Plus membership to create and use GPTs, but that's not what I was talking about.

1 Like

Hello, guys! You’ve created a wonderful service, and I sincerely thank you for it. However, it doesn’t work on my website and displays the error ‘Sorry, something went wrong. Please try again later.’ Meanwhile, the ChatGPT connection through the API, which I set up myself, works fine. Your test code for your Assistant also works. What could be the problem? Care Chat

This seems to be weirdly difficult for some users to understand, I guess that’s to be expected when you release something like GPTs. It lowers the barrier to access drastically.

1 Like

+1. I think this is important because Assistants do not have web-browsing capabilities out of the box, so +1.

Assistants CAN browse the web. A GPT's web-browsing capability is just a function that calls external APIs. You can have your Assistant use more APIs than a base GPT does, but you need to code the function execution and result retrieval yourself.

I set up an Assistant and asked it to summarize the contents of a web page, given a link. It responded by stating that it cannot browse the web. Perhaps this can be accomplished by way of functions?

1 Like
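Yes, this is exactly the function-calling case: you declare a browsing function as a tool, the model asks for it, and your own code does the HTTP fetch. A hedged sketch follows; the function name `fetch_page` and its schema are my own invention, not an OpenAI built-in:

```python
# Sketch: giving an Assistant "web browsing" via a function tool. The model
# never fetches anything itself; it requests fetch_page and your code runs it.
import urllib.request

def fetch_page(url: str, max_chars: int = 4000) -> str:
    """Fetch a page and return its (truncated) text for the model to summarize."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        return resp.read().decode("utf-8", errors="replace")[:max_chars]

# Tool definition you would pass as tools=[...] when creating the assistant:
fetch_page_tool = {
    "type": "function",
    "function": {
        "name": "fetch_page",
        "description": "Download a web page so you can summarize it.",
        "parameters": {
            "type": "object",
            "properties": {"url": {"type": "string", "description": "Page URL"}},
            "required": ["url"],
        },
    },
}

# At run time, when a run enters status "requires_action", you read the
# requested tool call, execute fetch_page(url) yourself, and submit the
# result back via client.beta.threads.runs.submit_tool_outputs(...).
```

So an Assistant that claims it "cannot browse the web" is telling the truth about its defaults: browsing only works once you wire up a tool like this.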

+1 to being able to call your own GPT. My main use case is I want to be able to send API calls with context (i.e. knowledge in a custom GPT) but I don’t want to have to pass it in the call every time - mainly due to cost implications.

Or is any of that possible via Assistant API?

1 Like
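The "knowledge without resending it" part is possible with the Assistants retrieval tool: you upload files once and attach them to the assistant. A sketch under those assumptions (tool type `"retrieval"` as in the first Assistants API release; file names are placeholders):

```python
# Sketch: attaching knowledge files to an Assistant so the context does not
# have to be passed in every call. File ids and names are placeholders.

def build_knowledge_assistant_config(file_ids: list[str]) -> dict:
    return {
        "name": "Docs Assistant",
        "instructions": "Answer using the attached company documents.",
        "model": "gpt-4-1106-preview",
        "tools": [{"type": "retrieval"}],  # lets the model search the files
        "file_ids": file_ids,              # uploaded once, reused on every run
    }

# The upload happens once, not per request:
#   f = client.files.create(file=open("handbook.pdf", "rb"), purpose="assistants")
#   assistant = client.beta.assistants.create(
#       **build_knowledge_assistant_config([f.id]))

cfg = build_knowledge_assistant_config(["file-placeholder-id"])
```

One caveat on the cost motivation: you still pay for whatever tokens retrieval injects into each run, so this mainly saves request size and bookkeeping rather than guaranteeing lower token cost.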

The API isn't really for one-on-one interactions. There is an API method to disconnect a retrieval file from an assistant, but that should be considered more for building and revising assistants. You wouldn't know when a user is asking a question that is relevant to the retrieval documents.

If you really want control, you'd write an API chatbot, which gives you as much control as enabling or disabling past conversation turns as you see fit. Not an Assistant.

+1 for the CustomGPT via API

1 Like

Hi, thanks for sharing your expertise.
So it means a custom GPT can be created via the interface but cannot be accessed via the API, while an Assistant built with the Assistants API is not a custom GPT?

Am I right?
Thanks

You are right.

“GPTs” are only in ChatGPT Plus, the web chatbot with paid monthly subscription.

“Assistants” is an agent-like interface for API that obfuscates interactions you could program directly with AI models.

This is a massive need. It's a pity that OpenAI has not made GPTs accessible via the API. It defeats the entire purpose.

Has anyone discovered a workaround to browse the internet through the Assistants API?

Several posts in this thread state that Assistants are custom GPTs with an API. However, I haven't yet figured out how to enable an assistant to call my own API by providing the OpenAPI 3.0 file and some auth info.
It would be a killer feature to let users interact with your app/service just by chatting with your GPT assistant.
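As far as I can tell there is no "paste an OpenAPI file" field in the Assistants API the way GPT Actions have, but you can translate each OpenAPI operation into a function tool by hand. A sketch of that translation; the `getOrder` operation is an invented example, and the auth against your real endpoint stays entirely in your own code:

```python
# Sketch: converting a (simplified) OpenAPI 3.0 operation into an Assistants
# function tool. The getOrder operation below is a made-up example.

openapi_operation = {
    "operationId": "getOrder",
    "summary": "Fetch one order by id",
    "parameters": [
        {"name": "order_id", "in": "path", "required": True,
         "schema": {"type": "string"}},
    ],
}

def operation_to_tool(op: dict) -> dict:
    """Map an OpenAPI operation's parameters onto a function-tool JSON schema."""
    props, required = {}, []
    for p in op.get("parameters", []):
        props[p["name"]] = {"type": p["schema"]["type"],
                            "description": p.get("description", "")}
        if p.get("required"):
            required.append(p["name"])
    return {
        "type": "function",
        "function": {
            "name": op["operationId"],
            "description": op.get("summary", ""),
            "parameters": {"type": "object",
                           "properties": props,
                           "required": required},
        },
    }

tool = operation_to_tool(openapi_operation)
# Pass tools=[tool] when creating the assistant. When a run requires action,
# call your real endpoint yourself (adding your own auth headers) and submit
# the response via client.beta.threads.runs.submit_tool_outputs(...).
```

In other words, the OpenAPI file becomes your source of truth for generating tool schemas, while authentication and the actual HTTP calls remain your application's job.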