I asked gpt-4 to recreate the original post from just the summary in the previous reply, in the original frustrated tone. Here you go, to humorous effect:
Alright, listen up, because I’m only going to say this once. I’ve seen a lot of you out there with this misguided notion about GPT models. You’re all running around like headless chickens, thinking they’re APIs that you can just plug into your external APIs. Well, you’re wrong. Dead wrong.
These GPT models are consumer-focused tools, not APIs. They’re not some magical piece of code that you can just integrate into your system willy-nilly. If you’d actually take the time to explore OpenAI’s platform, you’d see the clear difference between creating an ‘Assistant’ and using a GPT.
And for the love of all things holy, stop trying to create a “Custom GPT” without adding specific actions. You’re just wasting your time. You can’t just slap some custom instructions onto standard GPT queries and call it a day. It’s like trying to turn a toaster into a microwave - it’s not going to happen.
Now, let’s talk about the Assistant API. This isn’t some simple, stateless ‘chat.Completions’ feature. No, this is a whole different beast. It offers a more complex interaction with GPT models. You’re going to need some serious coding skills to manage stateful interactions. It’s not for the faint of heart or the lazy.
And finally, I implore you, for your own sake, read the documentation. I mean, really read it. Don’t just skim through it while you’re eating your breakfast or waiting for your code to compile. Dive deep into it, understand it, live it. It’s going to save you a lot of headaches in the long run.
I hope this clears up the confusion. If not, well, I don’t know what to tell you. Maybe try reading this post again. Or better yet, go read the documentation.
Plus like… you’re writing the code, you can and always have been able to put together any data sources you need before passing them to the model. None of this is new; it’s just easier because OpenAI does the heavy lifting.
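For example, a minimal sketch with the Python SDK; the lookup and the data here are made-up stand-ins for whatever sources you already have:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Gather whatever data you need first (a DB row, a search hit, a file).
# This dict is a stand-in for your own lookup.
order = {"order_id": 1234, "status": "shipped", "eta": "Friday"}

# ...then hand it to the model as ordinary message content.
response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "Answer using only the data provided."},
        {"role": "user", "content": f"Data: {order}\n\nQuestion: Where is my order?"},
    ],
)
print(response.choices[0].message.content)
```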
Damn, friend, you need to chill. Remember, your use case is not everyone’s use case. Do you want me to build an entire assistant just for a joke app? It has a ‘Generate Joke’ button, and once you click the button, it calls a custom GPT that creates a joke (with a previously trained tone and context). Why would you force me to bolt a chat interface onto the app? (See the sketch at the end of this post.)
Why do you find it so confusing to have an API endpoint to work with those custom GPTs outside of the OpenAI GUI? As far as I know, a user can also upload files using the chat interface. Then, why do we have a files API endpoint?
We already understand that assistants are not the same as custom GPTs, but why are you so frustrated with a question nobody has asked?
I think the title is very clear: ‘How to make an API call to a custom GPT model?’. It’s clear that it’s simply not possible now; that’s the only valid answer. But why does it upset you that building a custom GPT is easier than building an assistant?
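To make the joke-app example concrete: a minimal sketch, assuming the custom GPT’s instructions are simply copied into a system prompt and one stateless chat-completions call is made per button press (the instructions text here is made up):

```python
from openai import OpenAI

client = OpenAI()

# The instructions you would otherwise type into the custom GPT builder,
# resent as a system message on every call (this costs tokens each time).
JOKE_INSTRUCTIONS = "You are a deadpan comedian. Reply with one short, dry joke."

def generate_joke() -> str:
    # One stateless call per 'Generate Joke' button press;
    # no thread, no assistant, no chat history needed.
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": JOKE_INSTRUCTIONS},
            {"role": "user", "content": "Tell me a joke."},
        ],
    )
    return response.choices[0].message.content

print(generate_joke())
```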
Yep. It’s also how people like the guy above continue to use the terms interchangeably… immediately after a post about how they aren’t the same thing.
Not sure how my post communicates that I’m upset that one thing is easier than another?
My frustration is that we’re on the Developer forum, specifically for API discussions, and people can’t be bothered to even click a link like the one Foxabilo gave right away.
Do I think OP should’ve read the docs a little first, or searched the forums? Sure, but I’m not going to fault anyone for asking a question… to a point.
It’s MOSTLY the concerning number of people confidently answering OP, who also haven’t read the docs, making this 100% more confusing for anyone trying to get help.
Totally. An assistant with DALL·E, Code Interpreter, and function/tool calls to GPT-4V, local web search tools, plus document summary, retrieval, and generation does some magical stuff.
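As a rough sketch of that kind of setup (tool list abridged; the function schema is hypothetical, and the built-in tool names match the Assistants API as of late 2023):

```python
from openai import OpenAI

client = OpenAI()

assistant = client.beta.assistants.create(
    name="Research helper",
    instructions="Use the tools available to answer with sources.",
    model="gpt-4-1106-preview",
    tools=[
        {"type": "code_interpreter"},  # built-in Python sandbox
        {"type": "retrieval"},         # built-in document retrieval
        {   # a custom function you implement yourself, e.g. local web search
            "type": "function",
            "function": {
                "name": "web_search",
                "description": "Search the local web index for a query.",
                "parameters": {
                    "type": "object",
                    "properties": {"query": {"type": "string"}},
                    "required": ["query"],
                },
            },
        },
    ],
)
print(assistant.id)
```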
For what it’s worth, I was able to hook the assistant up to my application using a combination of an assistant, functions, an OpenAPI specification, and a JWT token.
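In case it helps anyone doing the same: the glue code amounts to executing the function calls yourself and attaching the JWT as a Bearer token. The endpoint and payload below are made-up placeholders:

```python
import requests

def call_my_api(path: str, payload: dict, jwt_token: str) -> dict:
    # Hypothetical backend described by an OpenAPI spec; the assistant's
    # function call gets translated into this authenticated request.
    response = requests.post(
        f"https://api.example.com{path}",
        json=payload,
        headers={"Authorization": f"Bearer {jwt_token}"},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()
```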
@Foxalabs I see your comment and that is helpful, thanks! One thing that is confusing, though, is that assistant actions look to be much different than the custom GPT actions… In the GPT actions, the actions file seems much more intuitive, with the paths and explicit CRUD operations… the assistant actions have an entirely different config… Is the main difference that with an assistant you have to call those APIs for the chat yourself, while the GPT calls them on its own?
Not a bad way of looking at it. Essentially there are three levels of “complexity”, and with that, control: the first is GPTs, which require no coding skills at the basic level (although making use of APIs and tools can get interesting); then there are Assistants, where you have to create the functions yourself and handle the calling and returning of data; and then there is the root API, where you control everything manually.
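For the middle level, the part you have to write yourself looks roughly like this: poll the run, execute any requested function calls with your own code, and submit the outputs back. A sketch against the Assistants API as of late 2023; the tool dispatch is a stand-in for your own implementations:

```python
import json
import time
from openai import OpenAI

client = OpenAI()

def my_tool_dispatch(name: str, args: dict) -> dict:
    # Stand-in for your own tool implementations (e.g. the web_search above).
    if name == "web_search":
        return {"results": ["stub result for " + args.get("query", "")]}
    raise ValueError(f"unknown tool: {name}")

def run_assistant(assistant_id: str, user_message: str) -> str:
    thread = client.beta.threads.create()
    client.beta.threads.messages.create(
        thread_id=thread.id, role="user", content=user_message
    )
    run = client.beta.threads.runs.create(
        thread_id=thread.id, assistant_id=assistant_id
    )

    # The assistant never calls your API itself: when it wants a tool,
    # the run pauses in 'requires_action' and waits for you.
    while run.status in ("queued", "in_progress", "requires_action"):
        if run.status == "requires_action":
            outputs = []
            for call in run.required_action.submit_tool_outputs.tool_calls:
                args = json.loads(call.function.arguments)
                result = my_tool_dispatch(call.function.name, args)
                outputs.append(
                    {"tool_call_id": call.id, "output": json.dumps(result)}
                )
            run = client.beta.threads.runs.submit_tool_outputs(
                thread_id=thread.id, run_id=run.id, tool_outputs=outputs
            )
        else:
            time.sleep(1)
            run = client.beta.threads.runs.retrieve(
                thread_id=thread.id, run_id=run.id
            )

    # Messages are returned newest-first; grab the latest reply.
    messages = client.beta.threads.messages.list(thread_id=thread.id)
    return messages.data[0].content[0].text.value
```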
The topic of this discussion is “How to make an API call to a custom GPT model?”.
user1 expressed the need to communicate with their self-made GPT model via API and noticed this model doesn’t appear in the catalog of available models at the https://api.openai.com/v1/models endpoint. SigmoidFreud suggested looking into the Assistants API, where creating an assistant provides a stable interface similar to a custom GPT model.
jajosheni wondered if there is a simpler way to use the model they had previously configured, in the same way one specifies a different model. TeesValleyAI expressed the hope that future updates would allow users to deploy their GPT models into various platforms like websites and messaging apps. Several others, including gaurab, joseblogelectronica, and CHEFDR, voiced support for the ability to access their custom GPT models via API from their applications.
Foxabilo redirected the participants to the assistants overview, pointing out that their function is similar to the requested feature. Sl4ck3r and trenton.dambrowitz also chimed in supporting this direction, the latter adding that making the GPT models themselves accessible via API would essentially nullify the objective of the GPT store and the drive for more Plus Subscribers.
Marc-DitchCarbon concluded with a strong endorsement for the ability to integrate custom GPT models for internal data processing in applications.
Summarized with AI on Dec 22 2023
AI used: gpt-4-32k
+1 for the CustomGPT via API.
We need to enter a page URL, extract some keywords from the web page, and match them with a list of keywords dynamically retrieved from a custom API. Not sure how this can be done using an assistant (no browsing and no API-calling feature).
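Until something native exists, one way to do it with code you already control is to fetch the page and the keyword list yourself and let the model do only the matching. Both URLs below are placeholders:

```python
import requests
from openai import OpenAI

client = OpenAI()

def match_page_keywords(page_url: str) -> str:
    # Do the "browsing" and the custom API call ourselves...
    page_html = requests.get(page_url, timeout=30).text
    keywords = requests.get("https://example.com/api/keywords", timeout=30).json()

    # ...and let the model do only the extraction and matching.
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {
                "role": "system",
                "content": "Extract the page's main keywords and return only "
                           "those that appear in the allowed list.",
            },
            {
                "role": "user",
                "content": f"Allowed keywords: {keywords}\n\nPage HTML:\n{page_html[:8000]}",
            },
        ],
    )
    return response.choices[0].message.content
```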
The Assistants API may provide this today, but currently it is synchronous and therefore slow. The web page for Assistants mentions they expect to add a streaming API soon, which should give us all what we want.
For chat completions, fine-tuning models is probably the equivalent. I have not done that yet, but it appears to provide the same functionality in a slightly different way: https://platform.openai.com/docs/guides/fine-tuning/fine-tuning-examples. The desirable thing about this is that we can do the fine-tuning once, rather than use up tokens in every prompt for instructions. The page implies this may be limited to the GPT-3.5 line, with 4.0 in limited beta, but the page may also be out of date. Again, I have not used this feature yet, but it appears to be what everyone is asking for in terms of providing a completions-accessible model that is tuned to our preferences.
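For reference, the fine-tuning flow in the Python SDK looks roughly like this (untested by me, as said; the file name is a placeholder, and the JSONL must follow the chat format from the guide):

```python
from openai import OpenAI

client = OpenAI()

# Upload a JSONL file of example conversations in the chat format.
training_file = client.files.create(
    file=open("training_examples.jsonl", "rb"),
    purpose="fine-tune",
)

# Start the fine-tuning job; the docs list gpt-3.5-turbo as supported.
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-3.5-turbo",
)

# job.fine_tuned_model is None until the job finishes; poll or check later.
job = client.fine_tuning.jobs.retrieve(job.id)
if job.fine_tuned_model:
    response = client.chat.completions.create(
        model=job.fine_tuned_model,  # e.g. "ft:gpt-3.5-turbo:..."
        messages=[{"role": "user", "content": "Hello!"}],
    )
    print(response.choices[0].message.content)
```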
So I would say +1 for completions access to the custom GPTs. And yes, assistants can currently cover what custom GPTs do. But I will not find them usable until there is a streaming API.
+1 from me too.
There are so many chatbots on web pages, and I would like to use a custom GPT as the chatbot on a company page with product info, etc.
There are lots of tools for building a customized GPT, for example Azure AI Search, etc.
But the interface here is way simpler. It’s just a web page where you configure something, not a set of indexing, vector, semantic, etc. settings and a lot of Azure resources to manage.
Sorry for reviving a dead thread, but I digress. If they allowed API access to custom GPTs, then since the API bills specifically per query, they could much more easily revenue-share and push the GPT Store. You could potentially even allow developers to set their own price on top of OpenAI’s API pricing. While I do agree that the intent with Assistants was to allow this, I believe that for some reason custom GPTs are often higher quality than Assistants. This may be due in part to the programming done by the creator and the back end, something that is hard to match when creating an Assistant.
+1 for the capability to call CustomGPT via API integration from a custom app.
If they want us to use Assistants for this purpose, they should give them the same features! They are not the same. What’s the point of a custom GPT if we cannot use it outside the playground?