I need to call my custom GPT model directly via the API for a single function, rather than continuously adjusting the official model.
Hello there and welcome to the community!
So, custom GPTs aren’t directly accessible via the OpenAI API. What you would do instead is create an Assistant. Custom GPTs are meant more for little-to-no-code use cases, unless you’re building your own custom API endpoint.
Hi,
There are valuable custom GPTs, such as the “Data Analyst GPT” created by OpenAI. Are you planning to allow an API Assistant to access a custom GPT model like that one?
How do I make an endpoint that uses my custom GPT? Without it, it would make no sense for me… Is there a way I can download the model and host it elsewhere?
My model is such that the output will be interpreted by an external program to do stuff for a website. How do I get there?
Hey, quick question: so the workaround would be to recreate a custom GPT’s behavior, with its custom data, as an Assistant and then use that Assistant for the API call, right?
Hi and welcome to the community!
It’s important to clarify the terminology:
We don’t train the new OpenAI models. Instead, we provide context for the model to process in order to generate the desired outputs.
A custom GPT is a feature available in the ChatGPT interface. On the other hand, the Assistants API allows you to create AI assistants within your own applications. An Assistant can follow specific instructions and leverage models, tools, and files to respond to user queries.
As a result, the Assistants API offers significantly more flexibility.
If you’re interested in building your own assistant, you can check out the documentation. Additionally, when using the platform, you also have access to a no-code interface for building your solution, though you’ll need to purchase credits separately.
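As a rough sketch of what that looks like in code (assuming the official `openai` Python package; the assistant name, instructions, and model choice here are just illustrative placeholders), creating an Assistant looks something like this:

```python
# Sketch of the Assistants API call shape (assumes the `openai` Python
# package v1.x; the network call is only made when an API key is set).
import os

# Parameters mirroring what you would configure for a custom GPT:
# instructions play the role of the GPT's system prompt, and tools/files
# replace its built-in capabilities and knowledge.
assistant_params = {
    "name": "Site Action Helper",          # illustrative name
    "instructions": (
        "Return JSON commands that an external program can interpret "
        "to perform actions on a website."
    ),
    "model": "gpt-4o",                     # pick whichever model you need
    "tools": [{"type": "code_interpreter"}],
}

if os.environ.get("OPENAI_API_KEY"):
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    assistant = client.beta.assistants.create(**assistant_params)
    print(assistant.id)  # save this ID to reuse the Assistant later
```

You would then create a thread, add the user’s message to it, and run the Assistant against that thread to get a response your external program can parse.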
Hope this helps!
Thanks sir, that helped a lot!