Subject: Inquiry About Accessing Custom GPT Models via API

Hello OpenAI Community!

I hope everyone’s doing well. I’m reaching out to gather some insights about the use of OpenAI’s API for custom GPT models.

I’m particularly interested in understanding whether it’s possible to access and integrate custom GPT models, such as specialized versions (for example, one tailored for laundry care advice), through the OpenAI API. My goal is to incorporate such a model into my application to enhance its functionality.

Is there a straightforward way to integrate custom GPT models using the current API framework? If not, is there a plan to enable this?

Any insights, documentation links, or pointers towards relevant resources would be immensely helpful.


I’ve been wondering the exact same thing! I’ve tried a few things without success. Hoping for an answer here!




That’s honestly all the information, unfortunately.

I also hope we can do this. For now it seems best to just copy/paste the settings over.

@RonaldGRuckus is there a way to reliably “copy/paste the settings over”? I’m not aware of one, because to get my GPT behaving the way I wanted, we went through a long back-and-forth dialogue which I don’t think I can replicate over the API.

You can go to the Configure tab and see most of the settings there.
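For anyone trying the copy/paste route: here is a minimal sketch, in Python, of folding the Configure-tab fields of a custom GPT into a single instructions string that you could then pass to an assistant or a chat-completions system message. The field names and example values are placeholders, not an official mapping; the commented-out `assistants.create` call at the end assumes the official `openai` SDK and a configured API key.

```python
# Sketch: combine the Configure-tab fields of a custom GPT into one
# instructions string for the API. Field names/values are illustrative.

def build_instructions(name: str, description: str, instructions: str) -> str:
    """Fold Configure-tab fields into a single system/instructions string."""
    return (
        f"You are {name}. {description}\n\n"
        f"{instructions}"
    )

# Example with made-up values (the laundry-care GPT mentioned above):
prompt = build_instructions(
    name="Laundry Care Advisor",
    description="A helpful assistant for fabric and laundry care questions.",
    instructions="Always ask about the fabric type before giving advice.",
)

# The resulting string could then be used something like this
# (untested here; requires the openai package and an API key):
# client = openai.OpenAI()
# assistant = client.beta.assistants.create(
#     model="gpt-4-turbo",
#     name="Laundry Care Advisor",
#     instructions=prompt,
# )
```

This only captures the text fields, of course; as noted above, behavior shaped by a long configuration dialogue won’t transfer this way.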

I’m definitely having issues getting the same performance out of an assistant compared to what I can do with a custom GPT. (For reference, my application interfaces with an SQL database.)

The assistant is verbose and provides explanations no matter how many times I tell it not to. It also doesn’t seem to understand the documents I provide on the database schema. For whatever reason, the custom GPT is much better at these tasks, which is frustrating because I’d like to just use the custom GPT via the API.

If anyone has tips on how to get the assistant to perform better in this scenario, I’m all ears. Or I’d love it if someone from OpenAI came and told me that custom GPTs will be accessible through the API soon.
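Not an official fix, but one thing that has helped in similar setups: inline the schema directly into the instructions instead of relying on file retrieval, and put the “SQL only, no prose” rule last in the prompt. A sketch with a made-up two-table schema (everything here is an assumption, not your actual setup):

```python
# Sketch: inline the schema into the system instructions rather than
# attaching it as a document. Schema and rules are illustrative only.

SCHEMA_DOC = """\
TABLE orders(id INTEGER, customer_id INTEGER, total NUMERIC, created_at DATE)
TABLE customers(id INTEGER, name TEXT)"""

def sql_instructions(schema: str) -> str:
    """Build strict text-to-SQL instructions with the schema embedded."""
    return (
        "You translate user questions into SQL for the database below.\n\n"
        f"Schema:\n{schema}\n\n"
        "Rules:\n"
        "- Respond with a single SQL statement only.\n"
        "- No explanations, no markdown fences, no commentary."
    )

system_prompt = sql_instructions(SCHEMA_DOC)
```

The idea is that the model can’t miss a schema that is part of the system prompt itself, and instructions placed at the very end of the prompt tend to be followed more reliably than ones buried in the middle.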


Were you able to find a solution here for improving the assistant’s performance to the point that it is comparable to the custom GPT? I am trying to do something similar, interfacing with a database. The custom GPT works well, but I have not yet tried porting it over to an assistant.

I’ve noticed they aren’t comparable either. I think the Assistants model is fine-tuned in ways that make it a lot less flexible.