How to make an API call to a custom GPT model?

I created a custom GPT model and would like to interact with it via the API.

When I make a request to the https://api.openai.com/v1/models endpoint, I do not see the custom GPT in the list of available models.

Is it possible to make requests to a custom GPT using the API?

37 Likes

I would recommend having a look at the Assistants API. When you create an assistant through the API, its instructions and tools are stable across conversations (much like in the custom GPT you’ve created).
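
A minimal sketch of what that looks like with the v4 Node SDK (the name, instructions, and model below are placeholders, not anything from your GPT):

import OpenAI from 'openai';

const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment

// Assistants live under the beta namespace in the v4 Node SDK
openai.beta.assistants
	.create({
		name: 'My Helper', // placeholder
		instructions: 'You are a helpful assistant.', // placeholder, mirrors your GPT's instructions
		tools: [{ type: 'code_interpreter' }],
		model: 'gpt-4-1106-preview'
	})
	.then((assistant) => {
		console.log(assistant.id); // keep this ID and reuse it for future runs
	});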

2 Likes

I have already created a useful custom GPT through the interface. Is there not a way to just specify it as the model, as we did before, e.g.:

import OpenAI from 'openai';

const openai = new OpenAI(); // uses OPENAI_API_KEY from the environment

openai.chat.completions
	.create({
		messages: [{ role: 'system', content: 'You are a helpful assistant.' }],
		model: 'gpt-4-1106-preview'
	})
	.then((completion) => {
		console.log(completion.choices[0]);
	});

So, instead of the model, would we specify the custom GPT, or is there another field that points to what we’ve created?

5 Likes

Thank you for this suggestion; I had the same question, and I am going to try this. It looks like, to do what I wanted, I will have to redo the work I did with the GPT… but whatever :slight_smile:

Assistants are more limited in comparison to custom GPTs; they lack online search, for example. I tried creating the same kind of assistant as my custom GPT, and it’s not the same. So there’s no way to use the API to interact with a custom GPT?

2 Likes

+1 for this. Would be good to be able to call a CustomGPT through the API

20 Likes

I’d have thought it’s inevitable that at some point this will be implemented, and that you’ll be able to deploy a GPT to a website, a Slack channel, a WhatsApp number, etc., through your OpenAI account in a couple of clicks.

8 Likes

+1 for the capability to call CustomGPT via API integration from a custom app.

12 Likes

Another one with the same requirement :slight_smile:
+1 for the capability to call CustomGPT via API integration from a custom app.

5 Likes

Hi and welcome to the Developer Forum!

This is exactly what Assistants are for; please see:

4 Likes

+1 for me also! That would be a wonderful feature!

3 Likes

One more:
+1 for the capability to call CustomGPT via API integration from a custom app.

5 Likes

+1 from me as well. I was all excited to just hit my custom GPT with curl calls from a PHP widget; sigh, very disappointed. I’m assuming this will be implemented soon?

5 Likes
+1 on this, it would be great to be able to call my custom GPT via the API from my web app.

7 Likes

+1 for the capability to call CustomGPT via API integration from a custom app.

3 Likes

As @Foxalabs said, this is specifically what Assistants are for. OpenAI likely won’t ever make the GPTs themselves interactable through the API; that would defeat the whole purpose of the GPT Store and the push for more Plus subscribers.

Assistants are nearly identical, especially if you add in DALL·E 3 and Code Interpreter. Vision will likely come eventually, but at the moment it would be too compute-intensive to run at the scale of GPT-4 Turbo.
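
As a rough sketch of the run loop with the Node SDK (the assistant ID and question are placeholders; error handling and tool-call handling omitted):

import OpenAI from 'openai';

const openai = new OpenAI();

async function askAssistant(assistantId: string, question: string) {
	// Each conversation lives in a thread; messages are appended to it
	const thread = await openai.beta.threads.create();
	await openai.beta.threads.messages.create(thread.id, { role: 'user', content: question });

	// Start a run and poll until it finishes
	let run = await openai.beta.threads.runs.create(thread.id, { assistant_id: assistantId });
	while (run.status === 'queued' || run.status === 'in_progress') {
		await new Promise((resolve) => setTimeout(resolve, 1000));
		run = await openai.beta.threads.runs.retrieve(thread.id, run.id);
	}

	// The assistant's reply is the newest message on the thread
	const messages = await openai.beta.threads.messages.list(thread.id);
	return messages.data[0];
}

askAssistant('asst_placeholder', 'Summarize this dataset for me').then(console.log);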

3 Likes

Another way to ask this question: is there going to be feature parity, so that models can be fine-tuned via the API the same way custom GPTs are trained?
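
For context, this is roughly what fine-tuning via the API looks like today (a sketch with the Node SDK; the training file ID is a placeholder), which is a different mechanism from the instructions-and-files setup used in the GPT builder:

import OpenAI from 'openai';

const openai = new OpenAI();

// Kick off a fine-tuning job on a previously uploaded JSONL file
// ('file-abc123' is a placeholder ID)
openai.fineTuning.jobs
	.create({ training_file: 'file-abc123', model: 'gpt-3.5-turbo' })
	.then((job) => {
		console.log(job.id, job.status);
	});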

2 Likes

This is needed. I spent hours and hours yesterday creating a custom GPT through the interface, and I was so happy with my results, only to find out that I have no way of connecting my FastAPI script to it. I can only share the web link… What is the point? Hours and hours wasted.

9 Likes

Same here bro, it’s frustrating to have to backtrack!

1 Like

Big +1 for this, we want to integrate our custom GPT into our application and use it internally for data processing.

3 Likes