Access to new custom "My GPTs" through API?

Can I use my API key to access a custom GPT created through the My GPTs feature that was rolled out earlier this week?

I see mention of using Assistants, but there are some key differences. First, the user-friendliness of the My GPTs is fantastic. Just chat, drag, and drop. But more importantly, there is the issue of internet access. There does not appear to be any ready-made internet access feature for assistants. Maybe something can be coded, but nonetheless, the My GPTs seem like the better product.

43 Likes

This would be an AMAZING feature. Strongly support this update

20 Likes

If, as you state, only Assistants can be called and configured from the API, how can we cover the Actions and the knowledge base (uploaded files) of the agent in a program?

A hybrid agent would be amazing: one that can be created and manipulated from a program, but with all the specialisation capabilities of the custom My GPTs.

Anyway, great work on your side, OpenAI.

3 Likes

I really hope this is at the top of list of future updates from OpenAI

Endless possibilities will be unlocked with this…

1 Like

You are free to build a plugin API that a ChatGPT Plus GPT action can interact with — in fact, that’s the only thing that makes them valuable (your prompt engineering is uninteresting).

However, if you don’t merely make a database API or a calculation API, but instead make an API powered by a costly AI, where your API essentially lets the GPT user use your OpenAI API account, then you are the one paying the bill for those enhanced services you offer.

The API’s Assistants are also not really designed to serve an application’s functions, such as a chatbot backend. They are themselves chatbot agents that may take a long time (or never finish) producing an expensive answer - one that you get to pay for if you use AI services to power a GPT plugin API that others use for free.

1 Like

I agree that this would dramatically improve usability - including by keeping a single version of truth for one’s agents.
An easy enough workaround is just to copy the instructions and recreate them,
although any changes would then have to be made in both places.

1 Like

You can just put the GPT/Assistant behind a pay wall.

2 Likes

@Sim2K
Can you tell me more, please? How do you do it?

I can share some of my learnings.

On my site (link in my bio) I have over 250 GPTs, and I wanted to give users who don’t have ChatGPT Plus the possibility to use them too. (Btw, as of today ChatGPT Plus seems to be available to new subscribers again.)

On my site I have now created an experimental option for (registered) users to enter their own OpenAI API key and then chat with my GPTs.

I originally wanted to develop Assistants (with instructions similar to the GPTs’), but after experimenting I went back to the ‘usual’ chat completion approach and put my custom GPT instructions into the system prompt. That seems to work, although not as well as the GPTs (yet).
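The approach described above can be sketched roughly as follows. This is a minimal illustration, not the poster’s actual code: the model name and instruction text are placeholders, and the request is made directly against the documented Chat Completions endpoint with the standard library so nothing beyond Python itself is needed.

```python
import json
import os
import urllib.request

# Placeholder: in practice this would be the full instructions copied
# from the custom GPT's configuration.
GPT_INSTRUCTIONS = "You are a helpful travel-planning assistant."

def build_messages(instructions: str, user_message: str) -> list[dict]:
    """Put the custom GPT's instructions into the system prompt."""
    return [
        {"role": "system", "content": instructions},
        {"role": "user", "content": user_message},
    ]

def ask(user_message: str) -> str:
    """Send one chat completion request (requires OPENAI_API_KEY)."""
    payload = {
        "model": "gpt-4-turbo",  # placeholder model name
        "messages": build_messages(GPT_INSTRUCTIONS, user_message),
    }
    req = urllib.request.Request(
        "https://api.openai.com/v1/chat/completions",
        data=json.dumps(payload).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

A call like `ask("Plan a weekend in Lisbon.")` would then behave like a (simplified) version of the custom GPT.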

Two reasons why I did not use the API Assistants (I may review later again):

a. The Assistants only work with the same API key they were created under, so I would either need to (re)create the Assistants in each user’s account OR give users access to my API key. I don’t like either approach.

b. I find the costs of using Assistants too high, to be honest. You pay not only for the sessions (threads), but also for data retrieval AND for tokens used.

Hope this helps…

4 Likes

How are you managing the conversation history? Are you storing it locally and appending everything to the prompt each time?

Correct, the history gets fed into the user prompt as an addition, with some tweaks to ensure it stays within token limits. I also allow the user to reset the history.
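One way to do the trimming mentioned above is to keep only the most recent turns that fit a token budget. A sketch (the 4-characters-per-token estimate is a rough assumption; a real tokenizer would be more accurate):

```python
def estimate_tokens(text: str) -> int:
    """Very rough heuristic: ~4 characters per token."""
    return len(text) // 4 + 1

def trim_history(history: list[tuple[str, str]], budget: int) -> list[tuple[str, str]]:
    """Keep the most recent (user, assistant) turns that fit the budget."""
    kept, used = [], 0
    for user_msg, assistant_msg in reversed(history):
        cost = estimate_tokens(user_msg) + estimate_tokens(assistant_msg)
        if used + cost > budget:
            break
        kept.append((user_msg, assistant_msg))
        used += cost
    return list(reversed(kept))  # restore chronological order

def build_prompt(history: list[tuple[str, str]], new_question: str) -> str:
    """Append the trimmed history to the user prompt, oldest turns first."""
    lines = [f"User: {u}\nAssistant: {a}" for u, a in history]
    lines.append(f"User: {new_question}")
    return "\n".join(lines)
```

Resetting the history is then just clearing the stored list for that user.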

2 Likes

Hi! How do you save the information (docs) on which the assistant should be based using chat completion?

Good question. At the moment I don’t use any docs, even in my public GPTs. However, in another system I built, I had the documents (hundreds of docs, including very large ones) embedded into a vector database, then used embeddings to find and extract the relevant text and provided it back to ChatGPT to review and complete. That works quite well in terms of responses, and is probably cheaper than Assistants as well. But it requires another vector database and a bit more coding. I’ll probably (re)add that functionality when I have more time.
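The retrieval step described above can be sketched like this. It is a toy illustration only: a real setup would call an embeddings API and a vector database (such as Pinecone) rather than ranking an in-memory list with cosine similarity, but the shape of the logic is the same.

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def top_k(query_vec: list[float],
          indexed: list[tuple[str, list[float]]],
          k: int = 2) -> list[str]:
    """Return the k document chunks most similar to the query."""
    ranked = sorted(indexed, key=lambda item: cosine(query_vec, item[1]),
                    reverse=True)
    return [text for text, _ in ranked[:k]]

def build_rag_prompt(chunks: list[str], question: str) -> str:
    """Prepend the retrieved chunks to the question for the model."""
    context = "\n---\n".join(chunks)
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"
```

The resulting prompt is then sent through the same chat completion call as before, so the model answers grounded in the retrieved document text.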

1 Like

Thanks for your answer! What vector database were you using in those projects?

You’re welcome.

I use pinecone.io as a vector database.

1 Like

Assistants also support adding files to them, so that would be another option?

It’s too bad that custom GPTs can’t be accessed via the API. I don’t find the Assistants API comparable, because it appears to be fine-tuned to impersonate a customer-support representative, and this behavior often overrides custom prompting.

1 Like

There is no direct way to convert your custom GPT into an API. You would need to use the Assistants API to achieve this. There are a few open-source projects on GitHub that you can search for and use to rebuild your custom GPT with the Assistants API.

1 Like

Hello fam! I arrived here with the same intention. As long as someone keeps arriving every few days looking for this API, I’m hopeful they will soon make it available to us, lol. I’m starting a project now, and since we still don’t have a “GPTs API”, I’m going to start with the Assistants API. Has anyone here managed to configure a multimodal Assistant that writes text and generates images in the same response?

1 Like

Yes, if you enable DALL·E as one of the options in the API, your assistant can generate images in addition to text.