Any news about connecting a custom GPT to the API?

Hello guys,
I’ll put it in simple words, hoping for a simple solution:
I have trained a custom GPT for many months. Fed it with files filled with knowledge, and gave it hundreds of instructions as well. It works great.
But when I recently wanted to connect it to a designated App that I asked a developer to build, I was told that the GPT that I built cannot be connected to the API.
For that, my dear developer said, there’s something called an Assistant.

My questions to you, smart community-

  1. Is that really so? Is there no way to connect the custom GPT to the API?
  2. If indeed the answer is “no way to connect”, then what does it mean to use an Assistant? How can I make this Assistant act like the well-trained GPT? Uploading the database files to the Assistant won’t suffice, because there were hundreds of guidelines… See my point?

Sorry it’s long but I guess others will benefit from this thread.

PS:
I did read the full thread from November 2023 that discussed a similar question. It was closed in March with no conclusive advice.
:pray:

3 Likes

Yeah, I don’t think it’s changed since then… basically, the ChatGPT ecosystem will likely always be slightly different from the Assistants API.

Have you tried replicating with Assistants API, or you just think it will be too difficult?

GPTs are the appeal to get users to upgrade to ChatGPT Plus.

They are also products of independent developers, and can use a developer’s external resources.

So what is the chance you can put another developer’s product, extracted from a paid platform’s website, into your own product? I don’t even think I need to include “approaching” when I say “approaching zero”.

There’s also no “training by hundreds of instructions”. Go to the manual configure tab, and you’ll see the one instruction. Place that as the system message in an assistant on the API, and you’ll get better instruction-following than an instruction framed by “you are a user’s GPT”.
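As a minimal sketch of that step (assuming the openai Python SDK v1.x with an API key in the environment; the file name and assistant name are hypothetical placeholders):

```python
# Recreate a GPT's single "Instructions" block as an Assistant.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical file holding the text copied from the GPT's "Configure" tab.
with open("my_gpt_instructions.txt", encoding="utf-8") as f:
    instructions = f.read()

assistant = client.beta.assistants.create(
    name="My Migrated GPT",           # hypothetical name
    instructions=instructions,        # this becomes the system-level instruction
    model="gpt-4-turbo",
    tools=[{"type": "file_search"}],  # optional: enable knowledge retrieval
)
print(assistant.id)
```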

That’s helpful, thanks mate.
Do you know if the assistant can use regular files as a knowledge base, like a GPT can?
By regular I mean simple text, like articles.
Because from the documentation it seems to me that the files should contain text in the form of prompts only (if you receive X, reply with Y).

You might have stumbled across fine-tuning instructions.

Uploaded files for assistants instead are used for semantic similarity search - and are a very poor place to try to instruct the AI.

Documents are instead just that - text that the AI can perform searches on and would be provided on demand or when useful.

Similar to creating a GPT and providing files with knowledge there.

https://platform.openai.com/docs/assistants/tools/file-search
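For example, a minimal sketch of putting ordinary article files into a vector store for file search might look like this (assuming the openai Python SDK v1.x, where the namespace may be client.vector_stores rather than client.beta.vector_stores in newer releases; the store name and file paths are hypothetical):

```python
# Upload ordinary text/article files and index them for file search.
from openai import OpenAI

client = OpenAI()

vector_store = client.beta.vector_stores.create(name="gpt-knowledge")  # hypothetical name

# Hypothetical paths; plain text, markdown, PDF, docx, etc. are accepted.
paths = ["articles/pricing_guide.txt", "articles/faq.md"]
batch = client.beta.vector_stores.file_batches.upload_and_poll(
    vector_store_id=vector_store.id,
    files=[open(p, "rb") for p in paths],
)
print(batch.status, batch.file_counts)
```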

1 Like

Also, I do think OpenAI would do a good thing to allow developers of GPTs to migrate their own created and trained GPTs to the API.
After all, what I’m forced to do now is train an Assistant from scratch. So what’s the point? I will only waste time…

Don’t you agree?

1 Like

The things that could be copied are few, and child’s play to copy with the tools available now.

  • DALLE GPT? Vision GPT? Nope, no image creation or vision supported in Assistants (edit: computer vision now possible);
  • Browser for the internet? Nope, no internet access without writing your own tool;
  • Actions? Nope, again the API doesn’t access the internet, and you’d reimplement them in code.
  • Icon? Uh, no.
  • Training? No, there is no training, there is just a single context instruction.
  • Conversation starters? You’d have to write your own UI code to do that.

So what do you have to “migrate”?

  • Files? A maximum of 20. ChatGPT keeps them in one place, but they can’t be migrated automatically because Assistants gives you two destinations (code interpreter storage or a vector store).
  • Instructions? A block of text.

Let’s start with deleting everything out of Assistants, because this endpoint has zero value to me except for making demonstrations or diagnosing others’ foibles before they also give it up.

Assistants, toast

Threads, a script to blast them from existence

Vector storage, nothing of value was lost
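A sketch of such a cleanup script (assuming the openai Python SDK v1.x; note that the API has no endpoint to list threads, so those can only be deleted from IDs you recorded yourself):

```python
# Wipe all assistants and vector stores on the account.
from openai import OpenAI

client = OpenAI()

for assistant in client.beta.assistants.list():   # auto-paginates
    client.beta.assistants.delete(assistant.id)
    print("deleted assistant", assistant.id)

for store in client.beta.vector_stores.list():
    client.beta.vector_stores.delete(store.id)
    print("deleted vector store", store.id)

# Threads cannot be listed via the API; delete them from IDs you saved yourself,
# e.g. from a hypothetical file with one ID per line:
# with open("thread_ids.txt") as f:
#     for thread_id in f.read().split():
#         client.beta.threads.delete(thread_id)
```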


Then head over to a GPT. Edit and click on “configure”. You literally see EVERYTHING it is in text boxes.

Some randomly curated knowledge, and instructions telling the AI what to do there.

Go to the API playground. Paste those instructions and give a new assistant a name. Boom, same thing.

Then I’ve got some files. A maximum of 20. Those have to be moved, distinctly, into either code interpreter storage or a vector store. Drag and drop if you don’t want to write code.

Stick those in a vector store ID for the assistant.
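If you do want to script that attachment step, a sketch (assuming the openai Python SDK v1.x; the IDs are placeholders):

```python
# Attach an existing vector store to an assistant so file_search can use it.
from openai import OpenAI

client = OpenAI()

client.beta.assistants.update(
    "asst_XXXXXXXX",  # placeholder assistant ID
    tools=[{"type": "file_search"}],
    tool_resources={"file_search": {"vector_store_ids": ["vs_XXXXXXXX"]}},  # placeholder
)
```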


And then it’s ready for use.

And I wasted 1300 tokens to get an undesired response out of the gpt-4-turbo AI. It should only respond that way if it sees the ellipsis of a conversation starter. Another Assistant for the delete button.

2 Likes

Wait wait I am confused.

  1. You can connect a GPT to any API, or even create an API to do so.
  2. Are you saying the developer you hired owns the platform, so they are saying you can’t use their platform’s API?
  3. Yes, you can use the “Assistants API” as well, which is the same thing - the only real difference is that it is not in the ChatGPT interface; you need to supply the interface, code, etc. (see the sketch after this list).
  4. I disagree with the perspective that “GPTs are products of developers” - I have hundreds of small business owners, coaches, and agencies who literally don’t know what HTML is and have GPTs that they monetize. Yes, it is true that developers are going to have the most GPTs because of their understanding of APIs.
  5. This is related to #1. Here is a video of me connecting a GPT to an AirTable database: https://youtu.be/LItw8qJxkas?si=rS8QPABqsTUkt1vV
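For point 3, here is a minimal sketch of what that app-side code can look like (assuming the openai Python SDK v1.x; the assistant ID and the user question are hypothetical placeholders):

```python
# App-side flow: create a thread, add the user's message, run the assistant,
# then read the reply.
from openai import OpenAI

client = OpenAI()
ASSISTANT_ID = "asst_XXXXXXXX"  # placeholder

thread = client.beta.threads.create()
client.beta.threads.messages.create(
    thread_id=thread.id,
    role="user",
    content="What are your opening hours?",  # example user question
)

run = client.beta.threads.runs.create_and_poll(
    thread_id=thread.id,
    assistant_id=ASSISTANT_ID,
)

if run.status == "completed":
    messages = client.beta.threads.messages.list(thread_id=thread.id)
    # Messages come back newest first; the assistant's reply is on top.
    print(messages.data[0].content[0].text.value)
```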

I hope this gives some perspective.

1 Like

I don’t know if you are asking about what I just posted above.

I wrote about the feasibility of a “migrate” button to convert your GPT into an Assistant on the OpenAI API, and how little of what such a converter would migrate couldn’t already be done by hand in five minutes.

Actions within a custom ChatGPT Plus GPT interact with a developer’s or third-party API that someone has set up on the internet, using authentication and credentials.

The API itself has NO internet access provided by OpenAI.

The effect of a GPT’s Actions would thus have to be recreated using Assistants tools, and the code to handle those tool calls would have to be reimplemented. That likely means you don’t need to make “API calls” at all, since the code that processes the tool calls can live right on your server.
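Purely as an illustrative sketch of that pattern (assuming the openai Python SDK v1.x with the *_and_poll helpers, and an assistant that already has a matching function tool defined; the function name and its body are hypothetical stand-ins):

```python
# When a run pauses with "requires_action", execute the requested function
# yourself (locally, or by calling whatever external API you like) and submit
# the output back to the run.
import json
from openai import OpenAI

client = OpenAI()

def get_forex_rate(pair: str) -> str:
    # Hypothetical stand-in for what an Action used to do; your server can
    # make any HTTP request it wants here.
    return json.dumps({"pair": pair, "rate": 1.0842})

def run_with_tools(thread_id: str, assistant_id: str):
    run = client.beta.threads.runs.create_and_poll(
        thread_id=thread_id, assistant_id=assistant_id
    )
    while run.status == "requires_action":
        outputs = []
        for call in run.required_action.submit_tool_outputs.tool_calls:
            args = json.loads(call.function.arguments)
            if call.function.name == "get_forex_rate":
                outputs.append(
                    {"tool_call_id": call.id, "output": get_forex_rate(args["pair"])}
                )
        run = client.beta.threads.runs.submit_tool_outputs_and_poll(
            thread_id=thread_id, run_id=run.id, tool_outputs=outputs
        )
    return run
```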

Add to that diagram: Every potential GPT user is OpenAI’s customer, not yours, and must subscribe.

It is copying those instructions and pasting them into the playground - your assistant is created right there.

I am sure we are on the same page about APIs - not all require auth, and not all are paid. Of course, not every API call means internet access…
Yes, they have to be recreated with tools, but the tools can still call upon an API.

Also, I don’t have to add that to the diagram because it is already known when I present it - at least to most people.

Perhaps you read the headline and not the body.

This topic (and several others) are “I made a GPT, now how can I talk to it with API calls?”

The new part you posted was not there; I read through it. We can agree to disagree. That is why I started with “Wait, I am confused”.

It surely does, thanks.

Now I am confused :slight_smile:
The developer is a programmer that I hired to develop an iOS/Android app that would take the GPT and, through the OpenAI API, connect it to the app. The developer does not own any platform. He just informed me that a GPT cannot be connected to the API, as I wanted, and that instead I would need to use an Assistant and train it anew.

Now, I read your post and you are saying that it IS possible to connect a GPT? Hmm…

He may be right, as I don’t have all the information - but can a GPT connect to an API? Yes, it can.

I wish I understood your discussion, guys. But I’m also a newbie and not from the AI/computer field. I also have a case similar to @dudi11 ’s.

I have developed a custom GPT using GPT Builder. I did it initially for my own purpose, i.e., for self-use. But soon I started adding some features to it and it became famous in my office. Now a company has approached me asking to embed my custom GPT into their website. I have never used the Assistants API. I’m confused about how to make this possible: would I need to pay more to create a new Assistant via the API; can I not just connect my GPT to a website without using the Assistants API; how do I sell this to them; how do I set a usage limit; etc. Those are all my questions… :frowning:

Believe it or not, it is incredibly easy to change a GPT into an Assistant - granted, there is a lot of hoopla and incorrect information going around.

You and @dudi11 can take a look at this video I made talking about GPTs & Assistants. Later I will make a more updated one showing 5-10 companies that allow you to easily create/pull your Assistant.

On another note I would put your GPT behind a paywall and have your company pay you. You have a golden chance.

1 Like

So, after some deep reading, the bottom line is: you need to create an Assistant, which is not a big deal. It’s fast. You will only need to upload the files of the knowledge base that you used for your GPT (if you used any).
Once you have that, you can create a payment mechanism, a designated app, whatever, and begin to market it.

And if you are not technical (I’m not), hire a developer to do that for you. It’s quick.

1 Like

My GPT works with my server, including custom API calls; all you have to do is add the endpoints in your schema so that it can interact with the server. Example here: https://chatgpt.com/g/g-5lbas60Rz-forex-rates-premium-version/c/6dc293c1-5b92-41fa-8e86-5e14cbd8fbaf

(Here is a sample chat: https://chat.openai.com/share/8fd45083-683c-4c02-9800-5d47daeb95f8) The candle data was pulled from the server using 4 parameters, then the server injected a URL generated on the fly to our ForexGPT web app.
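For anyone wondering what “add the endpoints in your schema” looks like: an Action schema is just an OpenAPI description of your server’s endpoints. A hypothetical minimal example, written as a Python dict for consistency with the other sketches in this thread (the host, path, and parameter are made up, not ForexGPT’s real API):

```python
# Illustration only: a GPT Action needs an OpenAPI document describing your
# server's endpoints. Dump this dict to JSON and paste it into the GPT's
# Actions editor. Host, path, and parameters are hypothetical.
import json

action_schema = {
    "openapi": "3.1.0",
    "info": {"title": "Forex rates API", "version": "1.0.0"},
    "servers": [{"url": "https://api.example.com"}],
    "paths": {
        "/rates": {
            "get": {
                "operationId": "getRate",
                "summary": "Return the latest rate for a currency pair",
                "parameters": [{
                    "name": "pair",
                    "in": "query",
                    "required": True,
                    "schema": {"type": "string"},
                }],
                "responses": {"200": {"description": "Latest rate"}},
            }
        }
    },
}

print(json.dumps(action_schema, indent=2))
```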

Ok, the confusion here boils down to this -

A GPT can make calls to an API,

an API cannot make calls to a GPT in the same way.

Meaning, a custom GPT can make use of existing APIs and use them like tools, whereas you cannot create an app which makes API calls to a custom GPT. Not easily, anyway.

_J is right, and so is the contracted developer. You need to create an Assistant, which your developer should be able to help you with, I would assume.

3 Likes

This is what I have done.

This affords me much-needed time to build the necessary skill set to migrate to Assistants. The GPT builder is an ideal starting point for testing ideas.

1 Like