Hi Forum, I created a custom GPT but later realised that I could not consume its output through the API.
If I give GPT-4 the same context (that I gave my custom GPT) every time I make an API call, will it give the same type of responses?
To be exact, my use case is: pass YouTube comments and channel-related data to the model and get a great reply to each comment on behalf of the author.
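For that use case, each API call might fold the comment and the channel data into one prompt. A minimal sketch — the channel fields (`name`, `topic`, `tone`) are hypothetical examples of what "channel-related data" could look like, not anything the API requires:

```python
def comment_reply_prompt(channel: dict, comment: str) -> str:
    """Fold channel-related data and a YouTube comment into one prompt.

    The channel keys used here (name/topic/tone) are made-up examples;
    use whatever data you actually have about the channel.
    """
    return (
        "You reply to YouTube comments on behalf of the channel author.\n"
        f"Channel: {channel['name']} ({channel['topic']})\n"
        f"Tone: {channel['tone']}\n\n"
        f"Comment to answer:\n{comment}\n\n"
        "Write a friendly reply as the author."
    )
```

You would then send the returned string as the user (or system) message on every call, since the model keeps no memory between API requests.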
Should be really close. Give it a shot, and please report back to let us know.
Welcome to the forum!
Aren’t GPTs available via the API as Assistants? I thought they were. You have the Assistant’s ID, so you can access a thread and add messages to it.
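To make that concrete, the thread-based flow looks roughly like this with the Python SDK’s beta Assistants endpoints. This is a sketch, not a drop-in solution: the Assistant ID is a placeholder, and error handling is omitted.

```python
def reply_via_assistant(assistant_id: str, user_message: str) -> str:
    """Create a thread, post a message, run the Assistant, return its reply.

    assistant_id is a placeholder (they look like "asst_..."); find yours
    in the OpenAI dashboard or via client.beta.assistants.list().
    """
    from openai import OpenAI  # imported here so the sketch reads top-down

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    thread = client.beta.threads.create()
    client.beta.threads.messages.create(
        thread_id=thread.id, role="user", content=user_message
    )
    client.beta.threads.runs.create_and_poll(
        thread_id=thread.id, assistant_id=assistant_id
    )
    # messages.list returns newest first; the Assistant's reply is at index 0
    messages = client.beta.threads.messages.list(thread_id=thread.id)
    return messages.data[0].content[0].text.value
```

Note this gives you an Assistant you configured via the API or dashboard, not your custom GPT itself — you have to copy the GPT’s instructions over yourself.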
Okay @PaulBellow. The other challenge I am facing is that powering this application with GPT-4 is becoming super expensive, whereas GPT-3.5 isn’t useful since it has a lower token limit.
For my specific use case, would you suggest training my own model instead? I am just worried about how long it would take to get it to a good state, since basic communication skill is something the GPT models already have sorted.
My method is usually to get the prompt working great/flawlessly in GPT-4, then try to replicate that with 3.5… difficult, but most of the time it can be done. Do you need more than 16k tokens per call?
Aren’t they just the same? We don’t know the temperature/max tokens, or whether OpenAI is adding some pre-prompting, but you should be able to replicate the experience! I built PayMeForMyAI, a tool where creators build and monetize their GPTs on their own terms, and I use the API to power the GPTs. The responses are usually the same!
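In practice, replicating a custom GPT over the API mostly means pasting the GPT’s Instructions into the system message and picking the sampling settings yourself. A sketch of that, with the caveat that the temperature value is a guess — OpenAI doesn’t publish what ChatGPT actually uses:

```python
def build_request(instructions: str, user_input: str) -> dict:
    """Assemble a chat.completions request that mimics a custom GPT.

    `instructions` is whatever you pasted into your GPT's Instructions
    box. The temperature here is an assumed default, not ChatGPT's
    actual (unpublished) setting.
    """
    return {
        "model": "gpt-4",
        "messages": [
            {"role": "system", "content": instructions},
            {"role": "user", "content": user_input},
        ],
        "temperature": 1.0,  # assumption; tune until replies feel right
    }


def send(request: dict) -> str:
    from openai import OpenAI  # lazy import so build_request works standalone

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(**request)
    return response.choices[0].message.content
```

If the replies still feel off versus the GPT, the difference is usually the hidden pre-prompting mentioned above, which you can only approximate.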
Just curious, what’s the reasoning for switching from a custom GPT to the API? I’ve found GPTs to be much easier to build and maintain, and cheaper to run.
The API allows you to build a value-added product of your own that can do anything and which can appear in or be employed in whatever site or application you want it to.
A GPT enhances OpenAI’s own product, and can only be used by ChatGPT Plus subscribers.
Can’t use them in an application running outside the OpenAI UI.
I think it’s close, if you also pass the past correspondence in the prompt.
On my site (see link in bio) I created an option for people to use my GPTs via a chat on my site, which then uses the normal API with the same instructions. I found it gets close, generally.
I originally wanted to have others use my GPTs as Assistants, but that only works if they use my API key, which I don’t want. Since I want other users to be able to use my GPTs/Assistants with their own API keys, I had to build it like this (hopefully an interim approach).
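Since chat completions are stateless, "passing the past correspondence" just means replaying earlier turns as alternating user/assistant messages. A minimal sketch — the helper name and the shape of `history` (a list of user/assistant text pairs from your own storage) are my own conventions:

```python
def with_history(system_prompt: str, history: list, new_message: str) -> list:
    """Build a messages list that replays earlier turns.

    `history` is a list of (user_text, assistant_text) pairs kept in your
    own storage -- the chat completions endpoint remembers nothing between
    calls, so every request must carry the whole conversation.
    """
    messages = [{"role": "system", "content": system_prompt}]
    for user_text, assistant_text in history:
        messages.append({"role": "user", "content": user_text})
        messages.append({"role": "assistant", "content": assistant_text})
    messages.append({"role": "user", "content": new_message})
    return messages
```

The returned list goes straight into the `messages` parameter of a chat completion call; just watch the total token count as the history grows.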
You have full control over the hyperparameters (top_p, temperature, stop sequences, presence and frequency penalty). Admittedly, it’s not simple when you try to tame the beast this way. After almost a year of doing so daily, I rarely use ChatGPT or custom GPTs to achieve my research goals…
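For reference, those knobs map directly onto chat completion parameters. A sketch with every one pinned explicitly — the numbers are purely illustrative, not recommendations:

```python
# Every sampling knob mentioned above, pinned explicitly.
# The values are illustrative examples, not recommended settings.
SAMPLING = {
    "temperature": 0.7,        # randomness of token sampling
    "top_p": 0.9,              # nucleus-sampling cutoff
    "presence_penalty": 0.2,   # nudge away from topics already mentioned
    "frequency_penalty": 0.3,  # nudge away from verbatim repetition
    "stop": ["\n---"],         # arbitrary example stop sequence
    "max_tokens": 400,         # cap on the reply length
}


def tamed_completion(messages: list) -> str:
    from openai import OpenAI  # lazy import so SAMPLING is usable standalone

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model="gpt-4", messages=messages, **SAMPLING
    )
    return response.choices[0].message.content
```

Conventional advice is to vary temperature or top_p, not both at once, when tuning output style.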