You can recreate your GPT as an Assistant and use the Assistants API.
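As a rough illustration of that suggestion, here is a minimal sketch of rebuilding a custom GPT's configuration as an Assistant with the OpenAI Python SDK. The `build_assistant_payload` helper and `MY_GPT_CONFIG` are hypothetical names (you would copy the instructions and tool settings out of the GPT builder by hand); the commented-out `assistants.create` call is the actual SDK method, which requires a paid API key.

```python
# Sketch: translating a GPT-style config into Assistants API arguments.
# build_assistant_payload and MY_GPT_CONFIG are illustrative, not official.

def build_assistant_payload(config: dict) -> dict:
    """Turn a hand-copied GPT config into kwargs for assistants.create()."""
    payload = {
        "name": config["name"],
        "instructions": config["instructions"],  # the GPT's system prompt
        "model": config.get("model", "gpt-4-1106-preview"),
        "tools": [],
    }
    if config.get("code_interpreter"):
        payload["tools"].append({"type": "code_interpreter"})
    if config.get("knowledge_files"):
        payload["tools"].append({"type": "retrieval"})
        payload["file_ids"] = config["knowledge_files"]
    return payload

MY_GPT_CONFIG = {  # hypothetical: copied by hand from the GPT builder UI
    "name": "Support Helper",
    "instructions": "You answer questions about our product docs.",
    "code_interpreter": True,
    "knowledge_files": ["file-abc123"],
}

payload = build_assistant_payload(MY_GPT_CONFIG)
# With an API key you would then create the Assistant:
#   from openai import OpenAI
#   assistant = OpenAI().beta.assistants.create(**payload)
print(payload["tools"])
```

Note that knowledge files have to be re-uploaded through the Files API; the GPT builder's uploads are not shared with the Assistants API.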
Another way to ask this question: is there going to be feature parity, i.e. the ability to configure models via the API the same way custom GPTs are built?
This is needed. I spent hours and hours yesterday creating a custom GPT through the interface and was so happy with my results, only to find out that I have no way of connecting my FastAPI script to it. I can only share the web link. What is the point? Hours and hours wasted.
Same here bro, it’s frustrating to have to backtrack!
Big +1 for this. We want to integrate our custom GPT into our application and use it internally for data processing.
+++++1 for this one!
GPTs are useless for third-party apps if they’re stuck in OpenAI’s UI.
Don’t you guys wanna make more $$$ from tokens and requests?
Hopefully in the future we can duplicate a GPT as an Assistant (assuming they use the same API; who knows, Assistants don’t even have GPT-4V or DALL·E yet).
But if you are expecting GPTs to be freely available for API, well, don’t hold your breath. You should’ve been building an Assistant and paying for the service.
Not at all. You can attract & introduce customers that are already Plus subscribers to your services using GPTs & Actions. Users can switch between commonly used GPTs for their daily routines instead of having 3 different apps.
It’s free for you to create and distribute a GPT (And you get revenue sharing) on the platform because it’s expected to stay IN the platform and attract plus subscribers to the GPT ecosystem.
I’m not sure you got my point.
I’m building bots that can benefit our company and employees, and I’d like to white-label them. GPTs are personalized, polished models that I’d love to use inside my company. But how can I do that if our employees have to use OpenAI’s GUI? GPTs stuck inside the OpenAI platform are no use for what I’m trying to build. I believe I could fine-tune a model instead, but... do you understand my point? It would be much easier to be able to share GPTs through an API, and of course we’d pay for it, just as we already pay for tokens.
Assistants are GPTs through the API. You have to pay for the service.
What’s up with the implication that we aren’t “paying for the service” when building a custom GPT?
Custom GPTs and the GPT Store are not available to non-Plus users.
It’s not an implication; you’re also missing the point.
If you want your GPT to be used on your website, you need to create an Assistant and pay for it (for other people to use it).
Of course you need to pay for a Plus membership to create and use GPTs, but that’s not what I was talking about.
Hello, guys! You’ve created a wonderful service, and I sincerely thank you for it. However, it doesn’t work on my website and displays the error ‘Sorry, something went wrong. Please try again later.’ Meanwhile, the ChatGPT connection through the API, which I set up myself, works fine. Your test code for your Assistant also works. What could be the problem? Care Chat
This seems to be weirdly difficult for some users to understand, I guess that’s to be expected when you release something like GPTs. It lowers the barrier to access drastically.
+1. I think this is important because Assistants do not have web-browsing capabilities, so +1.
Assistants CAN browse the web. A GPT’s web browsing is just a function that calls external APIs. You can have your Assistant use more APIs than the base GPT does, but you need to code the function execution and retrieval yourself.
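To make that concrete, here is a sketch of wiring "browsing" into an Assistant via function calling. The tool schema follows the Assistants API function-tool format; `fetch_page` and `handle_tool_call` are hypothetical helpers, and `fetch_page` is stubbed here (a real version would do an HTTP GET and strip the HTML) so the dispatch logic runs offline.

```python
# Sketch: a "browse the web" function tool for an Assistant.
# fetch_page is a hypothetical helper, stubbed for offline testing.
import json

BROWSE_TOOL = {
    "type": "function",
    "function": {
        "name": "fetch_page",
        "description": "Fetch the text content of a web page by URL.",
        "parameters": {
            "type": "object",
            "properties": {"url": {"type": "string"}},
            "required": ["url"],
        },
    },
}

def fetch_page(url: str) -> str:
    # Stub; swap in a real HTTP request (e.g. requests.get) for production.
    return f"<contents of {url}>"

def handle_tool_call(name: str, arguments: str) -> str:
    """Dispatch one tool call the model requested during a run."""
    args = json.loads(arguments)
    if name == "fetch_page":
        return fetch_page(args["url"])
    raise ValueError(f"unknown tool: {name}")

# In the Assistants run loop, when run.status == "requires_action", you call
# handle_tool_call for each requested call and submit the outputs back with
# client.beta.threads.runs.submit_tool_outputs(...).
print(handle_tool_call("fetch_page", '{"url": "https://example.com"}'))
```

The model never fetches anything itself; it only emits a tool call, and your code decides how (and whether) to execute it, which is why a bare Assistant tells you it "cannot browse the web".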
I set up an Assistant and asked it to summarize the contents of a web page, given a link. It responded by stating that it cannot browse the web. Perhaps this can be accomplished by way of functions?
+1 to being able to call your own GPT. My main use case is I want to be able to send API calls with context (i.e. knowledge in a custom GPT) but I don’t want to have to pass it in the call every time - mainly due to cost implications.
Or is any of that possible via Assistant API?
The Assistants API isn’t really built for that kind of fine-grained control. There is an API method to disconnect a retrieval file from an Assistant, but that should be considered a tool for building and revising Assistants, not a per-request switch; you wouldn’t know in advance when a user’s question is relevant to the retrieval documents.
If you really want control, you’d write your own API chatbot, which gives you as much control as enabling or disabling past conversation turns as you see fit. Not an Assistant.
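A minimal sketch of what that self-written chatbot looks like, assuming the Chat Completions API: you own the message list, so each request carries exactly the system prompt, history turns, and context you choose. `build_messages` and the model name are illustrative choices, not a prescribed pattern.

```python
# Sketch: a chat client where YOU decide which past turns are sent.
# build_messages is a hypothetical helper; trimming policy is up to you.

def build_messages(system_prompt, history, user_input, max_turns=3):
    """Keep only the last max_turns exchanges; prepend the system prompt."""
    trimmed = history[-(max_turns * 2):]  # each turn = user + assistant msg
    return (
        [{"role": "system", "content": system_prompt}]
        + trimmed
        + [{"role": "user", "content": user_input}]
    )

history = [
    {"role": "user", "content": "q1"}, {"role": "assistant", "content": "a1"},
    {"role": "user", "content": "q2"}, {"role": "assistant", "content": "a2"},
    {"role": "user", "content": "q3"}, {"role": "assistant", "content": "a3"},
    {"role": "user", "content": "q4"}, {"role": "assistant", "content": "a4"},
]
msgs = build_messages("Answer using only the provided context.", history, "q5")
# With an API key you would then send these messages:
#   from openai import OpenAI
#   reply = OpenAI().chat.completions.create(model="gpt-4", messages=msgs)
print(len(msgs))  # 1 system + 6 kept history messages + 1 new user = 8
```

Because the full prompt is assembled on every call, you can also inject only the knowledge snippets relevant to the current question instead of resending everything, which addresses the cost concern raised above.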
+1 for the CustomGPT via API