Is it possible to connect to GPTs through an HTTP request in my applications, or is it necessary to train my own GPT model for this purpose? I am interested in understanding whether I can use an existing custom GPT model as an endpoint.
Welcome to the community!
There’s good news and there’s bad news:
- the bad news is that it's not possible with custom GPTs
- the good news is that assistants (https://platform.openai.com/playground?mode=assistant) are essentially the same thing and made for exactly that purpose*
edit:
- the second bad news is that assistants are still a bit rough around the edges, though they're working on it - they're usable depending on your use case
I’m hoping it’s a feature in the works as might be hinted at by this commit:
…that was subsequently reverted:
It’s also possible that it was a different feature altogether, that it is the feature but was intended to be internal-only or private-preview-only, or that it isn’t a feature at all.
But that first commit was exciting for a while there before it was reverted.