If my server is running locally (localhost), and I register my plugin in the plugin store, how can the model access my local server when calling an API?
The only option I can think of is the following:
1. On plugin registration, ChatGPT pulls the manifest from the local server.
2. On a user prompt, ChatGPT pulls the OpenAPI spec from the local server.
3. ChatGPT sends the prompt along with the API spec to the model (which is hosted remotely).
4. The model decides whether to call an API and, if so, sends JSON back to ChatGPT with the call details.
5. ChatGPT performs a REST call against the local server.
6. ChatGPT gets the result and sends it back to the model.
7. The model combines the result with the completion and sends the final result back to ChatGPT.
8. ChatGPT displays the combined completion to the user.
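To make the flow concrete, here is roughly the loop I have in mind, in Python. Every name here (`call_model`, the message shapes, the `/todos` path) is made up purely to illustrate the control flow I described, not how OpenAI actually implements it:

```python
# Hypothetical sketch of the orchestration loop. The model itself never
# opens a network connection; the orchestrator (ChatGPT) does the HTTP call.
import json
import urllib.request

def call_model(prompt, api_spec):
    # Stand-in for the remote model: given the prompt and the API spec,
    # it returns either a final answer or the details of an API call.
    return {"call": {"method": "GET", "path": "/todos"}}

def orchestrate(prompt, base_url, api_spec):
    decision = call_model(prompt, api_spec)
    if "call" not in decision:
        return decision["answer"]
    call = decision["call"]
    # ChatGPT (not the model) performs the REST call against the server.
    with urllib.request.urlopen(base_url + call["path"]) as resp:
        result = json.loads(resp.read())
    # The result goes back to the model to be woven into the completion.
    return f"Model completion incorporating {result!r}"
```

The point of the sketch is step 5: in this picture the HTTP call is made by ChatGPT's orchestration layer, which is exactly what seems impossible when the server only exists on my machine.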
But this route is supposedly different from the case when my server runs on a remote domain (in which case the model can perform the REST call directly).
Hi Michael and welcome! I don’t think you can register a localhost plug-in with the plug-in store. Localhost is just a convenience to help you develop and test your plug-in locally without having to deploy every change. But you do have to deploy it to a publicly accessible domain before you can register with the plug-in store. At least that’s my understanding. I’ve never tried but I can’t imagine how it would be possible otherwise.
The steps you’re describing seem reasonable for how the localhost dev/testing setup works, but for registered plug-ins ChatGPT can’t rely on your local browser context to make REST calls to a local API: you might not even be online, let alone connected to ChatGPT via a browser window.
Yeah, this was my initial understanding too, but many examples provided by OpenAI use a localhost URL even while other parts of the manifest use domains such as example.com; that’s what confused me.
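For context, the manifest in OpenAI’s localhost examples looks roughly like this (the port, email, and domains below are illustrative). Note how `api.url` and `logo_url` point at localhost while the contact and legal fields use a public domain, which is the mix that confused me:

```json
{
  "schema_version": "v1",
  "name_for_human": "TODO Plugin",
  "name_for_model": "todo",
  "description_for_human": "Manage your TODO list.",
  "description_for_model": "Plugin for managing a TODO list.",
  "auth": { "type": "none" },
  "api": {
    "type": "openapi",
    "url": "http://localhost:3333/openapi.yaml"
  },
  "logo_url": "http://localhost:3333/logo.png",
  "contact_email": "support@example.com",
  "legal_info_url": "http://example.com/legal"
}
```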
Also, I’ve recently heard there are ways to let a remote server reach a localhost server. I certainly hope this is not what ChatGPT is doing…
The logo, manifest, and OpenAPI specification (OAS) are all fetched with HTTP GET upon registration. Also, before any HTTP command such as GET, POST, or PUT, an HTTP OPTIONS request, AKA a CORS preflight, is performed.
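A minimal dev-server sketch using Python’s standard `http.server`, showing both the preflight and the manifest GET. The path, port, allowed origin, and headers here are my own assumptions for local testing, not anything official:

```python
# Sketch of a local plugin dev server that answers the CORS preflight
# (OPTIONS) sent before GET/POST/PUT, plus the manifest GET.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class PluginHandler(BaseHTTPRequestHandler):
    def _cors_headers(self):
        # Allow the ChatGPT origin; adjust for your own testing setup.
        self.send_header("Access-Control-Allow-Origin", "https://chat.openai.com")
        self.send_header("Access-Control-Allow-Methods", "GET, POST, PUT, OPTIONS")
        self.send_header("Access-Control-Allow-Headers", "Content-Type")

    def do_OPTIONS(self):
        # CORS preflight: reply with the allowed methods/headers, no body.
        self.send_response(204)
        self._cors_headers()
        self.end_headers()

    def do_GET(self):
        # The manifest is fetched with a plain GET on registration.
        if self.path == "/.well-known/ai-plugin.json":
            body = json.dumps({"schema_version": "v1"}).encode()
            self.send_response(200)
            self._cors_headers()
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

# To run locally:
#   HTTPServer(("localhost", 5003), PluginHandler).serve_forever()
```

If the preflight fails (missing or wrong `Access-Control-Allow-*` headers), the later GET/POST never happens, which is a common reason registration silently fails.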
It really helps to turn on debug mode in the HTTP server and send the debug messages to the console that started the server.
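For example, with Python’s `http.server` you can override `log_message` and dump each request’s headers to the console that started the server. This is just a sketch of the idea, not anything OpenAI-specific:

```python
# Sketch: log every request line and its full header block to the
# console, so failed registration attempts become visible.
from http.server import BaseHTTPRequestHandler, HTTPServer

class DebugHandler(BaseHTTPRequestHandler):
    def log_message(self, fmt, *args):
        # Route the default per-request log line through our own prefix.
        print(f"[debug] {self.address_string()} {fmt % args}")

    def do_GET(self):
        # Print the request line and headers before answering.
        print(f"[debug] {self.command} {self.path}")
        for name, value in self.headers.items():
            print(f"[debug]   {name}: {value}")
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")

# To run locally:
#   HTTPServer(("localhost", 5003), DebugHandler).serve_forever()
```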
OpenAI created the plugin devtools to help with this. They can be turned on in the ChatGPT settings; they open a panel on the right of the ChatGPT page and show whether the manifest and OAS validated, or the errors if not.
When a prompt is issued, the panel will also show the request sent to the plugin and the response.
The OAS is not sent with the prompt.
I think you mis-worded that; it is very confusing.
The other statements are close, but this really needs more detail to be of value.