I have an idea that would be super easy to do. I’ve set up a simple prototype on localhost: basically, discovering plugins using ChatGPT. I posted a couple of screenshots of how this would work. Would anyone be interested in this?
I already have the zapvine.com catalog set up for the info. Any devs wanting to have their plugins discoverable can just add their plugin manifest link to zapvine.
Open the dev console in the browser.
Go to the network tab.
Select GPT-4 and then go to the plugin store.
A request named “p?offset=0&limit=250&statuses=approved” appears in the network tab.
Inspect this in the preview.
Copy the object from the response and voila…
You have all the plugins with their respective ai-plugin.json and openapi.yaml links.
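For anyone who wants to script that last step, here’s a minimal Python sketch for pulling the links out of the copied object. The key names (“items”, “domain”, “manifest”, “api”, “url”) are assumptions about the response shape, so adjust them to whatever the actual payload contains:

```python
import json

# Minimal sketch: parse the object copied from the network tab
# (saved here as plugins.json). The key names below are assumptions
# about the response shape -- adjust to the actual payload.
with open("plugins.json", encoding="utf-8") as f:
    payload = json.load(f)

for item in payload.get("items", []):
    domain = item.get("domain", "")
    manifest = item.get("manifest", {})
    print(manifest.get("name_for_model"))
    print(f"  ai-plugin.json: https://{domain}/.well-known/ai-plugin.json")
    print(f"  openapi spec:   {manifest.get('api', {}).get('url')}")
```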
I don’t know if this helps.
Is your locally hosted plugin now able to select the appropriate plugins from the store?
Can this be used to switch plugins on and off during conversation?
Thanks for the tip on pulling the existing plugin list. I’m going to check that out.
What I put together is for devs to be able to upload their manifest to zapvine so that it’s discoverable through a ChatGPT plugin. The plugin would not be integrated into any official sources, nor would it be able to switch plugins on and off.
But the funny thing is that with a locally hosted plugin and all the endpoints from all plugins available, the limitation of using only 3 plugins becomes obsolete. You can call all the endpoints locally and relay their responses through your own locally hosted plugin.
That is an interesting idea. It would be a local proxy. It’s worth investigating. However, if the backend services required authentication, it would be a problem to support those plugins.
What may go wrong with so many endpoints is that the LLM will not choose the proper one.
I guess explicit prompting could help steer the LLM to the desired endpoint.
Or …
The Universal Local Plugin (ULP) does not need all the endpoints included; this can be solved through prompting.
For example, the ULP needs only 2 or at most 3 request parameters: {plugin_name}, {endpoint_command}, {kwargs}.
On the local side you then execute the endpoint that the prompted parameters point to, as in the sketch below.
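Here’s a rough sketch of what that dispatch could look like as a FastAPI app. The registry, the example plugin URL, and the GET-only forwarding are all placeholder assumptions; a real version would read the method and parameter locations from each plugin’s OpenAPI spec:

```python
import httpx
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI()

# Hypothetical registry built from the scraped manifests:
# plugin name -> base URL of its backend service.
PLUGIN_REGISTRY: dict[str, str] = {
    "example_plugin": "https://plugin.example.com",
}

class ULPRequest(BaseModel):
    plugin_name: str
    endpoint_command: str  # path of the target endpoint, e.g. "search"
    kwargs: dict = {}      # optional parameters for that endpoint

@app.post("/call")
async def call(req: ULPRequest):
    """Look up the plugin's base URL and forward the request to it."""
    base_url = PLUGIN_REGISTRY.get(req.plugin_name)
    if base_url is None:
        raise HTTPException(status_code=404, detail=f"unknown plugin: {req.plugin_name}")
    # Assumes a GET endpoint; the real HTTP method would have to come
    # from the plugin's OpenAPI spec.
    async with httpx.AsyncClient() as client:
        resp = await client.get(f"{base_url}/{req.endpoint_command}", params=req.kwargs)
    return {"status": resp.status_code, "body": resp.text}
```

The nice part of this shape is that ChatGPT only ever sees one endpoint with those 2–3 parameters, so the spec stays tiny no matter how many plugins sit behind it.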
I did get an initial version of this out. I’m just manually pulling the list of plugins for now. I tried that URL through Postman without much success; it’s flagging me as a bot and asking for a captcha.
I’m starting to look into this as well. As a first step, I’m thinking about how to merge the OAS files. My thought is to use the name_for_model from the manifest as the root context of a new path, and then append the path from the API spec in the OAS. This will make it easier for the plugin to map the different APIs to the correct backend service. It also makes each endpoint unique.
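A minimal sketch of that merge, assuming the specs are already loaded as dicts; note it only merges paths, so a real version would also have to reconcile components/schemas and handle collisions:

```python
def merge_specs(specs: dict[str, dict]) -> dict:
    """Merge several OpenAPI specs into one, prefixing every path with the
    plugin's name_for_model so each endpoint stays unique and can be mapped
    back to the right backend service."""
    merged = {
        "openapi": "3.0.1",
        "info": {"title": "Universal Local Plugin", "version": "0.1.0"},
        "paths": {},
    }
    for name_for_model, spec in specs.items():
        for path, operations in spec.get("paths", {}).items():
            # e.g. /search in plugin "todo" becomes /todo/search
            merged["paths"][f"/{name_for_model}{path}"] = operations
    return merged

# Toy usage with inline specs; real specs would be loaded from each
# plugin's openapi.yaml.
merged = merge_specs({
    "todo": {"paths": {"/items": {"get": {"summary": "List items"}}}},
    "weather": {"paths": {"/forecast": {"get": {"summary": "Get forecast"}}}},
})
print(list(merged["paths"]))  # ['/todo/items', '/weather/forecast']
```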
I have now made it to the point where you can make any request: the ULP first searches for the appropriate plugin, then calls the matching endpoint (with optional parameters) via a direct request.
Unfortunately, this only works with plugins that don’t use OAuth and that work in general. A few have hit their rate limits, and others have an incorrect domain or server URL.
In general, the ULP would work if the endpoints could be accessed without restrictions.
Which in turn means OpenAI could technically make all plugins available instead of limiting the user to only 3.