I’m running into an issue right now where gpt-3.5-turbo + plugins seems to add comments to the JSON request body of a POST call, like so:
"40.7128,-74.0060", // New York City, NY
"34.0522,-118.2437", // Los Angeles, CA
"41.8781,-87.6298", // Chicago, IL
"29.7604,-95.3698", // Houston, TX
"33.4484,-112.0740" // Phoenix, AZ
JSON doesn’t support comments, so naturally these cause the plugin to error out. It sometimes fixes itself after a couple of iterations, but other times it gets stuck in a loop.
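One server-side workaround, until the model stops doing this, is to strip `//`-style line comments before parsing. A minimal sketch (the `locations` field name and the `parse_tolerant` helper are my own illustration, and the regex assumes string values never contain `//`, which holds for coordinate pairs like the ones above):

```python
import json
import re

def parse_tolerant(text: str) -> object:
    """Parse JSON after removing //-style line comments.

    The model sometimes emits commented 'JSON' that json.loads
    rejects, so we delete everything from // to end of line first.
    Caveat: this naive regex would also mangle string values that
    legitimately contain //, e.g. URLs.
    """
    cleaned = re.sub(r'//[^\n]*', '', text)
    return json.loads(cleaned)

# A request body like the one the model actually sent:
body = '''{
  "locations": [
    "40.7128,-74.0060", // New York City, NY
    "34.0522,-118.2437" // Los Angeles, CA
  ]
}'''

print(parse_tolerant(body)["locations"])
```

This doesn’t fix the model’s behavior, but it breaks the retry loop: the request succeeds on the first iteration instead of bouncing back as a parse error.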
As you can see below, it took two iterations before it sent the JSON body correctly.
I haven’t written a plugin yet, but as I understand it there are no prompts written by the plugin developer. OpenAI determines how to formulate these API calls based on the ai-plugin.json manifest and the OpenAPI YAML file.
I guess the best word would be “description”.
I’m pretty sure it works similarly to a prompt, though. To be fair, I haven’t tried it; I’m just basing this on:
The file includes metadata about your plugin (name, logo, etc.), details about authentication required (type of auth, OAuth URLs, etc.), and an OpenAPI spec for the endpoints you want to expose.
The model will see the OpenAPI description fields, which can be used to provide a natural language description for the different fields.
The plugin description, API requests, and API responses are all inserted into the conversation with ChatGPT. This counts against the context limit of the model.
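For reference, the ai-plugin.json manifest those docs describe looks roughly like this (a sketch based on the published examples; the plugin name, URLs, and descriptions are hypothetical). The `description_for_model` field is the part that reads most like a prompt:

```json
{
  "schema_version": "v1",
  "name_for_human": "Weather Lookup",
  "name_for_model": "weather",
  "description_for_human": "Get weather for a list of locations.",
  "description_for_model": "Look up current weather for coordinate pairs given as 'lat,lon' strings.",
  "auth": { "type": "none" },
  "api": {
    "type": "openapi",
    "url": "https://example.com/openapi.yaml"
  },
  "logo_url": "https://example.com/logo.png",
  "contact_email": "support@example.com",
  "legal_info_url": "https://example.com/legal"
}
```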
Yeah, and a good chunk of this probably lies with prompt instructions injected by OpenAI behind the scenes.
I wish OpenAI were a bit more transparent about the process by which the metadata is consumed and the resulting API calls are formed. We see that sort of process documented in the ReAct, MRKL, and Toolformer research papers and in LangChain and Haystack agents, but I haven’t seen anything about how OpenAI plugins perform this processing.