Preventing Comments in JSON Query Body

I’m running into an issue right now where gpt-3.5-turbo + plugins seems to add comments in the JSON body request of a POST query like so:

  "coordinates": [
    "40.7128,-74.0060", // New York City, NY
    "34.0522,-118.2437", // Los Angeles, CA
    "41.8781,-87.6298", // Chicago, IL
    "29.7604,-95.3698", // Houston, TX
    "33.4484,-112.0740" // Phoenix, AZ

JSON doesn’t support comments, so naturally these result in the plugin erroring out. It sometimes fixes itself after a couple of iterations, but other times it gets stuck in a loop.
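For reference, any strict parser rejects those comments outright; here's a quick check with Python's `json` module (the body is a shortened, made-up version of the one above):

```python
import json

# The kind of body the model produces, with a trailing // comment.
raw = '{"coordinates": ["40.7128,-74.0060" // New York City, NY\n]}'

try:
    json.loads(raw)
except json.JSONDecodeError as err:
    print("rejected:", err)
```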

As you can see below, it took it two iterations to correctly send the JSON body.

To prevent this, I tried explicitly stating “no comments in JSON” in the plugin description (in ai-plugin.json) and in the API description in my OpenAPI YAML file. That didn’t work, though.

Anyone run into something similar? I feel like GPT-4 w/ plugins wouldn’t make this mistake, but in the meantime I’d love to prevent this from happening with 3.5-turbo.

Or just remove every comment programmatically with a regular expression?
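One caveat with a bare regex: `//` can legitimately appear inside string values (URLs, for instance), so a small character-aware stripper is safer than a one-line pattern. A sketch in Python, run server-side before parsing the request body (function name and sample input are mine, just for illustration):

```python
import json

def strip_json_comments(text: str) -> str:
    """Remove // line comments from a JSON-like string,
    leaving anything inside string literals untouched."""
    out = []
    in_string = False
    escaped = False
    i = 0
    while i < len(text):
        ch = text[i]
        if in_string:
            out.append(ch)
            if escaped:
                escaped = False
            elif ch == "\\":
                escaped = True
            elif ch == '"':
                in_string = False
            i += 1
        elif ch == '"':
            in_string = True
            out.append(ch)
            i += 1
        elif ch == "/" and i + 1 < len(text) and text[i + 1] == "/":
            # Skip to end of line; keep the newline itself.
            while i < len(text) and text[i] != "\n":
                i += 1
        else:
            out.append(ch)
            i += 1
    return "".join(out)

raw = ('{"coordinates": ["40.7128,-74.0060", // New York City, NY\n'
       '"34.0522,-118.2437"]}')
data = json.loads(strip_json_comments(raw))
```

This treats the model's output as "JSON with comments" and normalizes it, rather than hoping the prompt side fixes itself.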

You could ask ChatGPT to write it for you :slight_smile:

I imagine the solution is simply to try different prompts? I missed the part where you mentioned being the plug-in owner.

GPT doesn’t do well with “don’t do this”; it does much better if you lead by example or give it a separate path.
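In the plugin context, that could mean putting a concrete valid example in the OpenAPI `description` field instead of (or alongside) a prohibition. A sketch of what that might look like, assuming a hypothetical `/distances` endpoint and `coordinates` parameter (both names invented for illustration):

```yaml
paths:
  /distances:
    post:
      operationId: getDistances
      requestBody:
        content:
          application/json:
            schema:
              type: object
              properties:
                coordinates:
                  type: array
                  items:
                    type: string
                  description: >-
                    Array of "lat,lon" strings, e.g.
                    ["40.7128,-74.0060", "34.0522,-118.2437"].
                    Send strict JSON only, with no comments.
```

No guarantee the model honors it every time, but a worked example tends to steer it better than a bare "no comments" instruction.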

I haven’t written a plug-in yet, but as I understand it there are no prompts written by a plug-in developer. OpenAI determines how to formulate these API calls based on the ai-plugin.json and YAML files.

I guess the best word would be “description”.
I’m pretty sure it works similarly to a prompt, though.

I haven’t tried, to be fair. I’m just basing it on:

The file includes metadata about your plugin (name, logo, etc.), details about authentication required (type of auth, OAuth URLs, etc.), and an OpenAPI spec for the endpoints you want to expose.
The model will see the OpenAPI description fields, which can be used to provide a natural language description for the different fields.
The plugin description, API requests, and API responses are all inserted into the conversation with ChatGPT. This counts against the context limit of the model.

Yeah, and a good chunk of this probably lies with prompt instructions injected by OpenAI behind the scenes.

I wish OpenAI were a bit more transparent about the process by which the metadata is consumed and the resulting API calls are formed. We see that sort of process in the ReAct, MRKL, and Toolformer research papers and in LangChain and Haystack agents, but I haven’t seen anything about how OpenAI plug-ins perform this processing.