How are the `ai-plugin.json` and OpenAPI spec converted into the "Prompt for ChatGPT"?

Hi everyone!

I’ve been diving into ChatGPT plugin development, and I recently set out to clarify how the `ai-plugin.json` file and the OpenAPI spec are translated into the “Prompt for ChatGPT” that you can find in the “plugin devtools” section. I thought sharing my findings with the community would be helpful, since understanding this process can be crucial for optimizing your plugin’s performance and user experience.

Although the official documentation provides a good starting point, working through concrete examples helped me gain deeper insight into the conversion process.

The ChatGPT plugin prompt is dynamically generated TypeScript pseudo-code that acts as an interface between ChatGPT and your plugin. Both the `ai-plugin.json` file and the OpenAPI specification feed into it.
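
To make that concrete, here is roughly the shape the generated pseudo-code takes. The `todo` plugin, its endpoint, and the comment text are all hypothetical, and the exact layout in your devtools may differ:

```ts
// Hypothetical sketch: the description_for_model text appears first, as a
// comment, followed by a namespace with one typed function per API operation.

// Plugin for managing a user's to-do list. Use it whenever the user asks to
// add, list, or remove tasks.
namespace todo {

// Add an item to the user's to-do list.
type addTodo = (_: {
// The text of the to-do item.
item: string,
}) => any;

} // namespace todo
```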

Here’s a quick summary of my findings:

  1. The `description_for_model` property in the `ai-plugin.json` file offers the most space (up to 8,000 characters) for the plugin prompt. Even so, it’s worth making full use of the description properties in the OpenAPI spec to optimize the prompt output.

  2. Each API endpoint defined in the OpenAPI spec contributes two description properties to the plugin prompt. At 200-300 characters each, using these properties well can significantly increase the available prompt space.

  3. The OpenAPI Path Description, if present, appears immediately before the type declaration in the dynamically generated pseudo-code namespace of the plugin prompt. If the Path Description is missing, the OpenAPI Summary is used instead (see the sketch after this list).

  4. Only descriptions of request properties are included in the plugin prompt; response property descriptions are omitted. Request property descriptions are still valuable, since each can be another 200-300 characters of prompt space per component.
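
To make items 3 and 4 concrete, here is how I understand the mapping. Everything in this sketch is hypothetical (a `search` operation on a plugin named `myplugin`); the OpenAPI side is shown as comments so you can see which field lands where:

```ts
// Given an OpenAPI operation along these lines (hypothetical):
//   paths./search.post.summary:     "Search the catalog"
//   paths./search.post.description: "Search the product catalog and return
//                                    matches. Prefer this over browsing."
//   request property `query`:       "Free-text search terms from the user."
//   response property `results`:    "Matching products."  <-- ignored
//
// ...the generated prompt uses the path description (falling back to the
// summary only when no description is present) and keeps only the request
// property descriptions:

namespace myplugin {

// Search the product catalog and return matches. Prefer this over browsing.
type search = (_: {
// Free-text search terms from the user.
query: string,
}) => any;

} // namespace myplugin
```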

I hope this information helps those working on ChatGPT plugins to better understand the conversion process and optimize their plugin prompts!

For a more detailed explanation, examples, and the full write-up, check out my blog post:

Happy Plugin Developing,
Brian :palm_tree:

Very useful! Thanks for sharing with us.

Descriptions of response properties are ignored? I was hoping they were used.

Likewise! I can’t say that they aren’t used in some other way, but they are definitely not in the “Prompt for ChatGPT” displayed in the devtools.

I’ve been wondering if there is a way for a plugin to provide an overall prompt for a session, and whether putting it in the plugin description could influence broader session behavior. For example, I have a fairly lengthy ReAct prompt I use. It sure would be nice if simply installing the plugin provided that behavior, instead of having to cut and paste the prompt into the ChatGPT page every time.

Thanks, Brian. This is very useful to know!

I use `description_for_model` quite extensively to guide the usage of my plugin, for example along these lines (a hypothetical sketch, shown as a TypeScript object for readability; the real file is plain JSON):
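
```ts
// Hypothetical ai-plugin.json excerpt.
const manifest = {
  name_for_model: "recipes",
  description_for_model:
    "Plugin for finding recipes. Use it whenever the user asks about " +
    "cooking or meal planning. Ask about dietary restrictions before " +
    "calling the search endpoint, and summarize results as a short list.",
};
```

Is that what you mean?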

Not quite. I have a ReAct prompt that I currently paste in as the first input in a chat window. I would love for a plugin to be able to load that prompt when ChatGPT starts up with my plugin enabled, instead of my having to cut and paste it each time.
My ReAct prompt gives specific instructions on how ChatGPT should behave when interacting with the user, rather than describing how the plugin behaves.

It seems to me that the only current way to accomplish what I want is to use a wrapper around the GPT API, rather than a ChatGPT plugin.

Here is a different example: suppose I wanted to build a ‘Sherlock Holmes’ plugin. I know how to do that using the API, but I don’t have a clue how to do it as a plugin without having to preface every input with ‘ask Sherlock…’ or ‘what would Sherlock think…’.
Maybe something in the `description_for_model` like ‘Invoke me whenever you get input from the user.’ I suspect that wouldn’t work.

Hi Brian, thanks for sharing the observations!

As you mentioned in your blog, the “Prompt for ChatGPT” can run to thousands of words. I wonder whether this “prompt” is used like a regular conversation prompt that is simply hidden from users at the very beginning of every conversation session.

Have you tried pushing the prompt word limits to see whether ChatGPT forgets about the plugin due to its 4096-token context window? I would love to know more about the structures and properties related to plugin development.

I haven’t seen any issues with this.

The limit of three active plugins is likely intended to prevent long plugin prompts from exceeding the context window.

I have seen more familiar issues, like ChatGPT forgetting the user’s original request. However, I’ve been able to circumvent that by instructing my plugin to “remember” the user’s original input so it can be restated after a long response from the plugin API.
