Enabling CRUD operations for Playground “Saved Prompts” via the OpenAI API

I’d like to propose adding first‑class API support for creating, listing, updating, and deleting the “Saved Prompts” currently managed only through the Playground UI.

Current workflow

  • In the OpenAI Playground, you can manually save prompts under your account.
  • The Responses API allows you to reference these saved prompts by prompt_id when calling POST /v1/responses (see the example below).
  • However, there is no endpoint to programmatically create or modify these saved prompts.
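
For reference, here is roughly what the existing reference-only workflow looks like today (a minimal sketch, assuming the current Python SDK; the prompt ID, version, and variables below are placeholders):

```python
from openai import OpenAI

client = OpenAI()

# Call a prompt that was saved in the Playground, referenced by its ID.
# "pmpt_..." is a placeholder; "version" and "variables" are optional.
response = client.responses.create(
    prompt={
        "id": "pmpt_68abc1234567890",           # placeholder saved-prompt ID
        "version": "2",                          # pin a specific saved version
        "variables": {"customer_name": "Jane"},  # fill template variables, if any
    },
    input="Summarize the customer's last three orders.",
)
print(response.output_text)
```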

Benefits

  • CI/CD integration: Automate prompt updates as part of your development pipeline.
  • Version control: Keep prompt definitions in Git and deploy changes via API.
  • Team collaboration: Provision prompts for new team members and environments programmatically.

Are there any plans to expose these operations in the public API? If not, could we consider adding this feature to streamline prompt management and improve automation workflows?
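
For concreteness, here is a rough sketch of what such a surface could look like (entirely hypothetical: the /v1/prompts path, the payload fields, and the update/delete semantics below are assumptions for illustration, not an existing API):

```python
import os
import requests

# Purely illustrative: there is no public /v1/prompts endpoint today.
# This only sketches the kind of CRUD surface being requested.
BASE = "https://api.openai.com/v1/prompts"  # hypothetical endpoint
HEADERS = {"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"}

# Create (deploy) a prompt definition that lives in Git
payload = {
    "name": "support-triage",
    "model": "gpt-4.1",
    "instructions": "Classify the ticket and draft a first reply.",
}
created = requests.post(BASE, json=payload, headers=HEADERS).json()

# List all saved prompts
prompts = requests.get(BASE, headers=HEADERS).json()

# Update and delete by ID
requests.post(f"{BASE}/{created['id']}",
              json={"instructions": "Classify the ticket only."},
              headers=HEADERS)
requests.delete(f"{BASE}/{created['id']}", headers=HEADERS)
```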

Thank you for your consideration!


Like this a lot. If we spin it up internally I’ll let you know!


There is nothing to “spin up”, only to “provide”.

The (today, even more screwed up) Playground itself operates via APIs, and the saving and bulk listing of these prompts already exists internally. Use a browser session key obtained via OAuth2, successfully emulate an HTTP/3 browser against OpenAI’s attempts to block bots, and replicating the existing calls is then completely within an “owner’s” control, via the “prompts” project scoping sent with API requests.

You’d just need a method to retrieve a single prompt by ID and version, the same way they are utilized in Responses API calls. Then, for an individual prompt ID, you’d need to be able to work backwards through its version history. This is mandatory, yet unprovided. Retrieval of a prompt is required to (see the sketch after this list):

  • get information like the model, to determine whether sending a “system” role for internal reprompting or RAG would be an API error,
  • get the model, to know whether an “include” for reasoning summaries would be an API error,
  • get ID verification status, since a streaming request against an unseen prompt containing o4-mini results in an API error (blocked without ID verification),
  • get the tools, since an “include” for particular tool data would fail completely with an API error,
  • and countless more parameters that, as you can see, you simply must have in full client-side.
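
A rough sketch of the kind of client-side gating this metadata would enable (assumptions throughout: the retrieval result `prompt_meta` does not come from any existing endpoint, the model-family check is a crude heuristic, and the “include” values are only examples):

```python
# Hypothetical: prompt_meta stands in for the result of a retrieval call that
# does not exist today; it is assumed to carry the saved prompt's model and tools.
def build_request(prompt_meta: dict, context: str) -> dict:
    req = {"prompt": {"id": prompt_meta["id"]}, "input": [], "include": []}

    model = prompt_meta["model"]
    is_reasoning = model.startswith(("o1", "o3", "o4"))  # crude heuristic

    # Injected instructions for RAG/reprompting: reasoning models expect
    # "developer" rather than "system", so pick the role from the model.
    role = "developer" if is_reasoning else "system"
    req["input"].append({"role": role, "content": context})

    # Requesting reasoning output on a non-reasoning model is an API error.
    if is_reasoning:
        req["include"].append("reasoning.encrypted_content")  # example value

    # "include" for a tool's data fails if the prompt doesn't carry that tool.
    if any(t.get("type") == "file_search" for t in prompt_meta.get("tools", [])):
        req["include"].append("file_search_call.results")  # example value

    return req
```

Without a retrieval endpoint, every one of those branches has to be hard-coded per prompt.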

You might say, “sure, but it’s your specialized app, you’d know these things.” But if the code already knows these things, “prompts” is pointless.

If you are considering requests, please: revert this whole idea of server-side settings out of the API and the Playground (settings that are half of what is needed to make a successful API request against the enabled tools and model). Put back Chat Completions and its shareable, modifiable presets as a first-class standalone UI, just as it was before, and put Pro reasoning models and codex/CU models there too.

Then: actually provide the APIs, or features on existing endpoints, that expose what the Playground already receives by API: org-based model features and endpoint capabilities (from its own session-key “models” listing endpoint), costs and further model features (which are provided in script to the models pages), whether an org is validated (served to the platform organization page), and other things that are useful for developing products.

Maybe even a Responses truncation option that can work to a budget and is responsive to some knowledge of cache timeout and the previous response ID’s creation time when discarding turns, so that the entire endpoint’s server-side state isn’t another completely valueless proposition.
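
As a sketch of the idea, this is roughly what clients have to do by hand today (client-side stand-in only; the cache window and token budget below are assumed numbers, not documented values):

```python
import time

ASSUMED_CACHE_TTL_S = 5 * 60   # assumption, not a documented value
TOKEN_BUDGET = 8_000           # example budget

def trim_turns(turns: list[dict], count_tokens, prev_response_created: float):
    """Drop the oldest turns to fit a budget, and decide whether chaining on
    previous_response_id is still worthwhile given an assumed cache window."""
    chain_previous = (time.time() - prev_response_created) < ASSUMED_CACHE_TTL_S

    kept, total = [], 0
    for turn in reversed(turns):           # walk from newest to oldest
        cost = count_tokens(turn)
        if total + cost > TOKEN_BUDGET:
            break
        kept.append(turn)
        total += cost
    return list(reversed(kept)), chain_previous
```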

Design something for dummies, and only dummies will use it.

Need this very badly. I don’t want to have to spin up an entire part of my database just for this, nor do I want to adopt a remote-config tool solely for this use case. I love how OpenAI has pretty much everything in one place, so it’s very annoying to be missing this. It would be a huge help for the non-technical member of my team to be able to adjust prompts.

I’d like to pile on and say that this feature would save our team from having to use a bunch of hacky workarounds, and it would be greatly appreciated.

At least enable listing, which is completely easy on your end but gives a lot of advantage for code management on the builders’ end.

I think this prompts feature is really helpful because it lets non-coding people make small changes easily, but it’s still pretty hard for devs to use.