There is nothing to “spin up”, only to “provide”.
The (today, even more screwed up) Playground operates via APIs, and the saving and bulk listing of these prompts already exists there. Use a browser session key obtained via OAuth2, successfully emulate an HTTP/3 browser against OpenAI’s attempts to block bots, and replicating the existing calls is entirely within an “owner’s” control, via the “prompts” project scoping sent with API requests.
You’d just need a method to retrieve a single prompt by ID and version, the same way Responses API calls consume them. Then, to list an individual prompt’s history, you’d have to walk the versioning backwards from that prompt ID. This is mandatory, yet unprovided. Retrieval of a prompt is required to:
- get the model, to determine whether sending a “system” role for internal reprompting or RAG would be an API error,
- get the model, to determine whether an “include” for reasoning summaries would be an API error,
- check the org’s ID verification status, since a streaming request against an unseen prompt containing o4-mini results in an API error (blocked without verification),
- get the tools, since an “include” for particular tools’ data would fail outright with an API error,
- get countless more parameters that, one sees, you simply must have in full client-side.
You say, “sure, but it’s your specialized app; you’d know these things.” But if the code already knows these things client-side, “prompts” is pointless.
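To make that concrete, here is the kind of client-side gating the bullets above describe — a minimal sketch, where the model sets, the helper name, and the returned plan shape are all illustrative assumptions, not any real SDK surface:

```python
# Illustrative client-side capability gating. These tables are assumptions
# for the sketch, not authoritative model metadata from any endpoint.
REASONING_MODELS = {"o3", "o4-mini"}      # assumed: models rejecting "system"
ID_VERIFICATION_STREAMING = {"o4-mini"}   # assumed: streaming gated on verification

def plan_request(model: str, *, want_system_role: bool, want_stream: bool,
                 org_verified: bool) -> dict:
    """Decide the request shape client-side, since the stored prompt's
    model cannot be fetched back from the "prompts" API."""
    plan = {"model": model}
    # Reasoning models reject a "system" role; fall back to "developer".
    plan["instructions_role"] = (
        "developer" if (want_system_role and model in REASONING_MODELS)
        else "system" if want_system_role else None
    )
    # Streaming some reasoning models requires a verified organization.
    if want_stream and model in ID_VERIFICATION_STREAMING and not org_verified:
        raise ValueError(f"streaming {model} is blocked without ID verification")
    plan["stream"] = want_stream
    return plan
```

If your code can write this table, it already holds everything the server-side prompt was supposed to encapsulate — which is the point.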
If you are considering requests, please: revert this whole idea of server-side settings out of the API and the Playground (settings that are half of what is needed to make a successful API request against the enabled tools and model). Put Chat Completions and its shareable, modifiable presets back as a first-class standalone UI, just as it was before, and put the Pro reasoning models and the codex/CU models there too.
Then: actually provide APIs, or features on existing endpoints, that expose what the Playground already gets by API — org-based model features and endpoint capabilities (on its own session-key “models” listing endpoint), costs and further model features (currently delivered in script to the models pages), whether an org is verified (served to the platform organization page), and other things useful for developing products.
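If such an endpoint existed, its payload might look something like the following — every field and method name here is hypothetical, a sketch of what “useful for developing products” could mean, not a real OpenAI schema:

```python
from dataclasses import dataclass, field

# Hypothetical schema for an org-scoped model-capability listing.
# None of these field names come from any real OpenAI endpoint.
@dataclass
class ModelCapability:
    model: str
    endpoints: list[str]                 # e.g. ["responses", "chat.completions"]
    supports_system_role: bool
    supports_reasoning_summary: bool
    requires_org_verification: bool
    input_cost_per_1m: float             # USD, as shown on the models pages
    output_cost_per_1m: float

@dataclass
class OrgCapabilities:
    org_verified: bool
    models: list[ModelCapability] = field(default_factory=list)

    def can_stream(self, model: str) -> bool:
        """True if this org can stream the given model."""
        m = next((c for c in self.models if c.model == model), None)
        return bool(m) and (self.org_verified or not m.requires_org_verification)
```

One GET serving this would answer every bullet above without the client hardcoding a capability table.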
Maybe even add a Responses truncation option that works to a token budget and is informed by the cache timeout and the previous response ID’s creation time when discarding turns, so that the entire endpoint’s server-side state isn’t another completely valueless proposition.
Design something for dummies, and only dummies will use it.