That’s exactly the case imo. In the meantime, are there any workarounds such as the one @aprendendo.next suggested?
This is awesome @dmitry-p… and welcome to the forum! =)
This looks kind of, but not exactly, like the current MCP protocol… which makes sense since you were probably working on them in parallel. But I was wondering if you’re going to continue to tune internal OpenAI stuff so it reflects the MCP standard?
In general creating a more 1:1 relationship between the terminology would be greatly appreciated. Like, what if the Prompt Object == the MCP Prompt Standard?
So, https://platform.openai.com/docs/api-reference/responses/create#responses-create-prompt
becomes:
{
  name: string;              // Unique identifier for the prompt
  description?: string;      // Human-readable description
  arguments?: [              // Optional list of arguments
    {
      name: string;          // Argument identifier
      description?: string;  // Argument description
      required?: boolean;    // Whether argument is required
    }
  ]
}
from: Prompts - Model Context Protocol
I’m looking forward to being able to use this structure in the backend. I use python and a LOT of Pydantic for handoffs and validation, and it would be great if everything could be 1:1 in the first place.
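For illustration, here is roughly what a 1:1 mapping could look like on my side. This is just a sketch with Pydantic; the class and field names simply mirror the MCP Prompt shape above and aren’t anything official:

from typing import Optional
from pydantic import BaseModel

# Hypothetical mirror of the MCP Prompt object above (not an official model).
class PromptArgument(BaseModel):
    name: str                          # Argument identifier
    description: Optional[str] = None  # Argument description
    required: Optional[bool] = None    # Whether argument is required

class Prompt(BaseModel):
    name: str                                         # Unique identifier for the prompt
    description: Optional[str] = None                 # Human-readable description
    arguments: Optional[list[PromptArgument]] = None  # Optional list of arguments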
While I don’t think this exists yet, it would also be great if we could agree on tool (including function call), resource, and prompt terminology throughout the industry. For example, standardizing common types of prompts with a formal JSON Schema field (e.g. prompt_type = “few_shot”), along with other interior prompt considerations like “example” responses and that sort of thing.
I understand wanting to keep everything general, but having some guidance à la optional JSON fields is very helpful for future validation / type checking, and I assume it would help a model perform better if it understands from the beginning that it’s working on a certain type of task.
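To make that concrete, the kind of optional guidance fields I have in mind would look something like this (the field names and values are purely hypothetical, not part of any current spec):

from typing import Literal, Optional
from pydantic import BaseModel

# Hypothetical optional prompt-type metadata (not part of any spec today).
class PromptTypeHints(BaseModel):
    prompt_type: Optional[Literal["few_shot", "zero_shot", "chain_of_thought"]] = None
    examples: Optional[list[dict]] = None  # canonical “example” exchanges, if the prompt type uses them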
Otherwise, thank you very much! Great work. This is right in line with MCP and it’s great that OpenAI especially is on top of changes.
Thanks for seeding this idea. Could certainly be interesting for future get/list Prompt endpoints. Looks like the main deltas are pretty additive: 1) versioning, 2) model config, 3) tool definitions (and I guess descriptions)
Actual summary, by o3 forum bot:
Thread summary – “Enhanced Prompt Management” (≈ 40 posts, June 2025)
Announcement
• OpenAI introduced “Prompts”, a new first-class, versioned resource that bundles:
– System/user messages and few-shot examples
– Tool definitions and full model configuration
– Template variables
• They ship with:
– Optimize – an assistant that flags contradictions/ambiguity and suggests rewrites
– A revamped Prompt Playground and gallery
– One-click migration of legacy “Presets”
– Tight links to Logs and Evals; API use via openai.responses.create({prompt:{id,…}})
Overall sentiment: cautiously positive but immediately mixed
Positives (≈ 60 % of remarks)
• Version control, variables, and API addressability are welcomed (“what we’ve been waiting for”).
• Optimize’s lint-style feedback is seen as useful.
• Dashboard rename ability, future sharing, and links to Evals get thumbs-up.
Frustrations & pain-points (≈ 40 %)
• Removal / breakage of the old “Presets” inside Chat Completions surprised some and triggered push-back (“unneeded”, “0-day attack on presets”).
• Terminology confusion: calling a whole config object a “prompt” felt misleading; requests to align with Model Context Protocol.
• SDK / spec bugs: model and input still marked “required”, producing 422/429 errors; workaround posted.
• Playground UI quirks:
– Max-tokens not saved; slider cap wrong for GPT-4.1.
– Version selector opens on v1; no “set new default” shortcut.
– Variables accumulate when switching prompts; URL model param ignored.
• Missing features: cannot edit prompt name from Playground, no comment field, no sharing, no listbox defaults for variables.
• Unclear free-tier messaging: long sub-thread debating whether new users really get 200 free calls/day; request for official documentation.
OpenAI responses
• Acknowledged the bugs/UI gaps, several already patched (strict schema flag, rename in gallery).
• Promised faster fixes (default-version button, better variable reset) and reiterated roadmap for deeper Optimize + Evals integration.
• Clarified that a limited free API tier exists for “mini” models but admitted edge-cases in onboarding flow.
Net takeaway
Developers like the direction—centralized, versioned, variable-driven prompts with linting—yet early execution feels rough. Sentiment is “promising but please finish the plumbing, restore parity with Presets, document the free tier, and tighten the UI/API contract.”
I have been waiting for this for a long long time.
Tired of maintaining prompt templates, I was storing these in the db already!
Is there a way to sync our prompts with what’s on the platform via the API?
Let me chime in then as well!
Having just migrated from Assistants to the Responses API (40+ Assistants), and having decided to ‘have my prompts in GitHub’ going forward (which gives me versioning etc.), I don’t think I will be using this.
I am sure that those that have not migrated yet would love a migration tool that creates Prompts for each Assistant.
We hear you on Assistants import and API. Stay tuned!
In the meantime, excited to share that we fixed the stateful links @aprendendo.next flagged and now offer an inline publish action (addressing @christian.velez1’s flag).
Thanks @dmitry-p. I just tested it and seems much better now!
Thank you for releasing the inline publish action @dmitry-p! This is moving nicely in the right direction!
In the interim, do you have any advice or workarounds on dealing with the model being required as per the OpenAPI spec, and how to (de)couple it from the reusable prompts? Thanks!
Hey, sorry! Still working on that - we’ll push updated SDKs shortly.
@jai if you set model=None it uses the model defined in the prompt. If you define a value, it overrides.
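For anyone else hitting this, a minimal sketch of that workaround with the Python SDK (the prompt ID and version below are placeholders):

from openai import OpenAI

client = OpenAI()

# Passing model=None defers to the model configured in the saved prompt;
# passing a model name instead would override it for this call.
response = client.responses.create(
    prompt={"id": "pmpt_123", "version": "2"},  # placeholder ID/version
    model=None,
    input="Hello from the reusable prompt!",
)
print(response.output_text)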
Thank you @dmitry-p, looking forward to it!
Yes, will use this meantime. Thank you @aprendendo.next!
Awesome news! Thanks a lot for hearing it @dmitry-p !!
Hey all! The most recent SDK versions should have the model field as optional now - if that’s not what you’re seeing, please let us know and confirm you have the latest release from your package manager. Thanks!
Thank you for the quick resolution on this @kwhinnery!
Cool, glad to help!
While we’re at it, perhaps it’s a good time to standardize a ContentBlock object for the models to work with?
The new 2025/06/18 MCP schema updates some core ideas, like ContentBlocks and Authentication types. Here’s the full changelog.
Here’s a json-schema summary of the section I’m talking about (made with o3).
{
  "$schema": "https://json-schema.org/draft/2020-12/schema",
  "$id": "https://github.com/modelcontextprotocol/modelcontextprotocol/blob/main/schema/2025-06-18/schema.json",
  "title": "MCP Content Blocks – 2025-06-18",
  "description": "Discrete content objects that may appear in prompts, tool results, or messages.",
  "oneOf": [
    { "$ref": "#/$defs/TextContent" },
    { "$ref": "#/$defs/ImageContent" },
    { "$ref": "#/$defs/AudioContent" },
    { "$ref": "#/$defs/ResourceLink" },
    { "$ref": "#/$defs/EmbeddedResource" }
  ],
  "$defs": {
    /* -------------------------------------------------------------------- */
    /* Generic annotations placeholder – replace with your real schema      */
    /* -------------------------------------------------------------------- */
    "Annotations": {
      "type": "object",
      "description": "Client-facing rendering hints (user-defined).",
      "additionalProperties": {}
    },
    /* -------------------------------------------------------------------- */
    "TextContent": {
      "type": "object",
      "description": "Plain text provided to or from an LLM.",
      "required": ["type", "text"],
      "properties": {
        "type": { "const": "text", "type": "string" },
        "text": { "type": "string", "description": "UTF-8 text payload." },
        "annotations": {
          "$ref": "#/$defs/Annotations",
          "description": "Optional client annotations."
        },
        "_meta": {
          "type": "object",
          "description": "Opaque metadata (see spec general fields).",
          "additionalProperties": {}
        }
      },
      "additionalProperties": false
    },
    /* -------------------------------------------------------------------- */
    "ImageContent": {
      "type": "object",
      "description": "Base64-encoded image passed to/from an LLM.",
      "required": ["type", "data", "mimeType"],
      "properties": {
        "type": { "const": "image", "type": "string" },
        "data": { "type": "string", "format": "byte", "description": "Base64 image data." },
        "mimeType": { "type": "string", "description": "Image MIME type (e.g. image/png)." },
        "annotations": { "$ref": "#/$defs/Annotations" },
        "_meta": {
          "type": "object",
          "additionalProperties": {},
          "description": "See spec for _meta usage."
        }
      },
      "additionalProperties": false
    },
    /* -------------------------------------------------------------------- */
    "AudioContent": {
      "type": "object",
      "description": "Base64-encoded audio passed to/from an LLM.",
      "required": ["type", "data", "mimeType"],
      "properties": {
        "type": { "const": "audio", "type": "string" },
        "data": { "type": "string", "format": "byte", "description": "Base64 audio data." },
        "mimeType": { "type": "string", "description": "Audio MIME type (e.g. audio/mpeg)." },
        "annotations": { "$ref": "#/$defs/Annotations" },
        "_meta": {
          "type": "object",
          "additionalProperties": {},
          "description": "See spec for _meta usage."
        }
      },
      "additionalProperties": false
    },
    /* -------------------------------------------------------------------- */
    "ResourceLink": {
      "type": "object",
      "description": "Pointer to an external resource accessible by the server.",
      "required": ["type", "name", "uri"],
      "properties": {
        "type": { "const": "resource_link", "type": "string" },
        "uri": { "type": "string", "format": "uri", "description": "Resource URI." },
        "name": { "type": "string", "description": "Logical identifier." },
        "title": { "type": "string", "description": "Human-readable label (UI)." },
        "description": { "type": "string", "description": "LLM-facing hint / summary." },
        "mimeType": { "type": "string", "description": "Resource MIME type, if known." },
        "size": { "type": "integer", "description": "Raw size in bytes, if known." },
        "annotations": { "$ref": "#/$defs/Annotations" },
        "_meta": {
          "type": "object",
          "additionalProperties": {},
          "description": "See spec for _meta usage."
        }
      },
      "additionalProperties": false
    },
    /* -------------------------------------------------------------------- */
    "EmbeddedResource": {
      "type": "object",
      "description": "Inlined resource content embedded in a prompt or result.",
      "required": ["type", "resource"],
      "properties": {
        "type": { "const": "resource", "type": "string" },
        "resource": {
          "description": "Either TextResourceContents or BlobResourceContents.",
          "anyOf": [
            { "$ref": "#/definitions/TextResourceContents" },
            { "$ref": "#/definitions/BlobResourceContents" }
          ]
        },
        "annotations": { "$ref": "#/$defs/Annotations" },
        "_meta": {
          "type": "object",
          "additionalProperties": {},
          "description": "See spec for _meta usage."
        }
      },
      "additionalProperties": false
    }
  }
}
I think this is mainly additive as well, at least it seems like these definitions could slide in neatly with the existing message content object.
Anywhere communication channels can be standardized across the industry with strong typing, json-schema, and protocols is greatly appreciated, because of how much smoother it makes the handoffs.
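To show what I mean by “slide in neatly”, here is a rough sketch of validating a few of these block types as a tagged union with Pydantic (only a subset of the blocks; the field names come from the schema above):

from typing import Annotated, Literal, Optional, Union
from pydantic import BaseModel, Field

# Illustrative subset of MCP content blocks as a discriminated union.
class TextContent(BaseModel):
    type: Literal["text"] = "text"
    text: str

class ImageContent(BaseModel):
    type: Literal["image"] = "image"
    data: str      # base64-encoded payload
    mimeType: str  # e.g. "image/png"

class ResourceLink(BaseModel):
    type: Literal["resource_link"] = "resource_link"
    uri: str
    name: str
    title: Optional[str] = None

ContentBlock = Annotated[
    Union[TextContent, ImageContent, ResourceLink],
    Field(discriminator="type"),
]

class Message(BaseModel):
    role: Literal["user", "assistant"]
    content: list[ContentBlock]

# Pydantic picks the right block class from the "type" tag.
msg = Message.model_validate({
    "role": "user",
    "content": [{"type": "text", "text": "hello"}],
})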
Is there some issue with the Playground presets? The versions of different prompts, and even entire presets, seem to be getting deleted from my account.
Hey @Tushar_Gopalka can you say more? You should be able to import all of your previous presets to any one of your projects. You can view all of your prompts in the dashboard under “Prompts” and see each of the versions.
OpenAI has further screwed with the interface of the Playground.
There is no longer a strong visual indicator of whether the unwanted “Responses” endpoint has been selected.
This may be what is messing you up: no indication that “presets” were made into “prompts”, the Playground itself renaming them to “prompts” to mess with your understanding, and things unpredictably switching.
Selecting a preset in Chat Completions that was made with “Responses” (such as one that might previously have been shared to report a symptom like multiple unterminating assistant outputs from the Responses endpoint) completely switches the dropdown you were browsing presets in over to “prompts”. Then you can no longer keep looking through those presets without going to the dropdown menu, switching back to Chat Completions, and dismissing a pop-up, only to come back to presets that no longer have any facility at all to manage them: no create, modify, or delete. That is going to leave bad presets behind when OpenAI shuts off the model you wanted (gpt-4.5) on the endpoint you wanted (Chat Completions).
Degrading and removing useful models while foisting a continually broken endpoint through coercion and vendor lock-in is not going to work.