Enhanced Prompt Management

Prompt quality is critical to the success of your integrations, but many developers are managing prompts through copy-paste and vibe checks alone. The resulting uncertainty leads to slower integration velocity and limits adoption of new models and capabilities.

We want to fix this!

Introducing… Prompts

Prompts are reusable configurations for Responses, combining messages, tool definitions, and model config. They’re versioned and support template variables.

import OpenAI from "openai";
const openai = new OpenAI();

const response = await openai.responses.create({
  prompt: {
    id: "pmpt_685061e957dc8196a30dfd58aba02b940984717f96494ab6",
    variables: {
      weather: "7 day",
      city: "San Francisco"
    }
  }
});
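For illustration, the variable substitution that Prompts performs server-side can be sketched client-side in Python. The template text here is hypothetical; the real template lives in the stored prompt version:

```python
from string import Template

# Hypothetical stored prompt template; in practice it lives in the saved Prompt version
template = Template("Give me the $weather forecast for $city.")

# Same variables as the API call above
rendered = template.substitute(weather="7 day", city="San Francisco")
print(rendered)  # Give me the 7 day forecast for San Francisco.
```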

What’s launching

Starting today, Prompts are a first-class, versioned resource within Platform. They’re deeply integrated with Evals, Logs and natively accessible via the API, making it easier than ever to manage, iterate and deploy high-quality prompts at scale.

We’ve overhauled the Prompts Playground and introduced a new Optimize tool to help developers home in on the most effective versions of their prompts.

The Optimize tool helps catch contradictory instructions and ambiguous formatting hints, and suggests an optimized rewrite.

Q: What’s happening to Presets?

You can now import your Presets into Prompts and begin using the new features including versioning and prompt templates.

Q: What’s next?

We have a pretty exciting roadmap for improving Optimize and providing an even more seamless experience when working with Evals. Would love to hear what features you’d most like to see!

Q: Where can I learn more?

Take a look at the docs and try Prompts in the updated Prompt Playground!

12 Likes

This was unneeded. If anyone needed an AI prompt rewriter, I could have shared this chat completions preset that would do the task (with a model that outperformed others by actual understanding.)

Presets and these “prompts” are completely broken and now gone in chat completions.


This is what we want:

3 Likes

Appreciate the feedback! We’re making some changes to provide better support for viewing existing Chat Completions Presets. The view you shared is very in-line with where we want to take Optimize!

6 Likes

Hi, I’m still testing it to understand better the advantages, but the variables are certainly an improvement.

There are a few suggestions though, if possible:

  • Allow us to rename prompt names;
  • Allow a pre-set value for variables, or even a listbox for options (e.g. a preset value of “option1;option2;option3” would become a combobox);
  • A comment field would be helpful to document what variables or the prompt itself is for;
  • A sharing option.
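The pre-set variable values suggested above could be parsed from a simple delimited string. A minimal sketch, assuming the semicolon convention from the suggestion; the function name is hypothetical:

```python
def parse_variable_options(preset: str, delimiter: str = ";") -> list[str]:
    """Split a preset value string into combobox options, dropping blanks."""
    return [opt.strip() for opt in preset.split(delimiter) if opt.strip()]

options = parse_variable_options("option1;option2;option3")
print(options)  # ['option1', 'option2', 'option3']
```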

Thanks!

2 Likes

Hi,

If I understand correctly: this Text Generation and Prompting feature, including the responses.create() method, structured prompts, role-based messaging, reusable prompts, and structured outputs, is specifically part of the OpenAI API, meaning it:

  1. Requires using the API (not just ChatGPT in the browser),
  2. Incurs usage-based fees based on the model used, tokens consumed, and features leveraged (e.g., context window size)?

I’m a long-time ChatGPT Plus subscriber. I signed up not just for access to GPT-4, but because I believed in what you’re building. I wanted to support it and be part of it.

But lately, it’s been hard not to notice a gap. The most powerful new features, like structured outputs, reusable prompts, developer instructions, and API-based logic, are all placed behind another paywall. As a paying user, it feels like I’m being invited in but kept at the threshold.

Here’s a simple ask. If we’re already paying monthly, give us more to build with. Not everything needs to be locked behind API tokens. Here are three things that would help:

  1. Unify access. Let Plus users use the API within the same plan. Right now, the experience is split and limiting.
  2. Include monthly API credits. Even a small token allowance would let us explore deeper capabilities without having to manage a separate billing system.
  3. Bring advanced features into the ChatGPT interface. Prompt templates, role-based instructions, and structured output formatting would be incredibly useful inside the chat.

We are not asking for special treatment. We are asking to be treated as partners in this process, not just as consumers.

Thank you for listening.

2 Likes

Great ideas.

FYI - rename is supported in the Dashboard Prompt gallery view under the “…”:

We’re adding it to Playground shortly!

3 Likes

I totally hear you on the API <> ChatGPT feature mismatch. We’re always exploring ways to make ChatGPT more powerful.

Re: “Include monthly API credits” - we offer a free tier allowing you to call 4o-mini, 4.1-mini, or 4.1-nano in the API (up to 200 calls/day for each)

2 Likes

I’m looking at the pricing chart, but I’m not seeing where it explains the free calls. Can you please share a screenshot or indicate the precise location of the info?
Thanks

In the model comparison view (e.g. GPT-4.1-mini):

2 Likes

I think you are confused. (or now it seems there is indeed some undisclosed and undocumented program for signing up new users that has some free usage, that would need to be explained to those made eligible and those who might support them…)

The “free” tier is just a tier level and limit that nobody is really at, unless OpenAI gives them free promo credits somehow but does not increase the tier level for them.

There are NO free API credits granted to pay for services. The program giving $5 of fast-expiring credit upon sign-up was discontinued over a year ago.


“unify access” is not gonna be there for pay-per-use - API developers can rack up hundreds of dollars in charges in a minute.

4 Likes

We do support a limited API tier upon sign up! The new developer onboarding flow actually instructs you to try it.

I guess this account that has never paid a penny, nor received a free trial credit (by being a dupe phone number), is not going to be so "onboarding flow"ed…

The platform site just wants my money…

That is what someone engaging with “ChatGPT Free” after a while would see when visiting the API?

Then in Firefox, the card information field does not work even if I’m eager. (I found this was adblock blocking Stripe’s web bug, which makes direct calls to their API; Stripe doesn’t need to know every login and every movement of every user on the platform site…)


On my account, with a prepay org and a “personal” unused billing org added to it automatically a while back, maybe this is the flow discussed: after switching to that org, there is a “start building” call-to-action instead of the dashboard links that already work.

It already is an organization, though, selectable through the drop-down and with an organization ID assigned.

So I’ll keep on flowing, invite others to spend? No. Next:

Following a prompts playground link in a different tab, after having generated a key and seen the dialog that wants me to buy credits, still takes one to “payment method needed”.

“maybe later” is my choice for buying credits.

And here’s the final result of “onboarding”, and using the offered key:

OpenAI library version: 1.82.1
Exception occurred: Error code: 429 - {'error': {'message': 'You exceeded your current quota, please check your plan and billing details. For more information on this error, read the docs: https://platform.openai.com/docs/guides/error-codes/api-errors.', 'type': 'insufficient_quota', 'param': None, 'code': 'insufficient_quota'}}

So I was offered to try a script with an API key in a dialog. And tried a script I already had open, hard-coding `Client(api_key=…)` with gpt-4o-mini. Receiving what was expected - 429 from Responses and Chat Completions.
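For what it’s worth, the 429 body shown above is plain JSON, so a script can tell a quota 429 apart from a transient rate-limit 429 by its code field. A sketch, with the payload copied from the output above and a hypothetical helper name:

```python
# Error body copied from the 429 response above
body = {
    "error": {
        "message": "You exceeded your current quota, please check your plan and billing details.",
        "type": "insufficient_quota",
        "param": None,
        "code": "insufficient_quota",
    }
}

def is_quota_error(payload: dict) -> bool:
    """True when a 429 is a billing/quota problem rather than a transient rate limit."""
    return payload.get("error", {}).get("code") == "insufficient_quota"

print(is_quota_error(body))  # True
```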


So for those who have ChatGPT accounts, of whatever status and age, now visiting the API, you’ll need to be clear exactly who actually can use the API in such a manner without paying first.

@dmitry-p , this is great news!!

I’d suggest improving the UI so that the System Message text box can be expanded (maybe to fit the whole window) so that managing large prompts is easier 🙂

Also, when selecting a Prompt, loading the latest version instead of the first one would be better!

2 Likes

@dmitry-p This is great! And we have been waiting for proper prompts sharing and versioning.

However, it seems that we run into an issue:

  • When saving the prompt config, we can’t set a max tokens value;
  • Also, for testing purposes we set the schema “strict”: false, which is also not captured by the saved prompt version.

Are these issues bugs, or by design?

Adding some feedback on potential bugs on playground interface when switching between prompts:

  1. The version combobox opens at version 1 instead of the latest, which might cause some confusion.
    Edit: I found the “set as default” later in the dashboard. But when adding a new update on playground it would be nice to have the option to set it as new default.
  2. Variables are not reset after loading a new prompt, keeping the variable list growing cumulatively in the UI.
2 Likes

Thanks! Looking into those

4 Likes

Then - how about not calling this “prompts” at all? The term originates from the “writing prompt” used to prod a completion into generating what should follow.

OpenAI is just trying to make language useless over and over by redefining and stealing nomenclature for themselves.

Tell me that instructing someone wouldn’t be confusing: “To update your sampling parameters to reduce the tail of uncertain logits, you will go into a UI site, click on ‘Playground’, receive a page identifying itself as ‘Prompts’, go to a prompt drop-down, and change the ‘prompt’.”
Then there is a completely-ruined “get code” in that UI if you are in a “prompt”, which is the only apparent place to receive a prompt ID to actually use in the API.

Of course, in the API call you can’t even tell what model is used, to know whether “top_p” is even possible, or make decisions in code based on the model, like whether to pass the “include” parameter for a tool that would fail if the tool is not present in the “prompt” or if the model is not a reasoning model.

This is meeting no API need except for OpenAI’s demand for server-side data – server-side data like Chat Completions’ presets that they just proved they will destroy with a 0-day attack.


Here’s a code snippet with the idea of “prompts” for chat completions. Connect to your own DB.

from datetime import datetime
from openai import OpenAI

client = OpenAI()

# Preset configuration class
class PromptPreset:
    def __init__(self, model, tools, system_template):
        self.model = model
        self.tools = tools
        self.system_template = system_template

    def system_message(self, **variables):
        return self.system_template.format(**variables)

# Preset creation (this would typically be loaded from a user interface or database)
default_preset = PromptPreset(
    model="gpt-4-turbo-2024-04-09",
    tools=[],  # tools can be dynamically populated or pre-filled
    system_template=(
        "Chat session initiated {session_start} {user_timezone}\n"
        "Latest user input at {datetime_now}\n\n"
        "You are a helpful assistant with a free weather tool"
    )
)

# Example of using the preset per session:
session_variables = {
    "session_start": datetime(2024, 6, 17, 9, 30).strftime("%Y-%m-%d %H:%M:%S"),
    "user_timezone": "UTC+0",
    "datetime_now": datetime.now().strftime("%Y-%m-%d %H:%M:%S"),
}

# The API rejects an empty tools array, so only pass tools when populated
extra = {"tools": default_preset.tools} if default_preset.tools else {}

response = client.chat.completions.create(
    model=default_preset.model,
    messages=[
        {"role": "system", "content": default_preset.system_message(**session_variables)},
        {"role": "user", "content": "Hello!"},
    ],
    **extra,
)

print("AI said:", response.choices[0].message.content)

Ah, I think that might be an edge case. All new accounts get free quota - you can try signing up with a new account and re-running the snippet that’s shown to you in the flow.

Are you still seeing this 429 even after waiting/creating a new key?

2 Likes

I’d be looking anywhere here to read about a free quota:

https://help.openai.com/en/?q=free+api

but there are no results saying anything like “API organizations have toll-free 200 calls per day to a variety of these models, at no expense”.

To reiterate: a tier is all about limits, the maximum calls an API organization can make per period of measurement.
A tier itself does not imply that you have any funding to pay for services, billed at the rate on the pricing page.

So this new information you introduce in this reply to a user about “sign up, don’t pay anything, use the API without 429 errors” has no corroborating anecdote or documentation.

Re: Presets - you can import these over to Prompts with just a click. We also just landed some recent changes for improved viewing of Presets on Chat Completions.

You can think of Prompts as a more powerful version of Presets - they support template variables, versioning, can be referenced in the API and Evals. We’ve talked to a lot of developers in the past few months and prompt management came up as a recurring need. Again, this is just the start and if there are feature gaps or any parts of the UI that are particularly frustrating, let us know. We want to make Prompts maximally useful to you!

1 Like