Enhanced Prompt Management

Thanks for your reply. Everything was fine until I logged out; when I logged back in, many of my prompts were no longer visible. All of them now show only v1, while most had been up to v15 or v16. This is extremely concerning for us as an early-stage startup: months of work vanished in an instant. Please look into this!

I even tried a different browser and checked the version histories, but none of the changes made within the past 3-4 weeks show up at all. From what I can see, everything has reverted to the prompts I had more than a month ago. Also, creating a new prompt or deleting an existing one is somehow not being saved: when I refresh the browser, the newly published prompt is not available in the “Import preset” options, while any deleted prompt reappears. My endpoints that use the prompts we created in the past are also failing on the backend of our website. Does one project only allow a limited number of preset prompts, and is that why mine were deleted? A large number of my prompts are gone, which causes a huge loss of trust in a company like OpenAI.

1 Like

@Tushar_Gopalka - let me take a closer look.

Can I ask you to double check one thing - since we support import of presets to any of your Projects, can you make sure that you’re in the same Project as you were before (prior to log out/log in). Let me know if you’re able to find them in a different Project.

I am 100% in the same project. I only had one project before this issue. Now I am making multiple copies to store as backups.

Got it, mind DMing your org id?

Also, just to confirm - presets can be imported to prompts, but once they’re imported, they’ll show up in a separate dropdown from the “presets”. From there, they are managed at the Project level with a linear history of changes.

When I refresh the browser, the new published prompt is not available in the “Import preset” options, while any deleted prompt reappears again.

1 Like

@Tushar_Gopalka:

I have enabled your account to send private messages. You can contact @dmitry-p directly and privately for sharing your org-id by clicking on their profile picture or by going to their profile directly.

2 Likes

Guys, please consider allowing the “System message” text box to be expanded to full window when editing it in the Playground.

This will make it easier for editing large prompts - huge UX/UI improvement!

Where’s the audio modality in the chat completions playground? It’s just gone. Was there a decision to degrade this endpoint’s presentation, beyond breaking presets and pushing Responses?

Which is funny, because in Data Controls, see the bottom bullet point for what is not actually reproduced:

Then to OpenAI: Please stop this flow-interrupting “write a prompt to make a system message for responses” chat interface that comes from selecting “dashboard”->“chat”. I’d want to show how to make an app, not get “think carefully step by step” in the same prompt as “don’t show internal thinking” and guardrail text.

  • Love getting a prompt maker instead of the playground UI
  • Hate getting a prompt maker, give the playground UI
0 voters

Hey, audio should still be there. Does this reflect what you see in the playground?

2 Likes

My bad - I had to actually pick the correct model before the microphone icon appeared. “Training issue” is your resolution. 🙂

Being able to send an audio file would be useful, along with controlling its placement within the text context of a turn; when I looked at the attachment option, I saw only document file types. The goal was to see whether the prompted concept could work before coding it up.

The attachment option behaves oddly: it doesn’t indicate that it switches from file attachment to audio attachment depending on the model.

Protip: sanitize and strip your message inputs.

Just in case the API ever handles variables as badly as the Playground does.

Or hash up the variable names pretty thoroughly.
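As a minimal illustration of that tip (the function name, character ranges, and length limit are my own assumptions, not anything from OpenAI), stripping control characters and neutralizing template-style delimiters before user text reaches a prompt with variable substitution might look like this:

```python
import re

def sanitize_user_input(text: str, max_len: int = 4000) -> str:
    """Illustrative sketch: clean user text before it is substituted
    into a prompt template. Names and limits here are assumptions."""
    # Keep printable ASCII plus newline and tab; this is aggressive and
    # would need widening for legitimate Unicode input.
    text = re.sub(r"[^\x20-\x7E\n\t]", "", text)
    # Neutralize template-style variable delimiters so user text cannot
    # masquerade as a substitution variable.
    text = text.replace("{{", "{ {").replace("}}", "} }")
    return text.strip()[:max_len]

print(sanitize_user_input("Hello {{name}}\x00!"))  # → "Hello { {name} }!"
```

Whether braces, control characters, or something else entirely is the right thing to strip depends on how your own templating works.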

bug
There seems to be a problem with reusable prompts: when no max_output_tokens value is defined, it seems to default to 2048, causing empty responses when using reasoning models like gpt-5.

I thought it was a legacy of older prompts from when the UI had a setting for max_output_tokens, but creating a new reusable prompt in the dashboard seems to have the same issue.

Here is an example resulting from a newly created prompt:

And here is a minimal run without prompts (no max_output_tokens value):

You can work around it by setting the max_output_tokens parameter explicitly, but that leaves people puzzled, as it is totally unexpected.
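Until the fix lands, the workaround could be sketched like this: pass max_output_tokens explicitly alongside the prompt reference in the Responses API call. The prompt id, version, and token limit below are hypothetical placeholders, not values from this thread:

```python
# Workaround sketch: set max_output_tokens explicitly so the reusable
# prompt's unexpected 2048 default can't truncate a reasoning model's
# output. The prompt id and version below are hypothetical placeholders.
request_args = {
    "model": "gpt-5",
    "prompt": {"id": "pmpt_example_id", "version": "1"},
    "input": "Summarize the attached report.",
    "max_output_tokens": 16000,  # explicit override of the 2048 default
}

# With the official Python SDK this would be sent as:
#   from openai import OpenAI
#   response = OpenAI().responses.create(**request_args)
#   print(response.output_text)
```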

@dmitry-p If someone can take a look at this, it will be much appreciated.

It has been reported here:

2 Likes

@aprendendo.next - that looks like a bug. We’re deploying a fix shortly!

2 Likes

Hi everyone,

I’ve been following this thread and wanted to ask for more details regarding the management of prompts via the OpenAI API. From what I understand, the ability to create, save, and version prompts is currently only available through the OpenAI dashboard, but I am interested in whether there are any API endpoints that allow programmatic management of prompts — such as creating, updating, listing, and deleting them as reusable entities.

Has OpenAI released, or is there any plan to release, an API endpoint for this functionality? If not, is there any beta or enterprise-level access that would provide this feature?

I appreciate any insights or updates on this, as it would greatly improve automation and integration within our workflows.

2 Likes

If you have access to a local store, you can do this on your own.

We have our own Prompt Management System.

For example, we have access to an SQL Server database where we can create, update, and delete as many selectable reusable prompts as we want. For example, here is an instruction prompt record:

Description (for UI pick list):

Response Format: Text - Standard Paragraph Format

Instruction Prompt:

The response must be in standard paragraph format.

  • A heading must be created for each paragraph.
  • The heading must be in title case.
  • Prepend each heading with an uppercase Roman numeral.
  • Separate the heading and the following paragraph with a blank line.
  • Create a title for the response above the first paragraph heading and separate with a blank line.
  • The title must be in title case.

If properly designed, doing it yourself is far better than relying on an OpenAI API. We have been doing this for over a year now with great success.
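The record above could be held in any local store. As a rough sketch of the idea, here is a tiny version using SQLite as a stand-in for the poster's SQL Server database (the table and column names are my assumptions, not their actual schema):

```python
import sqlite3

# Illustrative do-it-yourself prompt store: a description column feeds a
# UI pick list, and the prompt text is sent as instructions at request
# time. Schema names are assumptions, not the poster's actual design.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE instruction_prompts (
        id          INTEGER PRIMARY KEY,
        description TEXT NOT NULL,   -- shown in the UI pick list
        prompt_text TEXT NOT NULL    -- sent with the model request
    )
""")
conn.execute(
    "INSERT INTO instruction_prompts (description, prompt_text) VALUES (?, ?)",
    (
        "Response Format: Text - Standard Paragraph Format",
        "The response must be in standard paragraph format. "
        "A heading must be created for each paragraph.",
    ),
)

# Populate the pick list, then fetch the selected prompt by id.
for row_id, description in conn.execute(
    "SELECT id, description FROM instruction_prompts"
):
    print(row_id, description)

prompt_text = conn.execute(
    "SELECT prompt_text FROM instruction_prompts WHERE id = ?", (1,)
).fetchone()[0]
```

Creating, updating, and deleting prompts then reduces to ordinary SQL against your own database, fully under your control.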

1 Like

Thanks for sharing. I appreciate the insight.

Hi, so you don’t use the “Chat prompts” from the OpenAI dashboard, and instead you send the “instructions” in the request body on every new model/response request?

“Prompts” store a versioned preset whose instructions can actually be several messages; the “instructions” field can return them, but passing “instructions” yourself blocks you from sending a context of multiple messages. In addition, a preset carries settings such as the tools or the reasoning effort.

If you have figured out a way to track which prompt ID and version you should be sending, and also to track what is being done so you can request the correct “include” values and handle what comes out, then you can simply send the parameters yourself, not relying on any UI to gate your fluid API use. You also save yourself whatever database lookup OpenAI performs before generation can start.
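Carrying the preset's settings in code instead of referencing a stored prompt might look like this (the model name, instructions, and reasoning effort are placeholder assumptions, not recommended values):

```python
# Sketch of "send the parameters yourself": the same instructions,
# tools, and reasoning settings a dashboard prompt would carry travel
# with every request instead. All values below are placeholders.
direct_args = {
    "model": "gpt-5",
    "instructions": "You are a concise technical assistant.",
    "input": [
        {"role": "user", "content": "Explain reusable prompts in one line."},
    ],
    "reasoning": {"effort": "low"},
    "tools": [],  # whatever tools the stored prompt would have carried
}

# With the official Python SDK:
#   from openai import OpenAI
#   response = OpenAI().responses.create(**direct_args)
```

Versioning then becomes a matter of versioning this dictionary in your own source control rather than in the dashboard.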

1 Like

Yes, we build our own UIs, and users can either manually enter an instruction prompt or select a pre-defined one from a pick list.

2 Likes