Thanks for your reply. It was totally fine until I logged out; when I logged in again, many of my prompts are not visible. All of them show only v1, while for most I had up to v15 or v16. This is extremely crucial since we are an early-stage startup, so months of work vanished in an instant. Please look into this!
I even tried using a different browser and checking histories, but somehow none of the changes made within the past 3-4 weeks are showing up at all. From what I can see, it has reverted to the prompts I had more than a month ago. Also, any new prompt I create, or any existing prompt I delete, is somehow not being saved: when I refresh the browser, the newly published prompt is not available in the "Import preset" options, while any deleted prompt reappears. So I am not sure what's going on. The endpoints on the backend of our website that use the prompts we created in the past are also failing. Is it the case that one project only allows a limited number of preset prompts, and that's why it deleted mine? It has deleted a large number of my prompts, which causes a huge loss of trust in a company like OpenAI.
@Tushar_Gopalka - let me take a closer look.
Can I ask you to double-check one thing - since we support import of presets to any of your Projects, can you make sure that you're in the same Project as you were before (prior to log out/log in)? Let me know if you're able to find them in a different Project.
I am 100% in the same project. I only had one project before this issue. Now I am making multiple copies to store backups.
Got it, mind DMing your org id?
Also, just to confirm - presets can be imported to prompts, but once they're imported, they'll show up in a separate dropdown from the "presets". From there, they are managed at the Project level with a linear history of changes.
When I refresh the browser, the new published prompt is not available in the "Import preset" options, while any deleted prompt reappears again.
I have enabled your account to send private messages. You can contact @dmitry-p directly and privately for sharing your org-id by clicking on their profile picture or by going to their profile directly.
Guys, please consider allowing the "System message" text box to be expanded to full window when editing it in the Playground.
This will make it easier for editing large prompts - huge UX/UI improvement!
Where's the audio modality on chat completion's playground? It's just gone. Was there an edict to damage this endpoint's presentation beyond breaking presets and pushing Responses?
Which is funny, because in Data Controls, see the bottom bullet point for what is not actually reproduced:
Then to OpenAI: please stop this flow-interrupting "write a prompt to make a system message for responses" chat interface that comes from selecting "dashboard" -> "chat". I'd want to show how to make an app, not get "think carefully step by step" in the same prompt as "don't show internal thinking" and guardrail text.
- Love getting a prompt maker instead of the playground UI
- Hate getting a prompt maker, give the playground UI
My bad - I had to follow through in actually picking the correct model to get a microphone icon. "Training issue" is your resolution.
Being able to send an audio file would be useful, along with crafting its placement within a turn's text context input; I was looking at the attachment option and saw only document file types. The goal: seeing if the prompted concept could work before coding it up.
The attachment option behaves oddly, not indicating that it switches from file attachment to audio attachment by model.
Protip: sanitize and strip your message inputs
Just in case the API handles variables as badly as the Playground's code does.
Or hash up the names pretty good.
bug
There seems to be a problem with reusable prompts: when a max_output_tokens value is not defined, it seems to default to 2048, causing empty responses when using reasoning models like gpt-5.
I thought it was a legacy of older prompts from when the UI used to have a setting for choosing a max_output_tokens value, but creating a new reusable prompt in the dashboard also seems to have the same issue.
Here is an example resulting from a newly created prompt:
And here is a minimal run without prompts (no max_output_tokens value):
You can bypass it by setting a value in the max_output_tokens parameter, but that leaves people puzzled, as it is totally unexpected.
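To make the workaround concrete, here is a minimal sketch of the request body one might send to the Responses API with the limit pinned explicitly. The prompt ID, version, and token budget below are hypothetical placeholders, not values from the original report:

```python
def build_request(prompt_id: str, version: str, max_output_tokens: int = 16000) -> dict:
    """Build a Responses API request body that pins max_output_tokens."""
    return {
        "model": "gpt-5",
        "prompt": {"id": prompt_id, "version": version},
        # Setting this explicitly avoids the reported 2048-token default,
        # which a reasoning model can exhaust on hidden reasoning tokens
        # before producing any visible output.
        "max_output_tokens": max_output_tokens,
    }

body = build_request("pmpt_example123", "1")
print(body["max_output_tokens"])  # → 16000
```

Until the fix lands, passing the parameter on every call keeps reasoning models from returning empty output.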
@dmitry-p If someone can take a look at this, it will be much appreciated.
It has been reported here:
@aprendendo.next - that looks like a bug. We're deploying a fix shortly!
Hi everyone,
I've been following this thread and wanted to ask for more details regarding the management of prompts via the OpenAI API. From what I understand, the ability to create, save, and version prompts is currently only available through the OpenAI dashboard, but I am interested in whether there are any API endpoints that allow programmatic management of prompts, such as creating, updating, listing, and deleting them as reusable entities.
Has OpenAI released, or is there any plan to release, an API endpoint for this functionality? If not, is there any beta or enterprise-level access that would provide this feature?
I appreciate any insights or updates on this, as it would greatly improve automation and integration within our workflows.
If you have access to a local store, you can do this on your own.
We have our own Prompt Management System.
For example, we have access to an SQL Server database where we can create, update, and delete as many selectable reusable prompts as we want. For example, here is an instruction prompt record:
Description (for UI pick list):
Response Format: Text - Standard Paragraph Format
Instruction Prompt:
The response must be in standard paragraph format.
- A heading must be created for each paragraph.
- The heading must be in title case.
- Prepend headings with large Roman numerals.
- Separate the heading and the following paragraph with a blank line.
- Create a title for the response above the first paragraph heading and separate with a blank line.
- The title must be in title case.
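A minimal sketch of that DIY approach, using Python's built-in SQLite in place of SQL Server (the table and column names here are my own invention, not the poster's actual schema):

```python
import sqlite3

# In-memory store of reusable instruction prompts; a real system would
# point at a persistent database such as SQL Server.
conn = sqlite3.connect(":memory:")
conn.execute(
    """CREATE TABLE instruction_prompts (
           id INTEGER PRIMARY KEY,
           description TEXT NOT NULL,  -- shown in the UI pick list
           prompt TEXT NOT NULL        -- instruction text sent with each request
       )"""
)
conn.execute(
    "INSERT INTO instruction_prompts (description, prompt) VALUES (?, ?)",
    (
        "Response Format: Text - Standard Paragraph Format",
        "The response must be in standard paragraph format.\n"
        "- A heading must be created for each paragraph.\n"
        "- The heading must be in title case.",
    ),
)

# At request time, look the instruction text up by its pick-list
# description and splice it into the messages sent to the model.
(instruction,) = conn.execute(
    "SELECT prompt FROM instruction_prompts WHERE description = ?",
    ("Response Format: Text - Standard Paragraph Format",),
).fetchone()
print(instruction.splitlines()[0])
```

Because the store is just a table, create/update/delete/list are ordinary SQL statements, which is exactly the programmatic management the question above asks about.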
If properly designed, doing it yourself is far better than relying on an OpenAI API. We have been doing this for over a year now with great success.
Thanks for sharing. I appreciate the insight.
Hi, so you don't use the "Chat prompts" from the OpenAI dashboard, and instead, you send the "instructions" in the request body on every new model/response request?
"prompts" stores a versioned preset that has instructions (which can actually be several messages; the "instructions" field can return them, but "instructions" blocks you from sending a context of multiple messages). In addition, it has settings like the tools or the reasoning effort.
If you have figured out a way to keep track of which prompt ID and version you should be sending, and also track what is being done so you can actually request the correct "include" and handle what comes out… you can also simply send the parameters yourself, not relying on any UI to gate your fluid API use. You save yourself whatever database lookup OpenAI is doing before generation can start.






