GPT 5.2 not working with reusable prompts in responses api

Hi all,

I use the reusable prompts feature quite extensively, and as I usually do when a new model comes out, I updated the prompt to use GPT-5.2. However, while this works in OpenAI’s developer platform, on our client end we get:

Uh-oh! There was an issue with the response. Error: 400, message=‘Bad Request’, url=‘https://api.openai.com/v1/responses’

Switching the model back to 5.1 fixes this immediately.

I have similarly tried creating a new prompt with gpt-5.2 and get the same error.

3 Likes

I’m seeing the same.

"error": {
	"message": "Unsupported parameter: 'top_p' is not supported with this model.",
	"type": "invalid_request_error",
	"param": "top_p",
	"code": null
}
2 Likes

I’m also seeing the same. And this is now happening with any model.
“HTTP 400 BadRequest: Unsupported parameter: ‘top_p’ is not supported with this model.”

The failure seems to be the Responses API refusing a parameter that should be accepted when it is left at its default of 1. That has been constant and consistent behavior: many parameter “defaults” are not supported on a given model, yet they pass validation and are echoed back as “medium”, [], etc.

GPT-5.2 actually does support temperature and top_p, but only when using the default reasoning effort of “none”. That restriction makes sense: a high temperature, not constrained by first passing a top_p cut on the probability distribution, would rapidly descend into textual madness (as seen on Chat Completions).

You’ll get an error if you attempt non-default sampling parameters with higher reasoning settings, and that validation should continue on this model, but top_p itself must be allowed so it can trim off the tail of the worst predictions.

Action needed by OpenAI

  • Always tolerate top_p on Responses.
  • Validate sampling values in combination with “none” reasoning.
  • Provide a “prompts” UI for configuring all parameters within their accepted ranges, and store them to the actual prompt object.
  • Provide real create, update, retrieve, list, and delete API methods for prompts via developer API key, so I don’t have to warn every single inquiry against this platform lock-in.
3 Likes

Hi everyone,

In my testing, I was able to successfully use a prompt passed via the prompt param with the gpt-5.2 model. If you’re still running into issues, please let me know the steps to reproduce them.

Regarding the "top_p" error, this is explained in the migration guidance docs:

GPT-5.2 parameter compatibility

The following parameters are only supported when using GPT-5.2 with reasoning effort set to none:

  • temperature
  • top_p
  • logprobs

Requests to GPT-5.2 or GPT-5.1 with any other reasoning effort setting, or to older GPT-5 models (e.g., gpt-5, gpt-5-mini, gpt-5-nano) that include these fields will raise an error.
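
To make that concrete, here is a minimal sketch of the rule using the openai Python SDK (the effort values come from the docs quoted above; the input text and sampling values are just illustrative): sampling parameters pass with reasoning effort “none” and are rejected with any higher effort.

from openai import OpenAI

client = OpenAI()

# Accepted: sampling parameters together with reasoning effort "none"
ok = client.responses.create(
    model="gpt-5.2",
    reasoning={"effort": "none"},
    temperature=0.7,
    top_p=0.9,
    input="Say hello.",
)
print(ok.output_text)

# Rejected: the same sampling parameters with a higher reasoning effort
# come back as 400 "Unsupported parameter: 'top_p' is not supported with this model."
try:
    client.responses.create(
        model="gpt-5.2",
        reasoning={"effort": "medium"},
        top_p=0.9,
        input="Say hello.",
    )
except Exception as err:
    print(f"Rejected as expected: {err}")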

3 Likes

There is a bug in the Responses API. I don’t send top_p in the payload, but the error is still the result.

1 Like

Regarding the top_p error, the symptom is indicative of either

  • API validation not accepting it at all, when it should accept a value of 1 in all cases, or
  • Prompts sending a value that does not validate.

You can use the browser developer tools (Network tab) to observe the prompt object that is echoed back to the prompts playground when saving. You can expect that those are the parameters that will be run against the model.

Further, if prompts were placing a bad value that a model won’t run, a runtime correction of sending "top_p": 1 yourself should be a mitigation on Responses.
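
For what it’s worth, here is a rough sketch of such a runtime check with the openai Python SDK (the prompt ID is a placeholder, and this assumes a prompt configuration that currently succeeds, e.g. reasoning effort “none”): a successful response echoes back the sampling values the stored prompt actually ran with, so you can see what it injected without the playground.

from openai import OpenAI

client = OpenAI()

# Placeholder prompt ID; a successful call echoes the effective settings
response = client.responses.create(
    prompt={"id": "pmpt_12345"},
    input="ping",
)
print("model:      ", response.model)
print("temperature:", response.temperature)
print("top_p:      ", response.top_p)
print("reasoning:  ", response.reasoning)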


Replication? I’m not currently in the mood to run diagnostic API calls against endpoint methods where I have already predicted and avoided the pain points.

2 Likes

UPDATE:

After further testing with a new prompt created for gpt-5.1, I was able to reproduce this issue by passing that same prompt in a Responses API call with the model set to gpt-5.2.

Sharing with the team now.

7 Likes

Is there any update or solution?

Replication of issue:

Create a new “prompt” from scratch, using “gpt-5.2”.

Doing so does not send a top_p parameter when creating the prompt, but the prompt object returned has top_p: 1.0. That value is also echoed back when the prompt ID is used successfully, that is, WHEN reasoning.effort is “none”.

However, update that same prompt ID to reasoning.effort: medium, and the same top_p is echoed in the object returned by the platform site’s API call (one you cannot make yourself with an API key). Then:

Error code: 400 - {'error': {'message': "Unsupported parameter: 'top_p' is not supported with this model.", 'type': 'invalid_request_error', 'param': 'top_p', 'code': None}}

I never had any model other than GPT-5.2 in that particular brand-new prompt ID, yet it still fails.

The workaround?

Pass top_p:0.98 in the API call you make

Yes, despite the error message text and the unexpected malfunction in conjunction with prompts, you must disregard the error’s information. The reasoning mode of gpt-5.2 will accept its own default of 0.98 as a prompt override, and then run.
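
A minimal sketch of that reproduction and override with the openai Python SDK (the prompt ID is a placeholder, and the prompt is assumed to be configured for gpt-5.2 with reasoning.effort: medium, as described above):

from openai import OpenAI

client = OpenAI()

PROMPT = {"id": "pmpt_12345", "version": "1"}  # placeholder prompt ID

# Reproduction: no top_p in the payload, yet the stored prompt's injected
# top_p of 1.0 triggers the 400 "Unsupported parameter: 'top_p'" error.
try:
    client.responses.create(prompt=PROMPT, input="ping")
except Exception as err:
    print(f"Reproduced: {err}")

# Workaround: explicitly pass the model's own default of 0.98, which
# overrides the stored prompt's value and lets the request run.
response = client.responses.create(prompt=PROMPT, input="ping", top_p=0.98)
print(response.output_text)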

3 Likes

Jumping on the bandwagon here: top_p “not supported”, but it wasn’t in the request. I appreciate the 0.98 hack, but I’ll just wait until it’s properly fixed…

Ongoing error when I try to begin a session inside any project folder.

1 Like

I’m having the same problem: “HTTP 400 BadRequest: Unsupported parameter: ‘top_p’ is not supported with this model”.

I don’t send the top_p parameter in the payload.

Then do send the top_p parameter in the API request payload, as described above, until OpenAI corrects the model so that any backend top_p of 1.0 that is meant to be a default is silently cast to an accepted value. The workaround described above:

from openai import OpenAI

client = OpenAI()

# Example input; replace with your own messages
input_messages = [{"role": "user", "content": "Hello"}]

try:
    response = client.responses.create(
        top_p=0.98,  # this is required against gpt-5.2 "prompt" settings until the bug is fixed
        prompt={
            "id": "pmpt_12345",
            "version": "1",
        },
        input=input_messages,
        max_output_tokens=12345,
        store=False,
        service_tier=None,
    )
except Exception as err:
    print(f"Responses API error: {err}")
1 Like

The bug for me is when using "model":"gpt-5.2-pro".

gpt-5.2 works fine for me.

1 Like

Hey everyone, our engineering team took a look at this issue and has deployed a fix. It should now be resolved. Thank you!

3 Likes

Evaluation of the solution that someone was confident enough to assign to themselves:

Same “prompt” ID, in conjunction with top_p:

  • omitted: success now (top_p=0.98 echoed)
  • null value: success now
  • 1.00: Unsupported parameter: ‘top_p’ (same as before)
  • 0.98: success (top_p=0.98 echoed)

This means that any existing code you’ve written for reasoning models on Responses which overrides unsupported parameters with the echoed values of top_p=1 or temperature=1 (as seen on any other model) must be corrected to drop the unsupported parameters completely. Except now not in the case of reasoning.effort: “none”, which does take them. Except not in “prompts”, which still has no facility for altering temperature or top_p in conjunction with “none” reasoning. Why? Because you cannot anticipate which further input defaults (values you should be in control of but are not) will be messed with as custom model-run values picked for you.
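
As a rough sketch of that correction (openai Python SDK assumed; the wrapper name and the strip-unless-effort-is-none rule are just my reading of the behavior above, not an official recommendation):

from openai import OpenAI

client = OpenAI()

SAMPLING_PARAMS = ("temperature", "top_p")  # only accepted with effort "none"

def create_response(reasoning_effort="medium", **kwargs):
    """Call the Responses API, dropping sampling parameters unless the
    reasoning effort is "none", the only mode that accepts them."""
    if reasoning_effort != "none":
        kwargs = {k: v for k, v in kwargs.items() if k not in SAMPLING_PARAMS}
    return client.responses.create(reasoning={"effort": reasoning_effort}, **kwargs)

# With effort "medium" the sampling values are stripped before the call;
# with effort "none" they would be passed through unchanged.
response = create_response(
    reasoning_effort="medium",
    model="gpt-5.2",
    input="ping",
    temperature=1,
    top_p=1,
)
print(response.output_text)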


PS: this is what respecting the parameters of developers who use your reasoning models and endpoint would look like: