GPT 5.2 LogProbs support removed?

I consistently run into the server-side error:

“An error occurred while processing your request. You can retry your request, or contact us through our help center … if the error persists. Please include the request ID…”

when requesting a response from the v1/responses endpoint with temperature = 0, reasoning = {"effort": "none"}, and "include": ["message.output_text.logprobs"] for GPT 5.2. I don't run into this issue with GPT 5.1, and until a few days ago I never ran into it with GPT 5.2.

Has OpenAI indicated that they would stop supporting log-probs with GPT 5.2? Or is this potentially a temporary issue?


Short answer: the symptom needing repair on the GPT-5.4 series (also gpt-5.3-codex):

  1. passing top_logprobs as a parameter produces an error;
  2. using only "include": ["message.output_text.logprobs"] returns working logprobs for each sampled token, but an empty array for top_logprobs.
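Symptom 2 is easy to check mechanically. A minimal sketch, assuming the response shape used by the openai-python SDK (shown here on a plain dict for illustration; field names may differ across SDK versions):

```python
# Sketch: detect the "logprobs present, top_logprobs empty" symptom on
# one output_text content part from a Responses API payload requested
# with include=["message.output_text.logprobs"].

def check_logprobs(content_part: dict) -> tuple[bool, bool]:
    """Return (has_sampled_logprobs, has_top_logprobs)."""
    lps = content_part.get("logprobs") or []
    has_sampled = len(lps) > 0
    has_top = any(entry.get("top_logprobs") for entry in lps)
    return has_sampled, has_top

# Example payload mimicking the reported behavior: a per-token logprob
# comes back, but the top_logprobs array is empty.
part = {
    "type": "output_text",
    "text": "Apple",
    "logprobs": [
        {"token": "Apple", "logprob": -0.01, "top_logprobs": []},
    ],
}
print(check_logprobs(part))  # (True, False) -> symptom 2
```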

For support: Please include the request ID req_1f042d1f1afe4aff8acb47d5ff5d2925 in your message.


What is not supposed to be supported is sending a temperature parameter at all. However, it seems OpenAI has made temperature and top_p work with GPT-5.1 or later, or made it so the parameter is silently dropped, so you can't discover that the real top_p is forced to 0.98 (the previous behavior, where any value other than 0.98 would produce an error).


What the API should report if you were not allowed to request logprobs:

openai.PermissionDeniedError: Error code: 403 - {'error': {'message': 'You are not allowed to request logprobs from this model', 'type': 'invalid_request_error', 'param': None, 'code': None}}

Or what is otherwise expected for the GPT-5 series (currently the gpt-5-xxx models and gpt-5.3-chat-latest):

openai.BadRequestError: Error code: 400 - {'error': {'message': 'logprobs are not supported with reasoning models.', 'type': 'invalid_request_error', 'param': 'include', 'code': 'unsupported_parameter'}}
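The two error bodies above can be told apart programmatically. A minimal sketch, with the status-code-to-diagnosis mapping assumed from the SDK exception names quoted in this thread (PermissionDeniedError = 403, BadRequestError = 400, InternalServerError = 500):

```python
# Sketch: map an HTTP status plus a parsed error body to a diagnosis.

def classify_logprobs_error(status: int, body: dict) -> str:
    err = body.get("error", {})
    if status == 403:
        return "logprobs prohibited for this model"
    if status == 400 and err.get("code") == "unsupported_parameter":
        return f"unsupported parameter: {err.get('param')}"
    if status >= 500:
        return "server error; retry, or quote the request ID to support"
    return "unexpected error"

# The two bodies quoted above, verbatim.
denied = {"error": {"message": "You are not allowed to request logprobs from this model",
                    "type": "invalid_request_error", "param": None, "code": None}}
unsupported = {"error": {"message": "logprobs are not supported with reasoning models.",
                         "type": "invalid_request_error", "param": "include",
                         "code": "unsupported_parameter"}}

print(classify_logprobs_error(403, denied))       # logprobs prohibited for this model
print(classify_logprobs_error(400, unsupported))  # unsupported parameter: include
```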

Allowing logprobs on GPT-5.1 is not one of its newly announced features, but I can confirm that I get some logprobs back.


After running a quick survey of the API, I cannot reproduce the issue from the OP. What I can see is that top_logprobs values are only actually returned in combination with include.

Model         | include only | top_logprobs only             | include + top_logprobs=1 | include + top_logprobs>=2                   | temperature=0 + include
gpt-5.1       | Works        | 200, but no logprobs returned | Works                    | Works through top_logprobs=5                | Works
gpt-5.2       | Works        | 200, but no logprobs returned | Works                    | Reproducible 500 starting at top_logprobs=2 | Works
gpt-5.3-codex | Works        | 200, but no logprobs returned | Works                    | 500 starting at top_logprobs=2              | Works
gpt-5.4       | Works        | 200, but no logprobs returned | Works                    | 500 starting at top_logprobs=2              | Works
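The sweep behind such a table can be sketched as follows. The model names and parameter combinations come from this thread; everything else (the prompt, the deferred import so the parameter matrix can be inspected without credentials) is an illustrative assumption. Actually calling run_sweep() requires the openai package and an API key.

```python
# Sketch of a parameter sweep over the model/parameter matrix above.

MODELS = ["gpt-5.1", "gpt-5.2", "gpt-5.3-codex", "gpt-5.4"]

def cases(top_k: int = 2) -> dict[str, dict]:
    """Parameter combinations matching the table columns."""
    inc = {"include": ["message.output_text.logprobs"]}
    return {
        "include only": dict(inc),
        "top_logprobs only": {"top_logprobs": 1},
        "include + top_logprobs=1": {**inc, "top_logprobs": 1},
        "include + top_logprobs>=2": {**inc, "top_logprobs": top_k},
        "temperature=0 + include": {**inc, "temperature": 0},
    }

def run_sweep(prompt: str = "Pick a word.") -> dict:
    from openai import OpenAI  # deferred so cases() works offline
    client = OpenAI()
    results: dict = {}
    for model in MODELS:
        for name, extra in cases().items():
            try:
                client.responses.create(
                    model=model, input=prompt,
                    reasoning={"effort": "none"}, **extra)
                results[(model, name)] = "Works"
            except Exception as exc:  # record BadRequestError, InternalServerError, ...
                results[(model, name)] = type(exc).__name__
    return results
```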

It seems you have reproduced the issue, making the same series of calls against a different subset of models (I had also tested the mini models) and producing a nice tabular report.

What is required, but not being given, is either:

  • a useful error message body and an appropriate HTTP status code, if logprobs are a prohibited request; or
  • logprobs working at any top_logprobs value against any model variant, if logprobs are an allowed API parameter.

Reproduction on gpt-5.4-mini

(the same call now succeeds on “gpt-5.4” at top_logprobs=20)

from openai import OpenAI

client = OpenAI()

response = client.responses.create(
    model="gpt-5.4-mini",
    max_output_tokens=4095,
    store=False,
    stream=False,
    reasoning={"effort": "none"},
    text={"verbosity": "low"},
    top_logprobs=20,  # 500s on gpt-5.4-mini; succeeds on gpt-5.4
    include=[
        "message.output_text.logprobs",
        "reasoning.encrypted_content"
    ],
    input=[
        {
            "role": "developer",
            "content": [
                {
                    "type": "input_text",
                    "text": "You are a helpful but brief assistant.",
                }
            ],
        },
        {
            "role": "user",
            "content": [
                {
                    "type": "input_text",
                    "text": "Pick an English word opposite of \"banana\".",
                }
            ],
        },
    ],
)
for out_item in response.output:
    print(f"Got output item:\n{out_item}")
    # Typical text output printing:
    if hasattr(out_item, "content"):
        for element in out_item.content:
            if hasattr(element, "text"):
                print(f"-- text: {element.text}")

Result

openai.InternalServerError: Error code: 500 - {'error': {'message': 'An error occurred while processing your request. You can retry your request, or contact us through our help center at help.openai.com if the error persists. Please include the request ID req_4e03a651d3d84f928868b73491b69170 in your message.', 'type': 'server_error', 'param': None, 'code': 'server_error'}}
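For contrast, when the call does succeed, the logprobs can be pulled out of the output items like this. A sketch assuming the response shape used by the openai-python SDK, shown on a plain-dict payload with made-up token values for illustration:

```python
# Sketch: print each sampled token, its logprob, and its top_logprobs
# alternatives from a Responses API output list.

def dump_logprobs(output: list[dict]) -> list[str]:
    lines = []
    for item in output:
        for part in item.get("content", []):
            for lp in part.get("logprobs", []):
                alts = ", ".join(f"{t['token']}:{t['logprob']:.2f}"
                                 for t in lp.get("top_logprobs", []))
                lines.append(f"{lp['token']!r} ({lp['logprob']:.2f}) top: [{alts}]")
    return lines

# Illustrative payload (values invented) for a one-token answer.
sample = [{"type": "message", "content": [{
    "type": "output_text", "text": "Apple",
    "logprobs": [{"token": "Apple", "logprob": -0.05,
                  "top_logprobs": [{"token": "Apple", "logprob": -0.05},
                                   {"token": "Orange", "logprob": -3.1}]}]}]}]
print("\n".join(dump_logprobs(sample)))
# 'Apple' (-0.05) top: [Apple:-0.05, Orange:-3.10]
```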