Missing logprobs in Chat Completions response

Hello,

I am seeing unexpected behavior in the logprobs output when calling the Chat Completions endpoint. Intermittently, the response comes back with logprobs=None in the choices list (see sample output below). I cannot find any documentation or forum discussion describing a similar issue, so I would appreciate any insight into how else to debug this, or whether it is a bug in the API.

Simple Example for Debugging

Using openai Version: 1.97.0

from openai import OpenAI

client = OpenAI()

def call_openai(text):
    messages = [
        {"role": "system", "content": "Summarize the following text: "},
        {"role": "user", "content": text},
    ]

    response = client.chat.completions.create(
        model="gpt-4.1-2025-04-14",
        messages=messages,
        max_tokens=2000,
        temperature=0.5,
        logprobs=True,
    )

    return response

And a sample response with missing logprobs output:

ChatCompletion(
    id='chatcmpl-BvmYYGXtfrpvdA573fpv9SAtCtrYx',
    choices=[
        Choice(
            finish_reason='stop',
            index=0,
            logprobs=None,
            message=ChatCompletionMessage(
                content='**Summary of the... {rest of summary omitted}',
                role='assistant'
            )
        )
    ],
    model='gpt-4.1-2025-04-14',
    usage=CompletionUsage(
        completion_tokens=879,
        prompt_tokens=50019,
        total_tokens=50898
    )
)

Experiments

  • I systematically varied prompt length and logged the presence of logprobs in the output and the response time.
  • See the attached charts showing prompt length vs. response time, colored by whether or not logprobs were present in the output, for various input texts.
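
For anyone wanting to reproduce the sweep, here is a minimal sketch of the experiment described above. The helper names (logprobs_present, run_trial) are my own, not from the SDK, and the presence check is kept separate from the network call so it can be verified against a canned response dict:

```python
def logprobs_present(response_dict):
    """True only if every choice in a Chat Completions response dict
    carries a non-null logprobs field."""
    choices = response_dict.get("choices", [])
    return bool(choices) and all(c.get("logprobs") is not None for c in choices)


def run_trial(client, text):
    """One data point for the chart: prompt size vs. logprobs presence.
    `client` is an openai.OpenAI instance (v1 SDK assumed)."""
    response = client.chat.completions.create(
        model="gpt-4.1-2025-04-14",
        messages=[
            {"role": "system", "content": "Summarize the following text: "},
            {"role": "user", "content": text},
        ],
        max_tokens=2000,
        temperature=0.5,
        logprobs=True,
    )
    data = response.model_dump()  # plain dict, easy to log as JSON
    return {
        "prompt_tokens": data["usage"]["prompt_tokens"],
        "logprobs_present": logprobs_present(data),
    }
```

Running run_trial over texts of increasing length and logging the two fields is enough to recreate the charts.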

Questions

  • Has anyone else seen this issue, and are there any workarounds?
  • Is there any documentation I am missing about input text causing an issue with logprobs?
  • Has anything changed recently that might have introduced this behavior?

Thank you for your help!

Honestly, that API request should be failing on you.

There are now two parameters that need to be used in concert, as can be seen in this example Chat Completions payload:

api_parameters = {
    "messages": [system_message] + [user_message],
    "model": model,
    "max_completion_tokens": 3000,  # enough tokens for any internal reasoning also
    "top_p": 1,  # sampling parameter, less than 1 reduces the 'tail' of poor tokens (0-1)
    "temperature": 1,  # sampling parameter, less than 1 favors higher-certainty token choices (0-2)
    "logprobs": True,
    "top_logprobs": 4,  # number of logprob results, max 20
}

Logprobs will be turned off on you in certain cases, for example, when the AI calls a function.

Success dependent on size, if that is actually what is happening, would certainly be a defect.
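
When logprobs do come back, each entry in choices[0].logprobs["content"] holds the chosen token plus its top_logprobs alternatives. Here is a small sketch (the function name top_candidates is mine) that converts one position's alternatives into plain probabilities, returning an empty list for the null case this thread is about:

```python
import math


def top_candidates(response_dict, position=0):
    """(token, probability) pairs for one output-token position; [] when
    the API returned logprobs=null despite logprobs=True being set."""
    lp = response_dict["choices"][0].get("logprobs")
    if lp is None:
        return []
    entry = lp["content"][position]
    # logprob is a natural log; exp() recovers the probability
    return [(alt["token"], math.exp(alt["logprob"])) for alt in entry["top_logprobs"]]
```

Because the null case is handled explicitly, downstream code can branch on an empty list instead of crashing on AttributeError.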

Thanks for sharing this level of detail Bryan. Would you mind sharing this directly with support@openai.com if not already? Thank you.

Hey @Bryan_Ryder

I’ve observed the exact same thing. Were you able to find the reason behind this?

I am also having this same issue, with model parameters set as

{"model": "gpt-4o-mini", "temperature": 0, "seed": 655258052, "max_tokens": 1570, "logprobs": true, "top_logprobs": 5,...}

The image below shows the JSON returned by the batch request. I’ve marked the requests that returned null logprobs:

It’s hard for me to detect a pattern in the behavior. Has anyone figured this out?

I’ve figured out that it’s related to the number of items submitted to the batch request. (To add context: each of my requests asked the model to score a number of public comments on how supportive they are of a proposal.) Even though my requests were designed not to breach the model’s context window, the sheer quantity of items to score in any one request would cause the model to go haywire (giving inconsistent answers and scoring comments that do not exist). It didn’t return log probabilities in these cases.

@Ajay_Shenoy I’m observing the same thing – logprobs is None specifically for requests with more prompt tokens. Did you ever end up sorting out root cause?

Hey Community members, apologies that you are facing this issue. I was able to reproduce it on my end as well. I am discussing this internally with our engineering team and will get back soon with an update. Thank you!

Thanks everyone for sharing your examples — we looked into this, and here’s what we can confirm.


The logprobs field in Chat Completions is optional by design. Even when logprobs=True is requested, the API schema allows logprobs to be returned as null in some cases. The current implementation also includes logic for handling situations where logprobs are not provided by the model, even when requested, which means this behavior is expected.

To summarise:

  • The API will return logprobs when they’re available.
  • In some scenarios, the model may omit them, and the field will appear as null.
  • Clients should be prepared to handle logprobs=None as a valid response state.

We know this can be surprising if you're relying on logprobs for downstream processing, so we appreciate everyone flagging their observations. We will work to improve our documentation to reflect this and share these insights with the engineering team so they can implement better handling in the future. Thank you!
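
Given that guidance, one defensive pattern on the client side is a bounded retry that tolerates the null case. This is my own workaround sketch, not an official recommendation; make_request is an injectable callable (a hypothetical name) returning the response as a dict, so the policy can be exercised without network access:

```python
def call_with_logprob_retry(make_request, max_attempts=3):
    """Retry a Chat Completions call until logprobs is populated, up to
    max_attempts. The last response is returned either way, so callers
    must still treat logprobs=None as a valid state."""
    last = None
    for _ in range(max_attempts):
        last = make_request()
        choices = last.get("choices", [])
        if choices and choices[0].get("logprobs") is not None:
            return last  # got logprobs; stop retrying
    return last  # exhausted attempts; logprobs may still be None
```

In practice make_request would wrap something like client.chat.completions.create(..., logprobs=True).model_dump(); note that repeated calls cost tokens each attempt.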

Sure, the response JSON shape allows no logprobs. It would have to if I don’t request logprobs.

The issue is that the AI model is expected to fulfill the request, not fail randomly.

There should be no “in some scenarios” apart from intentionally dropping them for non-disclosure, for example, in function call outputs.

Hey community members, would anybody have a request ID handy where logprobs is enabled but is not appearing in the output? Our team is looking into this issue. Thank you!