I am seeing unexpected behavior with the logprobs output when calling the Chat Completions endpoint in the API. Intermittently, the response comes back with logprobs=None in the choices list (see sample output below). I cannot find any documentation or forum discussions describing a similar issue, so I would appreciate any insight into how else to debug this, or whether it is a bug in the API.
Simple Example for Debugging
Using openai Version: 1.97.0
from openai import OpenAI
client = OpenAI()
def call_openai(text):
    messages = [
        {"role": "system", "content": "Summarize the following text: "},
        {"role": "user", "content": text},
    ]
    response = client.chat.completions.create(
        model="gpt-4.1-2025-04-14",
        messages=messages,
        max_tokens=2000,
        temperature=0.5,
        logprobs=True,
    )
    return response
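A minimal call that reproduces the issue looks like this (the input text is just placeholder filler):

text = "Example public comment text. " * 200  # placeholder input
response = call_openai(text)
# Intermittently this prints None even though logprobs=True was set
print(response.choices[0].logprobs)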
And a sample response with missing logprobs output:
Honestly, that API request should be failing on you.
There are now two parameters that need to be used in concert, as can be seen in this example Chat Completions payload:
api_parameters = {
    "messages": [system_message] + [user_message],
    "model": model,
    "max_completion_tokens": 3000,  # enough tokens for any internal reasoning also
    "top_p": 1,  # sampling parameter; less than 1 reduces the 'tail' of poor tokens (0-1)
    "temperature": 1,  # sampling parameter; less than 1 favors higher-certainty token choices (0-2)
    "logprobs": True,
    "top_logprobs": 4,  # number of logprob results, max 20
}
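To illustrate, here is a hedged sketch of sending that payload with the Python SDK, assuming the api_parameters dict above with system_message, user_message, and model filled in:

from openai import OpenAI

client = OpenAI()
response = client.chat.completions.create(**api_parameters)

# When logprobs are returned, each generated token carries its top alternatives
logprobs = response.choices[0].logprobs
if logprobs is not None and logprobs.content is not None:
    for token_info in logprobs.content:
        print(token_info.token, token_info.logprob,
              [alt.token for alt in token_info.top_logprobs])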
Logprobs will be turned off on you in certain cases, for example, when the AI calls a function.
Success being dependent on input size, if that is actually what is happening, would certainly be a defect.
I’ve figured out that it’s related to the number of items submitted in the batch request. (To add context: each of my requests asked the model to score a number of public comments on how supportive they are of a proposal.) Even though my requests were designed not to breach the model’s context window, the sheer quantity of items to score in any one request would cause the model to go haywire (giving inconsistent answers and scoring comments that do not exist). In these cases it didn’t return log probabilities.
@Ajay_Shenoy I’m observing the same thing – logprobs is None specifically for requests with more prompt tokens. Did you ever end up sorting out the root cause?
Hey community members, apologies that you are facing this issue. I was able to reproduce it on my end as well. I am discussing this internally with our engineering team and will get back soon with an update. Thank you!
Thanks everyone for sharing your examples — we looked into this, and here’s what we can confirm.
The logprobs field in Chat Completions is optional by design. Even when logprobs=True is requested, the API schema allows the field to be returned as null in some cases. The current implementation also includes logic for handling situations where the model does not provide logprobs even when they were requested, which means this behavior is expected.
To summarise:
The API will return logprobs when they’re available.
In some scenarios, the model may omit them, and the field will appear as null.
Clients should be prepared to handle logprobs=None as a valid response state.
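A minimal defensive sketch of that last point (the helper name is just illustrative):

def extract_token_logprobs(response):
    """Return (token, logprob) pairs for the first choice, or None when omitted."""
    choice = response.choices[0]
    # logprobs can legitimately come back as None even when requested
    if choice.logprobs is None or choice.logprobs.content is None:
        return None
    return [(item.token, item.logprob) for item in choice.logprobs.content]

Callers can then decide whether to retry the request or proceed without logprobs when this returns None.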
We know this can be surprising if you're relying on logprobs for downstream processing, so we appreciate everyone flagging their observations. We will work to improve our documentation to reflect this and will share these insights with the engineering team to implement better handling in the future. Thank you!
Hey community members, would anybody have a request ID handy where logprobs is enabled but not appearing in the output? Our team is looking into this issue. Thank you!