Hi all,
I’m trying to use token logprobs as a lightweight “confidence” signal for classification labels. I can reliably get message.output_text.logprobs when Structured Outputs are NOT enabled, but when I enable Structured Outputs (json_schema), GPT-5.1 (and GPT-5.2 in my tests) returns logprobs=[] even though I request them via include=["message.output_text.logprobs"].
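For context, the "confidence" I derive is just the product of the per-token probabilities over the label tokens, i.e. exp of the summed logprobs. A minimal sketch (the `label_confidence` helper is mine, not part of the SDK; it only assumes each logprob entry has a `logprob` field, as in `output[0].content[0].logprobs`):

```python
import math

def label_confidence(logprobs):
    """Turn per-token logprobs into a single probability for the label.

    `logprobs` is a list of entries with a `logprob` field, matching the
    shape of output[0].content[0].logprobs (dict form here for simplicity).
    """
    total = sum(lp["logprob"] for lp in logprobs)
    return math.exp(total)  # product of per-token probabilities

# e.g. two tokens at ~90% each -> ~81% confidence for the whole label
conf = label_confidence([{"logprob": math.log(0.9)}, {"logprob": math.log(0.9)}])
```

So when logprobs comes back empty, the whole signal silently degrades to nothing.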
This seems related to other reports about missing logprobs / logprobs behavior:

- Related: discussion 1 (missing logprobs in responses)
- Related: discussion_2 (structured outputs thread)
What I’m doing

- Endpoint: POST /v1/responses via openai-python (AsyncOpenAI)
- Requesting logprobs: include=["message.output_text.logprobs"]
- top_logprobs=1
- For GPT-5.* I also set reasoning={"effort": "none"} (since logprobs are not supported for reasoning output)
Expected
If I request include=["message.output_text.logprobs"], I expect output[0].content[0].logprobs to contain token logprobs for the generated output text.
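To make the expected shape concrete, here's how I read the logprobs out, sketched against a mock dict with the same structure (the `get_output_logprobs` helper and the mock payload are illustrative, not SDK code):

```python
def get_output_logprobs(resp: dict) -> list:
    """Extract token logprobs from a Responses-API-shaped payload (dict form)."""
    return resp["output"][0]["content"][0]["logprobs"]

# Mock of the shape I expect when logprobs are returned.
mock = {
    "output": [
        {
            "content": [
                {
                    "type": "output_text",
                    "text": "hi",
                    "logprobs": [{"token": "hi", "logprob": -0.1}],
                }
            ]
        }
    ]
}
```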
Actual
- GPT-5.1 without structured outputs: logprobs present
- GPT-5.1 with structured outputs (json_schema): logprobs=[]
- GPT-4.1 with structured outputs (json_schema): logprobs present
Minimal reproducible example
```python
import asyncio

from openai import AsyncOpenAI

async_client = AsyncOpenAI()

common = dict(
    model="gpt-5.1",
    input="Hello!",
    include=["message.output_text.logprobs"],
    top_logprobs=1,
    reasoning={"effort": "none"},
    max_output_tokens=64,
    temperature=1.0,
)

async def main():
    # no structured outputs
    r1 = await async_client.responses.create(**common)

    # structured outputs
    r2 = await async_client.responses.create(
        **common,
        text={
            "format": {
                "type": "json_schema",
                "name": "HelloSchema",
                "schema": {
                    "type": "object",
                    "properties": {"x": {"type": "string"}},
                    "required": ["x"],
                    "additionalProperties": False,
                },
                "strict": True,
            },
            "verbosity": "low",
        },
    )

    print(f'{len(r1.output[0].content[0].logprobs)}, {len(r2.output[0].content[0].logprobs)}')

asyncio.run(main())
```
Output: 9, 0