Enabling encrypted reasoning items is great, and super useful for gaining finer control over context with reasoning models.
Background mode is also very helpful for slow models like o3-pro. But… when we set "background": true, "store": true is required, which is disallowed when using reasoning.encrypted_content.
https://platform.openai.com/docs/guides/reasoning/how-reasoning-works?api-mode=responses#encrypted-reasoning-items
https://platform.openai.com/docs/guides/background
Could we have this restriction loosened? (Sadly, resorting to the Batch API isn’t doable on Azure OpenAI yet for o3-pro.)
Yeah, by the nature of background mode, store has to be set to true. We’re working on some better methods in this area, though. Wonder if you can share any more about your use case?
Yep, totally okay with me!
Sure! Here’s one example I don’t mind sharing publicly:
I am asking many questions (coding related) in parallel (>100) about the same goal/task (but with different approaches), and the vast majority of responses do not have a correct answer (which I am able to check by running the code). If I had to use previous_response_id, this would make my script >100 times slower, and I would have to pay extra in input tokens for all the verifiably incorrect answers/reasoning objects.
Once I have a few valid answers, I concat the reasoning and outputs only from the verifiably correct answers. I then add a final user message to combine the best parts of each approach. (Previously, I was only using the text outputs and not the reasoning objects, which resulted in worse perf and higher reasoning token usage.)
I’d be happy to hop on a call if you’d like me to show my other reasonings (pun intended).
It’s basically some creative logic to squeeze out better reasoning perf for hard problems (with the tradeoff being cost and latency).
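Here’s a rough sketch of that flow, if it helps (using the openai Python SDK; run_and_check() below is just a stand-in for my actual code-execution verifier, and the prompts are placeholders):

# Rough sketch of the fan-out flow described above. Assumes the openai
# Python SDK; run_and_check() stands in for my real code-execution verifier.
from concurrent.futures import ThreadPoolExecutor

from openai import OpenAI

client = OpenAI()
GOAL = "Implement the task..."  # shared goal/task (placeholder)
APPROACHES = [f"approach #{i}" for i in range(100)]  # >100 variants in practice

def run_and_check(code_answer: str) -> bool:
    # Stand-in: my real script executes the generated code and checks it.
    raise NotImplementedError

def ask(approach: str):
    # store=False + encrypted reasoning keeps every request stateless
    return client.responses.create(
        model="o3-pro-2025-06-10",
        input=f"{GOAL}\n\nTry this approach: {approach}",
        include=["reasoning.encrypted_content"],
        store=False,
    )

with ThreadPoolExecutor(max_workers=32) as pool:
    responses = list(pool.map(ask, APPROACHES))

# Keep only the verifiably correct answers; discard the rest, so no input
# tokens are spent re-sending failed reasoning (unlike a previous_response_id chain).
valid = [r for r in responses if run_and_check(r.output_text)]

# Concat reasoning + output items from the winners, then ask for a merge.
merged_input = []
for r in valid:
    merged_input.extend(r.output)  # includes the encrypted reasoning items
merged_input.append(
    {"role": "user", "content": "Combine the best parts of each approach above."}
)

final = client.responses.create(
    model="o3-pro-2025-06-10",
    input=merged_input,
    include=["reasoning.encrypted_content"],
    store=False,
)
print(final.output_text)

The key part is that every request is stateless (store=False), so failed branches cost nothing beyond their own call.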
Would webhooks help here?
I’d love webhooks in general, but I’m not sure I see how it would solve this specific issue.
Unless you’re suggesting that this would be possible:
curl https://api.openai.com/v1/responses \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -d '{
    "model": "o3-pro-2025-06-10",
    "input": "Hello",
    "include": ["reasoning.encrypted_content"],
    "background": true,
    "store": false,
    "webhook": {
      "url": "https://example.ai.moda/webhook123",
      "method": "POST",
      "headers": {
        "Authorization": "Bearer $MY_WEBHOOK_API_KEY"
      },
      "include": ["response"]
    }
  }' -v
And then OpenAI would POST the full response object to my webhook endpoint when it’s ready. IMO this would be nice, since it would also allow users to give OpenAI a presigned S3 URL for logging (even when background mode isn’t required/used).
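For completeness, the receiving side could be trivial. (Purely speculative, since the webhook parameter above doesn’t exist today; the {"response": …} envelope is just my guess at a payload shape.)

# Speculative receiver for the hypothetical webhook above. The payload
# envelope ({"response": {...}}) is assumed, not a documented contract.
import json
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

EXPECTED_AUTH = f"Bearer {os.environ['MY_WEBHOOK_API_KEY']}"

class WebhookHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        if self.headers.get("Authorization") != EXPECTED_AUTH:
            self.send_response(401)
            self.end_headers()
            return
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length))
        response = payload.get("response", {})  # assumed envelope key
        print(response.get("id"), response.get("status"))
        self.send_response(200)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("", 8080), WebhookHandler).serve_forever()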
For this specific problem though, this would be an easier solution (if it were allowed):
curl https://api.openai.com/v1/responses \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -d '{
    "model": "o3-pro-2025-06-10",
    "input": "Hello",
    "include": ["reasoning.encrypted_content"],
    "background": true,
    "store": true
  }' -v
Gotcha, makes sense. Encrypted items require stateless operation, so there isn’t an easy way to enable them with background mode. For now, the only workarounds are running reasoning in the foreground (no background mode) when using encrypted content, or relying on polling/previous_response_id. Sorry!
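For reference, the polling workaround looks roughly like this (a minimal sketch with the Python SDK; the 5-second interval is arbitrary, and note that store stays true, so no encrypted reasoning items on this path):

# Minimal sketch of the background + polling workaround. store must stay
# true here, which is why this path can't return encrypted reasoning items.
import time

from openai import OpenAI

client = OpenAI()

resp = client.responses.create(
    model="o3-pro-2025-06-10",
    input="Hello",
    background=True,
    store=True,  # required by background mode today
)

# Poll until the background job leaves the queued/in_progress states.
while resp.status in ("queued", "in_progress"):
    time.sleep(5)  # arbitrary interval
    resp = client.responses.retrieve(resp.id)

print(resp.status)
print(resp.output_text)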
Is there a technical reason why we can’t have both?