I have a process that normally takes ~5-6 minutes. However, my server will time out before it finishes, and I do not need to wait on the results.
The ideal solution appears to be `background: true` and then waiting on `response.*` events in the webhook, so that I can do what I need to do with the completed response.
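For reference, the request body I'm sending looks roughly like this. This is a sketch: the schema is a stand-in for my real one, and the inline `json_schema` object is my understanding of what `zodTextFormat(...)` produces from a zod schema.

```typescript
// Sketch of a Responses API request body using background mode plus a
// structured-output text format. The schema here is a hypothetical
// stand-in; in my real code zodTextFormat(MySchema, "job_result")
// builds an equivalent { type: "json_schema", ... } object.
const requestBody = {
  model: "gpt-5-mini",
  input: "…", // real prompt elided
  background: true, // return immediately; job finishes asynchronously
  text: {
    format: {
      type: "json_schema",
      name: "job_result", // hypothetical name
      schema: {
        type: "object",
        properties: { summary: { type: "string" } },
        required: ["summary"],
        additionalProperties: false,
      },
      strict: true,
    },
  },
};

console.log(requestBody.background, requestBody.text.format.type);
```

With `background: true` plus this `text.format`, I get the failure described below; drop the `text.format` and the same request succeeds.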
I noticed two distinct outcomes:
When `text.format` is set to `zodTextFormat(…)`, I get the initial 200 and the response ID, but then very quickly the webhook is hit with `response.failed` and the super helpful `server_error` with no further explanation.
When `text.format` is undefined and I just get raw text back, this works perfectly as intended. However, that isn't really viable for my use case; I need the structured output.
For more context: if I set the timeout large enough when instantiating the OpenAI client, and the job finishes before my server times out, the completed response with the structured output is perfect.
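For completeness, the synchronous path that does work is just a larger client timeout. A sketch of the options, assuming the Node SDK's `timeout` is in milliseconds (the key value, not necessarily the exact shape of my real config):

```typescript
// Client options for the synchronous (non-background) path.
// `timeout` is in milliseconds in the openai Node SDK.
const clientOptions = {
  apiKey: "sk-…", // elided
  timeout: 10 * 60 * 1000, // 10 minutes, comfortably above the 5-6 min job
};
// const client = new OpenAI(clientOptions); // works, but ties up my server

console.log(clientOptions.timeout);
```

This obviously defeats the point of background mode, which is why I want the webhook path to work.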
If I do time out, the response still completes eventually — I can see it in my OpenAI logs, and it's perfect — but the webhook doesn't pick up on it.
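My webhook handler is essentially this shape. The event structure (`{ type, data: { id } }` with types like `response.completed` / `response.failed`) is what I understand from the docs; signature verification and the actual retrieve call are elided:

```typescript
// Minimal webhook event dispatch for background responses.
// Event shape is an assumption based on the OpenAI webhook docs.
type WebhookEvent = { type: string; data: { id: string } };

function handleWebhookEvent(event: WebhookEvent): string {
  switch (event.type) {
    case "response.completed":
      // here I'd fetch the full result, e.g.
      // client.responses.retrieve(event.data.id)
      return `fetch:${event.data.id}`;
    case "response.failed":
      return `failed:${event.data.id}`;
    default:
      return "ignored";
  }
}

console.log(
  handleWebhookEvent({ type: "response.completed", data: { id: "resp_123" } })
);
```

The `response.failed` branch is the one that fires when `text.format` is set; the `response.completed` branch never fires for the timed-out jobs even though the logs show them completing.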
Anyone else running into this?
Am I missing something obvious?
I'm using gpt-5-nano or gpt-5-mini.
I haven't seen anything about this while googling or looking on GitHub. All the LLMs I use for other tasks (ChatGPT, Claude, Grok, etc.) are outdated, don't even know gpt-5-* exists, and aren't helpful here.
The goal is to submit async jobs (with structured output) that the webhook picks up on completion.