Background mode requests stuck in 'queued' forever - Responses API

The same issue was reported a long time ago. We're going to have to churn from OpenAI entirely if you don't resolve it; it's genuinely unacceptable for this to be happening in any production use case.

This problem has been present since version 5 launched... So much time and still no change!


Same issue here. Background requests to the response API continue to fail even today (2 November).

Our logs indicate the issues started the morning of 30 October around 8:30 am Mountain Time, and they continue in earnest even now. A newer behavior we're seeing is that background requests fail much faster than they did two days ago. This is happening for both gpt-4.1 and gpt-5 requests; from what I can tell, the model parameter doesn't matter.

Note that the same requests, when "background" is omitted, do return synchronously with the expected results. Of course, blocking a thread for minutes while waiting for a reasoning model to respond is untenable in production systems.
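For context, the non-blocking shape that background mode is supposed to enable looks roughly like the sketch below: the create call returns immediately with status "queued", and the caller polls until a terminal state. This is a minimal illustration, not the SDK's own helper; the `retrieve` callable stands in for `client.responses.retrieve` so the sketch stays self-contained.

```python
import time

def poll_background_response(retrieve, response_id, timeout=300.0, interval=2.0):
    """Poll a background Responses API request until it leaves the queue.

    `retrieve` is any callable that maps a response id to an object with a
    `.status` attribute (e.g. client.responses.retrieve); it is injected
    here so the sketch can be exercised offline.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        resp = retrieve(response_id)
        if resp.status not in ("queued", "in_progress"):
            # Terminal states include completed, failed, cancelled, incomplete.
            return resp
        time.sleep(interval)
    raise TimeoutError(f"{response_id} was still pending after {timeout}s")
```

The bug reported in this thread is the case where the status never leaves "queued", so the loop above runs until its own timeout.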


I am experiencing this issue as well.

Just as @Jeremy_Thomas said, the issue has changed a bit since last week. The requests now fail immediately and return the “An error occurred while processing your request. […]” error message. When I don’t run in background mode, everything works fine.

Also experiencing these errors again today.

Hey all, Steve here from the OAI eng team. Apologies for the disruptions. Do you all have request IDs we could look at?

This is the ID: req_bfec629df2534c529bbb4a7ddfc598a0
params:
{
  model: 'gpt-5',
  input: 'hello',
  stream: true,
  background: true,
  tools: [
    {
      type: 'mcp',
      server_label: 'deepwiki',
      server_url: 'https://mcp.deepwiki.com/mcp',
      require_approval: 'never'
    }
  ]
}

It only errors when background is true and an MCP server is attached.
Am I using it wrong?

Here are some request IDs that have received those errors:

  • wfr_019a4bb7706f77b79d8c9082c1c1aa35
  • wfr_019a4b7aa7d47d9c88f6570eacf128a2
  • wfr_019a4b3ba0e878b5baf4909c14e5b209
  • wfr_019a4abaeeb97415a2a0c61af512b63b
  • wfr_019a4abc8af87ce293cc759a1689a60c

Please let me know if you need any more.

Where did the "wfr_" request IDs come from?

The actual request ID is in the API response headers, which the OpenAI SDKs don't expose easily: you have to use the .with_raw_response.create() method and then parse the output differently (which makes it far easier to simply skip the offered libraries and code your own API calls).
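For anyone hitting the same wall, a minimal sketch of pulling the ID out, assuming the header is `x-request-id` and using a plain dict as a stand-in for the raw response's headers (the real SDK call needs a live API key, so it appears only as a comment):

```python
def request_id_from_headers(headers):
    """Case-insensitive lookup of the x-request-id response header.

    With openai-python, the real call would look roughly like:
        raw = client.responses.with_raw_response.create(model="gpt-5", input="hi")
        request_id = request_id_from_headers(raw.headers)
        response = raw.parse()  # the usual Response object
    """
    for key, value in headers.items():
        if key.lower() == "x-request-id":
            return value
    return None
```

The helper only does a case-insensitive scan, since header casing varies between HTTP clients.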

@_j The error messages from the logs in the OpenAI dashboard sometimes include the wfr_ ids.

@stevecoffey If you’re looking for more, here are some:

  • resp_0bbed3056914ed94006909070835e881968be3af375a085b71: wfr_019a4b4383767db0b201026cf982847f
  • resp_02f5734ea7d7863b006909070923c481909496be441688036d: wfr_019a4b4382837cac9fd38dd14d4f41b6
  • resp_0e4aa68725270f8000690907084d2081969d7e7e3724eee5e7: wfr_019a4b4384707e9883c8808656311f3f
  • resp_0aac93add8483f910069090708667c8190832993ad8dfb7bc8: wfr_019a4b437ec27163ace45c5b24e9bc33

Those request IDs were included in the error messages from the logs in the OpenAI dashboard. Here are some other IDs listed:

  • resp_00abb0a47ad1829a0069094fba97508194844e83e9931a1e84
  • resp_0a3f92daf98cb4c7006909207347ec819584ac3d074075e57e
  • resp_0641de4c9913e44f0069091a031e8c8196a08ed4fc6cea619d
  • resp_04b71dd6a98592b0006908eaa33ee48193aca696d68e30904d
  • resp_0f462da653a11522006908e28bb61881979a261d397e5acc38

wfr_019a4edd92397f248488945123835789

wfr_019a4e85e06377058a47bfcef0f1eedd

More if you need them:

resp_094d899a43fd2d32006903db1d83f0819789a791f72bf83897

resp_08feeb6f34e5dcdc0069039c6d3af08190ac9ce25e49734a1d

resp_0a1daa081fff8b58006903807beacc8197b166897cffa9cccd

resp_05affa1612ab36ee006903d2a469bc8196bbae663bbd4d2dc1

resp_02a4771bee77c2f3006903af371ccc8197a9c52b7aac30a188

resp_03961de2c59fd47700690350cf27088194b1a7464efa4b2259

I’m going to guess that you have a DNS resolution error in your queueing system: when your workers attempt to list_tools on our MCP servers, they can’t resolve the MCP domain name. Just a hunch.
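If anyone wants to sanity-check that hunch from their own machines, a quick resolution probe is enough (the hostname is the one from the params posted earlier; port 443 is an assumption):

```python
import socket

def can_resolve(hostname):
    """Return True if the hostname resolves to at least one address."""
    try:
        return len(socket.getaddrinfo(hostname, 443)) > 0
    except socket.gaierror:
        return False

# Example: can_resolve("mcp.deepwiki.com")
```

This only tells you whether your side can resolve the name, of course; it says nothing about what OpenAI's queue workers can resolve.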

Hey all! Thanks so much for providing the request IDs. We found a bug here in MCP and will aim to get a fix out today.


Hey @stevecoffey, could you please let us know when you push the fix?
Thanks!