Requests remain "Queued" forever

We are testing the Responses API with “background”: true set, and we have started seeing requests that hang forever in the “queued” status and never return any result.

We can find the requests logged in the platform, but no output of any kind is ever returned, even after days.

The example log below shows a request from last Friday, 3 October 2025, that is still hanging. What are we missing? Any suggestions? Is this a known bug?

{
    "id": "resp_03f406eeb96bfb920068dff9cf585c819692e2c0a733398505",
    "object": "response",
    "created_at": 1759508943,
    "status": "queued",
    "background": true,
    "error": null,
    "incomplete_details": null,
    "instructions": [
        {
            "type": "message",
            "content": [
                {
                    "type": "input_text",
                    "text": "....OUR PROMPT GOES HERE...."
                }
            ],
            "role": "developer"
        }
    ],
    "max_output_tokens": null,
    "max_tool_calls": null,
    "model": "gpt-5-2025-08-07",
    "output": [],
    "parallel_tool_calls": true,
    "previous_response_id": null,
    "prompt": {
        "id": "pmpt_68907a3e741c8194bd788cc017daa8e50bff20f319e3c759",
        "variables": {
            "input1": {
                "type": "input_text",
                "text": ""
            },
            "input2": {
                "type": "input_text",
                "text": ""
            },
            "input3": {
                "type": "input_text",
                "text": ""
            },
            "input4": {
                "type": "input_text",
                "text": "Some data"
            },
            "input5": {
                "type": "input_text",
                "text": ""
            },
            "input6": {
                "type": "input_text",
                "text": ""
            }
        },
        "version": "15"
    },
    "prompt_cache_key": null,
    "reasoning": {
        "effort": "medium",
        "summary": null
    },
    "safety_identifier": null,
    "service_tier": "auto",
    "store": true,
    "temperature": 1.0,
    "text": {
        "format": {
            "type": "text"
        },
        "verbosity": "medium"
    },
    "tool_choice": "auto",
    "tools": [
        {
            "type": "file_search",
            "filters": null,
            "max_num_results": 20,
            "ranking_options": {
                "ranker": "auto",
                "score_threshold": 0.0
            },
            "vector_store_ids": [
                "vs_68c842ee384081919f873d0ba181fcd9"
            ]
        },
        {
            "type": "web_search_preview",
            "search_context_size": "medium",
            "user_location": {
                "type": "approximate",
                "city": null,
                "country": "IT",
                "region": null,
                "timezone": null
            }
        }
    ],
    "top_logprobs": 0,
    "top_p": 1.0,
    "truncation": "disabled",
    "usage": null,
    "user": null,
    "metadata": {}
}

Also getting this issue….


In the example log you posted, I can see you exercising about every API parameter there is, almost like a robustness test of the API:

  • prompt id;
  • prompt id variables, though shown unfulfilled;
  • the internal tools web search, and file search with a vector store;
  • instructions (shown here as an array, which per the API reference is not a valid input — ‘instructions’ takes only a string — yet, straight from bizarro-land, the API echoes it back as if it should validate as an ‘input item list’, even down to impossible audio content types and a “status” field on an input);
  • and of course, ‘background’ (which until recently would fail even to add the response to a conversation object).

But there is one notable omission:

  • no actual ‘input’ parameter.

I’d have to guess at the prompt you constructed: it may contain multiple messages as well as the variables, so we can’t actually see what “input” is being run for you, or whether it compensates for your sending no “input.” You also show only one ‘instruction’, employed as a developer message, and it isn’t clear whether it appears here because you actually passed it as an API call parameter.

The ‘input’ field really should only be omitted when using conversations and preloading the conversation with messages to be answered, which is the case for which it was made optional. An API call powered entirely by variables placed into a prompt, plus only a developer message, is something of an anti-pattern. I can’t say for certain that this is the source of your issues, but you might rethink what you are doing and at least have a ‘user’ message ask for the task.
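As a sketch of that suggested shape (all ids and text here are placeholders, not the poster’s actual values; the dict mirrors the Responses API request body as I understand it), the task goes into a real ‘input’ with a user message alongside the prompt id:

```python
# Sketch of a background request that still asks the task via the user role.
# Every value below is a placeholder, not taken from the poster's setup.
request = {
    "model": "gpt-5-2025-08-07",
    "background": True,
    "prompt": {
        "id": "pmpt_YOUR_PROMPT_ID",  # hypothetical placeholder
        "variables": {"input4": {"type": "input_text", "text": "Some data"}},
    },
    # The actual task, carried by a user message instead of only variables:
    "input": [
        {
            "role": "user",
            "content": [{"type": "input_text",
                         "text": "Summarise the attached data."}],
        }
    ],
}
# resp = client.responses.create(**request)  # client: openai.OpenAI() (assumption)
```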

What can hold up a call is the vector store not being ready. File attachments can fail to ever finish their extraction and attachment, so when uploading and connecting files to a vector store you should poll for a ‘completed’ status. Proceed only once the vector store’s file_counts show “completed” equal to “total”.
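A sketch of that polling loop, assuming the ‘openai’ Python SDK v1.x (where ‘client.vector_stores.retrieve’ exposes a ‘file_counts’ object); the poll interval and timeout are arbitrary choices:

```python
import time

def vector_store_ready(file_counts: dict) -> bool:
    # Ready once every file finished processing and none failed.
    return (
        file_counts.get("total", 0) > 0
        and file_counts.get("completed", 0) == file_counts.get("total", 0)
        and file_counts.get("failed", 0) == 0
    )

def wait_for_vector_store(client, vector_store_id: str,
                          poll_seconds: float = 2.0,
                          timeout_seconds: float = 300.0):
    # Poll until the vector store reports all files completed.
    deadline = time.monotonic() + timeout_seconds
    while time.monotonic() < deadline:
        vs = client.vector_stores.retrieve(vector_store_id)
        if vector_store_ready(vs.file_counts.model_dump()):
            return vs
        time.sleep(poll_seconds)
    raise TimeoutError(f"vector store {vector_store_id} still not ready")
```

Only create the background response once this returns.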

One thing is pretty clear: the API URL for you to use now is
POST https://api.openai.com/v1/responses/resp_03f406eeb96bfb920068dff9cf585c819692e2c0a733398505/cancel
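That cancel call is just a POST to the ‘/cancel’ sub-path of the response. A minimal sketch using only the standard library (the helper names are mine, and it assumes OPENAI_API_KEY is set in the environment):

```python
import os
import urllib.request

API_BASE = "https://api.openai.com/v1"

def cancel_url(response_id: str) -> str:
    # Build the cancel endpoint for a background response.
    return f"{API_BASE}/responses/{response_id}/cancel"

def cancel_response(response_id: str) -> None:
    # Fire the actual POST; requires OPENAI_API_KEY in the environment.
    req = urllib.request.Request(
        cancel_url(response_id),
        method="POST",
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
    )
    urllib.request.urlopen(req)
```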

Sorry for the confusion: what I posted is the response we get from https://api.openai.com/v1/responses/resp_03f406eeb96bfb920068dff9cf585c819692e2c0a733398505/
(I’ve just removed the prompt text.)

We are actually sending an input; it is visible in the input section of the logs on the platform.
We are using prompt IDs, so we are not sending an array.


This is happening to us as well, but it just started this morning. We’ve built an entire framework for our app around the background API, so it will be very hard to switch to an approach that waits for each response synchronously. :frowning:


Exact same issue for us. Unfortunately, the same request/prompt works most of the time and only SOMETIMES gets stuck: you can send the same request twice, and once it will go through and once it won’t.

We’ve also had at least one case in which a request got stuck “in_progress” forever.

Commenting to add that we have been experiencing the same thing, with background requests through the Responses API stuck indefinitely in a “queued” or “in_progress” status. Re-submitting an identical request works basically every time, so it doesn’t seem related to the parameters of our requests.

Same issue. Please fix this, OpenAI! This is critical.