That sounds like a race condition; you’re probably doing something dumb.
Just kidding.
Storing the response id is an essential prerequisite to using server-side conversation state on the Responses endpoint (as opposed to the newer Conversations endpoint): passing the previous id back is a mandatory part of that contract.
Making the follow-up API call that returns a tool result, using the response ID you just received, and getting ‘response ID not found’ back is a failure of the endpoint. No doubt about it.
There are plenty of other “dumb” errors you can get, such as ‘reasoning item missing’ when self-managed items aren’t passed back in their exact original order, or from mishandling parallel tool calls interleaved with the prior output. Those error on the ‘rs_xxx’ item, though, not on THE ENTIRE RESPONSE ID MISSING.
Do make sure your function-handling loop always uses the latest ID: the AI might call functions more than once, so you need to loop and chain from the most recent response_id each round, and because of preambles you should also expect a “message” item to appear alongside a “tool_call” that still needs fulfilling. Still, if you were sending inputs wrong, you’d get an error about mismatched tool call IDs, not a missing response object.
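That loop can be sketched roughly like this, assuming dict-shaped output items as returned by the raw API (the official SDK returns typed objects instead); `client.responses.create` is the SDK call, while `collect_function_calls`, `run_tool_loop`, and the `execute_tool` dispatcher you pass in are illustrative names.

```python
# Track the latest response id across repeated tool-call rounds.
# "message" items (preambles) can appear alongside "function_call" items,
# so filter by type rather than assuming the output is one or the other.

def collect_function_calls(output_items: list[dict]) -> list[dict]:
    """Return only the function_call items, ignoring interleaved messages."""
    return [item for item in output_items if item.get("type") == "function_call"]

def run_tool_loop(client, model: str, first_input: list[dict],
                  tools: list[dict], execute_tool):
    response = client.responses.create(model=model, input=first_input,
                                       tools=tools, store=True)
    while True:
        calls = collect_function_calls(response.output)
        if not calls:
            return response  # no more tool work; the final answer is here
        tool_outputs = [
            {"type": "function_call_output",
             "call_id": call["call_id"],
             "output": execute_tool(call)}  # your own dispatcher
            for call in calls
        ]
        # Always chain from the id of the response we just received,
        # never one from an earlier round.
        response = client.responses.create(
            model=model,
            previous_response_id=response.id,
            input=tool_outputs,
            tools=tools,
            store=True,
        )
```

The point of `collect_function_calls` is exactly the preamble case above: a round can contain both a message and calls to fulfill.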
A somewhat different issue was reported yesterday: someone trying to “resume” a failed streaming connection. That rested on the assumption that Responses always behaves like ‘background’ mode even when you stream, and does NOT deliver on its promise of terminating generation when you close the connection (which it might not…), so it’s likely something that never could have worked.
So I would say: if your previous call returned status 200, used "store":true, showed the response ID throughout the events up to .done, and that ID now can’t be used as a prior and doesn’t appear in the platform site’s logs, then Responses is broken, and your conversation is broken with it unless you roll back to an earlier response ID and replay from there. The danger of trusting someone else to persist your data.
Note: you should be using only one of previous_response_id or conversation_id as your chat-history mechanism. It’s understandable that you’d try the other method if the first was also failing on you.
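A trivial guard against mixing the two mechanisms, sketched below; `check_history_params` is a made-up helper, and note the actual API field for the conversations mechanism may be named `conversation` rather than `conversation_id`, so this checks both.

```python
# Hypothetical guard: previous_response_id and a conversation id are
# competing history mechanisms; sending both is at best ambiguous.

def check_history_params(body: dict) -> dict:
    conversation_keys = {"conversation", "conversation_id"} & body.keys()
    if "previous_response_id" in body and conversation_keys:
        raise ValueError(
            "use previous_response_id OR a conversation id, not both")
    return body
```

Running this on every outgoing request body catches the mix-up before the API has a chance to do something surprising with it.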
An unrelated code snippet I happen to have open:
# all not implemented on Responses, making it a garbage endpoint
responses_body.pop("frequency_penalty", None)
responses_body.pop("presence_penalty", None)
responses_body.pop("logit_bias", None)
responses_body.pop("modalities", None)
responses_body.pop("audio", None)
responses_body.pop("n", None)
responses_body.pop("stop", None)
responses_body.pop("prediction", None)