"Container is expired" error in OpenAI `Responses API Stream`

I am using the OpenAI Responses API with streaming to chat with the AI, and I pass `previous_response_id` to keep the conversation continuous. After a few messages, though, the chat starts returning this error:

Error code: 400 - {'error': {'message': 'Container is expired.', 'type': 'invalid_request_error', 'param': None, 'code': None}}

Whenever I send a new message with `previous_response_id`, it says "Container is expired." Can anyone help me understand why I'm getting this error and how to avoid it? For reference, my code is below:

stream = self.client.responses.create(
    model=model,
    previous_response_id=previous_response_id,
    input=[
        {
            "role": "developer",
            "content": [
                {"type": "input_text", "text": system_message}
            ]
        },
        {
            "role": "user",
            "content": [
                {"type": "input_text", "text": user_message}
            ]
        }
    ],
    tools=[
        {"type": "web_search_preview"},
        {
            "type": "code_interpreter",
            "container": {"type": "auto", "file_ids": file_ids}
        }
    ],
    stream=True
)

Containers are ephemeral: they expire after 20 minutes of inactivity.

That means you either have to use the container again within that window, or keep it alive by polling the container API with periodic refreshes.
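That polling can run on a background thread. A minimal sketch, assuming a refresh call such as `client.containers.retrieve(container_id)`; the `keep_alive` helper, the 10-minute default interval, and the injectable `refresh` callable are my own choices, not part of the SDK:

```python
import threading

def keep_alive(refresh, interval_seconds=600, stop_event=None):
    """Call `refresh()` on an interval so the container's idle timer keeps resetting.

    `refresh` would typically be something like
    `lambda: client.containers.retrieve(container_id)`.
    """
    stop_event = stop_event or threading.Event()
    while not stop_event.is_set():
        refresh()  # any read of the container counts as activity
        stop_event.wait(interval_seconds)
```

Usage would be along the lines of `threading.Thread(target=keep_alive, args=(lambda: client.containers.retrieve(container_id),), daemon=True).start()`, with `container_id` being whatever ID you created earlier.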

Although there is the auto parameter, at the moment it only means that a container is created when the tool is first called; it doesn't automatically create another one when that container expires. It would be nice if the auto parameter implied that as well.

I just experienced the same issue, so I came here. Nice to see you, lol!

If you look at self.client.containers, there are methods for creating one manually. Should I use those?

Yes, you can create a container beforehand and pass its ID. The issue, though, is that setting the parameter to auto might mislead people into thinking it is always automatic.

Below is an example of how to explicitly create a new container.

from openai import OpenAI
client = OpenAI()

container = client.containers.create(name="test-container")

response = client.responses.create(
    model="gpt-4.1",
    tools=[{
        "type": "code_interpreter",
        "container": container.id
    }],
    tool_choice="required",
    input="use the python tool to calculate what is 4 * 3.82. and then find its square root and then find the square root of that result"
)

print(response.output_text)

Then it seems like we should manage the containers with a database: track when each one expires, create a new one once it has, and so on.
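A minimal in-memory sketch of that bookkeeping (the `ContainerStore` class, the TTL constant, and the `create_fn` callable are my own naming, not part of the SDK; a real version would persist the timestamps in a database and pass something like `lambda: client.containers.create(name="chat").id` as `create_fn`):

```python
import time

CONTAINER_TTL_SECONDS = 20 * 60  # containers expire after 20 minutes idle

class ContainerStore:
    """Track a container id plus the last time we touched it,
    recreating the container once the idle TTL has lapsed."""

    def __init__(self, create_fn, ttl=CONTAINER_TTL_SECONDS):
        self._create_fn = create_fn  # callable returning a fresh container id
        self._ttl = ttl
        self._container_id = None
        self._last_used = 0.0

    def get(self, now=None):
        now = time.time() if now is None else now
        if self._container_id is None or now - self._last_used >= self._ttl:
            # old container is gone (or never existed); make a new one
            self._container_id = self._create_fn()
        self._last_used = now
        return self._container_id
```

You would then pass `store.get()` as the `container` value on each request instead of `"auto"`.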

Thanks for clarifying how the auto container setting works. I have a follow-up question: if I manually create a new container after the previous one has expired, will the conversation context still persist? Or would it be treated as a completely new chat where passing the previous_response_id no longer has any effect?

I haven't tried it; theoretically it should, but there is one issue: since containers are ephemeral, they aren't fully compatible with a stateful conversation.

Meaning, any content such as files in the expired container will be unavailable, leaving the developer to save any data before the container expires and re-upload it into the newly created container when the conversation continues.

This might cause some inconsistency in the conversation.
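For the file side of that, something like the following could copy files out of the old container into a new one. A hedged sketch: the endpoint paths are taken from the Containers API reference, but `migrate_container_files` and its `http` parameter (pass the `requests` module, or any object with matching `get`/`post`) are my own construction, and pagination and error handling are omitted.

```python
API_BASE = "https://api.openai.com/v1"

def migrate_container_files(old_id, new_id, api_key, http):
    """Download every file in container `old_id` and re-upload it into `new_id`."""
    headers = {"Authorization": f"Bearer {api_key}"}
    listing = http.get(f"{API_BASE}/containers/{old_id}/files", headers=headers).json()
    copied = []
    for f in listing.get("data", []):
        # GET .../files/{file_id}/content returns the raw bytes
        content = http.get(
            f"{API_BASE}/containers/{old_id}/files/{f['id']}/content",
            headers=headers,
        ).content
        # re-upload into the replacement container
        http.post(
            f"{API_BASE}/containers/{new_id}/files",
            headers=headers,
            files={"file": (f.get("path", f["id"]), content)},
        )
        copied.append(f["id"])
    return copied
```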


I’ve implemented periodic retrieve() calls to keep containers alive and avoid the 20-minute idle timeout, and this approach seems to work well in the short term.

However, I've also observed that even with consistent activity, containers eventually expire, possibly after around 24 hours. Not sure if anyone else has encountered this, but it would be really helpful if OpenAI could provide more concrete guidance on the full container lifecycle. That would be essential input for architectural decisions when designing long-lived agents or workflows that rely on a persistent container context.

I’ve also shared a more detailed question in this related post:
What is the best practice for keeping containers alive?

Hopefully the OpenAI team sees these related threads and can offer some clarification!


I did create a new container, since the old one had expired. However, passing the new container together with `previous_response_id` produces an error: "Multiple mismatching container_ids found". So basically there is no recovery here if you want OpenAI to manage state automatically.


I ran into the same issue. Instead of using previous_response_id to chain responses, I used conversationId, which essentially serves the same purpose. However, when I tried to add another response to the conversation, I encountered the same error.

In my case, I had used the code_interpreter tool to generate a file, and one of the conversation items was of type code_interpreter_call. It seems that the AI attempts to access this container to retrieve additional context from the conversation, but since the container has expired, it triggers the “container is expired” error.

My workaround:

  1. Retrieve the conversation history.

  2. Locate the message with type code_interpreter_call.

  3. Delete this message from the conversation using the Delete Item from Conversation API call:
    https://platform.openai.com/docs/api-reference/conversations/delete-item

  4. Try sending the message again.
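The steps above can be sketched like this (a hedged sketch: the paths follow the Conversations API reference linked above, but `prune_code_interpreter_calls` and the injectable `http` parameter, which can be the `requests` module or anything with matching `get`/`delete`, are my own construction; pagination is omitted):

```python
def prune_code_interpreter_calls(conversation_id, api_key, http):
    """Delete every `code_interpreter_call` item from a conversation so the
    next response no longer references the expired container."""
    base = f"https://api.openai.com/v1/conversations/{conversation_id}/items"
    headers = {"Authorization": f"Bearer {api_key}"}
    # step 1: retrieve the conversation history
    items = http.get(base, headers=headers).json().get("data", [])
    removed = []
    for item in items:
        # step 2: locate items of type code_interpreter_call
        if item.get("type") == "code_interpreter_call":
            # step 3: delete the item from the conversation
            http.delete(f"{base}/{item['id']}", headers=headers)
            removed.append(item["id"])
    return removed
```

After that, retrying the message (step 4) should no longer hit the expired container.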

Notes:

  • I’m using conversationId to maintain the message chain.

  • The AI may lose some context from the code interpreter. In my case, this wasn't a problem, since that message only contained the instruction for generating the file; the message responsible for downloading the file was still present.
