Are container ids referenced in Responses conversations retained after the container is removed?

My application sometimes causes files to be generated and stored in containers, which are then available through the files API (specifically containers.files.content.retrieve). The logic removes the container after the file(s) are downloaded, to save on local storage. The response id is also saved in a database to retain the conversation. I've just noticed errors when going back to a saved conversation whose previously associated container has since been deleted, like the following:

Error: 404 Container with id 'cntr_1234567' not found.

But the conversation still seems to be active, as the context is there when I converse. My questions are:

  1. Is there a retained association between a responses conversation and a container that was created to store file(s) for that conversation?
  2. When a container is processed, no longer needed, and removed using the containers API, is the prior association with a responses conversation retained? And if it is, is there any way to disassociate the responses conversation from the deleted container so I don't get these 404 errors?
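For reference, the download-then-delete flow described above can be sketched roughly as follows. This is an illustrative helper, not part of any SDK: `download_and_cleanup` is a made-up name, and the `client` is assumed to expose the containers API of the openai Python SDK (`containers.files.content.retrieve`, `containers.delete`).

```python
def download_and_cleanup(client, container_id, file_id, dest_path):
    """Fetch a generated file from its container, save it locally,
    then delete the container to save storage. Returns bytes written.

    Hypothetical sketch: assumes `client` follows the openai SDK's
    containers surface; the response id stays in the database, which
    is what leaves the dangling container reference behind.
    """
    content = client.containers.files.content.retrieve(
        file_id, container_id=container_id
    )
    # The SDK returns a binary response object; fall back to raw bytes
    # so the sketch also works with simpler stand-ins.
    data = content.read() if hasattr(content, "read") else bytes(content)
    with open(dest_path, "wb") as f:
        f.write(data)
    # Container is no longer needed once the file is local.
    client.containers.delete(container_id)
    return len(data)
```

After this runs, any annotation in the saved conversation that still points at `container_id` is dangling, which is what produces the 404s.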

I assume you are referring to code interpreter, is that correct?

I think there is a certain incompatibility between code interpreter’s containers (ephemeral) and ResponsesAPI (stateful).

After about 20 minutes of inactivity, the container and all files inside it become inaccessible. Although there is an auto parameter, it only creates a container; it doesn't recreate an expired one, which breaks the link between the annotations and the files that were inside the lost container.

  1. Probably, via container id references and file annotations that point at the now-expired container.
  2. Yes, it is retained. The workaround is to manually rebuild the inputs instead of using previous_response_id, and if needed, upload the saved files into a new container. (It is a lot of work.)
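To make item 2 concrete, "manually rebuild the inputs" means keeping the full turn history in your own store and sending it as `input` on every request, instead of chaining with previous_response_id. A minimal sketch, assuming only the message-list shape of the Responses API; the helper names are illustrative:

```python
def record_turn(history, role, text):
    """Append a completed turn so later requests can be rebuilt."""
    history.append({"role": role, "content": text})
    return history

def build_input(history, new_user_text):
    """Return a fresh `input` list for a responses.create call,
    rebuilt from saved turns rather than previous_response_id."""
    return history + [{"role": "user", "content": new_user_text}]
```

A call would then look something like `client.responses.create(model=..., input=build_input(history, user_text))`, with no previous_response_id, so no stale container id is carried along in server-side state.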

Any attempt to "rebuild" after automatic expiry runs into issues, and fails because the container id is baked into the internal chat state referenced by previous_response_id.

Yes, code_interpreter.

It seems like a change could be made to one of the APIs to permit a responses conversation to disassociate itself from a deleted/expired container instead of returning 404s (possibly automatically after the first 404 is issued). In my case, because I delete the contained file and the container itself right after retrieval, I leave a dangling container reference almost immediately.

Yes, that's one solution; but as you mention, it is extra work. The value of saving a response id in a database seems somewhat nullified if this container issue occurs. It's also possible to simply ignore the received 404s, but that may limit the conversation once the file contents are no longer associated.
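The "ignore the 404s" option can be sketched like this: treat a 404 on container-file retrieval as "context lost" and continue, rather than failing the whole turn. The helper name is made up; the `status_code` check matches how the openai SDK's error classes (e.g. NotFoundError) expose the HTTP status, but any status-bearing exception works here:

```python
def fetch_container_file(retrieve, container_id, file_id):
    """Try to fetch a container file; on a 404 (expired or deleted
    container) return None instead of raising, so the conversation
    can proceed without the file contents. Other errors re-raise."""
    try:
        return retrieve(file_id, container_id=container_id)
    except Exception as err:
        if getattr(err, "status_code", None) == 404:
            return None  # container is gone; drop the dangling reference
        raise
```

The caller then decides what `None` means, e.g. skipping the annotation or noting to the user that the file is no longer available.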

Thank you for your response.

So "rebuild" really means causing the AI model to regenerate the file and re-stash it in a newly created, conversation-associated container. But will that regain the same conversation context? From your referenced thread, there is no guarantee.

Thanks for the link, Jay.

Scenario

user: using your python notebook environment, create a list array literal with 10 random numbers called “my_randoms”. Return and report on the first two.
assistant: (analyzing) Done. I made integers, and you’ve got 33 and 38 as the first two.
[time and expiry]
user: Actually, it looks like my upcoming project will need that list extended with 10 more!
(API: FAIL container_id)
(or your attempts: assistant: “I’m sorry, it looks like the notebook has no persistence, so I don’t trust multi-turn uses of it”)

Nothing can recreate the container state itself, not even replaying all the task inputs due to AI model non-determinism.

The most reliable approach is to maintain your own container and chat inputs yourself (accepting the container creation cost even if it goes unused), optionally keep the container busy with small alterations while the user is active, and, when a reset happens, add a new developer message to your restarted chat history: "python tool notebook state and files reset due to 20 minute timeout; start over if needed.".
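The reset step above can be sketched as a small helper: when the container has expired, restart the chat without previous_response_id and append a developer notice so the model knows its notebook state is gone. The note text mirrors the suggestion above; everything else here is illustrative:

```python
# Wording suggested above; everything about the model's lost notebook
# state that it needs to know in one developer message.
RESET_NOTE = ("python tool notebook state and files reset due to "
              "20 minute timeout; start over if needed.")

def reset_history(old_history):
    """Return a fresh input list for a restarted chat: the prior turns
    plus a developer notice that the notebook state was lost. The old
    list is copied, not mutated, so the saved history stays intact."""
    return list(old_history) + [{"role": "developer", "content": RESET_NOTE}]
```

This rebuilt list would then be passed as `input` to the next request, with no previous_response_id, so the dangling container id never re-enters the server-side state.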