Hi everyone, I’m migrating our code interpreter from the Assistants API style to the Responses API style. I understand that I can upload files through container uploads for operations with models. However, I also see that I can use file IDs obtained from the /v1/files endpoint, which don’t expire like containers. I’m confused about the relationship between files uploaded via container URLs and those accessed via /v1/files. Are there any documents that explain this relationship? Thanks!
Those files are completely separate https://platform.openai.com/docs/api-reference/container-files
As far as I can see there is no relationship between the two. And at the moment, per the documentation, containers expire after 20 minutes, so the files are indeed not persisted.
Also, in my experience with the Responses API it is currently virtually impossible to get a correct reference to a container file from the annotations - I have solved it by always using the container file list to retrieve all the (in my case) generated files.
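For anyone wanting to try the same workaround: below is a minimal sketch of listing a container's files directly rather than trusting annotations. The endpoint path follows the container-files API reference linked above; treat the exact response field names as assumptions to verify against the current docs.

```python
# Sketch: recover generated files by listing the container directly
# instead of relying on annotations in the response output.
import json
import urllib.request

def list_container_files(container_id: str, api_key: str) -> dict:
    """GET /v1/containers/{container_id}/files (network call, not run here)."""
    req = urllib.request.Request(
        f"https://api.openai.com/v1/containers/{container_id}/files",
        headers={"Authorization": f"Bearer {api_key}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def file_index(payload: dict) -> dict:
    """Map each container file's sandbox path to its file id."""
    return {f["path"]: f["id"] for f in payload.get("data", [])}

# Abbreviated example of the assumed payload shape, and usage:
payload = {"data": [
    {"id": "cfile_abc", "path": "/mnt/data/report.csv", "container_id": "cntr_1"},
]}
print(file_index(payload))  # {'/mnt/data/report.csv': 'cfile_abc'}
```

You would then fetch each file's content before the container expires, since the files disappear with the sandbox.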
@jlvanhulst can you elaborate on the issue with container file annotations? Based on the thread titled “Reliably retrieving code interpreter files from the container?” (unable to include link due to trust level), if the file is not annotated by the model, it also won’t be included in the container (this is also my experience). Thoughts/advice on this?
I have not had a problem with the file appearing in the container’s (output) file list. But it is/was often missing from the annotations in the response output. That was my challenge.
It does seem that this works much better at the moment.
I was actually surprised today to see a response using the image_generation tool where the resulting file was provided as a Base64 string and properly annotated in the JSON output response! (Nothing to do with container files, but everything to do with annotations.) So it seems things have gotten better in the last few weeks!
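For reference, here is a hedged sketch of pulling container file citations out of a Responses API result, treating the response as a plain dict. The `container_file_citation` annotation shape (with `container_id`, `file_id`, `filename`) is taken from the API reference, but verify the field names against the current docs; when the list comes back empty you fall back to listing the container, as discussed above.

```python
# Sketch: collect container_file_citation annotations from a Responses
# API result dict. Returns an empty list when the model omitted them,
# which is the failure mode described in this thread.
def container_citations(response: dict) -> list:
    cites = []
    for item in response.get("output", []):
        if item.get("type") != "message":
            continue
        for part in item.get("content", []):
            for ann in part.get("annotations", []):
                if ann.get("type") == "container_file_citation":
                    cites.append((ann.get("container_id"),
                                  ann.get("file_id"),
                                  ann.get("filename")))
    return cites

# Abbreviated fake response illustrating the assumed shape:
fake = {"output": [{"type": "message", "content": [{
    "type": "output_text", "text": "Saved chart.",
    "annotations": [{"type": "container_file_citation",
                     "container_id": "cntr_1",
                     "file_id": "cfile_abc",
                     "filename": "chart.png"}]}]}]}
print(container_citations(fake))  # [('cntr_1', 'cfile_abc', 'chart.png')]
```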
@jlvanhulst hello - do you ever run into issues with the file not being generated at all?
I get back the sandbox link with a document name, but if I look in the container, sometimes the file is not there. If I ask it to try again, then the file gets created.
Using the Code Interpreter with the Response API seems buggy right now
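One way to paper over the “sandbox link exists but the file isn’t in the container yet” case is a small retry loop. This is a sketch with a hypothetical injected `list_paths` callable (so it works with any client and is easy to test); the names here are my own, not from the API.

```python
# Sketch: poll the container file list until a filename appears,
# up to a fixed number of attempts. list_paths is any callable that
# returns the current container file paths.
import time

def wait_for_file(list_paths, filename, attempts=3, delay=1.0):
    """Call list_paths() up to `attempts` times until `filename` appears."""
    for i in range(attempts):
        for p in list_paths():
            if p == filename or p.endswith("/" + filename):
                return p
        if i < attempts - 1:
            time.sleep(delay)
    return None

# Demo with a fake lister: the file shows up on the second poll.
calls = {"n": 0}
def fake_lister():
    calls["n"] += 1
    return ["/mnt/data/report.csv"] if calls["n"] >= 2 else []

print(wait_for_file(fake_lister, "report.csv", attempts=3, delay=0))
# -> /mnt/data/report.csv
```

If the file never appears, re-issuing the request (as described above) is the only fix I know of.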
I have one report prompt that sometimes does not create the chart I want (that is why it is the only one still on gpt-4.1). So I would suspect it is about your prompt/model combination. I don’t think it has anything to do with the way the model/API currently takes those outputs and puts them in the results.
Thank you. It’s really annoying.
I was reading your comments on other posts. Like you, I am now just getting the container ID and listing/getting the generated files from there.
Yesterday I was trying to add special identifiers to the generated files so that I could identify them by user. It worked, but it would then mess up the /mnt/data path, so I could not retrieve the files.
I feel like they rushed this implementation for parity with the Assistants API… but it is not solid at all.
What do you mean by ‘user’? I would expect a single Responses API ‘thread’ to be a single user?
That is where I am confused, I guess. When the container is set to auto, doesn’t the API reuse a container out there that has not expired? Meaning a file generated for user A potentially lands in the same container as user B’s? How does the API determine different threads? I thought that concept was gone. Or does the API separate files from different users?
The Responses API certainly does not have a ‘user’ concept (but you could add a ‘user’ to metadata). I am pretty sure a container is never re-used in a new Responses API call - unless you do so explicitly by attaching it, which would never happen with the ‘auto’ setting. With auto you will have to retrieve your files before the sandbox expires (20 minutes). So sandbox files are attached to a sandbox that will generally expire in 20 minutes.
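To make the metadata idea concrete, here is a sketch of building a per-user request payload. The top-level keys mirror the Responses API create parameters as I understand them, but treat the exact shapes (especially the `code_interpreter` tool entry) as assumptions to check against the docs; the mapping of container ID to user is something you would keep on your own side.

```python
# Sketch: tag each Responses API request with your own user id via
# metadata, since the API itself has no user concept. You would record
# which container served which user after the call returns.
def build_request(prompt: str, user_id: str) -> dict:
    return {
        "model": "gpt-4.1",
        "input": prompt,
        "metadata": {"user": user_id},  # your own tag, not an API concept
        "tools": [{"type": "code_interpreter", "container": {"type": "auto"}}],
    }

req = build_request("Plot revenue by month", "user_42")
print(req["metadata"])  # {'user': 'user_42'}
```

With `auto`, each call gets its own fresh container, so as long as you key your bookkeeping by response (or by the metadata tag), files from different users should never mix.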