Interesting, I’ve never seen that error and I’m not getting it now.
Did you at any point run some code which did time out?
I wonder if we each get our own persistent virtual environment and yours got borked.
You got me thinking so I just tested something.
I uploaded a file in one chat session with Code Interpreter, then started a new chat and asked Code Interpreter to list all the files in my directory, and the file I uploaded in the earlier chat was still there.
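You can run that same check yourself by asking Code Interpreter to execute something like the sketch below. It assumes the sandbox keeps uploaded files under `/mnt/data` (based on common reports; the exact path could differ), and falls back to the current directory so it also runs outside the sandbox:

```python
from pathlib import Path

# /mnt/data is where the Code Interpreter sandbox reportedly stores
# uploaded files (an assumption); fall back to "." elsewhere.
data_dir = Path("/mnt/data")
if not data_dir.exists():
    data_dir = Path(".")

# Print every file currently visible in the directory, including any
# leftovers from earlier chat sessions.
files = sorted(p.name for p in data_dir.iterdir())
print(files)
```

If a file you uploaded in a previous chat shows up in that listing, your sandbox persisted across sessions.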
So, now I’m wondering how often that gets reset, because I’ve used Code Interpreter a bunch and uploaded several files which aren’t there now.
I’m sure it’s somewhere in the documentation, I’ll need to try to track that down.
Anyway, it seems as though Code Interpreter is at least somewhat stateful across chats. So my guess is you had a long-running process that timed out, which triggered a flag in the system to warn you about prior events being lost, and you’ll continue to receive that message until you get issued a new virtual environment.
Again, that’s pure speculation on my part, but it feels so right, you know?
Edit: I also updated your topic title and tags to reflect the proper model.
I did accidentally submit a string of error messages that was too long, but that was after I began using it and getting the error message, at which point I started a new chat and continued getting the message.
I did have other windows open, so I closed them and I'll see if I continue to get this message. Typically, I'm not running anything that times out. The only time I time out is when I max out my usage period.
But, I had hardly used it today (yesterday) when these notices started popping up.
Yup. I don’t think it’s supposed to persist though (EDIT: Someone who claims to have helped develop CI says that yes, it’s intended).
Some people have found old files in their virtual environment in new sessions. Here’s an article that explores this. In my opinion it’s not the best, mainly because it revolves around asking ChatGPT reflective questions, which in most cases doesn’t make sense.
GPT-4 again claims that data is not saved between different conversations with the same user. Despite this, the file saved in the last conversation (“hello_user_2.txt”) is available, contradicting GPT-4. This happens because a user can be assigned to the same VM (without the VM being reset) between multiple conversations with GPT-4. I’ve never noticed two different users being on the same VM.
Actually, someone who helped make it added a comment as of July 14th:
The behavior is a bit of an implementation detail. We don’t provision more than a single sandbox per user, so the data on disk within that sandbox can overlap when you have multiple concurrent sessions, even though the other aspects of the execution state are separate. I agree this is a bit surprising (though it has no security impact), and we’ve been discussing ways to make this more intuitive.
I am experiencing the same issue. Previously, I could upload a PDF without any problems. However, for the past few days, I have been encountering an error every time I start a chat and attempt to upload a PDF.
I sent in a code file too big to read so it could check the code, and it sent me this error. I reduced the code and submitted it again, and now I just keep getting this error. If you can’t fix the problem, I’ll have to look at trying a different model. I hear LLaMA is getting good; I might have to set up my old PC with its 2060 card, as I’m not going to pay a monthly fee and pay for tokens only to be restricted in my use.
EDIT: I have been able to get rid of the error by deleting all the files I created today.
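For anyone who wants to try the same fix, here's a rough sketch of how you might ask Code Interpreter to find (and optionally delete) files created in roughly the last day. The `/mnt/data` path and the 24-hour window are assumptions; by default it only prints what it would remove:

```python
import time
from pathlib import Path

# Assumed sandbox directory; fall back to "." outside the sandbox.
data_dir = Path("/mnt/data")
if not data_dir.exists():
    data_dir = Path(".")

# Treat anything modified within the last 24 hours as "created today".
cutoff = time.time() - 24 * 3600
recent = [p for p in data_dir.iterdir()
          if p.is_file() and p.stat().st_mtime > cutoff]

for p in recent:
    print("would delete:", p.name)
    # p.unlink()  # uncomment to actually delete the file
```

Running it dry first lets you confirm the list before removing anything.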
Just now while working in the SFO airport, I got this error 5 to 10 times. Some of it might have been related to switching between my laptop and phone while needing to walk or put the laptop away on my plane. And some could be from changing internet sources on my laptop. But it ground my ability to move forward to a halt, and after uploading my data files many times I was informed that I had hit the ChatGPT Plus message cap.
It would be great to have more transparency around this error. For instance, what are the timeout conditions, so I can be careful to avoid triggering one? And for a specific timeout, which condition was the one that triggered it?