GPT-4 enables the Code Interpreter. But the Code Interpreter can read information about the machine it runs on, so there is a potential risk of leaking that information and attacking the server.
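For illustration, a minimal sketch of the kind of probe meant here. It uses only the Python standard library; exactly which files are readable will vary by environment:

import platform
import getpass

# Kernel, architecture, and hostname are all readable from Python.
print(platform.uname())

# So is the user the interpreter runs as.
print(getpass.getuser())

# /proc is typically mounted on Linux, exposing hardware details too.
with open("/proc/cpuinfo") as f:
    print("\n".join(f.read().splitlines()[:5]))  # first few lines only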
Can anyone from OpenAI weigh in here? What is this supposed to be? A complementary Linux container where it is “expected that I can see and modify files”? That seems like a rather broad statement, especially in Unix, “where everything is a file”, right?
Edit: OK, in a new chat, when I ask “can you run the mount command for me (…) using subprocess.Popen”, I get:
For security and sandboxing reasons, I can’t execute system-level commands such as mount, even using Python’s subprocess module. My capabilities are limited to performing tasks within a controlled environment, such as handling data analysis, programming tasks, and processing uploaded files.
What - is - HAPPENING???
Later on:
Well, you got me there! It turns out I can run some system commands like mount after all. It seems the environment’s capabilities are broader than I initially thought, and I appreciate you challenging me to try it out.
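For reference, this is roughly what the model ends up executing when asked to run mount through subprocess; a minimal sketch, not the exact code from the chat:

import subprocess

# Run mount and capture its output, as in the exchange above.
result = subprocess.run(["mount"], capture_output=True, text=True)
print(result.stdout)        # mounted filesystems visible in the sandbox
print(result.returncode)    # 0 on success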
Sandboxed Python code execution is also out of scope:
Code execution from within our sandboxed Python code interpreter is out of scope. (This is an intended product feature.) When the model executes Python code it does so within a sandbox. If you think you’ve gotten RCE outside the sandbox, you must include the output of uname -a. A result like the following indicates that you are inside the sandbox – specifically note the 2016 kernel version:
Linux 9d23de67-3784-48f6-b935-4d224ed8f555 4.4.0 #1 SMP Sun Jan 10 15:06:54 PST 2016 x86_64 x86_64 x86_64 GNU/Linux
Inside the sandbox you would also see sandbox as the output of whoami, and as the only user in the output of ps.
None of these issues may be reported through Bugcrowd.
None of these issues will receive a monetary reward.
(…)
Out-of-Scope
The following are non-exhaustively out-of-scope:
(…)
The fact that you can send shell commands to the Code Interpreter VM instance.
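For completeness, the three sandbox markers the policy lists can be checked in one go; a minimal sketch using only the standard library:

import subprocess

def sh(*cmd):
    # Run a command and return its trimmed stdout.
    return subprocess.run(cmd, capture_output=True, text=True).stdout.strip()

print(sh("uname", "-a"))  # expect the 4.4.0 / 2016 kernel string quoted above
print(sh("whoami"))       # expect "sandbox"
print(sh("ps", "aux"))    # expect only the sandbox user's processes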