Assistant API - Python code generation local execution

Hello, I’ve been playing a lot with the Assistant API since the beginning, but I’m a bit stuck now.

It could be very useful to be able to execute the generated Python code locally with the Assistant API instead of using the code interpreter.

As of now, there appears to be a limitation in the API that intercepts the LLM’s response if Python code is detected. For example, you might notice a ‘code_interpreter’ block in the playground, even if you haven’t activated the ‘code_interpreter’ tool.

Thus, the middleware that intercepts Python code blocks the Assistant’s response, likely with a message like ‘Sorry, code_interpreter is not activated.’ This is precisely the issue if you want to execute the code locally (for example, to access specific files), because the interception should not occur when code interpreter is disabled.
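One possible workaround (my own sketch, not an official fix): declare a custom function tool so the model routes generated code through a `requires_action` tool call instead of emitting a code_interpreter block, then run that code locally yourself. The tool name `run_python` and its schema below are entirely my invention; the execution helper is a minimal, unsandboxed example.

```python
import contextlib
import io

def execute_python(code: str) -> str:
    """Run model-generated Python locally and capture its stdout.
    WARNING: exec() on untrusted model output is dangerous;
    sandbox it (container, restricted interpreter) in real use."""
    buffer = io.StringIO()
    with contextlib.redirect_stdout(buffer):
        exec(code, {"__name__": "__sandbox__"})  # hypothetical local runner
    return buffer.getvalue()

# Hypothetical function-tool schema: if the assistant is created with this
# tool (and without code_interpreter), generated code should arrive as a
# tool call that you answer via runs.submit_tool_outputs with the stdout.
RUN_PYTHON_TOOL = {
    "type": "function",
    "function": {
        "name": "run_python",  # name chosen by me, not by OpenAI
        "description": "Execute Python code on the user's machine and return its stdout.",
        "parameters": {
            "type": "object",
            "properties": {"code": {"type": "string"}},
            "required": ["code"],
        },
    },
}
```

The idea is that the run pauses in `requires_action`, you call `execute_python` on the `code` argument of each tool call, and you send the captured output back so the thread can continue without code_interpreter ever being involved.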

The thread/step API could be very convenient for hiding from the end user that a Python script has been generated. But for now it’s not usable for this, and I’m forced to use Chat Completions, where no middleware intercepts and blocks the execution …
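For reference, this is roughly what the Chat Completions fallback looks like on my side — a minimal sketch, assuming you extract fenced Python blocks from the reply and execute them yourself (the model name and prompt are placeholders, and `exec` on model output needs sandboxing):

```python
import os
import re

def extract_python_blocks(markdown: str) -> list[str]:
    """Pull the contents of ```python fenced blocks out of a reply."""
    return re.findall(r"```python\n(.*?)```", markdown, flags=re.DOTALL)

if os.environ.get("OPENAI_API_KEY"):
    # Only runs when a key is configured; requires the openai>=1.x package.
    from openai import OpenAI

    client = OpenAI()
    reply = client.chat.completions.create(
        model="gpt-4-1106-preview",  # placeholder model name
        messages=[{"role": "user", "content": "Write Python that prints 2+2."}],
    )
    for block in extract_python_blocks(reply.choices[0].message.content):
        exec(block)  # local execution, nothing intercepted server-side
```

Nothing here is blocked by a middleware: the code comes back as plain text, and extraction plus local execution is entirely under your control.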