How does OpenAI support so many users running Code Interpreter at the same time?

I am curious about the technology behind Code Interpreter, since each user's data needs to be isolated. Is each user bound to their own Docker container running Jupyter? That would seem very expensive, and it raises questions about the release/teardown mechanism and other optimization strategies.

We don’t know how they do it.

They have tons of CPU compute on GPU inference servers, so operational cost might approach free.

Here’s an idea, though.
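As a rough illustration of the per-session isolation the question speculates about, here is a stdlib-only Python sketch: each user gets their own scratch directory and a fresh interpreter process with a timeout. This is only a toy stand-in under assumed names (`run_in_session`, `user_id`); a production service would presumably use containers or microVMs per session, not bare subprocesses.

```python
import subprocess
import sys
import tempfile
from pathlib import Path

def run_in_session(user_id: str, code: str, timeout: int = 10) -> str:
    """Run `code` in a fresh interpreter inside a per-user scratch dir.

    Toy isolation sketch only: real sandboxing would add containers/VMs,
    resource limits, and a network policy on top of this.
    """
    workdir = Path(tempfile.gettempdir()) / f"session-{user_id}"
    workdir.mkdir(exist_ok=True)
    result = subprocess.run(
        [sys.executable, "-c", code],
        cwd=workdir,          # user-created files stay in their own dir
        capture_output=True,  # don't leak output between sessions
        text=True,
        timeout=timeout,      # kill runaway user code
    )
    return result.stdout

print(run_in_session("alice", "print(2 + 2)"))
```

The key cost question from the thread still applies: a warm pool of reusable sandboxes, plus aggressive idle teardown, is the usual way to keep per-user overhead down.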


Thanks for the information. From what I've found, it may work with JupyterHub, or they may have developed something independently.