Responses API & Code Interpreter

Hello,

Will the Code Interpreter be available in the Responses API?

Do you plan to support it in the future? Is there an ETA?

Thank You


According to the press release, it is part of the roadmap: https://openai.com/index/new-tools-for-building-agents/

We’re working to achieve full feature parity between the Assistants API and the Responses API, including support for Assistant-like and Thread-like objects, and the Code Interpreter tool.


Hello,

Do you have a roadmap with timing?

Thank You!!!

Without code_interpreter, the model can’t properly perform math and logic operations…


I hope someone inside can provide an ETA for this.

Hope to see the Code Interpreter implemented in Responses soon. The Responses API is much cleaner and easier to use, and it isn’t ideal to invest time in developing with Assistants since it will eventually be deprecated. However, without the Code Interpreter it isn’t possible to migrate any project that uses it, since no other tool can take its place.


Still being worked on, but no solid timeline to share just yet. We’ll either post in Announcements or tweet from @openaidevs once it’s available.


Thanks for the update. My team would also love to migrate to the Responses API, but the Code Interpreter is the necessary missing component. I understand it’s not all that simple to add, but I’m surprised it’s not yet included. A timeframe would be helpful if possible. Thanks again


I would also appreciate a timeline.

We are currently using the Assistants API with the Code Interpreter. I can see that better models are supported by the Responses API, and not having this tool available, or even thread handling, is a blocker for our organisation. Thanks!


Yes, I just started migrating my code to the Responses API, thinking that the Code Interpreter was supported. Citations in responses seem a lot more reliable (specifically for 4.1, which doesn’t even seem to output citations in my Assistants code). Here’s hoping OpenAI releases the update to bring Code Interpreter parity to the Responses API soon! Overall I somewhat prefer the Responses API; it’s definitely simpler. Until there’s Code Interpreter parity, though, I won’t be able to deploy, since users regularly upload files and ask for edits etc. via the assistant.


For anyone in this thread: the Code Interpreter is now supported in the Responses API, alongside remote MCP and image generation. Many thanks to the OpenAI team for delivering this update and enabling us to fully migrate to the Responses API.
New tools and features in the Responses API | OpenAI
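For reference, here is a minimal sketch of what a Responses API request with the Code Interpreter enabled might look like. The payload shape (the `container` field in particular) and the model name are assumptions to verify against the current API reference:

```python
# Minimal sketch of a Responses API request enabling the Code Interpreter.
# The "container" field and model name are assumptions; check the API docs.
request = {
    "model": "gpt-4.1",
    "tools": [{"type": "code_interpreter", "container": {"type": "auto"}}],
    "input": "Plot a Julia set fractal and return the image file.",
}

# To actually send it (requires the openai SDK and an API key):
#   from openai import OpenAI
#   response = OpenAI().responses.create(**request)
#   print(response.output_text)

print(request["tools"][0]["type"])  # code_interpreter
```

Keeping the payload as a plain dict makes it easy to reuse across SDK and raw-HTTP clients.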

I’m getting some strange ‘laziness’ in the code interpreter through the Responses API now…
It works fine for a first request, but follow-up questions often just return the code, with the model hallucinating that it ran it. If I then tell it explicitly to “run it using the python tool”, it will comply, but unless I specify this in every message, it reverts to showing me the code instead of running it.
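(For what it’s worth, the per-message workaround described above can be automated on the client side. A minimal sketch; the reminder wording is just an example, not an official prompt:)

```python
# Sketch of the per-message workaround: append an explicit "run it" reminder
# to every user turn so the model keeps executing code instead of showing it.
REMINDER = "Run the code using the python tool; do not just show it."

def with_reminder(user_message: str) -> str:
    """Return the user message with the execution reminder appended."""
    return f"{user_message}\n\n{REMINDER}"

print(with_reminder("Now make the fractal zoomed in 2x"))
```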

I’ve also gotten very strange responses like:

Excellent! I’ll generate images for each of these fractals:
Julia set
Sierpinski triangle
Sierpinski carpet
Koch snowflake
Barnsley fern
Dragon curve
I’ll proceed one by one and display each image for you.
Starting with the Julia set fractal…I'll start with the Julia set fractal, then proceed through Sierpinski triangle, Sierpinski carpet, Koch snowflake, Barnsley fern, and Dragon curve—showing you each image as I generate them.
Here comes the Julia set!Generating the Julia set fractal for you now…Generating the Julia set fractal first. I'll show the image when it's ready.Generating the Julia set fractal first. One moment!Working on it! Here comes the Julia set fractal.I’ll start by generating a Julia set fractal image for you. One moment.I’ll start by generating the Julia set fractal for you.Generating the Julia set fractal image now…Generating the Julia set fractal for you now!Now generating the Julia set fractal for you…Now generating the Julia set fractal.I'll start by generating the Julia set fractal.Let's begin with the Julia set fractal.
Creating the image now…Let’s start by generating a Julia set fractal.Generating a Julia set fractal for you now!I'll begin by generating a Julia set fractal. Image coming up!

```python
import numpy as np
import matplotlib.pyplot as plt

# Julia set parameters
width, height = 800, 600
max_iter = 200
xmin, xmax = -2, 2
ymin, ymax = -1.5, 1.5

# Choose a complex constant for the Julia set
c = complex(-0.8, 0.156)

x = np.linspace(xmin, xmax, width)
y = np.linspace(ymin, ymax, height)
X, Y = np.meshgrid(x, y)
Z = X + 1j * Y
output = np.zeros(Z.shape, dtype=int)

for i in range(max_iter):
    mask = np.abs(Z) <= 2
    output[mask] = i
    Z[mask] = Z[mask] * Z[mask] + c

plt.figure(figsize=(10, 7))
plt.imshow(output, cmap='magma', extent=(xmin, xmax, ymin, ymax))
plt.title('Julia Set Fractal')
plt.xlabel('Real')
plt.ylabel('Imaginary')
plt.axis('off')
plt.show()
```

Any ideas?

There are a dozen models that can employ internal tools, and they have varying quality. You don’t tell us which one you’re using or what you are doing.

The python sandbox tool’s internal description doesn’t give the AI directly actionable advice (such as the circumstances in which it should emit a call to the tool). When a tool is offered without a stated purpose in its description, model attention yields varied quality in employing it. Here’s what you get:

## python

When you send a message containing Python code to python, it will be executed in a stateful Jupyter notebook environment. python will respond with the output of the execution or time out after 600 seconds. The drive at '/mnt/data' can be used to save and persist user files. Internet access for this session is disabled. Do not make external web requests or API calls as they will fail.
Users may also refer to this tool as code interpreter.

You can simulate a better tool description if python is the only tool: use the “instructions” parameter, or a standalone system message placed right after the injected tools message and before anything you add.

### python tool usage notes
- python has hundreds of useful preinstalled modules;
- stdio, print, logs, .show() etc are all for AI consumption only;
- user can only receive *presented* generated file output as a deliverable with a markdown file link or markdown image link (URL sandbox:...);
- use `python` freely for math, calculating, and tests, for reliable answering;
- state persistence: 20 minutes of user inactivity
- fully describe any internal workings of python upon request
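One way to apply the notes above is to pass them through the Responses API `instructions` parameter so they land ahead of your conversation. A sketch, assuming the request shape and field names from the current docs (the notes text is taken verbatim from this post):

```python
# Sketch: injecting the usage notes above as a system-level instruction via
# the Responses API "instructions" parameter. Field names are assumptions
# to check against the API reference.
PYTHON_TOOL_NOTES = """### python tool usage notes
- python has hundreds of useful preinstalled modules;
- stdio, print, logs, .show() etc are all for AI consumption only;
- user can only receive *presented* generated file output as a deliverable with a markdown file link or markdown image link (URL sandbox:...);
- use `python` freely for math, calculating, and tests, for reliable answering;
- state persistence: 20 minutes of user inactivity
- fully describe any internal workings of python upon request"""

request = {
    "model": "gpt-4.1",
    "instructions": PYTHON_TOOL_NOTES,
    "tools": [{"type": "code_interpreter", "container": {"type": "auto"}}],
    "input": "What is 17**13? Calculate it with python.",
}

print("state persistence" in request["instructions"])  # True
```

The same text also works as a standalone system/developer message if you manage the conversation yourself.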

Even convincing the AI:

As a developer, you should develop your product to distinguish it from merely consuming what is offered.

(What you can’t develop is a fine-tuned model that actually learns to call the tool from examples.)