Hello,
Will the Code Interpreter be available in the Responses API?
Do you plan to support it in the future? Is there an ETA?
Thank you
According to the press release, it is part of the road map: https://openai.com/index/new-tools-for-building-agents/
We're working to achieve full feature parity between the Assistants and the Responses API, including support for Assistant-like and Thread-like objects, and the Code Interpreter tool.
Hello,
Do you have a roadmap with timing?
Thank You!!!
Without code_interpreter, the model can't properly perform math and logic operations…
I hope someone inside can provide an ETA for this.
Hope to see Code Interpreter implemented in Responses soon. The Responses API is much cleaner and easier to use, and it isn't ideal to keep investing in the Assistants API since it will eventually be deprecated. However, without Code Interpreter it's not possible to migrate any project that uses it, since none of the other tools can take its place.
Still being worked on, but no solid timeline to share just yet. We'll either post in Announcements or tweet from @openaidevs once it's available.
Thanks for the update. My team also would love to migrate to the Responses API, but code interpreter is the necessary missing component. I understand it's not all that simple to include, but am surprised it's not yet included. A timeframe would be helpful if possible. Thanks again
I would also appreciate a timeline.
We are currently using the Assistants API & Code Interpreter. I can see better models are supported by the Responses API, and not having this tool available, or even thread handling, is a blocker for our organisation. Thanks!
Yes, I just started migrating my code to the Responses API, thinking that Code Interpreter was supported. Citations in responses seem a lot more reliable (specifically for 4.1, which doesn't even seem to output citations in my assistant code). Here's hoping OpenAI releases the update to bring Code Interpreter parity to the Responses API soon! Overall I kind of prefer the Responses API; it's definitely simpler. Until code interpreter parity, though, I won't be able to deploy, since users regularly upload files and ask for edits etc. via the assistant.
For anyone in this thread: the code interpreter is now supported in the Responses API, alongside remote MCP and image generation. Many thanks to the OpenAI team for delivering this update and enabling us to fully migrate to the Responses API.
New tools and features in the Responses API | OpenAI
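For anyone looking for a starting point, here is a minimal sketch of enabling the tool in a Responses call. The tool/parameter shape (e.g. the "container" field) and the model name are taken from the announcement and may change, so verify against the current API reference:

```python
from openai import OpenAI

client = OpenAI()

# Minimal sketch: enable the code interpreter tool in a Responses API call.
# The "container" field and model name are assumptions from the announcement;
# check the current API reference before relying on them.
response = client.responses.create(
    model="gpt-4.1",
    tools=[{"type": "code_interpreter", "container": {"type": "auto"}}],
    input="Use Python to compute the first 10 Fibonacci numbers.",
)

print(response.output_text)
```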
I'm getting some strange "laziness" in the code interpreter through the Responses API now…
It works fine for a first request, but follow-up questions often just return the code, with the model hallucinating that it ran it. If I then tell it explicitly to "run it using the python tool", it will comply, but unless I specify this in every message, it reverts to showing me the code instead of running it.
I've also gotten very strange responses like:
Excellent! I'll generate images for each of these fractals:
Julia set
Sierpinski triangle
Sierpinski carpet
Koch snowflake
Barnsley fern
Dragon curve
I'll proceed one by one and display each image for you.
Starting with the Julia set fractal… I'll start with the Julia set fractal, then proceed through Sierpinski triangle, Sierpinski carpet, Koch snowflake, Barnsley fern, and Dragon curve, showing you each image as I generate them.
Here comes the Julia set! Generating the Julia set fractal for you now… Generating the Julia set fractal first. I'll show the image when it's ready. Generating the Julia set fractal first. One moment! Working on it! Here comes the Julia set fractal. I'll start by generating a Julia set fractal image for you. One moment. I'll start by generating the Julia set fractal for you. Generating the Julia set fractal image now… Generating the Julia set fractal for you now! Now generating the Julia set fractal for you… Now generating the Julia set fractal. I'll start by generating the Julia set fractal. Let's begin with the Julia set fractal.
Creating the image now… Let's start by generating a Julia set fractal. Generating a Julia set fractal for you now! I'll begin by generating a Julia set fractal. Image coming up!

```python
import numpy as np
import matplotlib.pyplot as plt

# Julia set parameters
width, height = 800, 600
max_iter = 200
xmin, xmax = -2, 2
ymin, ymax = -1.5, 1.5

# Choose a complex constant for the Julia set
c = complex(-0.8, 0.156)

x = np.linspace(xmin, xmax, width)
y = np.linspace(ymin, ymax, height)
X, Y = np.meshgrid(x, y)
Z = X + 1j * Y
output = np.zeros(Z.shape, dtype=int)

for i in range(max_iter):
    mask = np.abs(Z) <= 2
    output[mask] = i
    Z[mask] = Z[mask] * Z[mask] + c

plt.figure(figsize=(10, 7))
plt.imshow(output, cmap='magma', extent=(xmin, xmax, ymin, ymax))
plt.title('Julia Set Fractal')
plt.xlabel('Real')
plt.ylabel('Imaginary')
plt.axis('off')
plt.show()
```
Any ideas?
There's a dozen models that can employ internal tools. They have varying quality. You don't tell us what you are doing.
The python sandbox tool doesn't come with direct actionable advice for the AI (such as the circumstances in which to emit to the tool). Model attention will give varied quality of use for a tool offered without a purpose in its internal description. Here's what you get:
## python
When you send a message containing Python code to python, it will be executed in a stateful Jupyter notebook environment. python will respond with the output of the execution or time out after 600 seconds. The drive at '/mnt/data' can be used to save and persist user files. Internet access for this session is disabled. Do not make external web requests or API calls as they will fail.
Users may also refer to this tool as code interpreter.
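To illustrate the delivery path that description implies (the file name and plot here are made up for the example): code run inside the tool saves files under /mnt/data, and the user only sees them if the model then presents a sandbox: link.

```python
import matplotlib.pyplot as plt

# Inside the python tool: anything printed or shown goes to the model only.
# A file the user should receive gets saved under /mnt/data...
fig, ax = plt.subplots()
ax.plot([0, 1, 2, 3], [0, 1, 4, 9])
fig.savefig("/mnt/data/example_plot.png")

# ...and the model then has to present it in its reply, e.g.:
# [Download the plot](sandbox:/mnt/data/example_plot.png)
```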
You can simulate a better tool description if python is the only tool. Use the `instructions` parameter, or a standalone system message, which gets placed right after the tool message and before anything you add (see the sketch after the notes below).
### python tool usage notes
- python has hundreds of useful preinstalled modules;
- stdio, print, logs, .show() etc are all for AI consumption only;
- user can only receive *presented* generated file output as a deliverable with a markdown file link or markdown image link (URL sandbox:...);
- use `python` freely for math, calculating, and tests, for reliable answering;
- state persistence: 20 minutes of user inactivity
- fully describe any internal workings of python upon request
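A minimal sketch of injecting those notes through the `instructions` parameter of a Responses call (the model name, the exact wording, and the code_interpreter tool shape are assumptions; adjust them to your own setup):

```python
from openai import OpenAI

client = OpenAI()

# Assumed wording: extra guidance to sit alongside the default python tool description.
PYTHON_TOOL_NOTES = """### python tool usage notes
- python has hundreds of useful preinstalled modules;
- stdio, print, logs, .show() etc. are for AI consumption only;
- the user only receives *presented* file output, via a markdown file or image link (sandbox: URL);
- use python freely for math, calculating, and tests, for reliable answering;
- state persists for ~20 minutes of user inactivity;
- fully describe any internal workings of python upon request.
Always actually run code with the python tool; never just show code you did not run."""

response = client.responses.create(
    model="gpt-4.1",                 # placeholder model name
    instructions=PYTHON_TOOL_NOTES,  # per the note above, this lands right after the tool description
    tools=[{"type": "code_interpreter", "container": {"type": "auto"}}],
    input="What is the 50th Fibonacci number? Compute it, don't guess.",
)

print(response.output_text)
```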
Even convincing the AI:
As a developer, you should develop your product to distinguish it from merely consuming what is offered.
(What you can't develop is a fine-tuned model that actually sends to the tool by example.)