While invoking the code interpreter tool with 4o-mini, I kept hitting the issue that no image is returned, but as soon as I switch to 4o, the image is generated. I’ve scoured the official docs and examples but can’t find any statement that Code Interpreter is not supported on gpt-4o-mini. Can you please point me towards any documentation that specifies which model variants support the Code Interpreter?
Welcome to the community!
Are you talking about ChatGPT or the API? And which official docs have you read? What is the link?
BTW, when you use the code interpreter tool (also called python), it does not create images like DALL-E or 4o Image Generation. Images are generated on ChatGPT via the image_gen tool.
Sorry for not providing enough context.
I’m using the OpenAI REST API (via client.beta.assistants.create etc.), not the ChatGPT web interface.
Docs I have read: API Reference - OpenAI API
Assistants Code Interpreter - OpenAI API
I am trying to generate a pie, bar, or line chart based on the user input.
When I set up my assistant with model="gpt-4o" + tools=[{"type":"code_interpreter"}], I successfully get back image_file attachments (my charts).
If I switch the assistant’s model to “gpt-4o-mini”, the code-interpreter run completes but never returns any file, so I get a “No image found” error.
I couldn’t debug why the image isn’t being generated, as there is no mention in the docs I have read that the code interpreter tool is unsupported on the mini models.
I couldn’t find a more direct statement either, but in the Assistants playground, selecting a model enables code interpreter only if it is allowed. Apparently it does allow it for gpt-4o-mini, but the compatibility is limited, as you have demonstrated.
I tried looking for a system card like the one some models have, but there isn’t a specific one for gpt-4o-mini.
With the deprecation plan for the Assistants API (it’s in the links you have posted), unfortunately I don’t feel confident that this will be improved.
Which is a bit sad; the new models o3 and o4-mini seem to be particularly bright when dealing with image manipulation.
GPT-4o-mini works
Code - Node.js
import 'dotenv/config';
import fs from 'fs';
import OpenAI from 'openai';

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

async function main() {
  const assistant = await openai.beta.assistants.create({
    name: 'Chart Generator',
    instructions: 'Write Python that builds the requested chart with matplotlib.',
    model: 'gpt-4o-mini',
    tools: [{ type: 'code_interpreter' }],
  });

  const thread = await openai.beta.threads.create();
  await openai.beta.threads.messages.create(thread.id, {
    role: 'user',
    content: 'Create a bar chart comparing apples (10), bananas (20) and oranges (15).',
  });

  const run = await openai.beta.threads.runs.create(thread.id, {
    assistant_id: assistant.id,
  });

  // Poll until the run reaches a terminal state; checking only for
  // 'completed' would loop forever on a failed run.
  let status;
  do {
    await new Promise((r) => setTimeout(r, 2000));
    status = await openai.beta.threads.runs.retrieve(thread.id, run.id);
    console.log('⏳', status.status);
  } while (!['completed', 'failed', 'cancelled', 'expired'].includes(status.status));

  // Fetch the thread's messages and download any image_file parts.
  const msgs = await openai.beta.threads.messages.list(thread.id, { limit: 50 });
  for (const m of msgs.data) {
    for (const part of m.content) {
      if (part.type === 'image_file') {
        // files.content() returns a fetch Response; buffer it rather than
        // piping, since Response.body is a web stream, not a Node stream.
        const resp = await openai.files.content(part.image_file.file_id);
        fs.writeFileSync('bar_chart.png', Buffer.from(await resp.arrayBuffer()));
        console.log('✅ bar_chart.png saved');
      }
    }
  }
}

main().catch(console.error);
Almost any model after gpt-4-turbo can be used with Assistants. There are only two exceptions that cannot be used with code interpreter, and gpt-4o-mini is not one of them.
Here are the current chat model names by date of release, along with whether Assistants is supported and whether code interpreter is also supported (as a Python dictionary).
chat_model_for_assistants = {
    "o3-2025-04-16": (False, False),
    "o4-mini-2025-04-16": (False, False),
    "gpt-4.1-2025-04-14": (True, True),
    "gpt-4.1-mini-2025-04-14": (True, True),
    "gpt-4.1-nano-2025-04-14": (True, True),
    "o1-pro-2025-03-19": (False, False),
    "gpt-4o-mini-search-preview-2025-03-11": (False, False),
    "gpt-4o-search-preview-2025-03-11": (False, False),
    "gpt-4.5-preview-2025-02-27": (True, True),
    "o3-mini-2025-01-31": (True, False),
    "gpt-4o-audio-preview-2024-12-17": (False, False),
    "gpt-4o-mini-audio-preview-2024-12-17": (False, False),
    "o1-2024-12-17": (True, False),
    "gpt-4o-2024-11-20": (True, True),
    "gpt-4o-audio-preview-2024-10-01": (False, False),
    "o1-mini-2024-09-12": (False, False),
    "o1-preview-2024-09-12": (False, False),
    "gpt-4o-2024-08-06": (True, True),
    "gpt-4o-mini-2024-07-18": (True, True),  # Your question
    "gpt-4o-2024-05-13": (True, True),
    "gpt-4-turbo-2024-04-09": (True, True),
    "gpt-3.5-turbo-0125": (True, True),
    "gpt-4-0125-preview": (True, True),
    "gpt-3.5-turbo-1106": (True, True),
    "gpt-4-1106-preview": (True, True),
    "gpt-4-0613": (False, False),
    "gpt-4-32k-0613": (False, False),
    "gpt-4-0314": (False, False),
    "gpt-4-32k-0314": (False, False),
}
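If it helps, the dictionary can be filtered programmatically to get just the code-interpreter-capable names. A minimal sketch, using an abbreviated excerpt of the table above:

```python
# Abbreviated excerpt of the (assistants, code_interpreter) support table.
chat_model_for_assistants = {
    "gpt-4.1-2025-04-14": (True, True),
    "o3-mini-2025-01-31": (True, False),
    "o1-2024-12-17": (True, False),
    "gpt-4o-mini-2024-07-18": (True, True),
    "o1-mini-2024-09-12": (False, False),
}

# Keep only models where the second flag (code interpreter) is True.
code_interpreter_models = sorted(
    name
    for name, (_assistants, code_interpreter) in chat_model_for_assistants.items()
    if code_interpreter
)
print(code_interpreter_models)
# → ['gpt-4.1-2025-04-14', 'gpt-4o-mini-2024-07-18']
```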
I just had the model successfully write code to answer what was in the mount point, and the same for all the others in the dictionary just provided.
However, if you look at that list and ask, "which model will I rely on to write Python code for building diagrams?", gpt-4o-mini would be far down the list.
You can see what the model might be doing wrong by downloading your run’s “run steps”, and investigating the object and the code.
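A sketch of what digging through those run steps can look like. The step shape below follows the documented run-step object (step_details.tool_calls entries of type code_interpreter with an input field holding the code); with the Python SDK the steps themselves would come from client.beta.threads.runs.steps.list(thread_id=..., run_id=...), and the sample step here is hypothetical:

```python
def extract_interpreter_code(steps: list[dict]) -> list[str]:
    """Collect the Python that code_interpreter ran in each run step."""
    snippets = []
    for step in steps:
        details = step.get("step_details") or {}
        if details.get("type") != "tool_calls":
            continue
        for call in details.get("tool_calls", []):
            if call.get("type") == "code_interpreter":
                snippets.append(call["code_interpreter"]["input"])
    return snippets

# Hypothetical run step, shaped like the API's run-step object:
sample_steps = [
    {
        "step_details": {
            "type": "tool_calls",
            "tool_calls": [
                {
                    "type": "code_interpreter",
                    "code_interpreter": {
                        "input": "import matplotlib.pyplot as plt",
                        "outputs": [],
                    },
                }
            ],
        }
    }
]
print(extract_interpreter_code(sample_steps))
# → ['import matplotlib.pyplot as plt']
```

If the code is there but no file ever comes back, you will often find matplotlib calls that never save a figure, which explains the missing image_file attachment.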
Have it write simpler files. Put in the system message that the AI must provide a download link to each generated file to the user as a response after using internal python.
However, a better solution is, “sorry, try again” - with a different model.
Here is gpt-4o-mini at work. Producing more information for this reply.