Cannot find images in generated response
I am using the Responses API with the Code Interpreter tool to generate data visualizations from a given input. Once the visualization images are generated, I store them and then immediately display them in a UI.
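For reference, the Responses API call looks roughly like this. The prompt and file IDs below are placeholders, and the container config follows the "auto" pattern from the Code Interpreter docs:
import OpenAI from 'openai'

const client = new OpenAI()

// Rough shape of the call; 'file-XXXX'/'file-YYYY' are placeholder IDs
// for the data files uploaded earlier, not my actual values.
const response = await client.responses.create({
  model: 'gpt-4.1-nano',
  input: 'Generate data visualizations from the attached files.',
  tools: [
    {
      type: 'code_interpreter',
      container: { type: 'auto', file_ids: ['file-XXXX', 'file-YYYY'] }
    }
  ]
})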
Using the Assistants API, this was easy: if content.type === "image_file", the file ID would show up under content.image_file.file_id, and I could then download the file content, get a Buffer from the response body, and convert it to base64:
// 8. Process the messages
const result: DATA_ANALYSIS_RESULT__OPENAI = {
  textContent: '',
  imageFiles: []
}

for (const message of assistantMessages) {
  for (const content of message.content) {
    // Process the text content
    if (content.type === 'text') {
      result.textContent += content.text.value + '\n'
    // Process the image file
    } else if (content.type === 'image_file') {
      // Fetch the actual file bytes using .content()
      const fileId = content.image_file.file_id
      // Download file content as a stream (Node.js, OpenAI SDK v4+)
      const fileResponse: Response = await client.files.content(fileId)
      // fileResponse.body is a ReadableStream; convert it to a Buffer
      const fileBuffer = await readableStreamToBuffer(
        LOG_TYPE,
        FILE_NAME,
        fileResponse.body
      )
      // Push the base64-encoded image
      result.imageFiles.push(fileBuffer.toString('base64'))
    }
  }
}
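For context, readableStreamToBuffer is just a small helper that drains a web ReadableStream into a Node Buffer. A minimal sketch (my real version also takes the logging parameters shown above, elided here):
// Drain a web ReadableStream into a single Node Buffer
async function readableStreamToBuffer(
  stream: ReadableStream<Uint8Array>
): Promise<Buffer> {
  const reader = stream.getReader()
  const chunks: Uint8Array[] = []

  // Read chunk by chunk until the stream is exhausted
  while (true) {
    const { done, value } = await reader.read()
    if (done) break
    chunks.push(value)
  }

  // Concatenate all chunks into one Buffer
  return Buffer.concat(chunks)
}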
The above logic works: I can fetch the generated files and store them in a Cloudflare R2 bucket.
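For completeness, the R2 upload itself goes through the S3-compatible API. A rough sketch of that step (bucket name, key scheme, and environment variable names are placeholders, not my actual config):
import { S3Client, PutObjectCommand } from '@aws-sdk/client-s3'

// Placeholder R2 client config; account ID and credentials come from env vars
const r2 = new S3Client({
  region: 'auto',
  endpoint: `https://${process.env.R2_ACCOUNT_ID}.r2.cloudflarestorage.com`,
  credentials: {
    accessKeyId: process.env.R2_ACCESS_KEY_ID!,
    secretAccessKey: process.env.R2_SECRET_ACCESS_KEY!
  }
})

// Store one base64-encoded PNG under the given object key
async function storeImage(base64: string, key: string): Promise<void> {
  await r2.send(
    new PutObjectCommand({
      Bucket: 'visualizations', // placeholder bucket name
      Key: key,
      Body: Buffer.from(base64, 'base64'),
      ContentType: 'image/png'
    })
  )
}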
The issue is that I am now using the Responses API to do the exact same thing, but I cannot extract the file IDs to fetch the generated files from the response.
I do see the generated images in the Logs dashboard in the browser, but I do not see any file IDs or any images in the response itself. Here’s an example response from my API endpoint, where the OpenAI calls are being made:
[SERVER LOG: --logTimestamp="1749162341914149017" --file-path="src/services/open-ai/index.ts" --function-name="analyzeData()"]: response: {
  id: 'resp_68421965f694819e9e1ea2b2e7b355a103863b8e8734260b',
  object: 'response',
  created_at: 1749162341,
  status: 'completed',
  background: false,
  error: null,
  incomplete_details: null,
  instructions: null,
  max_output_tokens: 2048,
  model: 'gpt-4.1-nano-2025-04-14',
  output: [
    {
      id: 'ci_684219696b00819eaf509d7b095c416503863b8e8734260b',
      type: 'code_interpreter_call',
      status: 'completed',
      code: 'import pandas as pd\n' +
        'import json\n' +
        '\n' +
        '# Load the CSV data\n' +
        'sales_data_path = "/mnt/data/file-XKGft2h8AemB7tZPGDsguk-enriched_sales_data.csv"\n' +
        'sales_data = pd.read_csv(sales_data_path)\n' +
        '\n' +
        '# Output the first few rows for preview\n' +
        'sales_data_preview = sales_data.head()\n' +
        '\n' +
        '# Load the JSON data\n' +
        'employee_data_path = "/mnt/data/file-PSGSehK617fjhVQTw8meZz-employee_performance_data.json"\n' +
        "with open(employee_data_path, 'r') as f:\n" +
        '    employee_performance_data = json.load(f)\n' +
        '\n' +
        '# Output the first few entries to understand its structure\n' +
        'employee_performance_data_preview = employee_performance_data[:5]\n' +
        '\n' +
        'sales_data_preview, employee_performance_data_preview',
      container_id: 'cntr_684219682364819194ad5e9119a36c0e04bcd980a1ae7113',
      outputs: null
    },
    {
      id: 'msg_68421971a9e4819eadf56ac7d5dc06d503863b8e8734260b',
      type: 'message',
      status: 'completed',
      content: [Array],
      role: 'assistant'
    },
    {
      id: 'ci_68421972b764819e887fb80e797da48603863b8e8734260b',
      type: 'code_interpreter_call',
      status: 'completed',
      code: 'import matplotlib.pyplot as plt\n' +
        'import seaborn as sns\n' +
        'import numpy as np\n' +
        'import base64\n' +
        'from io import BytesIO\n' +
        '\n' +
        '# Set style for seaborn\n' +
        'sns.set(style="whitegrid")\n' +
        '\n' +
        '# 1. Sales over time line plot\n' +
        'plt.figure(figsize=(10, 6))\n' +
        "sales_data['Date'] = pd.to_datetime(sales_data['Date'])\n" +
        "monthly_sales = sales_data.resample('M', on='Date')['Sales ($)'].sum()\n" +
        'sns.lineplot(x=monthly_sales.index, y=monthly_sales.values)\n' +
        "plt.title('Total Sales Over Time (Monthly)')\n" +
        "plt.xlabel('Month')\n" +
        "plt.ylabel('Sales ($)')\n" +
        'plt.tight_layout()\n' +
        '\n' +
        '# Save figure\n' +
        'buf = BytesIO()\n' +
        "plt.savefig(buf, format='png')\n" +
        'buf.seek(0)\n' +
        'img_data1 = base64.b64encode(buf.read()).decode()\n' +
        '\n' +
        '# 2. Sales by Region - Bar Chart\n' +
        "region_sales = sales_data.groupby('Region')['Sales ($)'].sum()\n" +
        'plt.figure(figsize=(8, 5))\n' +
        'sns.barplot(x=region_sales.index, y=region_sales.values)\n' +
        "plt.title('Sales by Region')\n" +
        "plt.xlabel('Region')\n" +
        "plt.ylabel('Sales ($)')\n" +
        'plt.tight_layout()\n' +
        '\n' +
        '# Save figure\n' +
        'buf = BytesIO()\n' +
        "plt.savefig(buf, format='png')\n" +
        'buf.seek(0)\n' +
        'img_data2 = base64.b64encode(buf.read()).decode()\n' +
        '\n' +
        '# 3. Employee Overall Ratings - Bar Chart\n' +
        "employee_names = [emp['name'] for emp in employee_performance_data]\n" +
        "overall_ratings = [emp['overallRating'] for emp in employee_performance_data]\n" +
        'plt.figure(figsize=(10, 6))\n' +
        'sns.barplot(x=employee_names, y=overall_ratings)\n' +
        "plt.title('Employee Overall Performance Ratings')\n" +
        "plt.xlabel('Employee')\n" +
        "plt.ylabel('Rating')\n" +
        'plt.xticks(rotation=45)\n' +
        'plt.tight_layout()\n' +
        '\n' +
        '# Save figure\n' +
        'buf = BytesIO()\n' +
        "plt.savefig(buf, format='png')\n" +
        'buf.seek(0)\n' +
        'img_data3 = base64.b64encode(buf.read()).decode()\n' +
        '\n' +
        '# 4. Scores across different categories for a specific employee (e.g., Alice Jennings)\n' +
        "employee = next(emp for emp in employee_performance_data if emp['name'] == 'Alice Jennings')\n" +
        "categories = list(employee['scores'].keys())\n" +
        "scores = list(employee['scores'].values())\n" +
        '\n' +
        'plt.figure(figsize=(8, 5))\n' +
        'sns.barplot(x=categories, y=scores)\n' +
        "plt.title('Performance Scores for Alice Jennings')\n" +
        "plt.xlabel('Category')\n" +
        "plt.ylabel('Score')\n" +
        'plt.ylim(0, 5)\n' +
        'plt.xticks(rotation=45)\n' +
        'plt.tight_layout()\n' +
        '\n' +
        '# Save figure\n' +
        'buf = BytesIO()\n' +
        "plt.savefig(buf, format='png')\n" +
        'buf.seek(0)\n' +
        'img_data4 = base64.b64encode(buf.read()).decode()\n' +
        '\n' +
        '# Return all images as base64 encoded images\n' +
        'outputs = [\n' +
        '    {"type": "image", "url": f"data:image/png;base64,{img_data1}"},\n' +
        '    {"type": "image", "url": f"data:image/png;base64,{img_data2}"},\n' +
        '    {"type": "image", "url": f"data:image/png;base64,{img_data3}"},\n' +
        '    {"type": "image", "url": f"data:image/png;base64,{img_data4}"}\n' +
        ']\n' +
        '\n' +
        'outputs',
      container_id: 'cntr_684219682364819194ad5e9119a36c0e04bcd980a1ae7113',
      outputs: null
    },
    {
      id: 'msg_684219831e90819e92dde11350148c0303863b8e8734260b',
      type: 'message',
      status: 'completed',
      content: [Array],
      role: 'assistant'
    }
  ],
  parallel_tool_calls: true,
  previous_response_id: null,
  reasoning: { effort: null, summary: null },
  service_tier: 'default',
  store: true,
  temperature: 1,
  text: { format: { type: 'text' } },
  tool_choice: 'auto',
  tools: [ { type: 'code_interpreter', container: [Object] } ],
  top_p: 1,
  truncation: 'disabled',
  usage: {
    input_tokens: 37822,
    input_tokens_details: { cached_tokens: 4188 },
    output_tokens: 1030,
    output_tokens_details: { reasoning_tokens: 0 },
    total_tokens: 38852
  },
  user: null,
  metadata: {},
  output_text: 'The data has been successfully loaded and previewed. Here are the key points:\n' +
    '\n' +
    '- The sales data includes metrics such as sales amount, products sold, region, and manager details.\n' +
    '- The employee performance data provides scores across multiple competencies, overall ratings, and feedback for key employees.\n' +
    '\n' +
    'Next, I will generate meaningful visualizations based on this data, such as sales trends, regional performance, and employee ratings.I have generated four visualizations based on the data:\n' +
    '\n' +
    '1. Total sales over time (Monthly) - Line Chart\n' +
    '2. Sales by Region - Bar Chart\n' +
    '3. Employee Overall Performance Ratings - Bar Chart\n' +
    '4. Performance scores for Alice Jennings across different categories - Bar Chart\n' +
    '\n' +
    'You can view and analyze these key insights into your sales trends, regional performance, and employee evaluation highlights through these charts. If you need further analysis or specific custom visualizations, please let me know!'
}
Are the images being saved in output[i].content when output[i].type === "message"?
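One thing to note: Node's console.log truncates nested structures, which is why content shows up as [Array] in the dump above. To rule out images hiding inside those truncated arrays, the full structure can be dumped like this:
// Expand nested arrays/objects that console.log truncates to [Array]/[Object]
console.dir(response, { depth: null })

// Or print the output array as plain JSON
console.log(JSON.stringify(response.output, null, 2))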
Here’s my new logic that attempts to extract the text content and images from the response:
type ExtractionResult = {
  textContent: string
  imageFiles: string[]
}

/**
 * @dev Recursively extracts text content and images from any OpenAI
 * response structure.
 */
function extractOpenAIResponse(
  obj: any,
  acc: ExtractionResult = { textContent: '', imageFiles: [] }
): ExtractionResult {
  if (Array.isArray(obj)) {
    for (const item of obj) {
      extractOpenAIResponse(item, acc)
    }
  } else if (obj && typeof obj === 'object') {
    // Extract text from message/content types
    if (
      (obj.type === 'text' || obj.type === 'output_text') &&
      typeof obj.text === 'string'
    ) {
      acc.textContent += `${obj.text}\n`
    }
    // Extract images from code interpreter outputs
    if (obj.type === 'image' && typeof obj.url === 'string') {
      acc.imageFiles.push(obj.url)
    }
    // `output_text` also appears as an aggregate at the top level of the
    // response. Note: collecting both this and the per-message text items
    // can duplicate text.
    if (typeof obj.output_text === 'string') {
      acc.textContent += `${obj.output_text}\n`
    }
    // Recursively search all properties
    for (const key of Object.keys(obj)) {
      extractOpenAIResponse(obj[key], acc)
    }
  }
  return acc
}
// Example usage after your response:
const result = extractOpenAIResponse(response)
console.log(
  `${getConsoleMetadata(LOG_TYPE, true, 'analyzeData()')} result: `,
  result
)
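As a sanity check, the extractor does pick up the shapes it was written for when run against a hand-made mock. The mock shapes below are my guesses, which is exactly what I am unsure about for the real response:
// Hypothetical mock shaped like the logged response above, used only to
// exercise the extractor; the real content items may look different.
const mockResponse = {
  output: [
    {
      type: 'message',
      content: [{ type: 'output_text', text: 'Preview of the data...' }]
    },
    {
      type: 'code_interpreter_call',
      outputs: [{ type: 'image', url: 'data:image/png;base64,iVBORw0...' }]
    }
  ]
}

console.log(extractOpenAIResponse(mockResponse))
// -> {
//      textContent: 'Preview of the data...\n',
//      imageFiles: [ 'data:image/png;base64,iVBORw0...' ]
//    }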
TL;DR
When using the Responses API, I am not sure where in the returned response to look for the generated images.
Any help would be greatly appreciated!