Responses API: streaming bugs?

Did something change with the responses API for streaming (endpoint https://api.openai.com/v1/responses)? Previously I could use 4-o:

{"input":[{"role":"user","content":[{"text":"hi","type":"input_text"}]}],"instructions":"be nice","stream":true,"model":"gpt-4.o"}

now that fails with status code 400.

It works with gpt-4.1:

{"input":[{"role":"user","content":[{"text":"hi","type":"input_text"}]}],"instructions":"be nice","stream":true,"model":"gpt-4.1"}

It also looks like streaming behavior changed for image generation. Previously I could pass "tools":[{"type": "image_generation"}] - but now that fails with status 400, and instead this is required: "tools":[{"type": "image_generation", "partial_images": 1}], and I see no way to suppress partial_images.

i.e. this fails:

{"input":[{"role":"user","content":[{"text":"hi","type":"input_text"}]}],"instructions":"be nice","stream":true,"model":"gpt-4.1","tools":[{"type": "image_generation"}]}

and this works:

{"input":[{"role":"user","content":[{"text":"hi","type":"input_text"}]}],"instructions":"be nice","stream":true,"model":"gpt-4.1","tools":[{"type": "image_generation", "partial_images": 1}]}
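The only difference between the failing and working requests is the partial_images field on the tool. Here is a minimal sketch of building the payload so that the field is added only when streaming (image_tool is a helper name of my own, and 1 is assumed to be an accepted value):

```python
import json

def image_tool(stream: bool, partial_images: int = 1) -> dict:
    """Build the image_generation tool spec; streaming rejects the bare tool with a 400."""
    tool = {"type": "image_generation"}
    if stream:
        tool["partial_images"] = partial_images
    return tool

payload = {
    "model": "gpt-4.1",
    "instructions": "be nice",
    "input": [{"role": "user", "content": [{"type": "input_text", "text": "hi"}]}],
    "stream": True,
    "tools": [image_tool(stream=True)],
}
print(json.dumps(payload["tools"]))  # [{"type": "image_generation", "partial_images": 1}]
```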

It’s not gpt-4.o, but gpt-4o


A bad model name does raise a 400 "bad request" error - and the messages show that the validation precedence is odd: the tools check runs before the model check.

{
  "error": {
    "message": "Hosted tool 'image_generation' is not supported with gpt-4.o.",
    "type": "invalid_request_error",
    "param": "tools",
    "code": null
  }
}

no tool:

{
  "error": {
    "message": "The requested model 'gpt-4.o' does not exist.",
    "type": "invalid_request_error",
    "param": "model",
    "code": "model_not_found"
  }
}


The API will gladly echo back whatever model name you send, giving the false impression that your bad model exists, as in the error produced here:

{
  "error": {
    "message": "Hosted tool 'image_generation' is not supported with code as stupid as yours!!.",
    "type": "invalid_request_error",
    "param": "tools",
    "code": null
  }
}
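If you are catching these 400s in code, the param and code fields are more reliable than the echoed message. A small sketch (classify_400 is my own helper, not an official API; the bodies are the ones quoted above):

```python
import json

# The two raw 400 bodies quoted above, as returned by the API.
tool_error = b'{"error": {"message": "Hosted tool \'image_generation\' is not supported with gpt-4.o.", "type": "invalid_request_error", "param": "tools", "code": null}}'
model_error = b'{"error": {"message": "The requested model \'gpt-4.o\' does not exist.", "type": "invalid_request_error", "param": "model", "code": "model_not_found"}}'

def classify_400(body: bytes) -> str:
    """Distinguish a bad model name from a bad tool spec by the error fields."""
    err = json.loads(body)["error"]
    if err.get("code") == "model_not_found" or err.get("param") == "model":
        return "bad model"
    return f"bad {err.get('param', 'request')}"

print(classify_400(tool_error))   # bad tools
print(classify_400(model_error))  # bad model
```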

Ah, thanks - that was a typo in the model name. gpt-4o works much better, but it still looks like there has been a change to streaming across all models:

This breaks:
"tools":[{"type": "image_generation"}]

This works:
"tools":[{"type": "image_generation", "partial_images": 1}]

And I still don’t see how to omit partial_images.

This works with no partial_images parameter (I commented out the value 0 that was there):

{
  "model": "gpt-4.1",
  "instructions": "You are Bob",
  "input": [
    {
      "role": "user",
      "content": [
        {
          "type": "input_text",
          "text": "Just say 'Hello'."
        }
      ]
    }
  ],
  "tools": [
    {
      "type": "image_generation",
      "background": "opaque",
      "input_image_mask": null,
      "model": "gpt-image-1",
      "moderation": "low",
      "output_compression": 100,
      "output_format": "png",
      "quality": "low",
      "size": "1024x1024"
    }
  ],
  "text": {
    "format": {
      "type": "text"
    }
  },
  "temperature": 1,
  "top_p": 0.05,
  "stream": false,
  "max_output_tokens": 2000,
  "store": false
}

as well as with just the tool name, letting them bill whatever quality and size "suits the bill":

{
  "model": "gpt-4.1",
  "instructions": "You are Bob",
  "input": [
    {
      "role": "user",
      "content": [
        {
          "type": "input_text",
          "text": "Just say 'Hello'."
        }
      ]
    }
  ],
  "tools": [
    {
      "type": "image_generation"
    }
  ],
  "text": {
    "format": {
      "type": "text"
    }
  },
  "temperature": 1,
  "top_p": 0.05,
  "stream": false,
  "max_output_tokens": 2000,
  "store": false
}

['Hello.']
{
  "input_tokens": 274,
  "input_tokens_details": {
    "cached_tokens": 0
  },
  "output_tokens": 4,
  "output_tokens_details": {
    "reasoning_tokens": 0
  },
  "total_tokens": 278
}

That output was obtained by adding the tool and making the call this way:

payload["tools"] += [{"type": "image_generation"}]
try:
    response = httpx.post(
        "https://api.openai.com/v1/responses",
        json=payload, ...

If you are building JSON strings yourself, make sure you are not leaving any trailing commas.
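Hand-assembled JSON is where trailing commas sneak in; letting json.dumps build the body (which is what httpx's json= keyword does under the hood) rules them out entirely. A quick sketch:

```python
import json

payload = {
    "model": "gpt-4.1",
    "input": [{"role": "user", "content": [{"type": "input_text", "text": "Just say 'Hello'."}]}],
    "tools": [{"type": "image_generation"}],
    "stream": False,
}

body = json.dumps(payload)
# A serializer never emits a trailing comma, unlike a hand-built string:
assert ",}" not in body and ",]" not in body
# and the round trip confirms the body is valid JSON:
assert json.loads(body) == payload
```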


But indeed: add streaming, and you get:

"message": "Streaming must have non-zero partial images."

In other words, Responses will cost you another $0.004-$0.012 on top of chatting with an AI, compared to just using the images generate endpoint: $0.011 for the cheapest becomes $0.015.
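Using the per-image prices quoted above (taken from this post, not verified against current OpenAI pricing), the overhead works out as:

```python
# Prices from the post above; not verified against current pricing.
cheapest_image = 0.011      # cheapest gpt-image-1 call, per the post
partial_image_cost = 0.004  # cheapest mandatory partial image, per the post

total = cheapest_image + partial_image_cost
print(f"${total:.3f}")  # $0.015
```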

I was working on this over the weekend. Below is my working code for using the streaming API with gpt-image-1.

#!/usr/bin/env python3
"""
Minimal demo of using GPT-4.1 with GPT-image-1 for image generation with streaming.
Shows how to handle partial updates and final image generation.
"""

import os
from openai import OpenAI
import base64
from PIL import Image
from io import BytesIO
from datetime import datetime
from dotenv import load_dotenv

# Load environment variables
load_dotenv()

# Initialize OpenAI client
client = OpenAI(api_key=os.getenv('OPENAI_API_KEY'))

try:
    # Load input image (replace with your image path)
    input_image_path = "path/to/your/product.jpg"
    with Image.open(input_image_path) as img:
        # Convert to RGB if needed
        if img.mode != 'RGB':
            img = img.convert('RGB')
        
        # Prepare image for API
        buffer = BytesIO()
        img.save(buffer, format='PNG')
        buffer.seek(0)
        img_base64 = base64.b64encode(buffer.getvalue()).decode('utf-8')

    # Simple prompt for demo
    prompt = """Create a professional product photo with:
    1. Clean, minimal background
    2. Product centered and well-lit
    3. Subtle shadows for depth
    Keep it simple and modern."""

    print("Starting image generation stream...")
    
    # Call the responses API with streaming
    stream = client.responses.create(
        model="gpt-4.1",  
        input=[{
            "role": "user",
            "content": [
                {"type": "input_text", "text": prompt},
                {"type": "input_image", "image_url": f"data:image/png;base64,{img_base64}"}
            ]
        }],
        tools=[{
            "type": "image_generation",
            "partial_images": 3,  # Get 3 partial updates during generation
            "model": "gpt-image-1",
            "output_format": "jpeg",
            "quality": "medium",
            "size": "1024x1024"
        }],
        stream=True
    )

    # Create output directory
    output_dir = "generated_images"
    os.makedirs(output_dir, exist_ok=True)

    # Process the stream
    for event in stream:
        if event.type == "response.image_generation_call.partial_image":
            try:
                # Handle partial image updates
                if hasattr(event, 'partial_image_b64'):
                    partial_image = event.partial_image_b64
                    if partial_image:
                        # Clean base64 data
                        if partial_image.startswith('data:'):
                            partial_image = partial_image.split(',')[1]
                        
                        # Save partial image
                        timestamp = datetime.now().strftime('%Y%m%d_%H%M%S_%f')
                        filepath = os.path.join(output_dir, f"partial_{event.partial_image_index}_{timestamp}.jpg")
                        
                        with open(filepath, 'wb') as f:
                            f.write(base64.b64decode(partial_image))
                        print(f"Saved partial image: {filepath}")
                
            except Exception as e:
                print(f"Error processing partial image: {str(e)}")
                raise
        
        elif event.type == "response.output_item.done":
            try:
                # Handle final image
                if type(event.item).__name__ == "ImageGenerationCall" and hasattr(event.item, 'result'):
                    # Save the final image
                    timestamp = datetime.now().strftime('%Y%m%d_%H%M%S_%f')
                    filepath = os.path.join(output_dir, f"final_{timestamp}.jpg")
                    
                    with open(filepath, 'wb') as f:
                        f.write(base64.b64decode(event.item.result))
                    print(f"✨ Final image saved: {filepath}")
                    
            except Exception as e:
                print(f"Error saving final image: {str(e)}")
                raise

except Exception as e:
    print(f"Error in image generation process: {str(e)}")
    raise

print("Image generation complete!")