Did something change with the Responses API for streaming (endpoint https://api.openai.com/v1/responses)? Previously I could use gpt-4o:

```json
{"input":[{"role":"user","content":[{"text":"hi","type":"input_text"}]}],"instructions":"be nice","stream":true,"model":"gpt-4.o"}
```

Now that fails with status code 400.
It works with gpt-4.1:

```json
{"input":[{"role":"user","content":[{"text":"hi","type":"input_text"}]}],"instructions":"be nice","stream":true,"model":"gpt-4.1"}
```
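For reference, here is a minimal sketch of how I issue the streaming request (Python stdlib only; the Bearer-token header and the `data:` SSE line framing are the standard conventions, not something confirmed in this post):

```python
import json
import urllib.request

API_URL = "https://api.openai.com/v1/responses"

def build_request(model: str, text: str, instructions: str) -> dict:
    # Mirrors the payload shape from the examples above.
    return {
        "input": [{"role": "user", "content": [{"text": text, "type": "input_text"}]}],
        "instructions": instructions,
        "stream": True,
        "model": model,
    }

def stream_response(payload: dict, api_key: str):
    # With "stream": true the endpoint returns server-sent events,
    # i.e. "data: {...}" lines on the open connection.
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        for raw in resp:
            line = raw.decode("utf-8").strip()
            if line.startswith("data: "):
                yield line[len("data: "):]

# Example usage (requires a real key; commented out here):
#   for event in stream_response(build_request("gpt-4.1", "hi", "be nice"), os.environ["OPENAI_API_KEY"]):
#       print(event)
```

Swapping the `model` argument between "gpt-4.o" and "gpt-4.1" is the only difference between the failing and working requests.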
It also looks like the streaming behavior changed for image generation. Previously I could pass `"tools":[{"type":"image_generation"}]`, but now that fails with status 400; instead `"tools":[{"type":"image_generation","partial_images":1}]` is required, and I see no way to suppress partial_images.
i.e. this fails:

```json
{"input":[{"role":"user","content":[{"text":"hi","type":"input_text"}]}],"instructions":"be nice","stream":true,"model":"gpt-4.1","tools":[{"type":"image_generation"}]}
```
and this works:

```json
{"input":[{"role":"user","content":[{"text":"hi","type":"input_text"}]}],"instructions":"be nice","stream":true,"model":"gpt-4.1","tools":[{"type":"image_generation","partial_images":1}]}
```
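The workaround I've settled on is sketched below. Note the assumptions: that `partial_images` is now mandatory when streaming with this tool, and that 1 is the smallest accepted value, are both inferred from the failing/working pairs above, not from documentation:

```python
def image_generation_tool(partial_images: int = 1) -> dict:
    # partial_images appears to be required when streaming; 1 is the
    # lowest value I've seen accepted, which effectively minimizes the
    # partial-image events since they can't be suppressed outright.
    return {"type": "image_generation", "partial_images": partial_images}

def with_image_tool(payload: dict, partial_images: int = 1) -> dict:
    # Returns a copy of a Responses API payload with the tool attached.
    return {**payload, "tools": [image_generation_tool(partial_images)]}
```

This keeps the `partial_images` requirement in one place, so if a way to disable partial images ever appears, only the builder needs to change.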