**Base Model:** `gpt-4o-2024-08-06` · **Endpoint:** `/v1/chat/completions` · **SDK:** `openai==2.24.0` (also reproduced on `0.27.7`)
## The Problem
Our fine-tuned vision model (based on gpt-4o-2024-08-06) has started intermittently rejecting image_url inputs on /v1/chat/completions with:
```
400 — "Invalid content type. image_url is only supported by certain models."
Param: messages.[1].content.[1].type
```
This was working before. No code changes on our end. The same request payload succeeds one minute and fails the next. This is impacting our production system.
## Proof: Same Model, Same Payload, Different Results
| | Failing Request | Successful Request |
|---|---|---|
| Timestamp | 2026-03-04T15:05:32Z | 2026-03-04T14:48:07Z |
| Processing Time | 22ms (never reached the model) | 3,911ms (full inference) |
| Status | 400 Bad Request | 200 OK |
| Model | Same fine-tuned model ID | Same fine-tuned model ID |
| Payload | Identical | Identical |
| Tokens | N/A (rejected) | 5,180 prompt / 81 completion |
Identical payload. 17 minutes apart. The 22ms rejection on the failing request confirms it was blocked at the validation/routing layer before ever reaching the model — this is not a model-level issue.
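This latency gap is how we separate the two failure modes in our logs. A minimal sketch of that heuristic; the 100 ms cutoff and the function name are our own choices, not anything documented by OpenAI:

```python
def classify_failure(status_code: int, latency_ms: float) -> str:
    """Heuristically classify a chat.completions call from our logs.

    A 400 returned in tens of milliseconds cannot have reached the
    model, so we attribute it to the validation/routing layer; slower
    errors are treated as model-level. 100 ms is an assumed cutoff.
    """
    if status_code == 400 and latency_ms < 100:
        return "routing/validation rejection"
    if 200 <= status_code < 300:
        return "success"
    return "model-level or other error"

# The two requests from the table above:
print(classify_failure(400, 22))    # the failing request
print(classify_failure(200, 3911))  # the successful request
```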
## Request Structure (redacted)
```json
{
  "model": "<fine-tuned-gpt-4o-2024-08-06>",
  "messages": [
    {"role": "system", "content": "<instructions>"},
    {"role": "user", "content": [
      {"type": "text", "text": "<redacted>"},
      {"type": "image_url", "image_url": {"url": "data:image/jpeg;base64,..."}}
    ]}
  ],
  "max_tokens": 400,
  "temperature": 0.0,
  "response_format": {"type": "json_schema", "json_schema": {"name": "...", "strict": true, "schema": {"...": "..."}}}
}
```
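For completeness, this is how we assemble that payload in code. A minimal sketch: `build_messages` is our helper (not an SDK function), and the model ID and schema are placeholders:

```python
import base64

def build_messages(instructions: str, prompt: str, image_bytes: bytes) -> list:
    """Assemble the multimodal message list shown above."""
    b64 = base64.b64encode(image_bytes).decode("ascii")
    return [
        {"role": "system", "content": instructions},
        {"role": "user", "content": [
            {"type": "text", "text": prompt},
            {"type": "image_url",
             "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
        ]},
    ]

# Sending it (requires OPENAI_API_KEY; not executed here):
# from openai import OpenAI
# client = OpenAI()
# resp = client.chat.completions.create(
#     model="<fine-tuned-gpt-4o-2024-08-06>",
#     messages=build_messages("<instructions>", "<redacted>", image_bytes),
#     max_tokens=400,
#     temperature=0.0,
#     response_format={"type": "json_schema", "json_schema": {...}},
# )
```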
This format matches OpenAI’s documentation for vision fine-tuned models.
## What We’ve Ruled Out
- No code or configuration changes on our end
- SDK upgraded from `0.27.7` to `2.24.0`; the same error persists
- Always passing the exact fine-tuned model ID, never an alias
- No model switching during retries or fallbacks
- The base model `gpt-4o-2024-08-06` officially supports vision fine-tuning per OpenAI docs
- OpenAI support confirmed vision is supported on this model, and there is no posted outage
## What Points to a Backend Routing Issue
- Intermittent failures: identical requests randomly succeed or fail
- 22 ms rejection: far too fast for model inference, meaning the request is rejected at a routing/validation layer
- No client-side changes: the failures began without any changes on our end
- OpenAI support acknowledged they “can’t confirm or rule out a backend routing inconsistency”
This strongly suggests some backend instances are correctly recognizing the fine-tuned model as vision-capable while others are not.
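Because the same payload usually succeeds within a few attempts, our only mitigation so far is blind retry with backoff. A sketch of that workaround; the exception class, function names, and backoff values are ours, and `send` stands in for the actual SDK call:

```python
import time

class VisionRejectedError(Exception):
    """Raised when the API returns the spurious image_url 400."""

def call_with_retry(send, max_attempts: int = 5, base_delay: float = 0.5):
    """Retry `send()` on the intermittent routing rejection.

    `send` is any zero-argument callable that performs the request and
    raises VisionRejectedError on the fast-400 case. Backoff is
    exponential; any other exception propagates immediately.
    """
    for attempt in range(max_attempts):
        try:
            return send()
        except VisionRejectedError:
            if attempt == max_attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))
```

In practice we match on the exact error string from the 400 body before raising `VisionRejectedError`, so genuine validation errors still fail fast instead of burning retries.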
## Questions for the Community
- Has anyone else experienced intermittent vision rejections on fine-tuned `gpt-4o-2024-08-06` models recently?
- Is there a known workaround beyond retry logic?
- Can any OpenAI staff escalate this? We have an open support case but have only received AI agent responses so far.
Any help or visibility is appreciated — this is blocking production.