Intermittent 400 "image_url is only supported by certain models" on Fine-tuned gpt-4o-2024-08-06 — Was Working Previously

Base model: gpt-4o-2024-08-06
Endpoint: /v1/chat/completions
SDK: openai==2.24.0 (also reproduced on 0.27.7)

The Problem

Our fine-tuned vision model (based on gpt-4o-2024-08-06) has started intermittently rejecting image_url inputs on /v1/chat/completions with:

400 — "Invalid content type. image_url is only supported by certain models."
Param: messages.[1].content.[1].type

This was working before. No code changes on our end. The same request payload succeeds one minute and fails the next. This is impacting our production system.

Proof: Same Model, Same Payload, Different Results

|  | Failing Request | Successful Request |
| --- | --- | --- |
| Timestamp | 2026-03-04T15:05:32Z | 2026-03-04T14:48:07Z |
| Processing time | 22 ms (never reached the model) | 3,911 ms (full inference) |
| Status | 400 Bad Request | 200 OK |
| Model | Same fine-tuned model ID | Same fine-tuned model ID |
| Payload | Identical | Identical |
| Tokens | N/A (rejected) | 5,180 prompt / 81 completion |

Identical payload. 17 minutes apart. The 22ms rejection on the failing request confirms it was blocked at the validation/routing layer before ever reaching the model — this is not a model-level issue.
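The 22 ms versus 3,911 ms gap is wide enough to classify failures automatically in our request logs. A minimal sketch (the function name and the 200 ms threshold are our own heuristic, not anything OpenAI documents):

```python
def looks_like_gateway_rejection(status_code: int, processing_ms: float) -> bool:
    """Heuristic: a 400 answered in tens of milliseconds cannot have run
    inference, so it was almost certainly rejected before reaching the model."""
    return status_code == 400 and processing_ms < 200.0

# The two requests compared above:
assert looks_like_gateway_rejection(400, 22)        # failing request
assert not looks_like_gateway_rejection(200, 3911)  # successful request
```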

Request Structure (redacted)

```json
{
  "model": "<fine-tuned-gpt-4o-2024-08-06>",
  "messages": [
    {"role": "system", "content": "<instructions>"},
    {"role": "user", "content": [
      {"type": "text", "text": "<redacted>"},
      {"type": "image_url", "image_url": {"url": "data:image/jpeg;base64,..."}}
    ]}
  ],
  "max_tokens": 400,
  "temperature": 0.0,
  "response_format": {"type": "json_schema", "json_schema": {"name": "...", "strict": true, "schema": {"...": "..."}}}
}
```
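For anyone wanting to reproduce, the payload above is assembled roughly like this (a sketch: `build_vision_payload` and its arguments are our own naming, and the `response_format` schema is elided):

```python
import base64

def build_vision_payload(model_id: str, instructions: str,
                         prompt: str, jpeg_bytes: bytes) -> dict:
    # Inline the image as a base64 data URL, as in the redacted request above.
    data_url = "data:image/jpeg;base64," + base64.b64encode(jpeg_bytes).decode("ascii")
    return {
        "model": model_id,
        "messages": [
            {"role": "system", "content": instructions},
            {"role": "user", "content": [
                {"type": "text", "text": prompt},
                {"type": "image_url", "image_url": {"url": data_url}},
            ]},
        ],
        "max_tokens": 400,
        "temperature": 0.0,
    }
```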

This format matches OpenAI’s documentation for vision fine-tuned models.

What We’ve Ruled Out

  • No code or configuration changes on our end

  • SDK upgraded from 0.27.7 to 2.24.0 — same error persists

  • Always passing the exact fine-tuned model ID, never an alias

  • No model switching during retries or fallbacks

  • The base model gpt-4o-2024-08-06 officially supports vision fine-tuning per OpenAI docs

  • OpenAI support confirmed vision is supported on this model and there’s no posted outage

What Points to a Backend Routing Issue

  1. Intermittent failures — identical requests randomly succeed or fail

  2. 22ms rejection — far too fast for model inference, meaning the request is rejected at a routing/validation layer

  3. No client-side changes — this started happening without any changes on our end

  4. OpenAI support acknowledged they “can’t confirm or rule out a backend routing inconsistency”

This strongly suggests some backend instances are correctly recognizing the fine-tuned model as vision-capable while others are not.
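For anyone gathering comparable evidence for a support case, both fields in the comparison above come straight off the response headers (`x-request-id` and `openai-processing-ms`; in the Python SDK the raw headers are reachable via `with_raw_response`, or via plain HTTP). A tiny extractor, assuming a dict-like headers object:

```python
def extract_evidence(headers) -> dict:
    """Pull the fields worth quoting in a support escalation."""
    return {
        "request_id": headers.get("x-request-id"),
        "processing_ms": headers.get("openai-processing-ms"),
    }

# e.g. with dummy header values:
extract_evidence({"x-request-id": "req_abc123", "openai-processing-ms": "22"})
```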

Questions for the Community

  • Has anyone else experienced intermittent vision rejection on fine-tuned gpt-4o-2024-08-06 models recently?

  • Is there a known workaround beyond retry logic?

  • Can any OpenAI staff escalate this? We have an open support case but have only received AI agent responses so far.

Any help or visibility is appreciated — this is blocking production.


@OpenAI_Support

Yes, it has been noted and reported. The only workaround is to be persistent and error-tolerant with retries, until the issue gets so bad that the fine-tuned model you paid to train is essentially useless.
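A sketch of that kind of error-tolerant retry: it treats this normally-permanent 400 as transient only when the error message matches, so genuinely bad requests still fail fast (function names, attempt limits, and delays are illustrative):

```python
import random
import time

TRANSIENT_MARKER = "image_url is only supported by certain models"

def call_with_retries(send, max_attempts: int = 5, base_delay: float = 0.5):
    """Invoke send() (e.g. a closure around client.chat.completions.create),
    retrying with exponential backoff plus jitter on the known-transient 400."""
    for attempt in range(max_attempts):
        try:
            return send()
        except Exception as exc:
            if TRANSIENT_MARKER not in str(exc) or attempt == max_attempts - 1:
                raise  # a genuinely bad request, or out of attempts
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.1))
```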

It’s barely been a week of fine-tuned gpt-4o models failing on the API, with multiple reports and immediate reproducibility. The last time this exact symptom appeared (as a complete outage, on another class of models a month ago), it took two weeks to resolve.

Here is the current ongoing issue thread that you can join:

Apparently, randomly breaking API developers’ applications over and over again without a support response is going to be the new modus operandi.


Welcome to the club. It doesn’t feel like they’re going to admit a fault on their side. Just be careful you don’t end up losing $800+ for trusting their broken API :skull: