Increased Error Rate on GPT-4o Vision

We’re seeing an increased error rate with GPT-4o vision: roughly 5-10% of requests are failing in production. The errors started to spike within the last 3 hours.

The error returned from the completion:

BadRequestError: 400 You uploaded an unsupported image. Please make sure your image is below 20 MB in size and is of one the following formats: ['png', 'jpeg', 'gif', 'webp'].

Images are valid (type and size).

Example image:

"type":"image/jpeg",
"sizeMB":0.448

Completion request:

const response = await openai.chat.completions.create({
  model: "gpt-4o",
  messages: [
    {
      role: "user",
      content: [
        {...},
        {
          type: "image_url",
          image_url: {
            url: base64String,
            detail: "low",
          },
        },
      ],
    },
  ],
});

This seems impossible to debug from our end. We’ve inspected the images manually, and they are all valid (content, type, size).

Status page doesn’t show any issues/outage. Anyone else experiencing this?


Is there possibly something wrong with the way you’re encoding the image to base64?
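
For reference, a minimal sketch of the encoding path I’d expect (the disk read and helper name are just illustrative); the data URL prefix is the part that’s easy to get wrong:

import { promises as fs } from "fs";

// Read the image and build a data URL; the prefix must match the actual image type.
async function toDataUrl(filePath, mimeType = "image/jpeg") {
  const buffer = await fs.readFile(filePath);
  return `data:${mimeType};base64,${buffer.toString("base64")}`;
}

// Used as the url in the image_url content part:
// { type: "image_url", image_url: { url: await toDataUrl("photo.jpg"), detail: "low" } }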

Can you verify whether it fails on the same request every time, or whether re-sending the same request can fail or pass independently of the previous result?

I’ve just tried this with 8 images that had failed; all 8 worked when retried.
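
Roughly what I used for the retries, in case it helps anyone reproduce (a sketch; the helper name is illustrative and the request object is the same one shown above):

// Re-send the same request a few times and record each outcome,
// to see whether a given image fails consistently or intermittently.
async function retryCompletion(openai, request, attempts = 3) {
  const results = [];
  for (let i = 0; i < attempts; i++) {
    try {
      await openai.chat.completions.create(request);
      results.push("ok");
    } catch (err) {
      results.push(`error: ${err.message}`);
    }
  }
  return results;
}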

And it’s the exact same API call? You’re not re-encoding the image each time? That is, the base64 string is identical in the failed and the successful call.

That’s right. I reuse the exact same base64 string and the same completion call.

This is an old thread, but I ran into the same error and discovered the culprit: images in portrait orientation. I tested this and very consistently got errors with portrait-orientation pictures and no issues with landscape. The behavior is consistent across gpt-4o, gpt-4o-mini, and gpt-4-turbo.
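
In case it helps anyone who lands here: a sketch of the workaround I’d try, assuming the sharp library is available. It bakes the EXIF orientation into the pixel data and re-encodes before building the data URL; this is an illustration, not a confirmed fix.

import sharp from "sharp";

// Normalize EXIF orientation and re-encode as a plain JPEG, then return a data URL.
async function normalizeImage(filePath) {
  const buffer = await sharp(filePath)
    .rotate() // with no arguments, rotates according to the EXIF orientation tag
    .jpeg({ quality: 90 })
    .toBuffer();
  return `data:image/jpeg;base64,${buffer.toString("base64")}`;
}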