Hi everyone,
We started seeing the same issue at 2025-06-12T00:46:21.37Z (June 11th, 2025, 5:46 PM PT). The error currently accounts for around 10% of the calls we make to the Completions API.
It seems to be the exact same problem: the images we are sending are mostly <500 KB, all valid JPEGs that open perfectly on multiple devices. I tested through the Completions API both with an image URL (which is what we use) and by sending the images as base64 data, and both produced the same error. The error is consistent for those images; they simply never work.
A pattern I noticed after opening some images and testing manually: they all seem to be digital (i.e. not camera pictures). I did not go through all of them, but the ones I opened fit that pattern. I also tried comparing the EXIF/metadata of failing images against succeeding ones but found no differences.
I already reported this today through support and got escalated to a human, but they more or less dismissed it. I linked this post there, along with other Reddit posts reporting a similar problem in ChatGPT. If you are affected too, please leave your reports here and contact support.
Thanks,
Everton.
– Update –
I implemented a workaround that made the failing images work. Of course this isn't permanent and can't be, as I'm having to download the images on my side, process them, and send them as base64 to OpenAI. But it does work, and our infra made it easy to implement. If anyone has a similar setup, I basically used Python's Pillow (PIL) like this:
import base64
from io import BytesIO

import httpx
from PIL import Image, ImageOps


async def sanitize_image_to_b64(
    url: str,
    *,
    png: bool = False,
    client: httpx.AsyncClient | None = None,
) -> str:
    """
    Downloads → re-encodes → returns a data-URL.

    • keeps everything in RAM (BytesIO)
    • breaks the repetitive-byte pattern that triggers `image_parse_error`
    • works inside any asyncio service
    """
    # ---- networking -------------------------------------------------------
    own_client = client is None
    if own_client:
        client = httpx.AsyncClient(timeout=10)
    try:
        resp = await client.get(url)
        resp.raise_for_status()
    finally:
        if own_client:
            await client.aclose()

    # ---- Pillow processing ------------------------------------------------
    buf = BytesIO(resp.content)
    img = Image.open(buf)

    # normalise orientation + strip alpha
    img = ImageOps.exif_transpose(img).convert("RGB")

    # re-encode in memory
    out = BytesIO()
    if png:
        img.save(out, format="PNG", optimize=True)  # lossless
        mime = "image/png"
    else:
        img.save(out, format="JPEG", quality=90, optimize=True)
        mime = "image/jpeg"

    # base-64 encode for the API
    b64 = base64.b64encode(out.getvalue()).decode()
    return f"data:{mime};base64,{b64}"
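For context on how the returned data URL is consumed: the Chat Completions API accepts it in an `image_url` content part exactly like a regular URL. A minimal sketch of the request payload (the model name is just an example, and `build_vision_payload` is a hypothetical helper, not part of any SDK):

```python
def build_vision_payload(data_url: str, question: str) -> dict:
    """Assemble a Chat Completions request body with an image content part."""
    return {
        "model": "gpt-4o",  # illustrative model name
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": question},
                    # the data URL goes wherever a normal image URL would
                    {"type": "image_url", "image_url": {"url": data_url}},
                ],
            }
        ],
    }

payload = build_vision_payload(
    "data:image/jpeg;base64,AAEC", "What is in this image?"
)
```

The dict can then be sent as the JSON body of the request, or the same `messages` list passed to an SDK client.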
– Update 2 –
The workaround above only fixed a portion of the failing images. The only failproof workaround right now has been to fall back to another LLM provider when requests to OpenAI fail.
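For anyone wanting the same failover shape, the pattern is just a try/except around the primary call. A minimal sketch; `call_openai` and `call_backup_provider` are hypothetical stand-ins (stubbed here for illustration) for whatever client calls your stack actually makes:

```python
import asyncio

async def call_openai(prompt: str, image_url: str) -> str:
    # stub that simulates the failing case for illustration
    raise RuntimeError("image_parse_error")

async def call_backup_provider(prompt: str, image_url: str) -> str:
    # stub for a secondary provider
    return f"backup answer for: {prompt}"

async def call_with_fallback(prompt: str, image_url: str) -> str:
    """Try the primary provider; on any error, retry with the backup."""
    try:
        return await call_openai(prompt, image_url)
    except Exception:
        return await call_backup_provider(prompt, image_url)

result = asyncio.run(
    call_with_fallback("describe this", "https://example.com/img.jpg")
)
```

In production you would likely catch only the specific API error class and add logging, but the control flow is the same.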