Error: You uploaded an unsupported image. Please make sure your image is below 20 MB in size and is of one the following formats: ['png', 'jpeg', 'gif', 'webp']

With some images the API is failing. I did a base64 analysis and I see that the API can't handle too many 'A' markers at the beginning of a base64 string. This is the beginning of my base64:

/9j/4AAQSkZJRgABAgAAAQABAAD/2wBDAAUEBAUEAwUFBAUGBgUGCA0JCAcHCBAMDAoNExEUFBMREhIVGB4aFRYdFxISGiQbHR8gIiIiFBkkKCQgKB4gIiD/2wBDAQYGBggHCBAJCRAgFhIWICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICD//g/+AHwAAELegmY
[I have redacted a few hundred A’s to make this post easier to read] D//gD+REhBVvsAAAAAAAAAAAAA/IoMw2H+ogB0AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAGRoYXYAAAD8/8AAEQgEOAeAAwEiAAIRAQMRAf/EAB8AAAEFAQEBAQEBAAAAAAAAAAABAgMEBQYHCAkKC//EALUQAAIBAwMCBAMFBQQEAAABfQECAwAEEQUSITFBBhNRYQcicRQygZGhCCNCscEVUtHwJDNicoIJChYXGBkaJSYnKCkqNDU2Nzg5OkNERUZHSElKU1RVVldYWVpjZGVmZ2hpanN0dXZ3eHl6g4SFhoeIiYqSk5SVlpeYmZqio6Slpqeoqaqys7S1tre4ubrCw8TFxsfIycrS09TV1tfY2drh4uPk5ebn6Onq8fLz9PX29/j5+v/EAB8BAAMBAQEBAQEBAQEAAAAAAAABAgMEBQYHCAkKC//EALURAAIBAgQEAwQHBQQEAAECdwABAgMRBAUhMQYSQVEHYXETIjKBCBRCkaGxwQkjM1LwFWJy0QoWJDThJfEXGBkaJicoKSo1Njc4OTpDREVGR0hJSlNUVVZXWFlaY2RlZmdoaWpzdHV2d3h5eoKDhIWGh4iJipKTlJWWl5iZmqKjpKWmp6ipqrKztLW2t7i5usLDxMXGx8jJytLT1NXW19jZ2uLj5OXm5+jp6vLz9PX29/j5+v/aAAwDAQACEQMRAD8Aojn1p4HSmqOP/rU5eBxQWSL704dKavPHUU/qKQCgcUtApaAG4oPH1pevSmn1oEBPpSdaPpQQf/rUxBxR1

If I decode my base64 on other websites or with other APIs, there is no problem. But if I try it on the OpenAI API, it fails over and over again. "Unknown JPEG marker 0" is another error code I get with this kind of image. I think this is a bug… When I send another image without so many 'A' markers, the API responds fine. The images (JPEG) are within the rules of the API regarding size, etc.

While OpenAI should be testing against all sorts of files, this likely comes from a part of the metadata (like EXIF) that contains a bunch of 0x00 null-byte padding, reserved so a device can amend it in place.

Since your code has possession of the JPEG file, you can re-encode it, and in the process also optimize the size for network transmission and to fit the 512px tiling and 768px maximum short-side dimension used by the API.
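A rough sketch of that re-encode step in Node.js, using the sharp package as one option (the 2048px cap, quality value, and function name are illustrative choices, not API requirements):

const sharp = require('sharp');

// Decode and re-save the JPEG so only pixel data remains: sharp drops
// EXIF/APPn metadata unless .withMetadata() is requested, and the resize
// keeps the image comfortably inside the API's scaling limits.
const prepareForApi = async (inputBuffer) => {
    return sharp(inputBuffer)
        .rotate()  // bake in the EXIF orientation before the metadata is dropped
        .resize(2048, 2048, { fit: 'inside', withoutEnlargement: true })
        .jpeg({ quality: 85 })
        .toBuffer();
};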

Thanks for your reaction. I have been testing the base64 again, re-encoded as you suggested and within those dimensions. Here are two base64 strings from a camera at my home.

One base64 code is accepted by the API, while the other code is not.

When I look at the base64 codes of the images that do work, I see that there are many A's at the beginning (the zero bytes).

When I look at the base64 codes of the images that do not work, I see that these also start with many A's, but there is extra data between the A's. This is detection data embedded by the camera. This is also reflected in the screenshot of the analysis I conducted.

The API seems to consistently reject images where there is extra data between the A's (a small decode sketch after the second sample below shows how I inspected this). I have used these images with other APIs before, and it was never a problem. Do you have any idea how I can adjust the base64 so that the images with the extra data are still accepted? Ultimately it is only about transferring the image; the extra data has no added value in my case. I still think this is a bug.

Start of the base64 that works:
/9j/4AAQSkZJRgABAgAAAQABAAD/2wBDAAUEBAUEAwUFBAUGBgUGCA0JCAcHCBAMDAoNExEUFBMREhIVGB4aFRYdFxISGiQbHR8gIiIiFBkkKCQgKB4gIiD/2wBDAQYGBggHCBAJCRAgFhIWICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICD//hJeAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA…

Start of the base64 that does not work (with metadata between the A's):
/9j/4AAQSkZJRgABAgAAAQABAAD/2wBDAAUEBAUEAwUFBAUGBgUGCA0JCAcHCBAMDAoNExEUFBMREhIVGB4aFRYdFxISGiQbHR8gIiIiFBkkKCQgKB4gIiD/2wBDAQYGBggHCBAJCRAgFhIWICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICD//g/+AHwCAEAnhGYAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAJYU21hcnRNb3Rpb25IdW1hbgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAJYVmlkZW9Nb3Rpb24AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
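Here is that small decode sketch (Node.js). It just decodes the base64 and prints the longer runs of printable ASCII, so the text the camera embedded between the A's becomes visible (failingBase64 is a placeholder for the second string above):

// List readable text fragments hidden between the A's of a base64 string.
const listEmbeddedText = (base64) => {
    const bytes = Buffer.from(base64, 'base64');
    return bytes.toString('latin1').match(/[\x20-\x7e]{8,}/g) || [];
};

// For the failing sample this prints the camera's detection tags
// (e.g. the SmartMotion/VideoMotion names visible in the base64 above);
// for the working sample it prints nothing comparable.
console.log(listEmbeddedText(failingBase64));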

  1. Post the code snippet where you are encoding the image
  2. What device are you using to take the photos
  3. Where are you located (country)
const { Readable } = require('stream'); // needed for Readable.from below

// Convert a buffer to/from a base64-encoded string
const base64Encode = (img) => Buffer.from(img).toString('base64');
const base64Decode = (base64) => Buffer.from(base64, 'base64');

// The bufferToStream function converts a Buffer into a Readable stream.
const bufferToStream = (buffer) => {
    const stream = Readable.from(buffer);
    return stream;
};

// The streamToBuffer function turns a readable stream back into a Buffer.
const streamToBuffer = (stream) => new Promise((resolve, reject) => {
    const buffers = [];
    stream.on('error', reject);
    stream.on('data', (data) => buffers.push(data));
    stream.on('end', () => resolve(Buffer.concat(buffers)));
});
  1. The photos are from Dahua IP cameras via ONVIF.

  2. The Netherlands

As @_j has said, you may want to process the image first before immediately encoding it as b64. This would involve stripping all the metadata and ensuring that only the image data is kept.

You have confirmed that when you save this buffer locally as a jpeg you can view it?

Thanks, but I already tried it with jpeg-js and that doesn't work. Do you have any other tips I can use? Yes, when I save it locally I can view the image.

I just looked at jpeg-js and thought it was worth mentioning that they recommend using something that isn't CPU-blocking, like sharp:

NOTE: this is a synchronous (i.e. CPU-blocking) library that is much slower than native alternatives. If you don’t need a pure javascript implementation, consider using async alternatives like sharp in node or the Canvas API in the browser.

I think regardless it is a good idea to process the image first before sending it over.
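For example, a minimal sketch with sharp (just a sketch; the quality value and the data-URL wrapping are my own choices):

const sharp = require('sharp');

// Re-save the JPEG, which discards the camera's extra APP/comment segments
// (sharp only copies metadata when .withMetadata() is called), then wrap the
// clean bytes as a base64 data URL for the API.
const stripAndEncode = async (buffer) => {
    const clean = await sharp(buffer).jpeg({ quality: 90 }).toBuffer();
    return `data:image/jpeg;base64,${clean.toString('base64')}`;
};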

Can you upload a bad image on here for us to try out? It could be an issue on their end, would be nice to confirm it

Thanks for the sharp tip. When I try to test sharp, I get this message:

“Please upgrade Node.js:
Found 16.14.2
Requires ^18.17.0 || ^20.3.0 || >=21.0.0”

But my Node.js version is definitely v20.15.0. Do you have any idea where this comes from? Attached is the photo that is not being accepted. I am curious if you are experiencing this issue as well.

Base64 is a deterministic encoding of the original binary file: every 3 bytes become 4 characters. It represents the file contents in a reduced set of characters.
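A two-line illustration of that 3-to-4 mapping, and of why zero bytes show up as the letter A:

// Every 3 input bytes become exactly 4 base64 characters;
// three 0x00 bytes encode as "AAAA", which is why null padding in the
// file appears as long runs of A's in the base64 string.
console.log(Buffer.from([0x00, 0x00, 0x00]).toString('base64')); // "AAAA"
console.log(Buffer.from('abc').toString('base64'));              // "YWJj"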

It is possible that you could tweak the contents in an indirect manner, poking at the middle of the AAAA sequences and altering them in unpredictable ways. It depends on whether the metadata is checksummed and then verified.

Instead, for files that break less rather than more, you would use an image-handling library like Pillow (PIL) in Python to decode the image to pixels. You can then resave it (to a virtual BytesIO file if needed) as a JPEG that has been stripped of everything except the imagery (or as a PNG, to avoid second-generation loss).

In the last weeks I have continued testing and debugging, and I am now convinced that there is a bug in OpenAI's API. The API cannot process all images, especially those where a camera adds something to the JPEG file, like a crossline. Not all software can handle this, including OpenAI.

I have now found a workaround by first sending the image to Cloudinary's API and then forwarding the URL to OpenAI's API. For now this works, although it is not an ideal solution because it takes quite a long time. I also looked into sharp, etc., but unfortunately I am bound to a native solution. I hope OpenAI will adjust the API to accept these JPEG files as well; that would make my program much faster. Thanks to everyone for the input and tips! I will keep an eye on any changes.
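For reference, the request I send now looks roughly like this (a sketch: the model name, prompt, and cloudinaryUrl are placeholders, not my real values):

// Send the Cloudinary URL instead of a base64 data URL.
const describeImage = async (cloudinaryUrl) => {
    const response = await fetch('https://api.openai.com/v1/chat/completions', {
        method: 'POST',
        headers: {
            'Content-Type': 'application/json',
            'Authorization': `Bearer ${process.env.OPENAI_API_KEY}`,
        },
        body: JSON.stringify({
            model: 'gpt-4o',
            messages: [{
                role: 'user',
                content: [
                    { type: 'text', text: 'What do you see on this camera image?' },
                    { type: 'image_url', image_url: { url: cloudinaryUrl } },
                ],
            }],
        }),
    });
    return (await response.json()).choices[0].message.content;
};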


This Node.js package helped me: exifremove

const exifremove = require('exifremove'); // npm package that strips EXIF from a JPEG buffer

// Clean metadata
const exifFreeBuffer = exifremove.remove(request.file.buffer);

// Convert image buffer to a base64 data URL
const imageBase64 = `data:image/jpeg;base64,${exifFreeBuffer.toString('base64')}`;

request.file.buffer comes from express’ multer

Hi all. I can confirm the same issue. A photo taken with an iPhone and converted from .heic to .jpeg (with EXIF data) fails with the error stated in the title. Removing the EXIF data fixes the issue.

Here is an image which OpenAI can't handle =>
[attached image: imgtest]

The error says “You uploaded an unsupported image. Please make sure your image is below 20 MB in size and is of one the following formats: [‘png’, ‘jpeg’, ‘gif’, ‘webp’].”, but the file is under 20 MB and in JPEG format.
BTW, the original file served from CloudFront works, but the same file served via Bunny CDN does not!
But why?
Is this really an OpenAI bug?