BUG: "Error: 413 The data value transmitted exceeds the capacity limit." when calling v1/images/edits

Hi, when calling both POST v1/images/edits and v1/images/variations, the following error is returned:
→ “Error: 413 The data value transmitted exceeds the capacity limit.”

No issue with using v1/images/generations

Sample code causing issue:

const bufferImage = Buffer.from(media.data, 'base64');
bufferImage.name = "image.png";

const image = await openai.images.edit(
The error is repeatable (even with a 0-byte buffer) and occurs both with a Buffer and with “fs.createReadStream” reading directly from a file. I have also added a checkpoint to ensure the image is no larger than 4 MB.
This is from Node.js.


Welcome to the forum.

Still happening? Can you reproduce with a second image?

Thanks Paul, yes, it also happens with different pictures, whether the image data comes directly from a Buffer or from a file read back after saving the PNG.


Hrm. I’ve not seen this one before. Were you processing a lot? Maybe it’s a new rate limit for images?

I’d reach out to help.openai.com and give them as many details as you can. I’ll keep my ears open…


I’m getting the same error. I’m sending images one at a time, each well under 4 MB (around 0.1 MB), and hit the same issue.

Did you manage to solve it? @Eraxorice

Also getting this error with small images (~8 KB).

You must send the image as multipart form-data, and the file itself must meet these requirements to ensure success:

  • a fully-formed file with a file header, as if the bytes were read from storage
  • PNG format
  • square
  • 1024x1024, or the reduced multiples allowed by the API
  • 8-bit color
  • color type 6 - RGBA (32 bit, with an alpha transparency channel)

If your image file is larger than 1024x1024x4 = 4 MB of raw pixel data, then you are doing it wrong. PNG compression should more than compensate for the format’s slight overhead, especially given the binary transparency you should send.
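As a sanity check before uploading, most of the requirements above can be verified straight from the file bytes, since the PNG signature and IHDR chunk encode them at fixed offsets. This is an illustrative sketch using only Node’s built-in Buffer; the function name and return strings are my own, not part of any API:

```javascript
// Check a buffer against the upload requirements: PNG signature, square
// dimensions, 8-bit depth, color type 6 (RGBA). Offsets follow the PNG
// spec: 8-byte signature, then the IHDR chunk (length, "IHDR", width,
// height, bit depth, color type).
function checkPngForEdit(buf) {
  const sig = Buffer.from([0x89, 0x50, 0x4e, 0x47, 0x0d, 0x0a, 0x1a, 0x0a]);
  if (buf.length < 33 || !buf.subarray(0, 8).equals(sig)) return "not a PNG file";
  const width = buf.readUInt32BE(16);
  const height = buf.readUInt32BE(20);
  const bitDepth = buf[24];
  const colorType = buf[25];
  if (width !== height) return "image is not square";
  if (bitDepth !== 8) return "bit depth is not 8";
  if (colorType !== 6) return "color type is not 6 (RGBA)";
  return "ok";
}
```

Running this on the buffer right before the API call tells you immediately whether you are sending a real RGBA PNG or just raw pixel data.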

Here’s a full png library:

Other libraries, such as image-js, must also be configured to preserve the alpha channel created by whatever editing tool you are presenting to the user, or that was used on an original file.

If resizing, and then either letterboxing or cropping, you must reprocess the alpha channel using “nearest neighbor” techniques, or set the final resized alpha channel values to 0 or 255 for full or no transparency. Basically, re-encode regardless, treating any input file as an adversarial attack.
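To illustrate the nearest-neighbor advice, here is a minimal sketch that resamples just the alpha plane and snaps every value to 0 or 255, so no interpolated partial transparency survives a resize. The function and its flat row-major array layout are my own assumptions, not taken from any library:

```javascript
// Resample a flat row-major alpha plane (one byte per pixel) with
// nearest-neighbor sampling, then snap each sample to fully opaque (255)
// or fully transparent (0) using a midpoint threshold.
function resizeAlphaNearest(alpha, srcW, srcH, dstW, dstH) {
  const out = new Uint8Array(dstW * dstH);
  for (let y = 0; y < dstH; y++) {
    // nearest source row for this destination row
    const sy = Math.min(srcH - 1, Math.floor((y * srcH) / dstH));
    for (let x = 0; x < dstW; x++) {
      // nearest source column for this destination column
      const sx = Math.min(srcW - 1, Math.floor((x * srcW) / dstW));
      out[y * dstW + x] = alpha[sy * srcW + sx] >= 128 ? 255 : 0;
    }
  }
  return out;
}
```

A bilinear or bicubic resize would instead blend edge pixels into intermediate alpha values, which is exactly what this advice is trying to avoid.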


Yep! Multi-part form data solved the problem! Thank you!

Thanks for all the input. I resolved the issue by saving the file before sending the request, then reading it back via the OpenAI helper function before passing it to the API.
I guess that is also the solution/issue _j recommended. Here is the working code snippet using toFile().

import fs from 'fs';
import OpenAI, { toFile } from 'openai';

const file = await toFile(fs.createReadStream(filePath));
const image = await openai.images.edit({
	image: file,
	prompt: message.body.substring(1),
	n: 1,
	size: "1024x1024",
	response_format: 'b64_json',
});
I’m running into the same issue right now. I can make it work using fs.createReadStream, but not in-memory with a buffer.
Code below:

const resizedImageBuffer = await sharp(req.file.buffer)
  .resize({ width: 512, height: 512 })
  .png()
  .toBuffer();

resizedImageBuffer.name = "image.png";

console.log(`Buffer length: ${resizedImageBuffer.length} bytes`);

// Call OpenAI API to generate image variations
const response = await openai.images.createVariation({
  model: "dall-e-2",
  image: resizedImageBuffer,
  n: 1,
  size: "512x512",
});
You must only send PNG files, not raw image data or image objects. You’ll need a technique to “save” a PNG out to memory, if not to a temporary file, so that a proper file header and file structure are created.