Hi, when calling both POST `v1/images/edits` and `v1/images/variations`, the following error is returned:
→ “Error: 413 The data value transmitted exceeds the capacity limit.”
The error is reproducible (even with a zero-size buffer) and occurs both with a `Buffer` and with `fs.createReadStream` reading directly from a file. I have also added a check to ensure the image is not larger than 4 MB.
This is from Node.js.
To succeed, you must send an image as multipart form-data that meets all of these requirements:
- a fully-formed file with a file header, as if the bytes were read from storage
- PNG format
- square
- 1024x1024, or one of the smaller sizes allowed by the API (512x512 or 256x256)
- 8-bit color
- color type 6 - RGBA (32 bits per pixel, with an alpha transparency channel)
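A quick sanity check against the list above can be done by parsing just the PNG signature and the IHDR chunk, which always comes first and records the dimensions, bit depth, and color type. This is a sketch only, and it assumes a well-formed file; the 256/512/1024 allowed sizes are the ones the endpoint accepts:

```javascript
// Sketch: validate that a buffer looks like a PNG meeting the API's
// requirements (8-bit, color type 6 = RGBA, square, allowed size).
const PNG_SIGNATURE = Buffer.from([0x89, 0x50, 0x4e, 0x47, 0x0d, 0x0a, 0x1a, 0x0a]);

function inspectPng(buf) {
  if (buf.length < 33 || !buf.subarray(0, 8).equals(PNG_SIGNATURE)) {
    throw new Error('Not a PNG: missing file signature');
  }
  // IHDR is always the first chunk: 4-byte length, "IHDR", then 13 data bytes.
  if (buf.toString('ascii', 12, 16) !== 'IHDR') {
    throw new Error('Malformed PNG: IHDR is not the first chunk');
  }
  return {
    width: buf.readUInt32BE(16),
    height: buf.readUInt32BE(20),
    bitDepth: buf.readUInt8(24),
    colorType: buf.readUInt8(25), // 6 = RGBA
  };
}

function meetsEditRequirements(info) {
  const allowedSizes = [256, 512, 1024];
  return (
    info.width === info.height &&
    allowedSizes.includes(info.width) &&
    info.bitDepth === 8 &&
    info.colorType === 6
  );
}

// Demo: a synthetic signature + IHDR for a 1024x1024 8-bit RGBA image.
const ihdrData = Buffer.alloc(13);
ihdrData.writeUInt32BE(1024, 0); // width
ihdrData.writeUInt32BE(1024, 4); // height
ihdrData.writeUInt8(8, 8);       // bit depth
ihdrData.writeUInt8(6, 9);       // color type 6 = RGBA
const header = Buffer.concat([
  PNG_SIGNATURE,
  Buffer.from([0, 0, 0, 13]),   // IHDR data length
  Buffer.from('IHDR', 'ascii'),
  ihdrData,
  Buffer.alloc(4),              // CRC (not verified in this sketch)
]);
console.log(meetsEditRequirements(inspectPng(header))); // true
```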
If your image file is larger than 4 MB, then you are doing it wrong: even completely uncompressed, a 1024x1024 RGBA image is only 1024 × 1024 × 4 bytes = 4 MB of raw pixel data, and PNG compression should more than compensate for the format's slight overhead, especially given the binary (fully on/off) transparency you should be sending.
Here’s a full PNG library:
Other libraries such as image-js must also be configured to preserve the alpha channel created by whatever editing tool you are presenting to the user, or that was used on the original file.
If resizing, to then either letterbox or crop, you must reprocess the alpha channel using "nearest neighbor" techniques, or set the final resized alpha-channel values to 0 or 255 for no or full opacity. Basically, re-encode regardless, and treat any input file as a potential adversarial attack.
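The snap-alpha-to-0-or-255 step can be sketched over raw RGBA pixel bytes. The 128 cutoff here is an arbitrary assumption; pick whatever threshold suits your assets:

```javascript
// Sketch: after a resize, interpolation can leave fractional alpha values.
// This snaps every alpha byte in raw RGBA pixel data to fully opaque (255)
// or fully transparent (0), as suggested above.
function binarizeAlpha(rgba, threshold = 128) {
  const out = Buffer.from(rgba); // copy; leave the input untouched
  for (let i = 3; i < out.length; i += 4) {
    out[i] = out[i] >= threshold ? 255 : 0;
  }
  return out;
}

// Demo: two pixels, one mostly opaque (alpha 200), one mostly clear (alpha 40).
const pixels = Buffer.from([10, 20, 30, 200, 40, 50, 60, 40]);
console.log([...binarizeAlpha(pixels)]); // [ 10, 20, 30, 255, 40, 50, 60, 0 ]
```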
Thanks for all the input. I resolved the issue by saving the file to disk before sending the request, and then reading it back via the OpenAI helper function before passing it to the API.
I guess that is also the solution _j recommended. Here is the working code snippet using toFile().
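The original snippet isn't reproduced above, but a minimal sketch of the `toFile()` approach with the official `openai` Node SDK might look like the following. The file path, `n`, and `size` values are placeholder assumptions:

```javascript
// Sketch only (assumed parameters): wrap a stream with toFile() so the SDK
// sends a fully-formed multipart file part with a filename and content type.
import fs from 'node:fs';
import OpenAI, { toFile } from 'openai';

const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment

async function main() {
  const image = await toFile(fs.createReadStream('input.png'), 'input.png', {
    type: 'image/png',
  });

  const result = await openai.images.createVariation({
    image,
    n: 1,
    size: '1024x1024',
  });
  console.log(result.data[0].url);
}

main().catch(console.error);
```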
You must send only PNG files, not raw image data or in-memory image objects. If you don't use a temporary file, you'll need a technique to "save" a PNG out to memory, so that the proper file header and chunk structure are created.