413 Payload Too Large Error with Images Edit API Despite Being Under Size Limit

I’m encountering a consistent 413 Payload Too Large error when using the /v1/images/edits endpoint, even though my image is well under the documented size limits. I’m hoping someone from the community or the OpenAI team can provide some insight.

Environment

  • API Version: Latest (May 2025)
  • Model: gpt-image-1
  • Endpoint: /v1/images/edits
  • Image Format: PNG
  • Image Size: 1.84MB (1,930,822 bytes)
  • Request Method: POST with multipart/form-data

Issue Details

According to the documentation, the size limits are:

  • For dall-e-2: Square PNG file less than 4MB
  • For gpt-image-1: PNG, WebP, or JPG file less than 25MB

My image is 1.84MB, which is well under both limits. However, I consistently receive a 413 Request Entity Too Large error from nginx when attempting to make the request.

Here’s the error response:

<html>
<head><title>413 Request Entity Too Large</title></head>
<body>
<center><h1>413 Request Entity Too Large</h1></center>
<hr><center>nginx</center>
</body>
</html>
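One practical consequence of this: the 413 comes back from the nginx edge as HTML, not as the usual OpenAI JSON error envelope, so response handling has to tolerate non-JSON bodies. A minimal sketch (hypothetical helper name, not part of my actual code):

```javascript
// Hypothetical guard: 413 responses from the edge proxy are HTML, so
// don't assume response.json() will succeed on every error path.
async function parseOpenAIResponse(response) {
  const contentType = response.headers.get("content-type") || "";
  if (!response.ok) {
    // Read the raw body so HTML error pages are still captured in logs.
    const body = await response.text();
    throw new Error(`OpenAI request failed (${response.status}): ${body.slice(0, 200)}`);
  }
  if (!contentType.includes("application/json")) {
    throw new Error(`Unexpected content-type: ${contentType}`);
  }
  return response.json();
}
```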

What I’ve Tried

  1. Image Optimization: I’ve already implemented image optimization to reduce the size:

    • Resized the image from 1024px to 768px
    • Reduced quality from 100 to 85
    • Ensured proper PNG format
  2. Request Formatting: I’m using standard FormData to construct the request:

    const formData = new FormData();
    formData.append("model", "gpt-image-1");
    formData.append("prompt", "Edit the image to change the woman's hair color to bleach blonde while keeping everything else the same.");
    formData.append("size", "1024x1024");
    formData.append("response_format", "b64_json");
    formData.append("n", "1");
    formData.append("image", imageBlob, "image.png");
    
  3. Debugging: I’ve confirmed the image blob size is 1.84MB before sending, which should be acceptable according to the documentation.
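Note that the blob size is not the wire size: the multipart body adds boundaries and per-field headers on top of the raw file bytes. A hypothetical debugging helper (assumed name, not part of my actual code) that measures the exact serialized payload fetch would send:

```javascript
// Serializing the FormData through a Response yields the exact multipart
// bytes that fetch would put on the wire, overhead included.
async function measureMultipartSize(imageBlob, fields) {
  const formData = new FormData();
  formData.append("image", imageBlob, "image.png");
  for (const [key, value] of Object.entries(fields)) {
    formData.append(key, value);
  }
  const body = await new Response(formData).arrayBuffer();
  return { fileBytes: imageBlob.size, totalBytes: body.byteLength };
}
```

Comparing fileBytes and totalBytes shows how much overhead the boundaries and text fields contribute; for a 1.84MB image it should be a few hundred bytes, nowhere near enough to cross a multi-megabyte limit.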

Questions

  1. Is there a discrepancy between the documented size limits and the actual implementation?

  2. Does the size limit apply to the entire request payload (including FormData overhead, boundaries, headers, etc.) rather than just the image file itself?

  3. Are there different limits for the /v1/images/edits endpoint compared to other image endpoints?

  4. Are there any undocumented requirements or best practices for reducing request size when using the images edit API?

Additional Context

I’m using a server-side implementation with a Cloudflare Worker that fetches the image, transforms it (using Supabase Storage transformations), and then sends it to the OpenAI API. The logs confirm the image size is 1.84MB before sending.

Any insights or suggestions would be greatly appreciated. Thank you!


Code Sample (for reference)

Here’s a simplified version of my image processing and API call:

async function handleGenerateImage(request) {
  // Parse request body and validate
  const { prompt, images, size = "1024x1024" } = await request.json();
  
  // Process image
  const imageBlob = await fetchImageAsBlob(images[0]);
  console.log("Image blob size:", imageBlob.size, "bytes", (imageBlob.size / (1024 * 1024)).toFixed(2) + "MB");
  
  // Create FormData
  const formData = new FormData();
  formData.append("model", "gpt-image-1");
  formData.append("prompt", prompt);
  formData.append("size", size);
  formData.append("response_format", "b64_json");
  formData.append("n", "1");
  formData.append("image", imageBlob, "image.png");
  
  // Send to OpenAI
  const response = await fetch("https://api.openai.com/v1/images/edits", {
    method: "POST",
    headers: {
      "Authorization": `Bearer ${apiKey}`,
    },
    body: formData,
  });
  
  // This is where we get the 413 error
  console.log("Response status:", response.status);
  const text = await response.text();
  console.log("Response text:", text);
  
  // Process response...
}

// Image processing function
async function fetchImageAsBlob(imageUrl) {
  // Remove query params and transform image
  const baseUrl = imageUrl.split("?")[0];
  
  // Use smaller size for large images
  let finalSize = 768; // Reduced from 1024
  
  // Transform URL with quality reduction
  const transformedUrl = `${baseUrl}?transform=resize&width=${finalSize}&height=${finalSize}&resize=cover&format=png&quality=85`;
  
  // Fetch and return blob
  const resp = await fetch(transformedUrl);
  const blob = await resp.blob();
  
  return blob;
}
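One failure mode worth ruling out in this pipeline: if the Supabase Storage transform fails, the fetched body may be an HTML or JSON error page, which would then be uploaded to OpenAI as "image.png". A sketch of a content-type check (hypothetical function name, a variant of the fetchImageAsBlob above):

```javascript
// Hypothetical guard: verify the transform actually returned an image
// before handing the body to the OpenAI API.
async function fetchImageAsBlobChecked(imageUrl) {
  const resp = await fetch(imageUrl);
  if (!resp.ok) {
    throw new Error(`Image fetch failed with status ${resp.status}`);
  }
  const contentType = resp.headers.get("content-type") || "";
  if (!contentType.startsWith("image/")) {
    throw new Error(`Expected an image response, got "${contentType}"`);
  }
  return resp.blob();
}
```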

@_j

I could type out what’s going wrong there, but it’s more productive to tell GPT-4.5 what’s going wrong, and then fix what it thinks it knows about the API parameters.

OpenAI’s image edit API explicitly requires multipart form data with the image provided as an actual file upload (with the correct MIME type and filename).

Mistakes:

  • Sending incorrect image data (a raw blob, or incorrect file-type headers)
  • Incorrect ordering or structure of the multipart form data

Correct canonical structure for OpenAI image edits:

  • Use multipart/form-data encoding.
  • Provide the image file explicitly as a file upload with a filename.
  • Set the MIME type explicitly to image/png.
  • The first field should typically be the image file (image), followed by other form fields (model, prompt, size, etc.).

Corrected canonical JavaScript implementation (Cloudflare Worker compatible):

async function handleGenerateImage(request) {
  const { prompt, images, size = "1024x1024" } = await request.json();

  // Fetch and properly structure the file
  const imageFile = await fetchImageAsFile(images[0], "image.png");

  // Create multipart/form-data
  const formData = new FormData();
  formData.append("image", imageFile); // file first
  formData.append("model", "gpt-image-1");
  formData.append("prompt", prompt);
  formData.append("size", size);

  // Send to OpenAI
  const response = await fetch("https://api.openai.com/v1/images/edits", {
    method: "POST",
    headers: {
      "Authorization": `Bearer ${apiKey}`,
    },
    body: formData,
  });

  console.log("Response status:", response.status);
  const text = await response.text();
  console.log("Response text:", text);

  return response;
}

// Properly fetch image as File, not Blob
async function fetchImageAsFile(imageUrl, fileName) {
  // Transform URL with correct parameters to reduce size if necessary
  const baseUrl = imageUrl.split("?")[0];
  const finalSize = 1024; // OpenAI expects 1024x1024 ideally
  const transformedUrl = `${baseUrl}?transform=resize&width=${finalSize}&height=${finalSize}&resize=cover&format=png&quality=80`;

  // Fetch as blob
  const resp = await fetch(transformedUrl);
  const blob = await resp.blob();

  // Crucially: Convert Blob to File with explicit name and type
  const file = new File([blob], fileName, { type: "image/png" });

  return file;
}

Key adjustments and points:

  • Use a File instead of a Blob:
    The OpenAI API expects an actual file (image/png) with a proper filename and MIME type. Wrapping the blob in a File object sets both explicitly, which is crucial for a correct multipart submission.

    const file = new File([blob], "image.png", { type: "image/png" });
    
  • Order of multipart fields:
    Always put the file upload (image) first, followed by the textual parameters.

  • Correct Content-Type:
    Do not manually set the Content-Type header for multipart requests. Let the browser or runtime environment (Cloudflare Worker) set this automatically to include the boundary correctly.

  • Payload size:
    Using Cloudflare/Supabase image transformations (format=png, resize=cover, quality=80) will help keep the file size below OpenAI’s practical limits.
    Ensure the transformed image doesn’t exceed roughly 4MB, which is often the practical per-image upload limit.
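A pre-flight size guard makes that last point fail fast with a readable error instead of an opaque 413. This is a sketch assuming the ~4MB practical ceiling mentioned above (the documented gpt-image-1 limit is higher, so treat the constant as a conservative assumption, not an official figure):

```javascript
// Conservative ceiling, assumed from the practical limit discussed above.
const MAX_UPLOAD_BYTES = 4 * 1024 * 1024;

// Throw before uploading if the image is too large to be safe.
function assertUploadable(blob) {
  if (blob.size > MAX_UPLOAD_BYTES) {
    throw new Error(
      `Image is ${(blob.size / (1024 * 1024)).toFixed(2)}MB; ` +
        `expected at most ${MAX_UPLOAD_BYTES / (1024 * 1024)}MB`
    );
  }
  return blob;
}
```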

These adjustments resolve the 413 Payload Too Large errors by correctly formatting the file upload according to OpenAI’s API expectations and ensuring the payload conforms to size and format requirements.

I do not have an environment to run the code, so apologies if it barfs trying to resize images on you. You shouldn’t need to resize at all now except for quite large images, since images are a vision-like input, but it should fix the obvious.

2 Likes