EDIT Endpoint - /images/edits refusing gpt-image models

Trying to use the image editing feature as described in the official OpenAI image generation documentation.

However, every attempt at using the Python SDK client.images.edit functionality fails with an HTTP 400 Bad Request.

At first I was getting errors for input parameters that the SDK supports and that the API endpoint AND the code examples in the documentation use. For example: 'input_fidelity', 'output_format', 'quality'.

I removed all of them and eventually got an error saying the only supported model is 'dall-e-2'. But that cannot be right, as the documentation and the SDK list 'gpt-image-1', 'gpt-image-1.5', and others as well.

openai.BadRequestError: Error code: 400 - {'error': {'message': "Invalid value: 'gpt-image-1'. Value must be 'dall-e-2'.", 'type': 'invalid_request_error', 'param': 'model', 'code': 'invalid_value'}}

openai.BadRequestError: Error code: 400 - {'error': {'message': "Invalid value: 'gpt...ini'. Value must be 'dall-e-2'.", 'type': 'invalid_request_error', 'param': 'model', 'code': 'invalid_value'}}

Apparently the model name needs to be elided in the middle of an error response, as if it were only supposed to be 6 characters long?

import base64
from openai import OpenAI

client = OpenAI()

infile = "mayor.png"

response = client.images.edit(
    image=open(infile, "rb"),
    prompt="Outfill: complete the rest of the top of the image",
    response_format="b64_json",  # this parameter triggers the misleading model error
    size="1024x1024",
    model="gpt-image-1",
)
print(response.data[0].b64_json[:80])
img_bytes = base64.b64decode(response.data[0].b64_json)

with open(infile + "_edit.png", "wb") as f:
    f.write(img_bytes)

EDIT: the issue is that response_format="b64_json", or any response_format at all, cannot be sent to gpt-image models. They return base64 JSON automatically and exclusively, yet response_format is not a parameter name that can be sent. Neither the error message nor the API reference for edits helps diagnose the problem when simply switching from dall-e-2 to gpt-image-1 or others.

The issue is response_format. For client.images.edit with GPT image models, remove response_format entirely and read the base64 image from response.data[0].b64_json.
output_format is optional and only controls file encoding (png/jpeg/webp).

I hope this helps.

P.S. Can you maybe link me to the docs page where the reference to response_format comes from? I would like to have this fixed or clarified.

  import base64
  from openai import OpenAI

  client = OpenAI()
  infile = "mayor.png"

  with open(infile, "rb") as img:
      response = client.images.edit(
          image=img,
          prompt="Outfill: complete the rest of the top of the image",
          size="1024x1024",
          model="gpt-image-1",
          # output_format="png"
      )

  print(response.data[0].b64_json[:80])
  img_bytes = base64.b64decode(response.data[0].b64_json)

  with open(infile + "_edit.png", "wb") as f:
      f.write(img_bytes)

You are correct: I just mirrored what was being reported when making a quick "test" without checking the API's validation. I was at a different PC with an aging "edits" script ready to be adapted for gpt-image. But where would you check? You are correct that the API reference is degraded.

The odd thing is that the OP's report of "I removed all of them" didn't go far enough in truly removing them all.

The parameter response_format must be dropped for gpt-image models, even when it specifies the same "b64_json" value you could send with dall-e-2. With DALL-E 2, by contrast, it is mandatory if you don't want to instead get a URL link for downloading images.
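A minimal sketch of this per-model difference, as a helper that builds the keyword arguments for client.images.edit (the helper name and its logic are mine, not part of the SDK):

```python
def edit_params(model: str, prompt: str, **extra) -> dict:
    """Build kwargs for client.images.edit, handling response_format per model."""
    params = {"model": model, "prompt": prompt, **extra}
    if model.startswith("gpt-image"):
        # gpt-image models always return base64 JSON; sending response_format
        # at all yields the misleading 400 about the model being invalid.
        params.pop("response_format", None)
    else:
        # dall-e-2 defaults to returning a URL; request base64 explicitly.
        params.setdefault("response_format", "b64_json")
    return params

print(edit_params("gpt-image-1", "add a puppy", response_format="b64_json"))
print(edit_params("dall-e-2", "add a puppy"))
```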

You can see that the error message is completely unhelpful: the model sent should determine which parameters are reported as invalid, not a parameter determining which model is reported as invalid.

Further, you will also observe that the API documentation IS damaged: the "response_format" parameter is no longer documented at all for the edits endpoint. It's like OpenAI wants to cause maximum friction.

The API reference used to say the usage, when to include, when to remove, that you’d get a URL from dall-e-2.

You can see that I remove response_format when the output is then base64-only, in image 1 of https://community.openai.com/t/dall-e-2-3-create-dall-e-2-edit-tool-in-python-for-you/1149307:

{
  "model": "gpt-image-1-mini",
  "prompt": "Now put a puppy there!",
  "size": "1024x1024",
  "output_format": "png",
  "user": "image-editor-user",
  "quality": "medium",
  "background": "opaque",
  "n": 1,
  "image": [
    "<canvas image.png bytes>"
  ]
}


So the error message is correct with regard to response_format being accepted by DALL-E 2 only?
Even though the documentation provides links to the specifics of this model and its successor, these links lead back to the same page.

I have flagged this to the team.


The error message should be "invalid parameter 'response_format' for model {model}", but generating it might mean re-sorting the API endpoint's validation order.

2026-05-12 is when dall-e-2 will finally be shut off. You can experiment with how badly OpenAI damaged the model from its original quality until then.

My issue isn’t with response_format as I never used it. I followed the Image generation documentation from OpenAI. I cannot post the link as the forum does not allow me to.

Given the example code from that documentation, I receive:

openai.BadRequestError: Error code: 400 - {'error': {'message': "Unknown parameter: 'input_fidelity'.", 'type': 'invalid_request_error', 'param': 'input_fidelity', 'code': 'unknown_parameter'}}

I remove input_fidelity and I receive:

openai.BadRequestError: Error code: 400 - {'error': {'message': "Unknown parameter: 'output_format'.", 'type': 'invalid_request_error', 'param': 'output_format', 'code': 'unknown_parameter'}}

I remove output_format and I receive:

openai.BadRequestError: Error code: 400 - {'error': {'message': "Unknown parameter: 'quality'.", 'type': 'invalid_request_error', 'param': 'quality', 'code': 'unknown_parameter'}}

Now, following this pattern of removing parameters that cause errors, I end up with:

openai.BadRequestError: Error code: 400 - {'error': {'message': "Invalid value: 'gpt…1.5'. Value must be 'dall-e-2'.", 'type': 'invalid_request_error', 'param': 'model', 'code': 'invalid_value'}}

I get the very same error even with the most basic inputs such as model, image, prompt.

Which version of the Python SDK are you using? The current version is 2.24.0.

I am using 2.24.0.

I did more testing, and it seems the issue is caused by the fact that I don't pass the image as a file object, but as binary data from memory. This is actually quite terrible, and what's worse is that none of this behaviour is documented.

EDIT: I honestly don't get why the only accepted input is the file object returned from open(file, 'rb') for the gpt-image models. Most of the time you won't have the image as an actual file, but rather as input in the form of an image URL, binary data, or base64-encoded data.

Aha, that is fair feedback. This behavior should be documented more clearly.

Thanks for raising this issue.

I suppose the solution is that if the image is in memory either:

  • send it as a multipart file part (io.BytesIO with a filename), or
  • upload it first and pass the file_id in JSON.
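A minimal sketch of the first option, assuming the image bytes are already in memory (the placeholder bytes and filename below are mine, for illustration only):

```python
import io

# Image bytes already in memory (e.g. downloaded or decoded from base64);
# this truncated PNG header is just a stand-in for real image data.
png_bytes = b"\x89PNG\r\n\x1a\n" + b"\x00" * 16

# Wrap the bytes in a BytesIO and give it a name: the SDK uses the
# filename to build the multipart file part, so a nameless buffer
# is rejected while a named one behaves like open(path, "rb").
buf = io.BytesIO(png_bytes)
buf.name = "mayor.png"

# The named buffer can now be passed wherever a file object is expected:
# client.images.edit(image=buf, prompt="...", model="gpt-image-1")
print(buf.name)
```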

I actually tried io.BytesIO earlier, just without the name property. Adding the name property makes it work. Yippiee!
