Error when attempting to access gpt-image-1.5 in images/edits

Hi All,

I am encountering an issue when attempting to access any model other than dall-e-2 via the images/edits API endpoint. Here is the return….

Error: { "error": { "message": "Invalid value: 'gpt…1.5'. Value must be 'dall-e-2'.", "type": "invalid_request_error", "param": "model", "code": "invalid_value" } }

The organization is verified and on a pay-as-you-go plan, and I've been round and round and round through the docs. I'm calling from PHP using curl. Anyone have any suggestions on how to resolve this? TIA!

Bryan


Hi and welcome to the Community!

Here is a curl.exe command you can adapt to your needs. It will take sample.png and return a base64-encoded image in body.json. The main point is to show that it works as expected.
Hope this helps!

curl.exe -sS ^
-D headers.txt ^
-o body.json ^
--max-time 300 ^
-X POST "https://api.openai.com/v1/images/edits" ^
-H "Authorization: Bearer %OPENAI_API_KEY%" ^
-F "model=gpt-image-1.5" ^
-F "prompt=Slightly increase contrast and add a tiny red square in the top-left corner." ^
-F "image=@sample.png;type=image/png" ^
-w "HTTP_STATUS:%%{http_code}\n"


This is absolutely useless. The issue is that the /v1/images/edits endpoint doesn't support the majority of parameters and arguments it should. I keep getting HTTP 400 responses for an unsupported parameter or invalid parameter value, just like the OP.

It does not work: The Image edit API is screwed up and refusing all GPT image models.

See:

Thanks for both replies as they got me pointed in the right direction.

thatonemario is absolutely correct that the call gets rejected for any model except dall-e-2 if certain parameters are passed.

It seems the docs may have been updated since I started making calls: 'response_format' is no longer included on the Create Image Edit page (the forum won't allow me to include a link), and I swear it was in the list the other day. That parameter appears to be what was causing the rejection when calling gpt-image-1.5. The rest of the params on that page appear to be working now, but I'm sure I tried all combinations over the last few days, including without 'response_format', with the previously reported results.

Working now and thanks again for the replies!


This is the actual solution here. Either use output_format or read the base64 image from response.data[0].b64_json
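Reading the base64 payload back out is a one-liner; here is a minimal sketch (the helper name is mine, and it assumes b64_json is plain base64 with no data-URI prefix):

```python
import base64

def save_b64_image(b64_data: str, path: str) -> int:
    """Decode a base64-encoded image payload (response.data[0].b64_json)
    and write the raw bytes to disk. Returns the byte count written."""
    raw = base64.b64decode(b64_data)
    with open(path, "wb") as f:
        f.write(raw)
    return len(raw)
```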

Well…. Yes and No.

Ultimately, yes, don't pass 'response_format' to any model other than dall-e-2. But no, 'response_format' and 'output_format' are completely different things:

response_format: {'url', 'b64_json'}

output_format: {'png', 'jpeg', 'webp'}
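As a rough sketch of that split (which parameter each model accepts is inferred from this thread, not from official docs, so treat it as an assumption):

```python
def build_edit_params(model: str, **params) -> dict:
    """Return keyword arguments intended to be safe for images.edit.

    Assumption from this thread: response_format ("url"/"b64_json") is
    only accepted by dall-e-2 on /v1/images/edits, while output_format
    ("png"/"jpeg"/"webp") is an encoding choice for gpt-image models.
    This helper strips whichever one the given model would reject."""
    cleaned = {"model": model, **params}
    if model != "dall-e-2":
        cleaned.pop("response_format", None)  # gpt-image models always return b64_json
    else:
        cleaned.pop("output_format", None)    # dall-e-2 has no output encoding choice
    return cleaned
```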


See my reply below.

I can add to my reply above that url is supported as well. Since all methods are optional that’s no problem. Thanks for pointing it out.

@qlss actually that's incorrect. url is only a valid return type for dall-e-2 or dall-e-3.
Can you link me to the version of the docs you are referencing?

Like I posted in my initial reply, it appears that the docs have changed since the last time I checked so response_format is no longer listed on the Create Image Edit docs page. It’s very interesting working with shifting sand!

I don’t know. You are probably right. As of today response_format is below output_compression and above partial_images.

Now I see it, too :slight_smile:

Will flag that as well to the team.

The simple fact is: even with dall-e-3, OpenAI changed the accepted and rejected parameters so significantly that it should have been put on a different endpoint URL. The same goes for gpt-image models on "edits": you should not be able to simply switch the model and have your output change from a URL link to file data in the response.

Since I'm a record-keeper, here's the edits API parameter that IS STILL IN EFFECT.


response_format

type: string
enum:

  • url
  • b64_json

default: url
example: url
nullable: true

description:
The format in which generated images with dall-e-2 and dall-e-3 are returned. Must be one of url or b64_json. URLs are only valid for 60 minutes after the image has been generated. This parameter isn’t supported for gpt-image-1 which will always return base64-encoded images.


Data:

b64_json (string): The base64-encoded JSON of the generated image. Default value for gpt-image-1, and only present if response_format is set to b64_json for dall-e-2 and dall-e-3.

url (string): When using dall-e-2 or dall-e-3, the URL of the generated image if response_format is set to url (the default value). Unsupported for gpt-image-1.

(the edits docs were never "merged" to cover the two models at once; the OpenAPI spec was simply replaced)
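Per the b64_json/url fields documented above, consuming code has to handle both shapes. A hedged sketch (helper name is mine; it treats a response item as a plain dict):

```python
import base64
from urllib.request import urlopen

def image_bytes_from_item(item: dict) -> bytes:
    """Get raw image bytes from one entry of response.data.

    gpt-image models return b64_json; dall-e-2/dall-e-3 return url
    unless response_format="b64_json" was requested."""
    if item.get("b64_json"):
        return base64.b64decode(item["b64_json"])
    if item.get("url"):
        with urlopen(item["url"]) as resp:  # URLs expire ~60 minutes after generation
            return resp.read()
    raise ValueError("response item has neither b64_json nor url")
```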

Yes, but the initial report is about an issue when trying to generate/edit an image using gpt-image-1.5. The docs never said that response_format is an accepted parameter for this model, according to your records.

I wonder why this was reported twice today.

The docs for "edits" simply "never said" what was needed at all; the prior set of parameters is still around, and it is easy to make the mistaken assumption that if you want 'b64' you might still ask for it. Missing:

Then, if you are forced to infer parameter usage from the “generations” docs, you might also wonder why “quality” of gpt-image models doesn’t work on the edits API, either.

Here's the "edits" API documentation with "response_format" that was wiped instead of properly modified with notations.

model:
  anyOf:
    - type: string
    - type: string
      enum:
        - dall-e-2
      x-stainless-const: true
  x-oaiTypeLabel: string
  nullable: true
  description: The model to use for image generation. Only `dall-e-2` is supported at this time.
'n':
  type: integer
  minimum: 1
  maximum: 10
  default: 1
  example: 1
  nullable: true
  description: The number of images to generate. Must be between 1 and 10.
response_format:
  type: string
  enum:
    - url
    - b64_json
  default: url
  example: url
  nullable: true
  description: >-
    The format in which the generated images are returned. Must be one of `url` or `b64_json`. URLs
    are only valid for 60 minutes after the image has been generated.

What am I missing?

What I understand is that there is one endpoint each for image creation and image edits, but the supported models require different parameters. That is confusing, especially because all parameters suddenly appear optional and are listed both in the request reference and in the created object. One can find all the information in the descriptions, but it's easy to miss something here.

In addition, the documentation for the DALL·E models appears to be temporarily missing, and the response_format parameter is also missing from the Image Edits API reference.

You aren't missing anything. The documentation is what is missing a parameter that still works, for a model that still works.

OpenAI just completely wiped the dall-e-2 version of the edits API reference and replaced it with a version only applicable to gpt-image-1.

Suppose you look at a working edits API script with "response_format": "b64_json" and have to fail first to discover the problem with a model ID upgrade. That was my own method of writing gpt-image-1 software for the edit endpoint: failing over and over, and logging what comes back from the API when it does work.

This is an institutional symptom seen elsewhere: ruined documentation with no version history, for what you might actively be using on the API.
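That fail-and-log loop is at least less painful if you parse the error body instead of eyeballing it. A minimal sketch, assuming the error shape quoted at the top of this thread (fields may be absent, so everything defaults to None):

```python
import json

def describe_api_error(body: str):
    """Pull (param, code, message) out of an images-API 400 error body."""
    err = json.loads(body).get("error") or {}
    return err.get("param"), err.get("code"), err.get("message")
```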

I think one other problem is that the YAML OpenAPI source has a limit of 2 megabytes, so they are always sacrificing something to include something else these days.

Solution

The API reference, in the style of other endpoint parameters such as "functions" on Chat Completions, could include response_format marked as legacy, written up with the same quality in CreateImageEditRequest as #/components/schemas/CreateImageRequest for /images/generations.

My calls are rejected for any model even when I pass only the most basic parameters as model, image, prompt.

Another point that annoyed me is that the documentation lists that images/edits accepts image as a URL, but the Python SDK does not, which is pretty odd. But this has nothing to do with the issue I am having with the endpoint refusing every call with gpt-image models.

Although this thread is likely to scroll away and disappear like they all do, I typed up a little "code as edits API documentation" Python example, specifically for using the somewhat superfluous library module, which doesn't offer much in this case.

It shows multiple input images used as vision understanding, collection of the multiple outputs you can request, and BytesIO in-memory objects both before and after the actual API use. Hopefully this code is right-sized to communicate a foundation for understanding the parameters without needing to look elsewhere.

import base64
from io import BytesIO
from pathlib import Path
import openai

edit_prompt = """Outfill: complete the rest of the top of the image."""
infile_list = ["input_file.png", ]  # list[str] - file paths, 16 maximum
outfile_format = "png"   # "png", "jpeg", or "webp" - also verify output_compression

# ---- Load files into memory as BytesIO (application state example) ----
# FileTypes as input: file bytes, an io.IOBase instance, PathLike or a tuple
input_file_objects: list[BytesIO] = []
for path_str in infile_list:
    path = Path(path_str)
    with path.open("rb") as f:
        file_bytes = f.read()
    bio = BytesIO(file_bytes)
    bio.name = path.name  # ensures filename metadata for API multipart handling
    input_file_objects.append(bio)

print(f"Sending image edit request using {infile_list}")
client = openai.OpenAI()
response = client.images.edit(

    model="gpt-image-1.5",          # Union[str, ImageModel, None] -- model id; default dall-e-2
    prompt=edit_prompt,             # str - required prompt text
    image=input_file_objects,       # Union[FileTypes, SequenceNotStr[FileTypes]] - single file or list
    input_fidelity="low",           # Optional[Literal["high", "low"]] - gpt-image-1.5 forces "high"
    # mask=maskfile                 # FileTypes | Omit - image mask file using transparency alpha

    n=1,                            # Optional[int] | Omit - number of images (1–10); None or omit uses default 1
    size="1024x1024",               # Optional[Literal["256x256","512x512","1024x1024","1536x1024","1024x1536","auto"]]
    quality="low",                  # Optional[Literal["standard","low","medium","high","auto"]]
    output_format=outfile_format,   # Optional[Literal["png","jpeg","webp"]] | Omit - encoded output format
    # output_compression=95,        # Optional[int] | Omit - compression level for jpeg/webp only (0–100)
    background="opaque",            # Optional[Literal["transparent","opaque","auto"]] | Omit - jpeg unsupported

    stream=False,                   # Optional[Literal[False]] | Literal[True] | Omit - enable streaming events
    partial_images=0,               # Optional[int] | Omit - streaming-only progressive images count
    user="myCustomer",              # str | Omit - end-user identifier for tracking/abuse signals

    # response_format= xxx          # ONLY dall-e-2 -- Optional[Literal["url", "b64_json"]] | Omit
)
# note: Omit is the OpenAI library sentinel for nullable

output_images_in_memory: list[bytes] = []
if response.data:
    for img in response.data:
        if img.b64_json:
            img_bytes = base64.b64decode(img.b64_json)
            output_images_in_memory.append(img_bytes)
print(f"Received {len(output_images_in_memory)} edited image(s)")

save_index = 0
for original_path in infile_list:
    original_stem = Path(original_path).stem

    for _ in range(len(output_images_in_memory) // len(infile_list)):
        if save_index >= len(output_images_in_memory):
            break
        outfile_name = f"{original_stem}_edit{save_index}.{outfile_format}"
        with open(outfile_name, "wb") as f:
            f.write(output_images_in_memory[save_index])
        print(f"Saved {outfile_name}")
        save_index += 1

print(response.usage.model_dump())


response__model_fields = """
ImagesResponse:
{
    'created': FieldInfo(annotation=int, required=True),
    'background': FieldInfo(annotation=Union[Literal['transparent', 'opaque'], NoneType], required=False, default=None),
    'data': FieldInfo(annotation=Union[List[Image], NoneType], required=False, default=None),
    'output_format': FieldInfo(annotation=Union[Literal['png', 'webp', 'jpeg'], NoneType], required=False, default=None),
    'quality': FieldInfo(annotation=Union[Literal['low', 'medium', 'high'], NoneType], required=False, default=None),
    'size': FieldInfo(annotation=Union[Literal['1024x1024', '1024x1536', '1536x1024'], NoneType], required=False, default=None),
    'usage': FieldInfo(annotation=Union[Usage, NoneType], required=False, default=None)
}
Image:
{
    'b64_json': FieldInfo(annotation=Union[str, NoneType], required=False, default=None),
    'revised_prompt': FieldInfo(annotation=Union[str, NoneType], required=False, default=None),
    'url': FieldInfo(annotation=Union[str, NoneType], required=False, default=None)
}
"""

A notebook might be better communication, but not on this forum.


Following up… I found, in my pile of open tabs, the browser tab that had the API docs I was originally referring to:

Create image edit | OpenAI API Reference

I guess I didn't realize that the accepted params list was different depending on the calling language that was selected. I've never run into that before with any other API, unless the docs were bad :wink: