The images.edit method (dall-e-2) doesn't work

Nice to meet you.
Recently, the images.edit API stopped working.
Has anyone experienced a similar issue, or does anyone know the cause?

import io
from PIL import Image  # pip3 install Pillow
import openai

print('OpenAI Version:', openai.__version__)

# Load the source image and re-encode it to PNG bytes in memory
image = Image.open('unit_test/data_in/black_cat_rgb.png')
binary_data = None
with io.BytesIO() as byte_stream:
    image.save(byte_stream, format='PNG')
    binary_data = byte_stream.getvalue()

client = openai.OpenAI()
client.images.edit(
    image=binary_data,
    model='dall-e-2',
    prompt='A cute baby sea otter wearing a beret',
)

OpenAI Version: 1.75.0

---------------------------------------------------------------------------
BadRequestError                           Traceback (most recent call last)
Cell In[3], line 16
     13     binary_data = byte_stream.getvalue()  # get the binary data
     15 client = openai.OpenAI()
---> 16 client.images.edit(
     17     image=binary_data,
     18     model='dall-e-2',
     19     prompt='A cute baby sea otter wearing a beret',
     20 )

File ~/workspace/orcas_proj/venv/lib/python3.12/site-packages/openai/resources/images.py:195, in Images.edit(self, image, prompt, mask, model, n, response_format, size, user, extra_headers, extra_query, extra_body, timeout)
    191 # It should be noted that the actual Content-Type header that will be
    192 # sent to the server will contain a `boundary` parameter, e.g.
    193 # multipart/form-data; boundary=---abc--
    194 extra_headers = {"Content-Type": "multipart/form-data", **(extra_headers or {})}
--> 195 return self._post(
    196     "/images/edits",
    197     body=maybe_transform(body, image_edit_params.ImageEditParams),
    198     files=files,
    199     options=make_request_options(
    200         extra_headers=extra_headers, extra_query=extra_query, extra_body=extra_body, timeout=timeout
    201     ),
    202     cast_to=ImagesResponse,
    203 )

File ~/workspace/orcas_proj/venv/lib/python3.12/site-packages/openai/_base_client.py:1276, in SyncAPIClient.post(self, path, cast_to, body, options, files, stream, stream_cls)
   1262 def post(
   1263     self,
   1264     path: str,
   (...)   1271     stream_cls: type[_StreamT] | None = None,
   1272 ) -> ResponseT | _StreamT:
   1273     opts = FinalRequestOptions.construct(
   1274         method="post", url=path, json_data=body, files=to_httpx_files(files), **options
   1275     )
-> 1276     return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))

File ~/workspace/orcas_proj/venv/lib/python3.12/site-packages/openai/_base_client.py:949, in SyncAPIClient.request(self, cast_to, options, remaining_retries, stream, stream_cls)
    946 else:
    947     retries_taken = 0
--> 949 return self._request(
    950     cast_to=cast_to,
    951     options=options,
    952     stream=stream,
    953     stream_cls=stream_cls,
    954     retries_taken=retries_taken,
    955 )

File ~/workspace/orcas_proj/venv/lib/python3.12/site-packages/openai/_base_client.py:1057, in SyncAPIClient._request(self, cast_to, options, retries_taken, stream, stream_cls)
   1054         err.response.read()
   1056     log.debug("Re-raising status error")
-> 1057     raise self._make_status_error_from_response(err.response) from None
   1059 return self._process_response(
   1060     cast_to=cast_to,
   1061     options=options,
   (...)   1065     retries_taken=retries_taken,
   1066 )

BadRequestError: Error code: 400 - {'error': {'message': "Invalid file 'image': unsupported mimetype ('application/octet-stream'). Supported file formats are 'image/png'.", 'type': 'invalid_request_error', 'param': 'image', 'code': 'unsupported_file_mimetype'}}

Endpoint doesn’t work? Rather, it is your implementation that doesn’t work.

Tell the AI that wrote your little code snippet that it will have to do better: it is a file that must be sent, not raw binary data, and it must have a filename attached so that the MIME-type discriminator can construct a well-formed multipart/form-data request, as the OpenAI SDK does when it employs the httpx library as its client and transport mechanism.

It might hit you back with exactly what you just told it the issue was.



Nee bother, pet, here’s what’s gannin’ on.

What the SDK is looking for

When the Python SDK sees a value for image= it checks:

  1. Is it a file‑like object?
     Something with a .read() method.
  2. Does that object have a .name attribute?
     The filename lets httpx guess the MIME type (via mimetypes.guess_type).
  3. If there’s no name, it falls back to application/octet-stream.

A plain bytes object (what you get from byte_stream.getvalue()) has neither a read() method nor a name, so it’s treated as raw bytes and lands in the request as application/octet-stream, which the Images endpoint quite rightly rejects.
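One way to see (and fix) that fallback from the caller’s side is to attach a filename to the in-memory bytes, for example by naming the BytesIO stream. The bytes and path below are stand-ins, and the guess shown is the same one httpx makes from a filename via the standard mimetypes module:

```python
import io
import mimetypes

binary_data = b"\x89PNG\r\n\x1a\n"  # stand-in for real PNG bytes

# A bare bytes object has no filename, so MIME guessing has nothing to
# work with and the upload falls back to application/octet-stream.
stream = io.BytesIO(binary_data)
stream.name = 'black_cat_rgba.png'  # attach a name (hypothetical path)

# Same guess httpx makes from the filename:
print(mimetypes.guess_type(stream.name)[0])  # image/png

# stream can now be passed as image=stream; alternatively the SDK's
# FileTypes union accepts an explicit (filename, content, content_type)
# tuple such as ('black_cat_rgba.png', binary_data, 'image/png').
```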


You also aren’t going to be able to “edit” a black cat that is in RGB. You must have an alpha channel of pure transparency for the AI to fill in, i.e. a 32-bit RGBA image where the transparent region acts as the mask.

Like, say, if your 32-bit RGBA cat was a bit too polydactyl for your tastes
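A minimal Pillow sketch of that conversion, using an in-memory stand-in for the cat image (size and transparent region are made up):

```python
import io
from PIL import Image, ImageDraw

# Stand-in for black_cat_rgb.png: a plain 24-bit RGB image.
rgb = Image.new('RGB', (256, 256), 'black')

# Convert to 32-bit RGBA so the image has an alpha channel at all.
rgba = rgb.convert('RGBA')

# Punch a fully transparent region; images.edit fills in only the
# pixels where alpha is zero.
draw = ImageDraw.Draw(rgba)
draw.ellipse((64, 64, 192, 192), fill=(0, 0, 0, 0))

# Serialise to PNG bytes, ready to hand to images.edit.
buffer = io.BytesIO()
rgba.save(buffer, format='PNG')
```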



It worked the following way.
Thank you very much.

Until now, bytes data was allowed, as in _types.py [line 45],
but it seems the specification has changed.

  • code
import openai

client = openai.OpenAI()
client.images.edit(
    image=open('unit_test/data_in/black_cat_rgba.png', 'rb'),
    # mask=open('unit_test/data_in/mask_circle.bmp', 'rb'),  # Optional
    model='dall-e-2',
    prompt='A cute baby sea otter wearing a beret'
)
  • result
ImagesResponse(created=1744962523, data=[Image(b64_json=None, revised_prompt=None, url='https://oaidalleapiprodscus.blob.core.windows.net/private/org-HwU......79a00BlxQ%3D')])
  • reference(Excerpt from _types.py)
if TYPE_CHECKING:
    Base64FileInput = Union[IO[bytes], PathLike[str]]
    FileContent = Union[IO[bytes], bytes, PathLike[str]]
else:
    Base64FileInput = Union[IO[bytes], PathLike]
    FileContent = Union[IO[bytes], bytes, PathLike]  # PathLike is not subscriptable in Python 3.8.
FileTypes = Union[
    # file (or bytes)
    FileContent,
    # (filename, file (or bytes))
    Tuple[Optional[str], FileContent],
    # (filename, file (or bytes), content_type)
    Tuple[Optional[str], FileContent, Optional[str]],
    # (filename, file (or bytes), content_type, headers)
    Tuple[Optional[str], FileContent, Optional[str], Mapping[str, str]],
]
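For illustration, a tiny stand-in normaliser (not the SDK’s own code, just a sketch of what multipart handling does with those FileTypes shapes) shows why a bare bytes value now ends up as octet-stream while a (filename, content) tuple does not:

```python
import mimetypes

def normalize(file):
    # Collapse the FileTypes shapes to (filename, content, content_type).
    # Hypothetical helper mimicking multipart handling, not SDK code.
    if isinstance(file, tuple):
        filename, content = file[0], file[1]
        content_type = file[2] if len(file) > 2 else None
    else:
        filename = getattr(file, 'name', None)
        content = file
        content_type = None
    if content_type is None and filename:
        content_type = mimetypes.guess_type(filename)[0]
    return filename, content, content_type or 'application/octet-stream'

# Bare bytes: no filename, so the octet-stream fallback kicks in.
print(normalize(b'...')[2])                          # application/octet-stream
# (filename, bytes) tuple: the name is enough to guess image/png.
print(normalize(('black_cat_rgba.png', b'...'))[2])  # image/png
```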


Hello,
I’m having a similar issue. They have definitely changed something (probably for security reasons), since I started getting the “unsupported mimetype ('application/octet-stream')” error overnight, whereas it had been working for months.

I still haven’t found a way to resolve this issue, even though I’m already sending a File object to the images.edit endpoint. The API doesn’t seem to recognize the MIME type of my File.

Perhaps it was done for “supporting something new” reasons.


Would you have any idea why this code works locally but still gets the “unsupported mimetype” error when trying to call the API from a cloud function?

const filename = 'myimage.png'; 
const mimeType = 'image/png';  
                          
const file = new File([buffer], filename, { type: mimeType });

const result = await openaiClient.images.edit({
  model: 'dall-e-2',
  image: file,
  prompt: prompt,
  n: 1,
  size: '1024x1024',
});

I reported this issue on GitHub, but it was closed as “not planned” and no action was taken.
Do I have to give up?
Could you please suggest a good way to do this?

https://github.com/openai/openai-python/issues/2341