gpt-image-1 inpainting example is not working

https://platform.openai.com/docs/guides/image-generation#edit-an-image-using-a-mask-inpainting

The inpainting example for the new gpt-image-1 model doesn’t seem to work: it always changes the entire image, as if the mask were not provided.

From the three example images (input, mask, output) you can see that even the documentation’s own output is incorrect: the entire image was changed. It only looks similar to the input image, but it’s a different sunlit lounge.
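For reference, the documented inpainting call can be sketched as below. The helper name `build_inpaint_request` and the file names are illustrative, not from the docs; per the documentation, the mask should be a PNG with an alpha channel, the same size as the base image, where transparent pixels mark the region to repaint. If the bug described above is real, gpt-image-1 regenerates the whole scene anyway.

```python
def build_inpaint_request(image_path: str, mask_path: str, prompt: str) -> dict:
    """Assemble the form fields for an images/edits inpainting call.

    Transparent pixels in the mask PNG are the region the model should
    repaint; everything else should be left untouched.
    """
    return {
        "model": "gpt-image-1",
        "image": image_path,  # base image, e.g. the sunlit lounge
        "mask": mask_path,    # same dimensions as the base image
        "prompt": prompt,
    }

params = build_inpaint_request(
    "sunlit_lounge.png",
    "mask.png",
    "A sunlit indoor lounge area with a pool containing a flamingo",
)
# With the official SDK this maps onto client.images.edit(...), passing
# the image and mask as opened binary files instead of path strings.
```

Switching `"model"` to `"dall-e-2"` is the same call shape; only the model behaves differently, as discussed below in the thread.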


The prior conclusion — that what you get is similar to inpainting an existing gpt-image-1/gpt-4o image in ChatGPT — is backed by further evidence like yours.

[Image: a sunlit indoor space with a cozy seating area, wicker furniture, and a small pool, surrounded by large windows and lush plants. (Captioned by AI)]

The gpt-image-1 example, also used in the API reference, shows that no mask or base image is needed to remix several images into a new one.
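That multi-image “remix” usage can be sketched as below — again, `build_remix_request` is a hypothetical helper, but the shape (a list of input images, a prompt, and no mask at all) matches the gpt-image-1 edits example:

```python
def build_remix_request(image_paths: list[str], prompt: str) -> dict:
    """Assemble form fields for a mask-free multi-image edit.

    gpt-image-1 accepts several input images and composes them into one
    new picture guided by the prompt; no mask field is involved.
    """
    return {
        "model": "gpt-image-1",
        "image": image_paths,  # multiple inputs, not a single base image
        "prompt": prompt,
    }

params = build_remix_request(
    ["lounge.png", "flamingo.png"],
    "Combine these into one scene",
)
```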

Unaltered original? DALL-E 2 gives near pixel accuracy, with only minor glitches sometimes around the mask area. Grabbing their picture just now.
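One way to quantify that “near pixel accuracy” claim — and to demonstrate the gpt-image-1 failure — is to measure what fraction of pixels *outside* the masked region survive the edit. A minimal sketch, operating on flat pixel lists (the helper and tolerance are my own, not from the thread):

```python
def unmasked_pixels_preserved(original, edited, mask_alpha, tol=8):
    """Fraction of keep-region pixels unchanged (within a tolerance).

    original/edited: flat lists of (r, g, b) tuples of equal length.
    mask_alpha: flat list of mask alpha values; 0 means "repaint here",
    anything else means "leave this pixel alone".
    """
    kept = total = 0
    for o, e, a in zip(original, edited, mask_alpha):
        if a == 0:
            continue  # inside the editable region; changes are expected
        total += 1
        if all(abs(oc - ec) <= tol for oc, ec in zip(o, e)):
            kept += 1
    return kept / total if total else 1.0
```

A dall-e-2 edit should score close to 1.0 here; a gpt-image-1 “edit” that regenerates the whole scene would score near 0, even though the result looks superficially similar.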

And with DALL-E 2 just as broken as it has been for going on a month:

Perhaps okay for infill, as long as you don’t use the polar bear prompt, which now produces an unrecognizable blob.
