The inpainting example for the new openai-image model doesn't seem to be working: it always changes the entire image, as if the mask were not provided.
From the three example images (input, mask, output) you can see that even there the output is not correct: the entire image was changed. It only looks similar to the input, but it's a different sunlit lounge.
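For reference, here's how I understand the mask is supposed to work with the edits endpoint: fully transparent pixels mark the region the model may repaint, and opaque pixels should be preserved. A minimal sketch below builds such a mask with Pillow; the file names, prompt, and the commented-out API call are placeholders, not from the docs example.

```python
# Sketch of building an inpainting mask: alpha 0 = "edit here",
# alpha 255 = "keep this pixel". Assumes square input images.
from PIL import Image

def make_mask(size, box):
    """Return an RGBA mask, fully opaque except a transparent
    rectangle `box` (left, top, right, bottom) where inpainting
    is allowed."""
    mask = Image.new("RGBA", size, (0, 0, 0, 255))  # opaque = preserve
    hole = Image.new("RGBA", (box[2] - box[0], box[3] - box[1]), (0, 0, 0, 0))
    mask.paste(hole, (box[0], box[1]))              # transparent = repaint
    return mask

mask = make_mask((1024, 1024), (256, 256, 768, 768))
mask.save("mask.png")

# The actual edit call would then look roughly like this
# (untested sketch; needs the openai SDK and OPENAI_API_KEY):
# from openai import OpenAI
# client = OpenAI()
# result = client.images.edit(
#     model="gpt-image-1",
#     image=open("lounge.png", "rb"),
#     mask=open("mask.png", "rb"),
#     prompt="add a flamingo to the pool",
# )
```

If gpt-image-1 were honoring the mask, everything outside the transparent rectangle should come back pixel-identical (or very close), which is not what the example output shows.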
Evidence like yours backs up the prior conclusion: what you get is similar to inpainting an existing image-1/gpt-4o image in ChatGPT.
The gpt-image-1 example also used in the API reference shows that neither a mask nor a base image is needed to remix several images into a new one.
Unaltered original? dall-e-2 gives near pixel accuracy, with only occasional minor glitches around the mask area. Grabbing their picture just now.
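That "near pixel accuracy outside the mask" claim is easy to check numerically. A sketch, using synthetic images in place of real API output (the function name and the demo data are mine): count how many pixels that the mask said to preserve actually differ between input and output. For dall-e-2 this should be near zero; a model that ignores the mask will show differences everywhere.

```python
# Measure the fraction of "preserve" pixels (opaque in the mask)
# that changed between the original and the returned edit.
import numpy as np
from PIL import Image

def changed_fraction(original, output, mask):
    """Fraction of pixels outside the editable (transparent) mask
    region that differ between original and output."""
    a = np.asarray(original.convert("RGB"), dtype=np.int16)
    b = np.asarray(output.convert("RGB"), dtype=np.int16)
    alpha = np.asarray(mask.convert("RGBA"))[..., 3]
    keep = alpha == 255                       # pixels that should survive
    diff = np.abs(a - b).sum(axis=-1) > 0     # any channel changed at all
    return float(diff[keep].mean())

# Demo: an "edit" that only touches the transparent hole -> 0.0
orig = Image.new("RGB", (64, 64), (100, 150, 200))
out = orig.copy()
out.paste(Image.new("RGB", (16, 16), (255, 0, 0)), (24, 24))
hole_mask = Image.new("RGBA", (64, 64), (0, 0, 0, 255))
hole_mask.paste(Image.new("RGBA", (16, 16), (0, 0, 0, 0)), (24, 24))
print(changed_fraction(orig, out, hole_mask))  # → 0.0
```

Running this on real input/output pairs would give a concrete number to compare the two models: dall-e-2 near 0.0, and the gpt-image-1 example presumably close to 1.0.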