I’m testing the mask feature and it has absolutely no impact. For example, if an image contains two pieces of text and you select (mask) one of them and say “remove this text”, both are removed. This clearly shows the mask is not taken into account. So why does the OpenAI documentation talk about the mask at all? I’m very confused!
Hi @klaroy
The issue you are reporting is likely different from the discussion in this thread. In your case, the prompt provided to the model likely does not specifically instruct what needs to be removed. If you could open a separate thread for this issue, we would appreciate it - thank you!
I would say this is pretty much exactly the same issue as the rest of the thread. Any normal user would assume that if you mask something and say “remove this text”, the masked text should be removed and nothing else. Why even offer masking if it doesn’t do that?
@OpenAI_Support you should update your documentation to clarify that the mask is “soft”. It currently states “The transparent areas of the mask will be replaced, while the filled areas will be left unchanged”, which leads users to assume it behaves the same as masks in DALL-E 2 or other competing models that use masks.
Hi @17jmumford, you are right, the behavior is different and we will update the docs to make it clearer!
Thanks for updating the documentation! It will help many users in the future.
Thanks for suggesting the workaround @OpenAI_Support. It is helpful while we await gpt-image-1 in-painting. Any chance you could raise the dall-e-2 client.images.edit prompt limit above 1,000 characters? That’s a very small window to describe edits and integrate them with the existing image.
Hi, we will pass on this feedback to the Engineering team for consideration. Thank you!
The “complete” workaround would be to bring DALL-E 3 image editing (previously seen only in ChatGPT) to the edits API.
It has been more than a month now - any update on this? The mask is completely useless right now.
I’m currently working with the OpenAI client SDK to edit an image. My goal is to edit only a specific part of the image, so I’m providing a mask as required by OpenAI, in the form of an alpha-channel mask.
However, despite following the instructions, the entire input image is regenerated rather than just the masked area being inpainted.
Could anyone please guide me on this? I’d really appreciate any help or insights.
Thank you!
```python
return openai.images.edit(
    model="gpt-image-1",
    prompt=prompt,
    mask=mask,
    image=images,
    n=1,
)
```
As OpenAI Support confirmed in message #37, gpt-image-1 cannot use the mask correctly; it regenerates the whole image.
And per message #41: we need to wait for an update. There is no solution yet.
Can you then refund the almost $1,000 I spent testing the gpt-image-1 API? If not, this isn’t an acceptable response.
Hi,
I found a workaround, but it’s far from perfect. Sharing it in case it’s useful to someone:
1. Use a small mask, just the size of what you want to remove.
2. With a Python script (or manually), crop your image around the mask, keeping only the minimum necessary context.
3. Send this cropped image to gpt-image-1.
4. It will usually modify more than the masked area, but the borders of the crop stay about 99% the same, so you can blend it back in.
5. Reassemble the edited crop inside the full image.
It works quite nicely for drawings and fictional images; I’ve never tried it on photos or photorealistic material.
Hope it helps those waiting for in-painting.
BR
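The crop-edit-reassemble workaround above can be sketched with Pillow. This is an illustration, not the poster’s actual script: `edit_fn` is a placeholder standing in for the gpt-image-1 edit call, and the padding value is an assumption to be tuned.

```python
from PIL import Image

PAD = 32  # extra context kept around the masked region (assumed value)

def crop_edit_reassemble(image, mask_box, edit_fn):
    """Crop a padded region around mask_box, edit it, paste it back.

    `mask_box` is (left, upper, right, lower). `edit_fn` stands in for
    the gpt-image-1 edit call and must return an image of the same size
    as the crop it receives.
    """
    left, upper, right, lower = mask_box
    # Expand the crop so the model keeps some surrounding context.
    box = (max(left - PAD, 0), max(upper - PAD, 0),
           min(right + PAD, image.width), min(lower + PAD, image.height))
    region = image.crop(box)
    edited = edit_fn(region)  # e.g. send `region` to images.edit
    result = image.copy()
    # Reassemble: paste the (resized-to-fit) edited crop back in place.
    result.paste(edited.resize(region.size), box[:2])
    return result

# Usage with a no-op "edit" for illustration:
img = Image.new("RGB", (256, 256), "white")
out = crop_edit_reassemble(img, (100, 100, 150, 150), lambda r: r)
```

The idea is that even if the model repaints the whole crop, only a small patch of the full image changes, and the mostly-unchanged crop borders hide the seam.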
It looks like this issue is still not fixed.
Four months later, still no solution or correction. What gpt-image-1 does here has nothing to do with inpainting.
@OpenAI_Support - Any update on this?
@OpenAI_Support Any update? We are still waiting for your response. Please let us know when we can expect this issue to be fixed.
The impression I got from the documentation is that true masked inpainting is supported, but the mask has no impact on the image, and OpenAI has confirmed it is not working. Given the timeline, I think this “feature” should be deprecated until it is functional.
In the meantime, do any other AI services handle in-painting correctly with decent results?