Yes, the gpt-image-2 model on the API supports the mask field. The model doesn't have to obey it, though, and often doesn't: the mask is merely vision-based input context to a transformer AI model, not a hard constraint.
It also doesn't help the diagnosis that you don't show the actual args being unpacked as parameters in your request.
Let's edit a picture with the prompt, “Remove and infill the kitties that are marked.”
The ideal behavior would be to leave untouched everything outside the transparent region of the separate mask layer.
gpt-image-1-mini as a baseline: the mask is disobeyed, and a completely damaged cat is produced, deterministically, outside the mask area, with the image reframed despite a 1:1 pixel mapping between the sent image, the mask, and the output:
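Before blaming the model, it's worth pre-validating the mask locally: the images edit endpoint expects a PNG whose alpha channel marks the editable area and whose dimensions match the input image. Here's a minimal stdlib-only sketch of that check (the helper names and the in-memory test PNG are mine, not part of any SDK):

```python
# Sketch: pre-validate a mask PNG against its image before uploading.
# Assumption: the mask must be an RGBA PNG with the same dimensions as
# the image, with transparency marking the region the model may edit.
import struct
import zlib

def png_info(data: bytes):
    """Read (width, height, color_type) from a PNG's IHDR chunk."""
    assert data[:8] == b"\x89PNG\r\n\x1a\n", "not a PNG"
    # IHDR is always the first chunk: 4-byte length, b"IHDR", 13-byte payload.
    w, h = struct.unpack(">II", data[16:24])
    color_type = data[25]
    return w, h, color_type

def make_test_png(w: int, h: int) -> bytes:
    """Build a minimal fully transparent RGBA PNG in memory (for testing)."""
    def chunk(tag, payload):
        return (struct.pack(">I", len(payload)) + tag + payload
                + struct.pack(">I", zlib.crc32(tag + payload)))
    ihdr = struct.pack(">IIBBBBB", w, h, 8, 6, 0, 0, 0)  # 8-bit, color type 6 = RGBA
    raw = b"".join(b"\x00" + b"\x00\x00\x00\x00" * w for _ in range(h))
    return (b"\x89PNG\r\n\x1a\n" + chunk(b"IHDR", ihdr)
            + chunk(b"IDAT", zlib.compress(raw)) + chunk(b"IEND", b""))

def mask_matches(image_png: bytes, mask_png: bytes) -> bool:
    iw, ih, _ = png_info(image_png)
    mw, mh, mcolor = png_info(mask_png)
    # color type 6 = truecolor with alpha; alpha is what defines the mask
    return (iw, ih) == (mw, mh) and mcolor == 6
```

This rules out the trivial failure mode (mismatched dimensions or a mask with no alpha channel) before any tokens are spent.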
A non-masked yarn thread that I missed remains, so there is some understanding of the masked area.
Now gpt-image-2 goes at our cats to remove the ones on furniture. Will it keep the mid-air unmarked yarn thread? Will it obey and understand “that are marked”? Will it alter anything outside the mask, or even work at all?
Unlike dall-e-2, there's “vision” of the mask contents, so the ball of yarn can remain despite being within the mask.
High accuracy in retaining image details outside of the mask.
The mask parameter works in the form request being made.
Conclusion: No masking issue discovered.
Solution: Update your openai API software library so it can send unknown models and unknown parameters instead of having them rejected by local validation code. Then move away from that unwanted parameter-gating entirely by writing your own API-calling stack.
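A sketch of what “your own API-calling stack” can look like: build the multipart/form-data body for POST /v1/images/edits by hand, so no local SDK code can reject a model name or parameter it doesn't recognize. The endpoint path and field names follow the public images edit API; the placeholder PNG bytes are obviously stand-ins:

```python
# Sketch: hand-rolled multipart/form-data for the images edit endpoint,
# bypassing SDK-side parameter validation. Field names ("model", "prompt",
# "image", "mask") match the documented images API; everything else here
# is illustrative.
import uuid

def build_multipart(fields: dict, files: dict) -> tuple:
    """Return (body, content_type) for a multipart/form-data request."""
    boundary = uuid.uuid4().hex
    parts = []
    for name, value in fields.items():
        parts.append(
            f"--{boundary}\r\n"
            f'Content-Disposition: form-data; name="{name}"\r\n\r\n'
            f"{value}\r\n".encode()
        )
    for name, (filename, data) in files.items():
        parts.append(
            f"--{boundary}\r\n"
            f'Content-Disposition: form-data; name="{name}"; filename="{filename}"\r\n'
            f"Content-Type: image/png\r\n\r\n".encode() + data + b"\r\n"
        )
    parts.append(f"--{boundary}--\r\n".encode())
    return b"".join(parts), f"multipart/form-data; boundary={boundary}"

body, ctype = build_multipart(
    {"model": "gpt-image-2",
     "prompt": "Remove and infill the kitties that are marked."},
    {"image": ("cats.png", b"\x89PNG..."),   # placeholder bytes
     "mask": ("mask.png", b"\x89PNG...")},
)
# To send: urllib.request.Request("https://api.openai.com/v1/images/edits",
#   data=body, method="POST",
#   headers={"Authorization": f"Bearer {api_key}", "Content-Type": ctype})
```

Since the server, not your local library, is now the only validator, unknown models and unknown parameters pass straight through and you get the API's own error (or success) back.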
Also noteworthy with this model: “input_fidelity”: “none” (and indeed the whole parameter) is not supported, whereas gpt-image-1.5 makes you pay for high regardless of the setting it accepted. Sending the field produces an API error.
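If you do roll your own stack, you'll want to strip such fields per model before building the request. A tiny sketch; the allow/deny table below is my assumption from the error observed above, not published documentation:

```python
# Sketch: drop request parameters a given model is known to reject.
# The UNSUPPORTED table is an assumption inferred from observed API
# errors, not an official compatibility matrix.
UNSUPPORTED = {
    "gpt-image-2": {"input_fidelity"},  # field rejected with an API error
}

def strip_unsupported(model: str, params: dict) -> dict:
    drop = UNSUPPORTED.get(model, set())
    return {k: v for k, v in params.items() if k not in drop}

params = {"model": "gpt-image-2", "prompt": "...", "input_fidelity": "high"}
clean = strip_unsupported("gpt-image-2", params)
# clean keeps model and prompt but no longer contains "input_fidelity"
```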