Is this being rolled out in a staggered manner in the US? Hitting the models endpoint only shows gpt-image-1 available, and trying to use gpt-image-1.5 with the v1/images/generations endpoint just results in:
Supported values are: 'gpt-image-1', 'gpt-image-1-mini', 'dall-e-2', and 'dall-e-3'.
An error occurred while processing your request. You can retry your request, or contact us through our help center at help.openai.com if the error persists. Please include the request ID req_09zdklasdlfaoipsdf in your message.
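For anyone who wants to reproduce the check, here is a minimal sketch with the Python SDK. The model id gpt-image-1.5 is an assumption inferred from the announcement naming; the actual id may differ once it appears:

```python
from openai import OpenAI

client = OpenAI()

# See which image models this API key can actually use
image_models = [m.id for m in client.models.list() if "image" in m.id]
print(image_models)  # currently only shows gpt-image-1 for me

# This raises the invalid-value error quoted above, listing the supported models
result = client.images.generate(
    model="gpt-image-1.5",  # assumption: id taken from the announcement name
    prompt="a simple test image",
)
```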
“Enterprises and startups across industries, including creative tools, e-commerce, marketing software, and more are already using GPT Image 1.5.” (a.k.a. now you, a developer held in lower esteem, can do so also.)
It’s stronger at image preservation and editing than GPT Image 1.
You’ll see more consistent preservation of branded logos and key visuals across edits, making it well suited for marketing and brand work like graphics and logo creation, and for ecommerce teams generating full product image catalogs (variants, scenes, and angles) from a single-source image.
Image inputs and outputs are now 20% cheaper in GPT Image 1.5 as compared to GPT Image 1, so you can generate and iterate on more images with the same budget.
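Worth spelling out the arithmetic: 20% cheaper per image means the same budget buys 1/0.8 = 1.25× as many images, i.e. 25% more generations or edit iterations for the same spend.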
Let’s test it out as an “edits” model: can it outfill a mask in a way that stays compatible with the original image, without altering the unmasked areas (like DALL-E 2 could do)? That is what has been awaited for a long time.
1. Mask area
By resizing and repositioning an input in my utility built for DALL-E 2:
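For reference, this is roughly the kind of call such a utility makes, a minimal sketch assuming the Python SDK and the gpt-image-1.5 model id from above (file names and canvas geometry are made up). The transparent alpha region is what the model is asked to fill, DALL-E 2 style; whether the new model respects it the same way is exactly what’s being tested:

```python
from io import BytesIO
from PIL import Image
from openai import OpenAI

client = OpenAI()

# Downscale the source and paste it onto a larger transparent canvas;
# fully transparent pixels are the region the model should outfill,
# opaque pixels are the area it should leave untouched.
src = Image.open("logo_shot.png").convert("RGBA")       # hypothetical input
canvas = Image.new("RGBA", (1024, 1024), (0, 0, 0, 0))
canvas.paste(src.resize((512, 512)), (256, 384))        # resize + reposition

buf = BytesIO()
canvas.save(buf, format="PNG")
buf.seek(0)
buf.name = "canvas.png"  # lets the SDK infer the MIME type

result = client.images.edit(
    model="gpt-image-1.5",  # assumption: same id as above
    image=buf,              # alpha channel doubles as the mask (an explicit
                            # mask=... file can be passed instead)
    prompt="extend the scene around the product; do not change the product",
    size="1024x1024",
)
```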
Yeah… completely regenerating the logo nullifies it…
But the claim was ‘more consistent preservation,’ which maybe we read differently.
My testing vector now starts with this question:
Does it handle logos differently when they are specified, or is the claim just about more consistent adherence to the reference input? I don’t think they actually claim not to regenerate…?