I have noticed a significant increase in artifacts in gpt-image-1.5-2025-12-16 when using the edits endpoint. To investigate, I re-ran the same prompt and rerendered the images I had rendered in December, only to find that the quality has decreased considerably. Specifically, during editing there are numerous artifacts, the text on packaging is no longer rendered correctly, and the dimensions are distorted.
Has there been a change that has reduced the model’s performance?
There has not been an intentional downgrade of image quality, but there have been changes in the editing pipeline and safety constraints that can negatively affect certain high-precision edit use cases, especially ones like yours.
Why December renders look better
Your December images benefited from:
Less aggressive edit safety
Higher tolerance for text fidelity
A closer “copy-then-modify” behavior during edits
Re-running the same prompt today does not recreate the same internal conditions.
JANUARY RENDER - EXACT SAME PROMPT - THE IMAGE IS GARBAGE - NO COMMERCIAL VALUE AT ALL…
Remote controls: artifacts.
Feet of the coffee table: a total mess.
Original table had four wooden legs → the edited version gets a single black leg in the center.
Pillows on the couch: awful.
Zero reliability, zero commercial value.
We have dozens of other examples comparing December renderings to those created yesterday (17 January). The results are awful, and we can provide them on request.
We trusted the model and OpenAI and built an entire application on top of it. However, OpenAI changed the model parameters without any notice, and the results are no longer reliable. It was a mistake to invest time and money in developing this product on this company's platform. This lack of transparency could damage the company's reputation and credibility. These are serious matters, and I cannot imagine how they could happen; these are not the tactics of a serious company.
Real answer, not from AI: this topic is in the wrong category.
ChatGPT is a consumer product. OpenAI doesn't name “gpt-image-1.5” there; you get “make pictures with ChatGPT” through whatever backend OpenAI chooses to deliver.
Are you pointlessly asking ChatGPT? That's what your screenshot shows. Are you also requesting the images in ChatGPT, or are you truly making API calls, specifying the model, the size, and the output quality parameter?
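For reference, “truly making API calls” means pinning every one of those parameters in the request rather than relying on defaults. A minimal Python sketch that only assembles the form fields for a POST to v1/images/edits (field names per the OpenAI Images API; the model string is the one named in this thread and may not match what your account exposes):

```python
# Assemble the non-file form fields for an image-edit request explicitly,
# so there is no ambiguity about which model/size/quality is in effect.
# This does NOT send a request; attach your image file and API key to
# actually call the endpoint.

def build_edit_fields(model: str, prompt: str, size: str, quality: str) -> dict:
    """Return the non-file form fields for a POST to /v1/images/edits."""
    allowed_quality = {"low", "medium", "high", "auto"}
    if quality not in allowed_quality:
        raise ValueError(f"quality must be one of {sorted(allowed_quality)}")
    return {"model": model, "prompt": prompt, "size": size, "quality": quality}

fields = build_edit_fields(
    model="gpt-image-1.5",   # pin the model instead of relying on a default
    prompt="Recreate the image: increase the brightness by 3%.",
    size="1024x1024",
    quality="high",          # be explicit; defaults can change underneath you
)
```

If your December runs omitted any of these fields, a changed server-side default alone could explain part of the drift.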
The API should be relatively stable, but with the image models, OpenAI has a three-year pattern of modifying what the same model name delivers, quality included.
You only need to send a request made three years ago to DALL-E 2 and see junk come out of that model today, so there’s definitely not a hands-off policy on the API either.
You will notice one peculiar artifact with gpt-image-1.5 on the edits endpoint: the more you iterate, the more the image devolves into a blotchy mess of globs, like someone dabbing a wet sponge all over the image. It is no longer “watch the person change race over 200 trials”; it is “watch the literal watermarks form a fake texture all over the image.”
You should realize that image edits are not perfect and are heavily influenced by the prompt - and gpt-image-1.5 is very good with following prompt instructions.
When I refer to parameters, I mean internal functions rather than the settings we can adjust. The prompt we are using is 638 words, so I can't post it here. We developed it step by step, testing it on many photos of Airbnb listings until we achieved a perfect result, based on the December renderings.
Thank you very much for the image and the prompt you sent. I used your prompt with the edit endpoint, which gave me the following result on the low quality setting; it again has some artifacts, for example on the pillows. When I increase the setting to medium, artifacts begin to appear. Which endpoint did you use: v1/images/generations or v1/images/edits?
Result:
Recreate the image: Increase the brightness by 3%. @ v1/images/edits - low
Consider: use "quality":"low" and you receive roughly 400 “words” of semantic visual information generated by the AI model, which has to encompass the entire image output. For 3 MB of image pixels, that is going to be lossy.
However, in another part of the docs (docs/guides/image-generation?api=image#edit-images), it says:
Input fidelity
GPT Image models (gpt-image-1.5, gpt-image-1, and gpt-image-1-mini) support high input fidelity, which allows you to better preserve details from the input images in the output. This is especially useful when using images that contain elements like faces or logos that require accurate preservation in the generated image.
Even gpt-image-1-mini
The BUG is in the documentation! How do I report this?
PS: It seems that in December, when we were running the model, input fidelity wasn't yet supported (per the API docs) and was effectively high by default. Now that it is supported, it may default to low when omitted.
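If that guess is right, the defensive fix is to stop depending on the server-side default and pass input_fidelity explicitly on every edit request. A hedged sketch, again only assembling the request fields (the input_fidelity parameter and its "low"/"high" values come from the documentation page quoted above; whether December's behavior actually matched "high" is speculation):

```python
# Pin input_fidelity instead of trusting a default that may have changed.
# Field names follow the v1/images/edits endpoint; the model name is the
# one discussed in this thread. No request is sent here.

def edit_fields(prompt: str, fidelity: str = "high") -> dict:
    """Form fields for POST /v1/images/edits with input_fidelity pinned."""
    if fidelity not in {"low", "high"}:
        raise ValueError("input_fidelity must be 'low' or 'high'")
    return {
        "model": "gpt-image-1.5",
        "prompt": prompt,
        "quality": "high",
        "input_fidelity": fidelity,  # do not omit: the default may have flipped
    }

fields = edit_fields("Recreate the image: increase the brightness by 3%.")
```

Rerunning one of the December prompts with input_fidelity pinned to "high" would be a quick way to test whether the default change explains the regression.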