Over the past week, we have noticed considerably worse outputs from the gpt-image-1 /edit endpoint when it comes to reproducing products with accurate labels/packaging.
Accuracy is also noticeably worse in the web app.
We have historical data across tons of prompts (all with the same structure) and the same products, and the outputs looked great - not 100% accurate, but good enough.
Then, about a week ago, output quality dropped sharply - the labels are now completely botched.
Has something changed, and is there an update feed we can follow?
Not sure what's going on, or whether anyone else has seen this.
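For context, here's a simplified sketch of the kind of call we're making, using the official OpenAI Python SDK - the filename and prompt below are placeholders, not our exact production values:

```python
import base64
from openai import OpenAI

client = OpenAI()

# Re-run the same historical prompt against the same product photo
# so the old and new outputs can be compared side by side.
with open("product_photo.png", "rb") as image_file:
    result = client.images.edit(
        model="gpt-image-1",
        image=image_file,
        prompt=(
            "Place this product on a clean white studio background, "
            "keeping the label text and packaging exactly as shown."
        ),
    )

# gpt-image-1 returns base64-encoded image data.
image_bytes = base64.b64decode(result.data[0].b64_json)
with open("output.png", "wb") as f:
    f.write(image_bytes)
```

Running the same prompt/image pair that produced good results a month ago now gives clearly degraded label text, which is what makes us suspect a server-side change rather than anything on our end.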
Thanks for posting this - you're definitely not alone. I've seen similar posts lately, especially about GPT-4.1 and image-related outputs dropping in accuracy.
If your prompts and products haven’t changed, and the results suddenly got worse across multiple examples, it really does sound like something changed server-side.
I haven’t seen any official update feed mentioning changes to gpt-image-1 or the /edit endpoint, but a few users seem to be noticing the same pattern.
It'd be helpful if OpenAI could confirm whether there's been a quiet update, or perhaps a regression in image fidelity or label reproduction.
Let’s keep each other posted if anyone finds more info.