Have there been updates to gpt-image-1 and the /edit endpoint in the past 7 days?

Over the past week, we have noticed considerably worse outputs from the gpt-image-1 /edit endpoint when it comes to accurately reproducing products and their labels/packaging.

Even in the web app, accuracy is way down.

We have historical data across tons of prompts (all with the same structure) and the same products, and the outputs looked great - not 100% accurate, but good enough.

Then, about a week ago, output quality dropped way down - the labels are completely botched.

Has something changed / are there update feeds we can follow?

Not sure what's going on, or if anyone else has seen this.


Even lower than the declining quality reported 10 days ago?

OpenAI never announces when they are serving a degraded-quality model under the same name at the same pricing (see DALL-E 2 these days…)

input_fidelity now: pay an extra $0.04 or $0.06 per input image to get the better-quality reproduction you might have expected by default.

Yes.

And yes, we've tried input_fidelity: high, quality: high, all that - still bad outputs in this regard.
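
For reference, this is roughly the call we've been testing with (a minimal sketch using the OpenAI Python SDK; the file name and prompt are placeholders, but the parameters match what we've tried):

```python
import base64

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Placeholder image and prompt; input_fidelity/quality set to their highest options.
result = client.images.edit(
    model="gpt-image-1",
    image=open("product_photo.png", "rb"),
    prompt=(
        "Place this product on a marble countertop. "
        "Keep the label text and logo exactly as in the input image."
    ),
    input_fidelity="high",  # the extra-cost option meant to preserve labels/logos
    quality="high",
)

# gpt-image-1 returns base64-encoded image data
with open("edited_product.png", "wb") as f:
    f.write(base64.b64decode(result.data[0].b64_json))
```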

Thanks for posting this - you're definitely not alone. I've seen similar reports lately, especially about GPT-4.1 and image-related outputs dropping in accuracy.

If your prompts and products haven’t changed, and the results suddenly got worse across multiple examples, it really does sound like something changed server-side.

I haven’t seen any official update feed mentioning changes to gpt-image-1 or the /edit endpoint, but a few users seem to be noticing the same pattern.

It’d be helpful if OpenAI could confirm whether there’s been a quiet update, or maybe some regression in image fidelity or label control.
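
In the meantime, one rough way to track this objectively (a hypothetical harness, not an official tool): re-run a fixed prompt/product set on a schedule and compare each output against a saved known-good baseline with a perceptual hash, so a server-side change shows up as a jump in hash distance. The directory names and threshold below are assumptions you'd tune to your own data:

```python
from pathlib import Path

import imagehash  # pip install imagehash pillow
from PIL import Image

BASELINE_DIR = Path("baselines")  # known-good outputs saved before the drop
CURRENT_DIR = Path("current")     # fresh outputs from re-running the same prompts
THRESHOLD = 12                    # Hamming-distance cutoff; calibrate on your data

def compare_runs() -> None:
    for baseline_path in sorted(BASELINE_DIR.glob("*.png")):
        current_path = CURRENT_DIR / baseline_path.name
        if not current_path.exists():
            print(f"{baseline_path.name}: missing current output")
            continue
        # Perceptual hashes tolerate small pixel noise but shift when the
        # image structure (e.g. label layout) changes substantially.
        distance = imagehash.phash(Image.open(baseline_path)) - imagehash.phash(
            Image.open(current_path)
        )
        flag = "REGRESSION?" if distance > THRESHOLD else "ok"
        print(f"{baseline_path.name}: distance={distance} {flag}")

if __name__ == "__main__":
    compare_runs()
```

A pHash won't catch garbled label text specifically, but a sudden distance jump across many prompts at once is a decent signal that something changed upstream rather than in any one prompt.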

Let’s keep each other posted if anyone finds more info.
