Has GPT Image 1 quality declined?

Hi all,

Just thought I’d ask. I’ve been using GPT Image 1 in my product for a few months now. All of a sudden the outputs (even with Input Fidelity = High) are extremely bad and completely off. I use it for product photography, and there are blatant mistakes on the product labels.

This was never the case until a few days ago. I’ve kept it on High Output Quality and High Input Fidelity, and it’s still really bad.

Have they nerfed the product or pushed an update that has made it this bad?
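For reference, this is roughly how such a request is configured. A minimal sketch only: the parameter names (`quality`, `input_fidelity`) follow the gpt-image-1 `images.edit` API, while the prompt and file path are placeholders, and the actual API call is left commented out since it needs credentials.

```python
# Sketch of a gpt-image-1 edit request with high output quality and
# high input fidelity, as described in the post above.
# The prompt and image path are placeholders.

def build_edit_params(prompt: str) -> dict:
    # input_fidelity="high" is meant to preserve fine detail from the
    # source image (e.g. label text); quality="high" sets output quality.
    return {
        "model": "gpt-image-1",
        "prompt": prompt,
        "quality": "high",
        "input_fidelity": "high",
    }

params = build_edit_params(
    "Place the product on a marble countertop, "
    "keeping the label text exactly as-is."
)

# Actual call (not run here; requires the openai package and an API key):
# from openai import OpenAI
# client = OpenAI()
# with open("product.png", "rb") as f:
#     result = client.images.edit(image=f, **params)
```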


I’ve seen the exact same thing over the past 3-5 days. The team that reviews the outputs has noticed the clear decline in quality as well, especially on product labels.

We’ve also tested high input fidelity and such.

When testing the same prompts in 4o with JSON prompting, the label comes out almost perfect. But via the gpt-image-1 API, that’s not the case.

Wonder what happened on the API.
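For anyone wondering what “JSON prompting” can look like here, a hedged sketch: the idea is to send the prompt as a structured JSON object that spells out the label text explicitly. The field names below are my own illustrative convention, not an official OpenAI schema, and the product details are made up.

```python
import json

# Hypothetical JSON prompt structure (field names are illustrative only).
# Spelling out the label text explicitly, field by field, seems to help
# the model reproduce it rather than hallucinate new text.
prompt_spec = {
    "scene": "studio product shot, soft lighting, white background",
    "subject": "shampoo bottle",
    "label": {
        "brand": "ACME",
        "text": ["Daily Care", "500 ml"],
        "instruction": "reproduce label text exactly, no substitutions",
    },
}

# The serialized JSON is passed as the text prompt itself.
prompt = json.dumps(prompt_spec, indent=2)
```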


Yes, it’s been really bad on product labels. Never knew about the JSON thing, thanks! If you don’t mind, could you share the JSON schema you use?

Yes, it’s become almost completely unusable on the API side. I had switched an entire project over to ChatGPT when they released the 4o model, and now that we’re getting to final rendering after all our tests, the model has been completely nerfed and OpenAI has been entirely silent about it. How are they still charging money for this? I’m really irritated and don’t know if I should wait for an update or move my project forward. Ugh.


Yeah, the pixel-by-pixel photo restoration my business is built around just broke and hasn’t recovered :frowning:

Has anyone managed to overcome the decline in quality that began in recent weeks?