Image generation rate-limited even after failed prompts – no way to give feedback

Hi all,

I’m using ChatGPT Plus and frequently work with complex, story-driven image prompts using the DALL·E image feature. I’ve noticed that even when image generation fails (due to internal errors or content policy blocks), these failed attempts still count against the rate limit.

This makes it very difficult to iterate on visual storytelling, especially when you’re trying to work carefully and precisely.

To make matters worse, I’ve found no working way to give feedback:

  • The thumbs-up/down options are gone.
  • The help center leads nowhere for this type of usage feedback.
  • I even posted an issue on GitHub – it was immediately closed with “not the right place”.

So my questions:

  • Is there any proper way to report this?
  • Can OpenAI differentiate between successful and failed image generations?
  • Will there be more flexibility for creative/professional use?
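On the second question: a quota that only charges successful generations is straightforward to implement. Below is a minimal, purely illustrative Python sketch of that behavior — none of these names reflect OpenAI’s actual rate-limiting code; it just shows the mechanism the questions above are asking for (failed attempts not counting against the window).

```python
from dataclasses import dataclass, field
import time

@dataclass
class GenerationQuota:
    """Hypothetical quota counter that only charges successful generations.

    This is NOT OpenAI's implementation -- just a sketch of the requested
    behavior: errors and policy blocks cost nothing against the limit.
    """
    limit: int                       # images allowed per rolling window
    window_s: float = 3600.0         # window length in seconds
    _successes: list = field(default_factory=list)

    def _prune(self) -> None:
        # Drop successes that have aged out of the rolling window.
        cutoff = time.monotonic() - self.window_s
        self._successes = [t for t in self._successes if t > cutoff]

    def can_generate(self) -> bool:
        self._prune()
        return len(self._successes) < self.limit

    def record(self, succeeded: bool) -> None:
        # Failed attempts (internal errors, content-policy blocks) are
        # deliberately not recorded, so they never consume quota.
        if succeeded:
            self._successes.append(time.monotonic())

quota = GenerationQuota(limit=2)
quota.record(False)          # a blocked attempt costs nothing
quota.record(True)           # one successful image charged
print(quota.can_generate())  # one slot left in the window -> True
```

Whether a provider can do this cheaply depends on where in the pipeline the charge happens; charging after the success/failure outcome is known (as above) rather than at request time is the whole ask.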

I’m not trying to spam the system. Quite the opposite: I’m trying to use it thoughtfully – and I feel penalized for it.

Thanks to anyone listening or experiencing the same.


I only recently began using AI and ChatGPT after resisting since its inception, but I decided it could potentially help me as a content creator. The first few tasks I gave it were pretty successful, but they were just text documents.

Initially, I was very optimistic about image generation helping me make charts and graphics for some of my videos, but for the past couple of days it can’t even seem to generate the revisions I keep asking for. It just says, “Let me try again. I’ll keep you posted.” But…it doesn’t actually keep me posted, which is almost equally frustrating.

For the record, Copilot currently seems no better: when ChatGPT stopped working for me, I decided to give Copilot a try. Same problems. “Hang tight and I’ll get that image generated for you.” And then, digital crickets…

Having this exact issue right now. You would think there would be some sort of safeguard against it, but it’s more profitable to let it stay a problem…