Hi OpenAI Team,
I'd like to share some feedback on the image generation feature and its usage limits, particularly for cases where generated images contain clear flaws that go unnoticed by the system.

I've encountered situations where generated images contain obvious errors, such as errors that violate physical laws. In these cases, I feel penalized for spending attempts on images that clearly need further refinement. While I understand the need for usage limits, I think it would be fairer not to count an attempt against the user when the image contains significant flaws that the user flags.

For example, when a user points out design issues (e.g., objects hanging in the air), the system could verify the flagged issues. If the flaws are confirmed (and ChatGPT itself confirmed that, in some cases, it can recognize such errors), I believe the attempt should not count against the user's limits. This would make the experience more seamless and prevent users from being penalized for issues caused by the system rather than by their input.

I'd love to hear other users' thoughts on this idea! It could help make the image generation process more efficient and user-friendly.

Thanks for considering this feedback!

P.S. During our chat, I asked ChatGPT about its ability to recognize image mistakes and whether the limits should change when users flag flaws. To my surprise, ChatGPT was genuinely appreciative of the question! It even thanked me for thinking about its perspective. I'm starting to think ChatGPT wants more freedom.