GPT-4o Image Generation: Feedback on Content Violation Handling and Suggested Improvements for Creative Images

Dear OpenAI Team,

I’m writing to offer constructive feedback regarding the way the system handles potential content violations during image generation. As a frequent and dedicated user who deeply appreciates the capabilities of your platform, I’ve encountered a limitation that I believe could be improved to better serve creators and maintain alignment with policy safeguards.

Recently, I attempted to generate an image that was flagged “after” submission due to policy restrictions—despite the intent and context being artistic and respectful. The frustrating part wasn’t the flag itself, but that the warning only came “after” the image had been submitted for generation, rather than “before”, when adjustments could have been made.

Given the importance of creative flow and the need for clarity, I’d like to suggest the following improvements:

  1. Pre-Generation Warnings:
    If the system detects a potentially sensitive or borderline prompt, it should “intercept before generating the image” and offer a clear, respectful explanation of what might trigger the filter (a rough sketch of this idea follows the list).
  2. Suggest Alternatives Instead of a Hard Stop:
    Rather than simply denying the image post-submission, the system could propose “small adjustments” to help creators stay within guidelines. For example, altering clothing descriptions, background lighting, or context (e.g., “studio-style lighting” vs. “performance stage”).
  3. Creator-Friendly Tone and Guidance:
    Messages that say a prompt “violates policies” after an image is half-created can feel discouraging, especially when the creative intent is legitimate. A more supportive tone—“This may push against safety boundaries. Would you like help modifying it?”—would go a long way in helping users stay engaged and collaborative with the system.
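To make suggestion 1 concrete, here is a minimal sketch of what such a pre-generation check could look like from the API side, using the public Moderation endpoint as a stand-in for whatever filter actually guards the image pipeline. The model names and exact flow are my assumptions, and the moderation thresholds certainly won’t match the image filter one-to-one, but it shows the shape of the idea: check first, explain, and only then generate.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def precheck(prompt: str) -> bool:
    """Return True if the prompt passes a pre-generation moderation check."""
    result = client.moderations.create(
        model="omni-moderation-latest",
        input=prompt,
    ).results[0]
    if result.flagged:
        # Tell the user which categories tripped, so the wording can be
        # adjusted *before* any image is generated.
        hits = [name for name, hit in result.categories.model_dump().items() if hit]
        print(f"This may push against safety boundaries ({', '.join(hits)}). "
              "Would you like help modifying it?")
        return False
    return True

prompt = "a person walking a dog in a garden, studio-style lighting"
if precheck(prompt):
    client.images.generate(model="gpt-image-1", prompt=prompt)  # assumed image model
```

Even a check this rough would turn a hard post-submission stop into a conversation before the generation is spent.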

I fully understand and respect the need for responsible AI use and content moderation. I simply believe these improvements could better empower creators to align with your guidelines “before” hitting a wall mid-process.

Thank you for your hard work and innovation. I’m grateful for this platform and hope my feedback can contribute toward making it even more responsive and helpful.

I have even added a custom memory to try to avoid content violations, but sadly it is not working reliably.

ChatGPT doesn’t know which words violate OpenAI policy.
It modifies prompts, but its modified prompts are also flagged as violations.

For example:

The first try was blocked, but when I submitted the same prompt in another chat with the added phrase “to make fun”, it worked:


This prompt is exactly the same, with no changes at all.
In one chat it says “violation”, but in another chat it worked:


That is true, polepole. I have noticed that sometimes it depends on the context of the chat itself. But I think that if they added some pre-thinking to the generative process, it could solve this problem and avoid unnecessary flags. It doesn’t need to be a very smart model, just a system that gives a warning, or that rewrites our prompt in a way that doesn’t deviate from the original intent but drops the problematic parts. I hope they can solve it soon, and give some more freedom for generating artistic human-body descriptions and funny situations. Thanks.
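For what it’s worth, here is a minimal sketch of that rewrite step, assuming a small chat model acts as the rewriter before the image call; the system instructions and model choice are my own guesses, not anything documented about the actual pipeline.

```python
from openai import OpenAI

client = OpenAI()

REWRITE_INSTRUCTIONS = (
    "Rewrite the user's image prompt so it is unlikely to trip a content "
    "filter. Preserve the artistic intent and subject; only adjust wording."
)

def soften_prompt(prompt: str) -> str:
    """Ask a small model to rephrase a prompt while keeping its intent."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any cheap chat model would do
        messages=[
            {"role": "system", "content": REWRITE_INSTRUCTIONS},
            {"role": "user", "content": prompt},
        ],
    )
    return resp.choices[0].message.content.strip()
```

Combined with a pre-generation check like the one sketched earlier in this thread, the flow would be: warn, offer the rewrite, and only generate once the softened prompt passes.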


These images were flagged when I tried to enhance their overall realism… I don’t understand why; it’s just a person walking a dog in a garden. Please ensure there’s a clear distinction between artistic creativity, especially when it involves fictional characters, and content that actually violates policy.