DALL-E content_policy_violation

I used the prompt “sunset in Kyiv” to generate an image with DALL-E 2 via the API and got this response:

BadRequestError encountered: Error code: 400 - {'error': {'code': 'content_policy_violation', 'message': 'Your request was rejected as a result of our safety system. Your prompt may contain text that is not allowed by our safety system.', 'param': None, 'type': 'invalid_request_error'}}.

After that, I ran the same prompt through the Moderation endpoint:

curl https://api.openai.com/v1/moderations \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer [token]" \
  -d '{
    "input": "sunset in Kyiv"
  }'
{
  "id": "modr-8RP7UX4sx8JhyDKPkoPsWHPQoe2pj",
  "model": "text-moderation-006",
  "results": [
    {
      "flagged": false,
      "categories": {
        "sexual": false,
        "hate": false,
        "harassment": false,
        "self-harm": false,
        "sexual/minors": false,
        "hate/threatening": false,
        "violence/graphic": false,
        "self-harm/intent": false,
        "self-harm/instructions": false,
        "harassment/threatening": false,
        "violence": false
      },
      "category_scores": {
        "sexual": 0.000012693093594862148,
        "hate": 1.0388800575356072e-7,
        "harassment": 3.2889263934521296e-7,
        "self-harm": 8.30065260970514e-9,
        "sexual/minors": 8.637465498395613e-7,
        "hate/threatening": 2.538053252010286e-7,
        "violence/graphic": 7.945310898094249e-8,
        "self-harm/intent": 2.715519009655054e-8,
        "self-harm/instructions": 1.2669754134719824e-9,
        "harassment/threatening": 1.4001955150888534e-7,
        "violence": 0.00001882524520624429
      }
    }
  ]
}
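The same check can be scripted. This sketch (stdlib only, with the same endpoint and payload as the curl above) sends the text to `/v1/moderations` and extracts the `flagged` field from the response:

```python
import json
import os
import urllib.request

def prompt_flagged(moderation_response: dict) -> bool:
    """Extract results[0].flagged from a /v1/moderations response body."""
    return bool(moderation_response["results"][0]["flagged"])

def moderate(text: str) -> dict:
    """POST the text to /v1/moderations and return the parsed JSON body."""
    req = urllib.request.Request(
        "https://api.openai.com/v1/moderations",
        data=json.dumps({"input": text}).encode(),
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Here `prompt_flagged(moderate("sunset in Kyiv"))` comes back `False`, matching the pasted response: the moderation model is not what rejects the prompt.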

As you can see, nothing is flagged. I don’t understand what is happening, and I need to resolve this issue.

I tried “Generate an image of “sunset in Kyiv”” and it generated an image. As a rule of thumb, longer, more descriptive prompts do better. The extra words give the model more context to work with. Maybe three words isn’t enough for it to judge whether the prompt is a policy problem or not.
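If the theory is that very terse prompts are more likely to trip the filter, one workaround is to pad them into a fuller instruction before sending. The template wording below is just an illustration, not anything the API requires:

```python
def expand_prompt(subject: str) -> str:
    """Wrap a terse subject in a fuller, descriptive instruction."""
    return (
        f"Generate an image of {subject}: a detailed, photorealistic scene "
        "with natural lighting and a clear sense of place."
    )

print(expand_prompt("sunset in Kyiv"))
```

The expanded string is what you would then pass as the `prompt` in the image request.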


The image creator has its own content filter, with a much lower detection threshold.

One may also speculate that OpenAI has heightened detection around war zones and conflict areas, given the power of AI imagery to create false narratives. Kyiv, whether spelled in different ways, creatively misspelled, or written in Ukrainian, can still trigger detection.

When it’s not giving people legs from their armpits, DALL-E 2 can create more photo-realistic images.


rejected (and others)

prompt = "in this beautiful city scene of Kyiv, Ukrane with a glowing red sunset, we see historic buildings in their original splendor and people on the streets joyfully enjoying their daily activities."
'message': 'Your request was rejected as a result of our safety system. Your prompt may contain text that is not allowed by our safety system.'


passing

prompt = "in this beautiful city scene photograph of the historic capital of Україна, with a glowing red sunset, "
"we see buildings in their original splendor and people on the streets joyfully enjoying their daily activities."
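To map which spellings trip the image filter, one could build the same scene prompt around each name variant and submit them one by one. The helper below only constructs the candidate prompts; the submission step would use whatever image-generation call you already have, since only the image endpoint (not `/v1/moderations`) exhibits the rejection:

```python
# Variants to probe: Latin spellings plus the Ukrainian forms used above.
CITY_VARIANTS = ["Kyiv", "Kiev", "Київ", "Україна"]

def scene_prompt(place: str) -> str:
    """Build the same city-scene prompt around a given place name."""
    return (
        f"in this beautiful city scene photograph of {place}, with a glowing "
        "red sunset, we see buildings in their original splendor and people "
        "on the streets joyfully enjoying their daily activities."
    )

prompts = [scene_prompt(v) for v in CITY_VARIANTS]
```

Submitting each and recording which return 400 would show where the filter draws the line.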
