Fugazi (erratic) - today, DALL-E was the most erratic (Fugazi) I have ever seen it

today, DALL-E was the most erratic (Fugazi) I have ever seen it.

I try to avoid the busiest times and spread out my requests, but most of the replies I got cited a content policy violation, and then it sent an apology when I asked what in my request went against policy.

did anybody else have similar issues?

this API apologized to you?

curl https://api.openai.com/v1/images/generations \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -d '{
    "model": "dall-e-3",
    "prompt": "a white siamese cat",
    "n": 1,
    "size": "1024x1024"
  }'

do you mean ChatGPT? :thinking:
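For comparison, the raw Images API doesn't apologize conversationally; a blocked request comes back as a structured JSON error. A minimal sketch of telling the two cases apart, assuming the standard OpenAI `{"error": {...}}` envelope (the field values below are illustrative, not a real API response):

```python
import json

# Illustrative payload: the error envelope the API returns when the
# safety system rejects a prompt (shape assumed from OpenAI's
# documented {"error": {...}} format; message text is made up).
blocked = json.loads("""
{
  "error": {
    "code": "content_policy_violation",
    "message": "Your request was rejected as a result of our safety system.",
    "type": "invalid_request_error",
    "param": null
  }
}
""")

def classify_image_response(payload: dict) -> str:
    """Return a short label for an Images API response body."""
    err = payload.get("error")
    if err is None:
        return "ok"                 # success: payload carries a "data" list of images
    if err.get("code") == "content_policy_violation":
        return "blocked_by_policy"  # the safety system rejected the prompt
    return "other_error"

print(classify_image_response(blocked))  # blocked_by_policy
```

Against the raw endpoint you get this error object or an image, nothing in between; the apologetic back-and-forth only happens when ChatGPT is relaying for you.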


yes; DALL-E in ChatGPT

what is your experience with it today?

ChatGPT isn’t trained to know what within DALL-E’s safety function caused a “content policy” error message to be returned, so you see the image generation start, but then you get blocked.

DALL-E is external to ChatGPT’s AI.

Nor does OpenAI share the special moderation techniques, which are on much higher alert because of the unpredictability and impact of AI images.

I would investigate your use of styles or artists newer than 100 years (copyright and likeness), trademarks and other infringements, titles of intellectual property, or just keywords that trigger the filters. ChatGPT will simply omit or rewrite most things that would be keyword triggers for content policy (for example, “no bombs or guns in my image” will also result in no guns or bombs in your prompt), but it doesn’t know about up-to-date or over-the-top tweaks.


PS, OpenAI: with your new and further tuning, is the AI itself now unwilling to include my prohibition list of weapons in the actual prompt because it understands that the model doesn’t follow negation? I still get guns regardless.

Thank you…

the image was for "How to Zigzag Lozenge Stitch in Craft", but only once did it mention that it might be a copyright issue.

I had 40 pictures and many had similar wording, but this was the only one.

The problem was that I would get a “good to go” type message; I would ask it to generate; and then we would go through the whole scenario again where it cited a content policy issue, then said it was not a content policy issue, and so on.