DALL-E 3 restricting words Bing does not

I am new here as of today, but I have been enjoying AI image generation and even prompt-building, so I decided to get a subscription. I apologize for any and all faux pas.

I have noticed that when I use Copilot from the Bing iOS app, it allows the use of words or subjects that I have found DALL-E 3 will not. For example, I can create images of “cybernetic (x)” with Bing, but DALL-E 3 says it is not allowed because merging living creatures with technology could depict suffering. I’ve even tried “robotic-(x) hybrid” and “mechanically enhanced.”

Why are there two standards for generation? Is there a way to have DALL-E 3 act in the same manner as Bing?

Do you have an exact prompt you tried?

Was it in a thread with a lot of “No, I can’t do that”?

Thank you for your reply,

I just ran a simple prompt.

“Generate concept artwork for a cybernetic mastodon.”

The response was: “I’m unable to fulfill this request due to content policy restrictions related to depicting cybernetic or technologically altered living creatures. If there’s another direction or theme you’d like to explore, please let me know, and I’d be happy to assist with that.”

EDIT: I ran the identical prompt through the Bing app with no issue.

Hrm. It’s working here. Is it a “fresh” thread with no other messages?

Using GPT-4? I believe DALL-E 3 might only be available to ChatGPT Plus with GPT-4? Are you using a Custom GPT?

ETA: Took it a bit further…


I started a new thread and it worked. I apologize for my ignorance; I clearly need to hit the books. I appreciate your assistance greatly. Thank you.


No problem. What happens is that each previous message is sent back to ChatGPT. So, if you get into a “debate” with it…

You can do this!
No I can’t!
You can!
I can’t!
Can!
Can’t.

ChatGPT looks at this history, and because there are so many tokens saying that it can’t, it might default to being more strict. There’s no actual “thinking” or “reasoning” going on at this stage in LLM development… It’s just going off all the previous tokens (words) in the chat thread…
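
If it helps to see that concretely, here’s a minimal sketch using the OpenAI Python SDK (the model name and message contents are just illustrative, not what ChatGPT sends internally): the entire messages list, refusals included, is resent on every turn, so a pile of “I can’t” replies stays in the context the model conditions on.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The whole conversation is resent with every request. The model has no
# memory beyond this list, so earlier refusals keep shaping its replies.
messages = [
    {"role": "user", "content": "Generate concept artwork for a cybernetic mastodon."},
    {"role": "assistant", "content": "I'm unable to fulfill this request..."},
    {"role": "user", "content": "You can do this!"},
    {"role": "assistant", "content": "No I can't!"},
    # ...the longer the debate runs, the more the context is
    # dominated by "can't" tokens.
]

messages.append({"role": "user", "content": "Please try again."})
response = client.chat.completions.create(
    model="gpt-4",  # illustrative model name
    messages=messages,
)
messages.append(
    {"role": "assistant", "content": response.choices[0].message.content}
)
```

Starting a fresh thread works for the same reason: the refusal-heavy history simply isn’t in the list anymore.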


Actually, what happens is that within ChatGPT, the AI rewrites what you input into a “prompt” before sending it to DALL-E, and it’s DALL-E that denies the request.

Because the AI samples tokens randomly as it generates, its output is almost never the same twice. Identical inputs can therefore trigger the filter differently.

The “content policy” rejection is very keyword-based. It’s what you’re seeing when ChatGPT says “generating image…” but then apologizes about the content policy.
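
A rough sketch of that two-stage pipeline, assuming the OpenAI Python SDK; the rewriting instruction, model name, and error handling here are illustrative stand-ins, not OpenAI’s actual internals:

```python
from openai import OpenAI, BadRequestError

client = OpenAI()

def rewrite_prompt(user_request: str) -> str:
    """Stage 1: a chat model rewrites the request into a DALL-E prompt.
    Sampling is random, so two runs can produce different rewrites."""
    resp = client.chat.completions.create(
        model="gpt-4",  # illustrative model name
        messages=[
            # Stand-in instruction, not OpenAI's real system prompt.
            {"role": "system", "content": "Rewrite the user's request as a detailed image prompt."},
            {"role": "user", "content": user_request},
        ],
    )
    return resp.choices[0].message.content

def generate_image(user_request: str):
    prompt = rewrite_prompt(user_request)
    try:
        # Stage 2: the reworded prompt goes to the images endpoint,
        # which applies its own checks and can reject one rewrite
        # while accepting another.
        return client.images.generate(model="dall-e-3", prompt=prompt)
    except BadRequestError as err:
        # Content-policy rejections surface as API errors here.
        print(f"Rejected: {err}")

generate_image("Generate concept artwork for a cybernetic mastodon.")
```

Since the rewrite in stage 1 varies run to run, the same user request can sail through one time and hit the stage 2 filter the next.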

DALL-E 3 is refusing requests as simple as “cross-stitch penguin” when it receives that exact prompt, while Bing fulfills it. It is very sensitive to copyright now: instead of blocking terms like “Kyiv” or “Palestine” as before, it now blocks anything that could recreate known artistic content.

Also, ChatGPT has absolutely no idea why a prompt was blocked. It will make things up if you ask.