Why won't ChatGPT draw a stained glass rose?

Paul is right: it’s probably being flagged because of Disney’s Beauty and the Beast. The stained glass rose appears in images at the beginning of the original animated movie and again in the live-action remake.

Since it is a “drawn, publicly displayed image”, it is automatically copyrighted (©). This isn’t the same as a trademark (®). In the US, you get copyright simply by fixing your original idea in a tangible form, such as putting it on paper - no registration or other official step is required.

You’re right that Disney does not hold a copyright on everything, but they are extra protective of anything visual. This image is heavily, and I mean heavily, protected and jealously guarded - it’s in coloring books and all over the place.

I don’t think ChatGPT’s reticence to draw this particular image is the least bit surprising.

This is a GREAT idea.

2 Likes

No doubt, and one can also learn to achieve one’s results more precisely. But I’m thinking of a simple user who doesn’t want to become a pro, but just wants to make a few images. And then they are accused of breaking rules and don’t understand why.

It’s also about the fact that the same questions shouldn’t have to be answered here in the forum over and over again. It should be obvious that “rose” should not be blocked, and no one should need a multi-page essay to understand why it is and how to work around it.

And it also doesn’t make much sense to include images in the training data and then try to suppress them again with a blocklist that is dysfunctional on every level. Don’t put them in the training data in the first place.
Big companies often mess something up and then let the rest of the world deal with it. Sometimes that is understandable, and then I don’t complain, because it is work in progress.

BUT this blocklist problem has been here for YEARS, and not only do they not fix it, it keeps getting WORSE.
I just hope Disney puts A, B, C, D, E in character names - then they would finally have to fix it, once people can’t use any letters anymore.

If somebody really steals and makes money off someone else’s work, they can always be sued. But there is a point where greed becomes…
(I heard about a car company that sued a restaurant on top of a mountain, one with two or three guests a week, over a horse in its coat of arms! It was proven that the restaurant had used it for generations, long before the car company existed… there is a point… you know…)

2 Likes

I completely agree with you.
And yes, somehow you end up repeating yourself more and more often…

What could help would be the following:
if the GPT interacting with DALL-E could inform the user directly why an action cannot be performed and what can be done instead.

Well, the AIs already interact with each other; enabling greater transparency between the systems themselves could be considered here.

Instead we get a vague “is flagged”, based on guidelines that neither normal users nor the mediating chatbot can understand, and then the big search for the cause begins.
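
A rough sketch of what that could look like: instead of a bare “is flagged”, the reply could carry the per-category breakdown that OpenAI’s public moderation endpoint already returns. This is only an illustration using the `openai` Python library; whether the image pipeline applies these same filters internally, or a separate blocklist, is not documented.

```python
# Sketch: turn an opaque refusal into a concrete reason by showing which
# moderation categories tripped. Uses OpenAI's public moderation endpoint;
# the image pipeline's internal filters may differ from this.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def explain_flag(prompt: str) -> str:
    result = client.moderations.create(
        model="omni-moderation-latest",
        input=prompt,
    ).results[0]
    if not result.flagged:
        return "Prompt passed moderation."
    # Collect only the categories that were actually triggered,
    # so the user sees a concrete reason instead of "is flagged".
    hits = [name for name, hit in result.categories.model_dump().items() if hit]
    return "Prompt flagged for: " + ", ".join(hits)

print(explain_flag("A stained glass window with a single red rose"))
```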


Indeed, this is not only the case with GPT-DALL-E interactions.
The same happens when documents are to be processed by GPTs:
often neither the user nor the GPT understands why the function cannot be executed.
Is it a lack of access rights?
Is it a policy restriction?
But that’s not the point of this topic - sorry! :cherry_blossom:

2 Likes

No, actually it is on topic! :+1:

The systems should communicate with each other, and GPT should know more about its own functions. It can tell you about a tyrant who lived 1000 years ago, but it knows nothing about itself, its options, or the systems it interacts with.
Other companies use GPT for automated troubleshooting and advice, but somehow GPT itself doesn’t get that. At the very least, the manual should be in the training data, and users should be told why they are accused of violating the policy. (Maybe trying to find out what triggered a flag could even lead to an account block, I don’t know. But some people may not even try, because of that possibility.)

I wrote exactly this too, a long time ago, in the same context.
GPT itself puts trigger words into the prompts before sending them to DALL-E. And then no surprise, with trigger words like “rose” and “Snow White” in there.
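
Incidentally, this rewriting step is partly visible through the public API: with DALL-E 3, the response includes the rewritten prompt the image model actually received. A minimal sketch, assuming the `openai` Python library and an API key; ChatGPT’s internal prompt handling may of course differ from the public endpoint.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

requested = "A stained glass window featuring a single red rose"
response = client.images.generate(
    model="dall-e-3",
    prompt=requested,
    size="1024x1024",
    n=1,  # DALL-E 3 only supports one image per request
)
# DALL-E 3 rewrites the prompt before generating; `revised_prompt`
# shows what the image model actually received.
print("Requested:", requested)
print("Rewritten:", response.data[0].revised_prompt)
print("Image URL:", response.data[0].url)
```

Comparing `revised_prompt` with the original request makes it easy to spot what the rewriting step added or changed.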

2 Likes

That’s really beautiful, nice work! :heavy_heart_exclamation:

Thanks! You should give it your best shot if you haven’t already.

Little challenges like this are always fun to overcome.

Thanks and you’re right :+1::cherry_blossom:
After all, GPT is an AI - as you say, it can tell us which ruler lived 1000 years ago and when.

The obvious next step would be for GPT to also provide information about why it cannot access, or cannot generate, something.

One advantage would be that GPT would make fewer false statements if it knew why it could not deliver the desired result. And the user would have a transparent answer and could act more efficiently, instead of investing unnecessary time in trial and error.

2 Likes
