First day using DALLE here. Came across “This request may not follow our content policy” for the “A cat and a golden retriever in a tug of war, UHD 8K” prompt.
I suspect it’s because of “war” - even though “tug of war” is just two subjects pulling a rope.
That aside, the next paragraph confuses me: “Further policy violations may lead to an automatic suspension of your account.”
Does that mean this particular occurrence actually counted as a policy violation against my account? The first part sounds like “maybe it is, just watch what you use in the prompt”, but the second part is worded as if this already counts as a strike against my account.
Can a staff member clarify if accidental prompts (like this one) do in fact still count as policy violations? I’d hate to get my account suspended if I do more accidental, innocuous prompts (other than “tug of war” which I’ll avoid from now on). Thanks in advance!
You should review the content policy before making prompts. These checks are automated, so you should be careful. I am sure you can talk with support later, but understand they are kind of overworked. Unfortunately there are no clear guidelines; I was nervous when my prompt included camera shot types, because of the word “shot”, but I got no warnings, so it’s not completely without reason. Also, people try to bypass these restrictions, so some words that seem unreasonable might get banned, but they have their motives.
I haven’t seen much forthcoming from OpenAI but I have poked around their discord and other places so here’s my interpretation of what’s going on. Keep in mind that this is just my personal inference, not a fact, and I’m not speaking on behalf of OpenAI:
Those of us who had closed beta access were essentially early testers. They had a heavy-handed setting (you will be banished for abuse) because they simply did not want to put in much extra effort for people wanting to generate porn or grotesque stuff.
Since it is still in beta, part of what we are testing is the content filter, not just DALLE itself. The filter is just another component of the service that we, as testers, are helping to evaluate.
We are also testing features such as diversity, hence the recent controversial decision to modify our prompts without consent (judging by the response on Reddit, many people are not happy about this). Even so, others have pointed out that at $0.13 per generation, it’s hundreds of times cheaper than many alternatives such as royalty-free stock photos and private artists. It remains to be seen whether user consent is critical to business success (I have my suspicions).
I cannot stress enough that we are beta testers. Access to these tools is a privilege, not a right. While I can be grumpy about the forced diversity and other bugs, I also recognize that I’m not entitled to use these tools.
I suspect that, once some legal precedent is set, usage will open up greatly just like it did with GPT-3.
There is a feedback button. You can use it to submit feedback directly. See screenshot below:
I’m getting content policy violations -constantly-. I came here to see if this is a problem for anyone else. The weird thing is that sometimes I get them on variations of images I’ve already been working on. In most of the violations I don’t see what I’ve done to anger it, but sometimes it’s something like “a photograph shot from above” or “light shooting through holes in a roof”, and I’m presuming it’s words like those. If they ban me I’ll just go back to Midjourney and OpenAI won’t get my money.
I see posts like this, along with hundreds on reddit, and I had a thought last night. Since OpenAI is trying to pivot to becoming a profitable company, it strikes me that they’re still thinking like a purely nonprofit/research organization. Their priorities are all wrong. Yes, diversity and safety are important. But for a business to thrive, so too is giving customers what they want (and paid for!), as well as increasing quality. I would much rather see OpenAI focusing on increasing the quality of DALLE, especially when Google’s Imagen has already surpassed it (at least in some demo images).
I found myself deeply disappointed by the current policies and trends, and it took a while to figure out why. The primary reason comes down to quality. DALLE can do photorealistic images, but only so long as they look like stock photos. More often than not, you get uncanny-valley artifacts like weird eyes and mouths. It also cannot do scenes or anything out of the ordinary without a lot of cajoling, or without accepting much lower quality (there was a good post recently pointing out the wide variance in quality depending on just a few changes in words).
All that being said, I’ve been able to bring my works of fiction into a new domain because of DALLE. I can see my worlds in a new way because of this really great tool. But… it’s not quite ready for prime time. While the tool is still in beta, I would much rather see them focusing on delivering consistent quality rather than going overboard with diversity and safety. This is just my preference. If they don’t focus on consistency and quality then someone else will and they will lose out on business. In that respect, we should expect to see competition before too long. I’m sure Microsoft, Google, Amazon, and every other tech company has seen the excitement that DALLE has generated and is preparing to launch their own services.
Anything which resembles war or conflict is forbidden. Even WW2 pictures. There are a lot of other things which can be painted. I personally don’t understand why so many people are getting mad about this.
Couple of things:
Uh, what do you think “tug of war” is, exactly?
Also, I think you might be a tad confused - nobody here’s mad at all. I was merely asking whether the warning message counts as a “strike”, because the first and second parts of the warning seem contradictory. The first part says “this MAY violate…”, making it sound like I’m being given an opportunity to change my prompt to something different, but the second part makes it sound like it already counts as a strike.
I don’t care whether it’s one or the other - I’m curious as to what actually happened - was this in fact a strike? If it was, okay, good to know. If it was not, ok, good to know as well, I won’t use “tug of war” again.
You could have easily prompted something like this (or provided more details to get what you wish):
“A cat and a golden retriever pulling tightly on the opposite ends of a rope, UHD 8K”
See results:
There is really little reason to get excited (save your energy) over OpenAI’s current beta policies when it is so easy to design prompts that get the same results.