I am a long-time user of ChatGPT 3.5, which I used to write crime and horror stories. I was excited to get the news that I’d made it to the top of the GPT-4 waitlist and could now pay money to use the service…
Why? Because I was hoping to get some illustrations for a few of my stories.
The problem is that asking for an illustration of anything other than a smiling, happy, healthy character looking directly at the camera has ‘encountered issues’. ChatGPT never rejects my request as being against policy; instead it makes me watch the circle go all the way round, getting slower and slower, until the dreaded orange exclamation mark appears. No reason is given, just ‘encountered issues’, so I assumed the server load was heavy and the service was flaky. This lack of information leaves anyone in my position resubmitting the request repeatedly - and of course it ‘encounters issues’ each time - until the usage cap is hit.
What then took me by complete surprise was getting, without warning, the message that I can’t request images for a day because ‘your image generation requests have been flagged too often’. When was I told they had been flagged at all?
More to the point, why is ChatGPT composing a prompt (which I can’t see or control) and sending it to DALL-E, which accepts it and produces an image, only for the system to decide after the fact to censor it and hide behind a useless ‘encountered issues’ line? All the while, it’s building up a dossier on what a terrible person I am, so that I can get a sudden ban without warning.
Surely there are better ways to do this. ChatGPT should understand DALL-E’s content policies and not request an image that is going to end up violating them. DALL-E should not create images that violate its own rules. The whole point is that I am using an interface to talk to the AI art engine, so that interface should warn me off such ‘harmful requests’ - whether it be a vampire biting someone’s neck or whatever - not send them off and commit an offence on my behalf. OpenAI owns the whole system end-to-end; can it not be integrated in a way that doesn’t ban users abruptly for something they were never actually informed about?
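For what it’s worth, the building blocks for a ‘check first, then generate’ flow already exist in the public API. Here is a minimal sketch, assuming the official openai Python package and an OPENAI_API_KEY in the environment; the internal ChatGPT-to-DALL-E pipeline is not public, so this only illustrates the idea of screening a prompt and reporting the reason before anything is submitted:

```python
# Sketch only: screen a prompt BEFORE requesting an image, and tell the
# user why it was rejected instead of a vague "encountered issues".
# Assumes the official `openai` package. Note the image endpoint still
# applies its own additional checks, so passing moderation here is no
# guarantee that the image request will be accepted.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def generate_image_safely(prompt: str) -> str | None:
    # Step 1: check the prompt against the content policy up front.
    report = client.moderations.create(input=prompt).results[0]
    if report.flagged:
        # Surface the actual reason to the user.
        reasons = [name for name, hit in report.categories.model_dump().items() if hit]
        print(f"Prompt rejected before submission: {', '.join(reasons)}")
        return None

    # Step 2: only now send the prompt to the image model.
    image = client.images.generate(model="dall-e-3", prompt=prompt, n=1)
    return image.data[0].url


url = generate_image_safely("A vampire biting someone's neck, gothic oil painting")
```

That is all I mean by integration: the refusal happens before the request is sent, and the user is told why.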
This is like an authoritarian state where you’re arrested without charge and not told what you’ve done wrong.
There’s simply nothing Open about this AI.
This is not a rant against censorship, by the way - I expected that; policies exist - but against the stupid way this whole thing is set up as a trap.
Yes, there are some inefficiencies with how certain things are done, but don’t be so complacent. This great company has introduced most of us to AI; we don’t need to discourage their teams with negative feedback unless it’s constructive. It’s an enormous task to run their operation at the scale they do, and I’m very confident there is a good, necessary reason for everything you encounter, apart from blatant errors.
My account possibly got invisibly “flagged” last time, even though I don’t think I did anything wrong. The flag left me able to access only the standard ChatGPT 3.5 (unable to produce any images with ChatGPT 4). Admittedly, I had flooded their customer service and feedback channels with complaints detailing how far what it produced fell below my expectations. One thing I observed is that they often changed my prompt in the background without my knowing, and even when I followed their ‘changed’ prompt, THE BOT STILL REFUSED TO GENERATE ANYTHING DUE TO “TECHNICAL REASONS”. I asked ChatGPT; it could not give a definite answer. I asked support; they did not answer at all and just ignored everything.
I guess they wrote their “user guidelines” and content policy behind closed doors so that they can just nuke anyone they don’t like, wrong or not.
I don’t disagree with their content policy and guidelines, but pushed to the extreme, it will only degrade their service and overburden their own systems. Well, whatever.
I can confirm that this happens when certain words in the prompt trigger the censorship. You have to start a new chat, since the flagged one will never be usable again.