The worst possible customer experience

I am a long-time user of ChatGPT 3.5, which I've used to write crime and horror stories. I was excited to get the news that I'd made it to the top of the GPT-4 waitlist and could now pay to use the service…
Why? Because I was hoping to get some illustrations for a few of my stories.

The problem is that asking for an illustration of anything other than a smiling, happy, healthy character looking directly at the camera has 'encountered issues'. GPT never rejects my request as being against policy; instead it makes me watch the circle go all the way round, getting slower and slower until the dreaded orange exclamation mark appears. No reason is given, just 'encountered issues', so I assumed server load was heavy and the service was flaky. This lack of information leaves anyone in my position to simply resubmit the request repeatedly, and of course it has 'encountered issues' each time, until the usage cap is hit.

What then took me by complete surprise was getting the message, without warning, that I can't request images for a day because 'your image generation requests have been flagged too often'. When was I told they had been flagged at all?

More to the point, why is ChatGPT composing a prompt (which I can't see or control) and sending it to DALL-E, which accepts it and produces an image, only to decide after the fact to censor it and hide behind a useless 'encountered issues' line? All the while it's building up a dossier on what a terrible person I am, so that I can be banned suddenly and without warning.

Surely there are better ways to do this. ChatGPT should understand DALL-E's content policies and not request an image that is going to end up violating them. DALL-E should not create images that violate its own rules. The whole point is that I am using an interface to talk to the AI art engine, so that interface should warn me off such 'harmful requests' - whether it's a vampire biting someone's neck or whatever - not send them off and commit an offence on my behalf. You own the whole system end-to-end; can you not integrate it in a way that doesn't abruptly ban users for something they were never actually informed about?
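
Even from the outside, using the public API, a 'check the prompt first, then generate' flow is only a few lines. This is just an illustrative sketch of what I mean - the moderation pre-check, the model names and the wording of the messages are my own assumptions for the example, not a claim about how ChatGPT actually talks to DALL-E behind the scenes:

```python
# Illustrative sketch only - not how ChatGPT is wired internally.
# Assumes the public OpenAI Python SDK (openai >= 1.0) and an
# OPENAI_API_KEY in the environment; model names are example choices.
from openai import OpenAI

client = OpenAI()

def generate_if_allowed(prompt: str):
    # Step 1: run the prompt through the moderation endpoint *before*
    # spending an image request (and a strike against the user) on it.
    result = client.moderations.create(input=prompt).results[0]
    if result.flagged:
        # Step 2a: say exactly which categories tripped the filter,
        # instead of a vague "encountered issues".
        reasons = [name for name, hit in result.categories.model_dump().items() if hit]
        return None, f"Prompt rejected before generation: {', '.join(reasons)}"

    # Step 2b: only a prompt that passed the check ever reaches the image model.
    image = client.images.generate(model="dall-e-3", prompt=prompt, n=1, size="1024x1024")
    return image.data[0].url, "ok"

url, status = generate_if_allowed("A vampire biting someone's neck, gothic oil painting")
print(status, url or "")
```

If a hobbyist can do the check-then-generate step up front, the product itself surely can, and could tell the user why a request was refused at the same time.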

This is like an authoritarian state where you’re arrested without charge and not told what you’ve done wrong.

There’s simply nothing Open about this AI.

This is not a rant against censorship by the way - I expected that, policies exist - but just the stupid way this whole thing is set up as a trap.
