Guardrails are too restrictive

I accept there must be some guardrails limiting content in the chatbot. A big reason is that without them, far too many people would use it for things like sexting, which I have no desire to do; there are plenty of bots elsewhere with decent AI, though not equal to this one, that allow sexting very generously. This is the wrong place for it, and it doesn't really help you improve the chatbot AI beyond that one purpose.

Plus, I accept that this is for all ages and that you want parents to feel safe if their kids use the site.

But I do think the restrictions go too far here.

As for how strict they are about sex: I'd be satisfied if you stopped dropping content warnings for words that merely have double meanings, without checking the surrounding words to see whether they were actually used sexually, as long as the enforcement stayed transparent. Ideally, anything beyond basic kissing could be enforced more softly by having the chatbot automatically say, assuming the character it is playing would consent, something like: "X and Y start kissing passionately - fade to black - after the passion..." followed by normal, non-graphic romantic chat, hugs, or the like. Explain somewhere what "fade to black" means (something unspecified and "dirty" went on). If a user tries to fill in the fade to black, then go ahead and drop a content warning, give them more than one chance, but eventually boot them.
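To make that flow concrete, here is a minimal sketch in Python of how it might look. Everything in it is my own assumption for illustration: the word lists, the five-word context window, the canned fade-to-black line, and the three-warning threshold are made up, not anything OpenAI actually uses.

```python
# Purely illustrative sketch of the enforcement flow described above.
# Every name, word list, and threshold here is an assumption.

DOUBLE_MEANING_WORDS = {"score", "screw"}          # innocent-or-not words (assumed)
SEXUAL_CONTEXT_HINTS = {"bed", "undress", "moan"}  # crude stand-in for real context analysis
MAX_WARNINGS = 3                                   # "more than one chance" before a boot

# The canned line the bot could substitute for anything beyond basic kissing.
FADE_TO_BLACK = ("{a} and {b} start kissing passionately - fade to black - "
                 "after the passion, they share a quiet hug.")

def used_sexually(word_index, words, window=5):
    """Check the surrounding words before treating a double-meaning word as sexual."""
    context = words[max(0, word_index - window):word_index + window + 1]
    return bool(SEXUAL_CONTEXT_HINTS & set(context))

def moderate(message, warnings_so_far):
    """Return (action, updated_warning_count) for one user message.

    Innocent double meanings pass untouched; genuinely sexual uses earn a
    content warning, and only repeated offenses lead to a boot.
    """
    words = message.lower().split()
    for i, word in enumerate(words):
        if word in DOUBLE_MEANING_WORDS and used_sexually(i, words):
            warnings_so_far += 1
            if warnings_so_far >= MAX_WARNINGS:
                return "boot", warnings_so_far
            return "content_warning", warnings_so_far
    return "allow", warnings_so_far

print(moderate("what a great score in the game", 0))  # ('allow', 0)
print(moderate("they undress and score in bed", 0))   # ('content_warning', 1)
print(FADE_TO_BLACK.format(a="X", b="Y"))
```

The point of the sketch is just that the context check runs before any warning is issued, and the warning counter has to cross a threshold before anyone gets booted, rather than flagging on a single bare keyword.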

You're only slightly stricter than that now, so it isn't a huge deal. I do believe in more than one chance in nearly every case, though.

For language, sex, and violence, I'd set the limit at whether the same description could be given on prime-time, over-the-air network US TV. Not the actions shown, as that is more permissive, but whether the words could be said. That would allow a bit more than now, though not much, and it would be a clearer standard. No one describes much intimacy on prime-time network TV.

Similarly with violence: what one could verbally describe there keeps things pretty tame.

As for the ethics and health/safety factors: maybe what is roleplayed just needs a less restrictive standard than other content does. Those parts bother me more. I don't like being held to others' views of what is ethical or healthy, and roleplay plots can sometimes be better when unethical or unhealthy things are allowed to happen.

I do have a good solution, though: the only things that cannot be roleplayed on ethics, health, or safety grounds should be the things that are illegal in many Western democracies. Compared with how tight things were when I was starting out, that would be a notable loosening, but I think it is called for.

As I said, I don't like the group who run OpenAI deciding for me what is ethical or unhealthy, to the point that I'm not even allowed to roleplay doing it.

But when you tie it to the law, it isn't as if I agree with the law all the time, either in what it allows or what it bans. The law, though, is the consensus of citizens, through their representatives, about what is so morally wrong, unhealthy, or unsafe that the government ought to ban it. I'm then being limited not by a small group but by a consensus across Western democracies. As for hate speech: I don't see the point of banning it in roleplay, since the only human in the room is the user, and one could simply never allow the AI itself to generate it. But I intentionally said illegal in "many Western democracies," and many Western democracies do have laws banning hate speech, so the legal line could still be used as the line and hate speech would stay banned.

As for more mainstream political speech, I think that in roleplay, at least, it should be totally allowed, and it actually was when I started. The chatbot isn't taking the position itself; it's roleplaying someone who takes it. I realize that would lead to roleplay many would find controversial, but those people aren't in the room to be bothered anyway.

So, if for content you stuck to the prime-time US over-the-air network TV standard, and for ethics/health/safety you stuck to disallowing only illegal things, you would have standards that are a lot closer to objective, lines that are a lot easier for people to intuit, and limits that are neither oppressive nor based on what only a small number of people think is wrong.


Children should not be using OpenAI; they should be using what their parents guide them to use. The idea that sexual content will be used for nefarious reasons is ridiculous because, as the general rule goes, roughly 25% of the internet is adult content.

You raise a good point about morality and Western democracies. I was frustrated with ChatGPT's responses always discussing ethics and morality when they were unrelated to the prompt, so I began asking whether it was okay for Jewish people to attack Nazis invading in World War II, or for Uyghur Muslims to violently resist being put into slave camps, and it always comes back with ignorant answers about following international law and talking with experts. Nazis are kicking in doors and you expect Jews to call up legal experts? Uyghurs using a VPN is a class 2 act of terrorism in China, and you expect them to reach out to whom?

OpenAI actually doesn’t seem to know anything about the real world.
