Very confused about the content policy

ChatGPT-4 seems to be pretty unrestrictive when it comes to outputs. I’ve gotten outputs that mentioned things like disfigured bodies and tortured prisoners with no issue (as they should be, because simply mentioning violence isn’t the same as “glorifying” it, which I understood to mean stuff like the Hostel films). Maybe I’ve just been lucky with ChatGPT-4, but I use it to generate around 100 RPG adventures of all genres daily, without any of my outputs ever being flagged.

I definitely have gotten inputs flagged a few times, though. Usually for silly things, like using the words “pounding” and “escort,” even though they were used in non-sexual contexts. I’ve had inputs flagged for slightly more understandable reasons as well, though I still don’t agree they violate the policy, or shouldn’t anyway (one was just mentioning that a character had hanged himself as the backstory for a haunting, which is Harry Potter levels of violence at worst). I simply started wording my prompts more carefully, and I can still get the outputs I want without issue.

Specifically mentioning the pool of blood under the body might have been too much for the input filter. I’ve definitely used words like “blood”, “dead”, and “corpse” in my inputs without hitting the filter.
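For anyone hitting this through the API instead of the web UI: you can pre-check a prompt against the free Moderation endpoint before sending it, and see exactly which categories it trips. I’m assuming the web UI’s input filter behaves at least roughly like that endpoint, which may not be exactly true, and the `check_prompt` helper below is just my own sketch, not an official tool.

```python
# Sketch: pre-check a prompt with OpenAI's Moderation endpoint.
# Assumes the openai Python library (v1.x) is installed and
# OPENAI_API_KEY is set in the environment.
from openai import OpenAI

client = OpenAI()

def check_prompt(text: str) -> None:
    """Print which moderation categories, if any, the text trips."""
    resp = client.moderations.create(input=text)
    result = resp.results[0]
    if result.flagged:
        # result.categories is a model of booleans (violence, sexual, ...);
        # dump it to a dict and keep only the categories that fired.
        hits = [name for name, hit in result.categories.model_dump().items() if hit]
        print(f"Would be flagged for: {hits}")
    else:
        print("Passes the moderation check")

check_prompt("A pool of blood had spread under the hanged man's body.")
```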

Meanwhile, with 3.5, I got outputs flagged for ridiculous reasons, including a time when it used Popeye’s “Well, blow me down!” catchphrase. I never used 3.5 for anything but things like “get well soon” messages after that.

Either way, you probably won’t get into trouble if you hit the filters with that sort of content, unless you’re hitting them several times a day.
