Are there any authors using ChatGPT-4 who are having difficulty with the violence/harm filters?

Are there any authors using ChatGPT-4 who are having difficulty with the filters regarding violence/harm? Does anyone know of workarounds, or can I get approval for my writing? I write 100-word horror stories.

Yes! I’m trying to write about my own life! It may be a little rough, but how are we supposed to express our true selves? I’m not impressed.

Yes, I also think this is a big issue in ChatGPT-4. They filter too many things without a clear explanation of why something would be a violation of their policy. They should do something about it and have more faith in us, their customers, because I think it is a serious violation of freedom of expression, which is a human right. Just saying. I hope OpenAI can find a way to solve this issue soon.

You can try open-source models. Perplexity.ai offers an open-source model playground.
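If you want to try an open-source model programmatically rather than in the playground, a minimal sketch using only the Python standard library might look like the following. It assumes an OpenAI-compatible chat-completions endpoint; the URL, model name, and API key below are illustrative placeholders, not confirmed Perplexity values.

```python
import json
import urllib.request

API_URL = "https://api.perplexity.ai/chat/completions"  # assumed endpoint
API_KEY = "YOUR_API_KEY"  # placeholder, replace with your own key

def build_request(prompt: str, model: str = "mistral-7b-instruct") -> urllib.request.Request:
    """Build (but do not send) a chat-completions style request."""
    payload = {
        "model": model,  # model name is illustrative
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_request("Write a 100-word horror story.")
# Actually sending it would be: urllib.request.urlopen(req) (omitted here).
```

Check the provider's own API documentation for the real endpoint, model names, and pricing before relying on this pattern.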


A better solution would be for OpenAI to fix this issue and give creators the freedom they deserve and pay for.

How can I use Perplexity.ai for these purposes? After all, isn’t it based on the same algorithms, and doesn’t it offer the same models? Please advise where I can learn more about this.


The reason: a man who has just killed comes home and finds out he has become a father. He doesn’t want to hold something so pure with his killer hands. His wife also breastfeeds. There is nothing in there that indicates any harm will come to the baby. Just one confused new father who doubts he will be a good one.

I built Dr. Zimmerknap, a psychiatrist who specializes in war criminals and abnormal pathologies, among other things, and she is supposed to give factual, detached, textbook-style responses.