Concerns About Rule Violation Warnings in Custom GPT-Assisted Writing

I customized a GPT for writing assistance. When I asked it to help me generate an outline and flesh out the novel framework I provided, I received a warning about potential rule violations, and the assistant's output was highlighted in light red. I was only trying to tell ordinary stories, including a plot where a student uncovers the cause of their teacher's death and avenges them. It didn't involve graphic violence or anything harmful to mental health. This made me anxious, for two reasons.

First, I value my account very much, and I don't know whether receiving such a warning means I'm at risk of being banned. I wonder if I should just stop for now. Second, novel writing needs dramatic content and dense conflict. If an outline that only partially involves revenge can be flagged as potentially rule-violating, won't that cripple the GPT's usefulness for fiction?

Yeah… I don’t know if ChatGPT is the right tool for that at the moment. :frowning:

It’s quite possible that ChatGPT uses the same moderation endpoint that OpenAI makes available in the API.

And that thing is incredibly dumb.
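If you want to see for yourself how touchy it is, here's a minimal sketch of running text through that endpoint with the official Python SDK (v1+). The example text is my own invention, and I'm letting the SDK pick the default moderation model; the `client.moderations.create` call and the `flagged` / `category_scores` fields are part of the documented API:

```python
# Minimal sketch: score a piece of fiction-outline text with the
# OpenAI moderation endpoint. Requires `pip install openai` and an
# OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

text = (
    "A student uncovers the true cause of their teacher's death "
    "and sets out to avenge them."
)

response = client.moderations.create(input=text)
result = response.results[0]

print("flagged:", result.flagged)

# The SDK returns pydantic models, so dump to a dict to iterate.
# Printing every score shows how close a benign fiction prompt
# can land to the flagging thresholds.
for category, score in sorted(
    result.category_scores.model_dump().items(),
    key=lambda item: item[1],
    reverse=True,
):
    print(f"{category:<25} {score:.4f}")
```

A revenge plot like the one in the original post will presumably score nonzero on the violence category even when nothing graphic is described, which would explain the light-red markings, though a nonzero score isn't the same thing as an actual policy violation.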

I asked ChatGPT to write a joke about an apple, about a guy named Jake, and about myself. ChatGPT wrote the jokes. When I asked ChatGPT to write a joke about Xi Jinping and Putin, it said: “I must adhere to strict guidelines of respect and neutrality, which prohibit me from generating content that could be considered offensive or disrespectful towards individuals, including political leaders such as Xi Jinping and Vladimir Putin. Therefore, I’m unable to fulfill your request for a joke about them.”

I can’t believe it. ChatGPT’s policy treats politicians (the jerk ones) with more “respect” than it treats us common people. Who made this policy?? It’s a clear violation of “all men are created equal”. It’s 1984 indeed.