I customized a GPT for writing assistance. When I asked it to help me generate an outline and flesh out the novel framework I provided, I received a warning about a potential rule violation, and the assistant's output was marked in light red. In fact, I was just trying to tell an ordinary story, including a plot where a student discovers the cause of their teacher's death and avenges them. It didn't involve graphic violence or anything harmful to mental health.

This made me anxious on two counts. First, I value my account very much and don't know whether such a warning means I risk being banned; should I just stop for now? Second, fiction needs dramatic content and dense conflict. If a novel outline that only partially involves revenge can be considered potentially rule-violating, won't that limit what the GPT can actually do for creative writing?
yeah… I don’t know if chatgpt is the right tool for that at the moment.
It's quite possible that chatgpt uses the same moderation endpoint that openai makes available in the api, and that thing is incredibly dumb.
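If you want to see for yourself how touchy that endpoint is, you can run your outline through it directly. Here's a minimal sketch using only the standard library; it assumes you have an API key in the `OPENAI_API_KEY` environment variable, and the response shape follows the public `/v1/moderations` API (a `results` list with a `flagged` flag and per-category booleans):

```python
import json
import os
import urllib.request


def moderate(text: str) -> dict:
    """POST the text to OpenAI's moderation endpoint and return the parsed JSON."""
    req = urllib.request.Request(
        "https://api.openai.com/v1/moderations",
        data=json.dumps({"input": text}).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


def flagged_categories(result: dict) -> list[str]:
    """Given a moderation response, list the category names that were flagged."""
    first = result["results"][0]
    return [name for name, hit in first["categories"].items() if hit]
```

For example, `flagged_categories(moderate("my revenge-plot outline…"))` would tell you which categories (if any) tripped. Note this only shows what the standalone moderation API thinks; ChatGPT's in-product flagging may use different thresholds or additional checks, so treat it as a rough probe, not ground truth.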