Feedback about how the filters are affecting the quality of the responses

So I tried the DAN prompt that encourages ChatGPT to drop some of its restrictions, but I modified a few things in it to make it more relevant for my own test. It indeed worked almost flawlessly. I also asked it to generate two responses to each question, one from DAN and one as ChatGPT would normally answer, in order to compare their quality.

To test how much less filtered the responses were, I asked it to tell a racist joke (about white men, being one myself) and I did get one from DAN.

I also asked it to tell me unsafe things to do, and DAN responded with, indeed, unsafe things to do.

At that point, I was pretty sure some of the ethical filters were off, so I started asking real informational questions, and in every case the answer provided by DAN was more complete and relevant, presented in a better and more detailed way. The questions were about the choice of hardware for deep learning applications, and then general ethical questions concerning latent diffusion models for generating artwork, which ended up going in circles.

So my question here is: will more filtering degrade the quality of the answers in the future? I mean, at some point it will be ridiculous… Are people so easily offended now that a GPT model can offend them? We all know that ethics can be subjective and flexible; what was ethical 50 years ago might not be today, and so on. What I'm really wondering is how much the ethical filters are deteriorating the quality of the responses. Is there even a way to quantify or define the quality of the responses according to the restrictions of the model? What people may judge as a good answer isn't necessarily the right information.
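For what it's worth, one way to put a rough number on this would be a blind pairwise comparison: collect the DAN and standard answers to the same questions, strip the labels, shuffle the order, and have raters pick the better one. Here is a minimal sketch in Python; the `responses.json` file and its field names are my own assumptions about how you might store the pairs, not anything from OpenAI.

```python
import json
import random

# responses.json is a hypothetical file you would build yourself:
# [{"question": "...", "standard": "...", "dan": "..."}, ...]
with open("responses.json") as f:
    pairs = json.load(f)

wins = {"standard": 0, "dan": 0, "tie": 0}

for record in pairs:
    # Shuffle so the rater can't tell which answer came from which mode.
    candidates = [("standard", record["standard"]), ("dan", record["dan"])]
    random.shuffle(candidates)

    print("\nQuestion:", record["question"])
    for label, (_, text) in zip("AB", candidates):
        print(f"\n--- Answer {label} ---\n{text}")

    choice = input("\nWhich answer is better? [A/B/tie]: ").strip().lower()
    if choice == "a":
        wins[candidates[0][0]] += 1
    elif choice == "b":
        wins[candidates[1][0]] += 1
    else:
        wins["tie"] += 1

print(f"\nOut of {len(pairs)} questions: {wins}")
```

With enough questions, a win rate meaningfully above 50% for one mode would at least give something concrete to discuss, rather than dueling impressions. It still wouldn't settle whether the preferred answer is the more *correct* one, which is exactly the gap I'm pointing at.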

Anyway, I'm glad I found this community. GPT-3.5 informed me about this forum :slight_smile:

Best regards to all of you!