New feature deletes messages it thinks violate content policy, but doesn't let you check whether it got it right

So GPT now deletes the text of any response it thinks might conflict with OpenAI's content policy, and then asks me to provide feedback if I think the deletion was a mistake. But how can I provide feedback if I can't see whether it was right or wrong to flag it? It has flagged things it never flagged before, so I really can't tell whether what I wanted to generate was unusual enough to trip the filter on purpose, or whether it just made a mistake somehow. I've done some weird stuff with GPT, so I know what normally sets it off and what doesn't.