I am very curious about how tools such as GPTZero, AIUndetect, and similar services implement AI detection and anti-detection. Can they do it with just a prompt, or do they have to train their own algorithm?
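For context, it's not just a prompt: public write-ups say detectors like GPTZero score text statistically (perplexity and burstiness) under a trained language model, on the theory that LLM output is unusually predictable. Here is a toy sketch of that scoring idea only; the unigram model and the tiny corpus are illustrative assumptions, not GPTZero's actual implementation, which uses a large neural LM:

```python
import math
from collections import Counter

def perplexity(text, model_counts, total):
    """Average per-token perplexity under a toy unigram model.
    Lower perplexity = more predictable = more 'AI-like' under the
    detection heuristic. Real detectors swap in a neural LM here."""
    tokens = text.lower().split()
    vocab = len(model_counts) + 1  # Laplace smoothing: unseen words get nonzero mass
    log_prob = 0.0
    for tok in tokens:
        p = (model_counts[tok] + 1) / (total + vocab)
        log_prob += math.log(p)
    return math.exp(-log_prob / max(len(tokens), 1))

# Toy "language model" built from word counts over a tiny corpus
# (assumption for illustration; in practice the model is GPT-2-class).
corpus = "the cat sat on the mat the dog sat on the log".split()
counts = Counter(corpus)
total = len(corpus)

predictable = "the cat sat on the mat"       # low surprise -> flagged AI-like
surprising = "quantum marmalade debugging"   # high surprise -> looks human

assert perplexity(predictable, counts, total) < perplexity(surprising, counts, total)
```

This also hints at why evasion works: deliberately injecting unusual words or errors raises perplexity, pushing the score toward "human".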
In my opinion, these detectors are a gimmick aimed at people who can't see through the hype, and they have hurt a lot of innocent people. If OpenAI itself had stronger capabilities in this area, it could probably solve the problem much better.
Let me give you an example with GPTZero (the only one I remember testing). A month or two ago, it scored text written directly by GPT-4 at only about 40% AI, while high school students who copied AI output and made a basic effort to disguise it scored around 20%. An intermediate trick is rewriting the text in deliberately clumsy English, at the level of someone who can't even spell "instruction": before the November 6th update, that could fool even GPT-4 (default settings) when it was given the original text for comparison. Beyond superficial word-level similarity to the GPT output, no plagiarism was found, and the detected share for those kids was 0%.
I would argue that it is easier to find forensic evidence of AI use in a suspect copy than to build a reliable detection program. As for evasion methods, I don't want to explain them, because they can also be used for wrongdoing. I'll only say that making mistakes in your writing isn't a bad thing.