We all know that AI detectors produce a lot of false positives and false negatives.
Let’s say a high school or college student uses ChatGPT on an English assignment, knowing that using AI is against the honor code.
If the school gets suspicious of this student, why can’t they simply contact OpenAI and ask them to search their database to see if parts of their assignment were, in fact, generated by GPT?
Seems like a very easy fix to the AI ethics problem to me. What do y’all think?
Absolutely not.
That would be overreach of the highest order on the school’s part, and it would be open to untold abuse. Just… nope. OpenAI keeps logs for legal compliance for 30 days, and that’s it.
There’s also the logistical problem of searching hundreds of millions of users’ data to accurately narrow things down to the one conversation whose output was merely “like” the assignment. I don’t think it’s a good idea myself.
While that may certainly become possible at some point, I think it is a bad policy as a matter of course.
If it does happen, it will most likely be when a student is accused and found guilty of plagiarism and the incident leads to a lawsuit.
Then the records will be subpoenaed by one or both parties.
But, there is zero probability OpenAI will give this information up without a court order.
The only solution I can think of for this issue is for teachers to query the student about what they wrote. Any student who wrote their own paper can answer questions about it. It takes more work from the teacher, though.
Privacy. No way around that, at that level.