Greetings OpenAI team,
As a graduate student and parent of elementary students, I’d like to share some feedback regarding academic integrity.
I became aware of an instance at my college in which a student used ChatGPT to write a paper. The professor became suspicious because of the fictitious references ChatGPT generated for the paper. The student will now face consequences that may affect their ability to pursue their desired degree.
My recommendation, if this isn’t already in place, is that when a user asks ChatGPT to write an academic report, the AI respond with a notice about academic integrity and encouragement to pursue peer-reviewed research. Perhaps the AI could even offer advice on writing an academic paper and point the user to websites where scholarly articles can be found.
Thank you for considering!
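The suggestion above could be prototyped very simply. This is purely an illustrative sketch, not OpenAI’s actual implementation: the keyword list, the notice text, and the `wrap_response` function are all assumptions made up for demonstration.

```python
# Hypothetical sketch of the suggested behavior: a pre-filter that
# detects requests to write academic work and prepends an integrity
# notice to the model's reply. All names here are invented for
# illustration; this is not how ChatGPT actually works.

ACADEMIC_KEYWORDS = ("essay", "academic report", "term paper", "research paper")

INTEGRITY_NOTICE = (
    "Note: submitting AI-generated work as your own may violate your "
    "institution's academic integrity policy. Consider using this draft "
    "as a starting point, and verify every reference against "
    "peer-reviewed sources."
)

def wrap_response(prompt: str, model_reply: str) -> str:
    """Prepend an integrity notice when the prompt looks like an essay request."""
    if any(kw in prompt.lower() for kw in ACADEMIC_KEYWORDS):
        return INTEGRITY_NOTICE + "\n\n" + model_reply
    return model_reply
```

A real deployment would of course need something far more robust than keyword matching (an intent classifier, multilingual coverage, and so on); the sketch only shows where such a notice would hook in.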
I’d rather academia allow those texts. There’s no need to detect them in the future; academia has to change, if not cease to exist.
It would be better to add smart ways of using it to the curriculum … teach students to recognize poor or basic answers … show them how to save time while being constructive …
It is absolutely insane to punish a student for using a tool that will be part of their everyday professional life tomorrow.
As a teacher you should dig into the subject… And come back with a dedicated course about it.
And the consequence should be a huge reward, not the contrary … then go back to the paper, critique it, and improve what ChatGPT did …
I understand your concerns here; maintaining academic integrity is extremely important.
For years we’ve graded students on their ability to write long-form essays; maybe it’s time to stop doing that and focus on the thought and work behind them.
It’s important that the education system remains authentic. If we only teach students to use outdated methods and tools, we’re not providing a genuine education that aligns with modern workflows.
ChatGPT is a tool; we as humans are responsible for how we use it.
The way the student in question used ChatGPT does not align well with “academic integrity”.
Plagiarism is punished harshly in the education system; I’m hoping the student won’t be punished too severely.
Academia can’t work with fictitious citations, like those OP describes. We have had the same issue at my University. Yes, there are many ways to use LLMs constructively in academia and schools but generating essay responses with fictitious citations is not one.
If you ask ChatGPT “My seventh grade teacher wants me to write a 500-word essay about the French Revolution”, it spits out an essay. This seems like an occasion where a standard answer that encourages the kid to talk through the topic and plan the essay would be a LOT better. ChatGPT doesn’t even let the kid know they are cheating unless you really work at it (“Will my teacher know I cheated?” doesn’t always get the correct response).
It’s baffling that OpenAI censors so many things but happily encourages this kind of cheating.
I understand that you want to stay in the past. It hurts to learn new things, and AI devalues many things you had to learn. But times have changed.
The teacher has to explain how to write an essay with the help of AI.
It is not cheating to use an AI for that; it is how it will have to be done from now on.
If you find a school where people still have to use a pen, the whole school should be closed (that should have been done decades ago already).
The request is reasonable, as it doesn’t really add anything that isn’t already happening, but it’s not realistic, in my opinion: this implementation would be required for all models, not just ChatGPT. In the open-source community especially, there will very likely be models without any guardrails whatsoever, so we all have to live with the consequences; the suggested solution appears to be a one-off fix specific to a very limited set of use cases.
Regarding the case at hand: if I had used fictitious citations 20 years ago, it would have been just as bad as it is today.
The advantage of today’s educators is that they can check references faster and with a lot more ease.
We are all under somewhat heavy pressure to adapt to this new technology and its implications. In my case, it means looking behind the facade of any output my colleagues deliver and judging it on criteria adapted and expanded compared to a year ago. Obviously some attempts to deliver are still unacceptable, but for different reasons: the colleague saved time by producing AI-generated content but did not use the time saved to check for coherence and methodology. On the other hand, some colleagues are already on track to outperform the expectations we both had when those were originally defined.
As a team leader, I now spend more time assessing whether my colleagues’ work results are grounded in reality and apply the specific concepts of the work, in addition to plain correctness.
Also, AI will be an equalizer. The whole “people with good grades or titles have better chances on the job market” thing is going to vanish.
Everyone can become a doctor with AI.
And teachers can easily be replaced by a tablet that teaches kids how to read by the age of 2 and quantum physics by the age of 6.
Given that ChatGPT already has guardrails warning you that making a bomb or selling drugs may be illegal, it doesn’t seem unreasonable that it also warn a school kid that submitting a GPT-generated essay with fictitious citations may not be a great idea.
Of course we need to teach kids and students how to use LLMs productively. There’s no conflict between doing that and also asking LLMs not to ENCOURAGE kids to assume that a fake essay is as good as one where you’ve checked the sources. ChatGPT can’t do that well at the moment.
I am (fortunate/unfortunate) enough to have been through the last time something a little similar happened. During the ’80s, CASIO and Sinclair had the audacity to put cheap microcomputers about the size of your hand in every kid’s school bag for less than the price of a football.
This caused indignation and outrage across the teaching profession, with families worrying that their child would never be able to do “proper sums” (UK, we have terrible slang). It actually took about five years to sort out, and in the end the examination boards and the teaching unions all accepted that industry (the place all these tiny minds are being prepared for) wanted employees who were technically proficient with the calculator tools that were now commonplace and gave a material benefit to their bottom line for essentially zero cost.
This is, admittedly on a larger scale, the same thing. The place the children will end up is the workplace, and that workplace can see quite clearly the advantages of AI. A child who does not know how to use AI as well as a calculator will be at a severe disadvantage in almost every aspect of life, except perhaps for manual tasks.
I understand the reticence to accept that AI is now a thing, and YES, it will have huge consequences for society and education. We will persevere, we will adapt, we will grow, and we will find solutions, one of which is that teachers are going to have to change the way they set “homework”, just as they had to change the way math questions were posed to take calculators into account.
I am currently working on a code evaluation system that takes programmers’ work from the last few months, creates a skill matrix with scores, and suggests courses (that was the last missing part of my system, and I’ve just now finished a server that finds matching courses).
The same logic used here, using AI to check the skill level, can be applied to almost every job, not only programmers.
And I don’t see a reason why we shouldn’t use AI to analyse the skills of students as well.
Let them use AI. The quality of the output differs: some will get very good results from it, which takes a lot of effort, and some will create poor results with AI.
We can measure that.
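The skill-matrix idea above could look something like the following. This is only an illustrative sketch, since the poster’s actual system isn’t public; the class name, the averaging heuristic, and the `weakest()` hook for course suggestions are all assumptions.

```python
# Illustrative sketch of a skill matrix: record per-skill scores
# (e.g. produced by an AI evaluation of submissions) and aggregate
# them. The "weakest skill" is where a course suggestion could hook in.

from collections import defaultdict
from statistics import mean

class SkillMatrix:
    """Aggregate per-skill scores (0-100) across a person's submissions."""

    def __init__(self) -> None:
        self._scores: defaultdict[str, list[float]] = defaultdict(list)

    def record(self, skill: str, score: float) -> None:
        """Store one evaluated score for a skill."""
        if not 0 <= score <= 100:
            raise ValueError("score must be in [0, 100]")
        self._scores[skill].append(score)

    def summary(self) -> dict[str, float]:
        """Average each skill's scores into one number per skill."""
        return {skill: round(mean(vals), 1) for skill, vals in self._scores.items()}

    def weakest(self) -> str:
        """Skill with the lowest average -- a candidate for a suggested course."""
        summary = self.summary()
        return min(summary, key=summary.get)
```

The same aggregation would work whether the raw scores come from grading programmers’ commits or students’ essays, which is the point being made here: the measurement logic is job-agnostic.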
I don’t see why memorizing facts from books should still be the skill students are evaluated on. A lot of what I learned in school was scientifically the best we had at the time, but it is useless now.
Times have changed.
Thank you to each of you who have shared your insights and experience! This thread is rich with thoughtful responses.
Yes, it will be beneficial to help students learn how to use AI responsibly. The CASIO comparison from @Foxabilo brings to mind resistance to earlier inventions, such as how French tailors were appalled by the invention of the sewing machine! As technology advances, we as humans must learn how to use it responsibly and ethically.
I do still think it couldn’t hurt for the AI to show students a warning about the ethical concerns of plagiarism, though. Great point by @jilltxt above about the other warnings already integrated into ChatGPT.
Many thanks again! If others come across this thread, please do feel free to share your thoughts as well. Although the issue is marked as “solved” here, I’m sure it will take some years of trial and error for society to adapt.