GPT-4 makes up source references. It is not OK to cite a knowingly fictional author and a journal in which the source supposedly appeared; doing this is extremely unethical. Please consider validating references before using them. That is what I find necessary at this point.
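One lightweight first pass when validating a generated bibliography is a format check on any identifiers it contains. The sketch below (an illustration, not a full verification tool) checks whether a string is even shaped like a modern arXiv identifier; a well-formed ID can still point at nothing, so every reference must still be looked up by hand.

```python
import re

# Modern arXiv IDs look like 2301.12345, optionally with a version
# suffix such as v2. This checks only the *format* of the ID.
ARXIV_ID = re.compile(r"^\d{4}\.\d{4,5}(v\d+)?$")

def looks_like_arxiv_id(ref: str) -> bool:
    """Return True if ref is shaped like a modern arXiv identifier."""
    return ARXIV_ID.match(ref) is not None

print(looks_like_arxiv_id("2301.12345"))    # well-formed
print(looks_like_arxiv_id("not-a-paper"))   # malformed
```

A reference that fails even this check can be discarded immediately; one that passes still needs to be resolved against arXiv itself and read to confirm it says what the model claims.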
Some model fine-tuning effort by OpenAI has gone into intercepting requests for citations and bibliographies specifically. Some of the more entertaining ways to elicit complete fabrications no longer work.
The “hallucination” of information is fundamental to how a large language model operates. It has no internal concept of truth. It doesn’t know whether it actually has knowledge of the URL of a scientific paper, or even whether the paper exists. What it models is the next likely token, or word, that it should generate.
After it writes https://arxiv.org/abs, the next outputs are just predictions of the most likely numbers to appear there, given what it has received and generated so far.
Multiple times, the AI made up citations that were not requested. My question is why the app is allowed to invent an author and assign them to a journal. Perhaps I should send the output to the journal in question for their interpretation.
Why citations? Because it knows the style of writing you are asking for includes them. You can always tell the AI to avoid producing them…
Consider it built-in AI detection for academic fraud.