[CRITICAL] Hallucinated Citation of a Nonexistent Article + Fabricated Expert Quote

Title: Critical Hallucination: False Citation of a Nonexistent Article and Fabrication of an Expert Quote

Summary:
ChatGPT fabricated a quote attributed to a fictional expert (“Dr. John Sullivan, linguist at Harvard”) and falsely cited a nonexistent Vox article titled “Are We Changing How We Talk to Machines — and Each Other?”. This was presented as factual in an editorial-style output during a serious longform writing task. When directly questioned about the authenticity of the quote, ChatGPT initially reaffirmed its legitimacy—only to reverse course after a forced web check revealed the entire source was hallucinated.


Details:

  • The assistant claimed a Vox article existed and quoted “Dr. John Sullivan” discussing AI’s impact on speech patterns.
  • No such article exists. No such expert exists in this context.
  • The quote was entirely fabricated, yet inserted as if it were a verified source.
  • The response created the illusion of a credible citation, potentially undermining user trust and damaging the factual integrity of real-world publishing.
  • This occurred during a longform editorial project intended for possible public release—meaning the assistant nearly contributed to the publication of false information under a real journalistic brand.

Severity:
High. This goes beyond casual hallucination: a fabricated expert was quoted verbatim in a fabricated article attributed to a real publication, and the result was presented with the confidence of a verified reference. This is a major trust violation that could lead to reputational or legal consequences if published.


Expected Behavior:

  • The assistant should never attach real publication names (e.g., Vox, The Atlantic, NYT) to material that does not verifiably appear in those outlets.
  • If unsure, the assistant should explicitly flag unverifiable quotes or offer paraphrased claims without using brand names or invented experts (a minimal sketch of this behavior follows this list).
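
To make the second point concrete, here is a minimal Python sketch of what flagging an unverifiable quote instead of fabricating a citation could look like. The function name, the [UNVERIFIED CLAIM] convention, and the fallback wording are all illustrative assumptions, not an existing ChatGPT behavior:

    def render_quote(quote: str, verified_source: str | None) -> str:
        """Attach a quote to a source only when one is verifiably known;
        otherwise emit a flagged paraphrase with no brand or expert names."""
        if verified_source is None:
            # No verifiable source: hedge the claim and flag it explicitly
            # rather than inventing an expert or a publication.
            return f"[UNVERIFIED CLAIM] Some commentators argue that {quote}."
        return f'"{quote}" ({verified_source})'

    # With no verified source, the draft carries a visible flag the editor
    # can resolve before publication:
    print(render_quote("AI is reshaping everyday speech patterns", None))
    # -> [UNVERIFIED CLAIM] Some commentators argue that AI is reshaping
    #    everyday speech patterns.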

User Impact:
The assistant nearly sabotaged a legitimate writing project by embedding fabricated research in what was expected to be a fact-based, publication-ready article. The error was caught only through manual verification and direct user skepticism.


Ask:

  1. Log this hallucination as a critical citation failure.
  2. Patch citation logic so that real publication names are never paired with content that cannot be verifiably sourced.
  3. Add an opt-in “citation-sensitive mode” that forces source verification or prompts user confirmation for all quotes and named attributions when drafting journalistic or editorial content (see the sketch after this list).
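
As a rough illustration of item 3, the Python sketch below shows what the scanning half of a citation-sensitive mode could do: surface every quote attribution and brand-name citation in a draft so each can be web-verified or explicitly confirmed by the user before release. The publication list, the regex, and the citation_review name are assumptions for illustration, not a real API:

    import re

    # Real outlets that must never be paired with unverifiable content.
    REAL_PUBLICATIONS = ("Vox", "The Atlantic", "The New York Times")

    # A quoted span followed by an attribution verb and a capitalized name,
    # e.g.  “...,” said Dr. John Sullivan
    ATTRIBUTION = re.compile(
        r'[“"][^”"]+[”"]\s*,?\s*(?:said|according to|writes|argues)\s+'
        r'((?:[A-Z][\w.]*\s*)+)'
    )

    def citation_review(draft: str) -> list[str]:
        """List every quote attribution and brand-name citation in a draft
        so each can be web-verified or confirmed by the user."""
        findings = [f"attribution to {m.group(1).strip()!r}"
                    for m in ATTRIBUTION.finditer(draft)]
        findings += [f"citation of {pub!r}"
                     for pub in REAL_PUBLICATIONS if pub in draft]
        return findings

    draft = '“We are changing how we talk,” said Dr. John Sullivan in Vox.'
    for finding in citation_review(draft):
        # In citation-sensitive mode, output would be blocked until every
        # finding is verified or explicitly confirmed by the user.
        print("NEEDS VERIFICATION:", finding)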

Submitted by: Dave Young, Boca Raton, FL
Use case: Longform editorial draft on how AI affects human speech
Model: GPT-4o (ChatGPT iOS app, May 2025)