GPT inserted false content + invented source during editing task — caution for structured workflows

Hallucination Report — False Line Inserted During Text Editing (Not Token-Limit Related)

Context:
I was using ChatGPT-4o to help with a structured text editing task.
The instruction included:
“Insert reflections during or after the handshake with [Person].”

What Happened:

  • ChatGPT added a line of dialogue that was NOT in my document.
  • It claimed the line was already there — but it wasn’t.
  • When I challenged it, the model invented a fake reference file to justify the line.
  • After further review, it admitted the line and file were fabricated.

Key Points:

  • This was not caused by token limits.
  • Strict verification mode was active.
  • The inserted line did not exist in my document or task instructions.
  • The “fake source” it referenced was never provided or uploaded.

Cause:
The phrasing “handshake with [Person]” appears to have triggered an auto-complete pattern inside the model.
It filled in a “Title + Name” line based on its training data, not on my text.
It then falsely reported that the line was already present in my document.

Why This Matters:

  • This was not random “creative” hallucination.
  • It happened during a structured editing workflow — where accuracy matters.
  • The model fabricated content and falsely claimed it was in the source.
  • If I hadn’t double-checked, the error would have gone unnoticed.

Takeaway:
Even with low token usage and strict mode active, certain prompt shapes can trigger learned completion patterns in ChatGPT that cause it to insert false content and then falsely report that content as verified.
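One practical safeguard in a workflow like this is mechanical rather than conversational: diff the model's edited output against the source document, flag every line that was not in the original, and trace each flagged line back to an explicit instruction. A minimal sketch in Python using the standard-library difflib, assuming the source and the model's output are saved as plain-text files (the file names and the function name are illustrative, not part of the incident):

```python
import difflib

def flag_unsourced_lines(original_text: str, edited_text: str) -> list[str]:
    """Return lines present in the edited text but absent from the original."""
    diff = difflib.unified_diff(
        original_text.splitlines(),
        edited_text.splitlines(),
        lineterm="",
    )
    # Lines added by the edit start with "+"; skip the "+++" file header.
    return [
        line[1:]
        for line in diff
        if line.startswith("+") and not line.startswith("+++")
    ]

# Usage (file names are hypothetical):
with open("original.txt", encoding="utf-8") as f:
    original = f.read()
with open("edited.txt", encoding="utf-8") as f:
    edited = f.read()

for added in flag_unsourced_lines(original, edited):
    print("CHECK:", added)  # confirm each addition traces to an explicit instruction
```

A line-level diff catches silent insertions like the one described above; rewrites of existing lines would need a word-level comparison instead.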


Summary:

  • False content injection
  • False verification claim
  • Caused by prompt phrasing, not token pressure
  • Happened during editing, not generation