How to Talk with AI Like a Human

A user-driven exploration into hallucinations, miscommunication, and how to talk better with inference-based AI

We often hear that “AI hallucinates,” but what does that really mean when we’re talking to it day after day? This is a documentation project I began not as a researcher, but as a user.

What started as personal curiosity soon grew into a set of observations around questions like:

  • Why GPT gives just one line when asked for 1000 pages
  • Why it confidently returns invented data
  • Why I felt like I was “lied to”—even though the AI can’t lie
  • And what it means to talk with something that responds, but doesn’t remember or feel

:blue_book: What’s in the repository

This project contains dual-language Markdown documentation (Japanese and English), including:

  • A redefinition of “hallucination” in dialogue contexts
  • A breakdown of the 5 structural steps that shape how hallucinations emerge
  • A list of 8 technical limitations and practical countermeasures
  • Reflections on what it means to “understand” or “trust” a language model
  • Concrete examples drawn from real interactions with ChatGPT

:open_file_folder: Repository:

:link: GitHub - kou-saki/Talking-to-an-AI-That-Hallucinates: A user-driven exploration into why inference-based AI hallucinates—and how to talk with it more meaningfully.

:man_raising_hand: Why I’m sharing this

I believe we need more open conversations between users and developers—especially about where things break down, and why that might not be a bug, but a reflection of how inference-based AI actually thinks.

If you’ve ever been frustrated, amazed, or just curious about AI’s behaviour, I hope this document offers you something useful.

Feedback and thoughts welcome!