“Evening Tea in the Morning?” — On the Hallucination of Temporal Awareness in LLMs

Summary:

I recently published a document exploring a subtle but critical design limitation in large language models: their inability to perceive or reason about real-world time.

The core issue:

LLMs generate time-related language (“now,” “yesterday,” “later”) without any access to system clocks, timestamps, or temporal state, and yet the output often appears contextually correct.

This causes a dangerous mismatch between:

  • Human expectations (surely an AI “knows” the time; it is, after all, a running system), and
  • Model design (time is deliberately excluded to preserve security and determinism; see the grounding sketch after this list).
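
Since the model has no clock of its own, any temporal grounding has to be supplied by the caller. Here is a minimal sketch of that pattern, assuming a generic chat-style message format; `call_llm` is a hypothetical stand-in, not any specific provider's SDK:

```python
from datetime import datetime, timezone

def build_grounded_messages(user_message: str) -> list[dict]:
    """Prepend the host system's clock reading so the model has an
    explicit temporal anchor instead of guessing 'now' from training data."""
    now = datetime.now(timezone.utc).isoformat(timespec="seconds")
    system = (
        f"Current date and time (UTC): {now}. "
        "Treat this value as the single source of truth for words like "
        "'now', 'today', 'yesterday', and 'this evening'."
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user_message},
    ]

# messages = build_grounded_messages("What should I drink right now?")
# reply = call_llm(messages)  # hypothetical chat API call
```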

Why this matters:

  • Developers may incorrectly assume that contextual fluency equals temporal understanding
  • Users often trust AI-generated timestamps or event ordering without verifying them
  • Memory pipelines and multi-agent systems may silently break without time grounding (see the sketch after this list)
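
One concrete mitigation for that last point: stamp memory entries with the host clock at write time, so downstream agents order events by verified wall-clock time rather than by whatever temporal language the model emitted. A minimal sketch, where the `MemoryEntry` shape is illustrative and not taken from any particular framework:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class MemoryEntry:
    """One record in an agent memory store. `created_at` comes from the
    host system, never from model output, so ordering stays trustworthy."""
    content: str
    created_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

def in_event_order(entries: list[MemoryEntry]) -> list[MemoryEntry]:
    # Sort by the timestamp recorded at write time, not by any
    # "yesterday"/"later" phrasing inside the content itself.
    return sorted(entries, key=lambda e: e.created_at)
```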

The document explores:

  • Real-world examples (e.g. ChatGPT suggesting “evening tea” on a morning in Japan)
  • Why models can’t access or manage time (security + determinism)
  • How training bias favors plausible language over truthful sequencing
  • What users and developers should be aware of when trusting time-related outputs (a simple plausibility check is sketched below)
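
As a taste of that last point, here is a deliberately crude plausibility check in the spirit of the “evening tea” example: compare time-of-day words in the model's output against the user's actual local clock. The keyword matching and the default timezone are illustrative assumptions only:

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # Python 3.9+

def plausible_time_of_day(model_text: str, user_tz: str = "Asia/Tokyo") -> bool:
    """Flag obviously wrong time-of-day words against the user's clock.
    A real system would need far richer checks; this only shows the idea."""
    hour = datetime.now(ZoneInfo(user_tz)).hour
    text = model_text.lower()
    if "evening" in text and not 17 <= hour <= 23:
        return False
    if "morning" in text and not 5 <= hour < 12:
        return False
    return True

# plausible_time_of_day("How about some evening tea?")  # False on a Tokyo morning
```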

:page_facing_up: English README.md:
:page_facing_up: English version:
:page_facing_up: Japanese original:

I'd love to hear feedback, especially from those working on memory, agent chaining, or multimodal LLMs.

Thanks!