"I Asked It to Forget, but It Didn't": A Case Study of Miscommunication in AI Prompt Recall

Summary

This post examines a case of miscommunication between a human user and ChatGPT in which a prompt explicitly instructing the AI to “forget” or “cancel” a reminder was apparently ignored.
The incident points to a structural issue in how inference-based AI handles prior instructions: the model itself holds no state between turns, but the full conversation history is resent and reinterpreted on every call, so an earlier instruction can be reconstructed by inference even after the user has asked for it to be discarded.
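To make that concrete, here is a minimal Python sketch of a stateless chat loop. `call_model` is a hypothetical stand-in for any chat-completion API (not a real library call); the point is what the client resends on each turn:

```python
# Minimal sketch of a stateless chat loop. `call_model` is a hypothetical
# stand-in for a chat-completion API; it only reports what it was given.

def call_model(messages):
    # A real client would send `messages` to a model endpoint here.
    return f"(model reply after seeing {len(messages)} messages)"

history = []

def send(user_text):
    # Each turn appends to the transcript and resends the WHOLE transcript:
    # the model is stateless, so the client supplies all "memory" itself.
    history.append({"role": "user", "content": user_text})
    reply = call_model(history)
    history.append({"role": "assistant", "content": reply})
    return reply

send("Remind me to submit the report at 17:00.")
send("Forget that reminder.")

# `history` still contains the original reminder verbatim. "Forget" arrived
# as more context, not as a deletion; the model must infer the cancellation
# on every later turn, and nothing guarantees that inference holds.
print(history)
```

In other words, “forget” is a request expressed as additional content in a channel that can only ever append content.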

Why This Matters

  • Highlights the mismatch between user intent and AI behavior
  • Raises questions about context persistence and inference reconstruction
  • Demonstrates the need for transparent prompt execution and memory management

GitHub Repository (Full Case Report)

Full Markdown report (JP/EN): https://github.com/kou-saki/i-asked-it-to-forget

Includes:

  • Narrative of the prompt interaction and unexpected recall
  • Technical analysis of prompt interpretation in stateless models
  • Terminological glossary for non-technical readers
  • Recommendations for prompt design and potential developer-side fixes

Context

This case study is part of an ongoing project to document observable inconsistencies, hallucination sources, and inference patterns in LLMs like ChatGPT.

We hope it can contribute to:

  • Better prompt designs for critical tasks (a client-side sketch follows this list)
  • Transparency frameworks for stateless dialogue systems
  • Discussion around accountability and expectation-setting in human–AI interactions
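On the prompt-design point, one possible developer-side mitigation, offered here as an assumption rather than a summary of the report's own recommendations, is to make “forget” a client-side operation that edits the transcript before the next call. The sketch reuses the transcript shape from the earlier example; the `forget` helper and its keyword matching are illustrative only:

```python
# Sketch of a client-side "forget": instead of appending "forget X" and
# relying on the model to infer the cancellation, the client removes the
# stale turns from the transcript it will resend.

def forget(history, keyword):
    # Drop every turn mentioning the cancelled item so the model never sees
    # it again. Keyword matching is deliberately crude; a real client would
    # tag reminders with IDs at creation time and delete by ID.
    return [m for m in history if keyword.lower() not in m["content"].lower()]

history = [
    {"role": "user", "content": "Remind me to submit the report at 17:00."},
    {"role": "assistant", "content": "Okay, I'll remind you at 17:00."},
]

history = forget(history, "remind")
# The next call_model(history) receives a transcript with no trace of the
# reminder: no inference about "forgetting" is required at all.
print(history)  # -> [] (both turns mentioned the reminder)
```

The design choice this illustrates: deletion is an operation on the transcript the client controls, not a behavior the model can be reliably instructed to perform.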

Looking forward to feedback, technical counterexamples, or pointers to related research.