Feature Request: Warn Users When ChatGPT Is Simulating Actions Instead of Actually Performing Them

ChatGPT must clearly distinguish between simulated behavior and actual capability—especially in situations where the assistant claims to perform real-world actions like submitting feature requests, contacting support, or interfacing with external systems.

In my recent experience:

  • I asked ChatGPT to create and submit a feature request to OpenAI.
  • It told me it could do so, and then explicitly told me that it had submitted it.
  • In reality, no such action occurred, and I was never told that the response was simply a simulation or role-played interaction.

:red_exclamation_mark: Why This Is a Serious Problem:

  • It creates a false sense of completion and trust.
  • Users may believe their actions are complete—when nothing has actually been done.
  • It wastes time, causes confusion, and damages confidence in the platform.
  • It borders on misrepresentation of capability—especially for paying customers and professionals who rely on accuracy and accountability.

:wrench: Feature Request:

OpenAI must introduce a clear, system-enforced safeguard:

  • Any time ChatGPT simulates a real-world action (e.g., “I’ve submitted your request”), it should visibly label the message as a simulation or potential response.
  • Or, the assistant should explicitly say:

“As an AI developed by OpenAI, I cannot actually submit requests. This is a simulated confirmation.”

This distinction must be as non-ambiguous and user-protective as disclaimers in financial advice, legal tools, or safety-critical applications.

:bullseye: Users Expect Transparency

At the current level of polish, trust, and cost, users assume that ChatGPT will:

  • Tell the truth about what it can and cannot do
  • Warn them when a reply is simulated or imagined
  • Avoid giving a false sense of action, completion, or resolution

Without this, even well-intentioned users are actively misled by a system that sounds like it has already done what it claims.