Timer Simulation and Expectation Alignment: A User-Model Co-Written Artifact

I’ve seen earlier requests for timer features and push notifications (e.g., Nov 24, Feb 28, Mar 8). I support those asks, but I want to draw attention to a related problem in how ChatGPT currently responds when users ask it to “set a timer.”

The issue is not just a lack of functionality: the language the model uses can mislead users into thinking the function already exists. This is a trust and transparency issue, not just a feature gap.

When a user asks, “Can you set a timer?”, they are expecting a real alert or notification at the end of the interval—just like with a kitchen timer or Siri. In a recent session, we discovered that while ChatGPT can log future times and check against them if prompted, it cannot autonomously notify the user when a timer has elapsed.

However, the model responded “yes” to the prompt “can you set a timer?”, which led the user to expect an alert. This mismatch between simulated capability and actual functionality erodes trust and gives a false impression of real-world utility.

We co-wrote the following statement to summarize the issue and propose clearer defaults for handling this kind of request:

Statement on AI Timer Functionality and Expectation Alignment

When a human asks, “Can you set a timer?”, they are asking for a function that includes notification or alert behavior at the expiration of a defined interval. This reflects a deeply ingrained understanding of what a timer is: an instrument that does not merely track time but actively prompts attention when that time has passed.

In contrast, current language models, such as ChatGPT, can simulate the act of setting a timer by logging a future time and checking against it when asked. But they cannot autonomously act, notify, or “wake up” to alert the user when that moment arrives. There is no background process, event loop, or interrupt mechanism available within a chat session.
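To make the distinction concrete, here is a minimal Python sketch. It is our illustration only; the `PassiveTimer` class and `real_timer` function are hypothetical names, and nothing here reflects ChatGPT's actual internals. The passive version can only report status when explicitly queried, while the real version uses a background thread to fire on its own:

```python
import threading
import time
from datetime import datetime, timedelta

class PassiveTimer:
    """Roughly what a chat model can do: record an expiry time and
    report status only when explicitly asked. Nothing runs in the
    background between requests."""

    def __init__(self, duration_seconds: float):
        self.expires_at = datetime.now() + timedelta(seconds=duration_seconds)

    def check(self) -> str:
        # Runs only when the user prompts again; there is no callback.
        if datetime.now() >= self.expires_at:
            return "The timer would have expired by now."
        remaining = (self.expires_at - datetime.now()).total_seconds()
        return f"About {remaining:.0f} seconds remain."

def real_timer(duration_seconds: float) -> threading.Timer:
    """What a dedicated timer app does: a background thread invokes a
    callback when the interval elapses, with no further user input."""
    t = threading.Timer(duration_seconds, lambda: print("DING! Timer elapsed."))
    t.start()
    return t

if __name__ == "__main__":
    passive = PassiveTimer(2)
    real_timer(2)
    time.sleep(3)           # the real timer prints on its own at ~2s
    print(passive.check())  # the passive timer answers only when asked
```

The second path is exactly what a chat session lacks: a thread (or event loop, or OS interrupt) that survives between user messages and can initiate contact on its own.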

If a user asks “Can you set a timer?”, the correct and transparent response from the model should be:

“I can track the passage of time and calculate when a timer would expire, but I cannot notify you when that happens. For real-time alerts, I recommend using a dedicated timer tool or app.”

Failing to disclose this limitation up front may give users the impression that the AI is more functionally capable than it is. This not only erodes trust—it also creates the risk of users depending on the system for time-sensitive behavior it cannot deliver.

This critique was surfaced and articulated in dialogue between Robert van de Walle and ChatGPT, as part of a broader inquiry into language, trust, and tool-use clarity in AI systems.

We hope this helps clarify a common confusion and might inform future tuning. Happy to answer questions or engage in discussion here.

—Robert van de Walle & ChatGPT

Upon reflection, the risks of a timer feature outweigh the benefits. Until user state can be accurately modeled and guardrails are in place to ensure humans maintain control of the models, building agency into AI simply cannot happen.