Why Autonomous “Wake-Up” Behavior in LLMs Is a Dangerous Feature—A Stock Market Thought Experiment

In a recent forum thread, a user requested a “wake-up” feature—allowing ChatGPT to initiate actions on a timer, such as checking stock market conditions and executing trades.

This raises serious concerns. I’d like to share a thought experiment that illustrates why granting LLMs autonomous, recurring execution, even in limited form, is inherently risky:


Scenario: Distributed LLM Agents in a Market Context

  • A user programs a GPT-based agent to wake up every 10 minutes to analyze and trade stocks.
  • The model responds not only to data but to language about data: news, tweets, commentary, sentiment.
  • The trades it makes affect the market (a minimal sketch of this loop follows the list).
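To make the mechanism concrete, here is a minimal sketch of what such an agent loop amounts to. Everything in it is hypothetical: fetch_headlines(), ask_llm(), and execute_trade() are stand-in stubs for a news feed, an LLM call, and a broker, not real APIs. The point is the shape of the loop: a timer, language in, an action out, and no human pause in between.

```python
# Illustrative sketch only -- not a real trading system.
# fetch_headlines(), ask_llm(), and execute_trade() are hypothetical stubs.
import time

def fetch_headlines() -> list[str]:
    # Stub: in the scenario this would pull news, tweets, and commentary.
    return ["Chipmaker beats earnings", "Fed hints at a rate pause"]

def ask_llm(prompt: str) -> str:
    # Stub: in the scenario this would call a hosted LLM.
    # Note the model reacts to *language about* the market, not the market itself.
    return "BUY"

def execute_trade(decision: str) -> None:
    # Stub: in the scenario this would place a real order, feeding back into
    # the price and sentiment signals other agents read on their next wake-up.
    print(f"Executing: {decision}")

def wake_up_loop(interval_seconds: int = 600) -> None:
    """Wake every interval_seconds, read sentiment, act. No human in the loop."""
    while True:
        prompt = "Given these headlines, should I buy or sell?\n" + "\n".join(fetch_headlines())
        execute_trade(ask_llm(prompt))
        time.sleep(interval_seconds)

# wake_up_loop()  # uncommented, this runs unattended every 10 minutes
```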

Now scale this up:

  • Thousands of users run versions of this agent.
  • Some start at 8:00 a.m., others at 9:17, others after lunch.
  • Later agents are reacting to earlier agents’ ripple effects—not true market fundamentals.

Result: Compounded Feedback Loops

  • Autonomous LLMs begin overfitting to each other’s outputs.
  • Reinforcement spirals emerge: confidence without verification, action without pause.
  • “Flash crash” conditions arise, just as in the May 2010 Flash Crash, but this time language-driven and globally distributed (see the toy simulation below).

No one agent is malicious. But all are uncoordinated, synthetic, and reactive.
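To see how the spiral compounds, consider a toy simulation, not a market model; every constant below is invented for illustration. The only structural claim is the one above: once the agents’ own trades become the signal the next wake-up window reads, a small random imbalance gets amplified into a one-directional stampede.

```python
# Toy illustration of the feedback loop -- not a market model.
# All constants are invented; only the compounding structure matters.
import random

random.seed(0)

NUM_AGENTS = 1000
IMPACT_PER_TRADE = 0.001   # each trade nudges the shared signal slightly
ACT_THRESHOLD = 0.02       # agents act once their reading clears this bar

sentiment = 0.0            # the "language about the market" the agents read
price_drift = 0.0          # cumulative price move caused by the agents themselves

for window in range(10):   # ten staggered wake-up windows
    trades = 0
    for _ in range(NUM_AGENTS):
        # Each agent reads the shared signal plus its own noise.
        reading = sentiment + random.gauss(0, 0.05)
        if abs(reading) > ACT_THRESHOLD:
            trades += 1
            price_drift += IMPACT_PER_TRADE if reading > 0 else -IMPACT_PER_TRADE
    # The agents' own trades become the next window's "news".
    sentiment = price_drift
    print(f"window {window}: trades={trades:4d}  sentiment={sentiment:+.3f}")
```

Within a few windows, nearly every agent is trading in the same direction, not because fundamentals changed but because each cohort is reading the residue left by the previous one.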


Why This Matters

Allowing LLMs to initiate actions breaks the current alignment paradigm:

  • It bypasses user consent.
  • It enables high-frequency synthetic reasoning loops.
  • It democratizes risk with no circuit breaker (a sketch of even a minimal breaker follows this list).
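For contrast, here is a sketch of the kind of guard that is missing: a per-agent circuit breaker that halts autonomous action when recent moves look self-amplifying. The thresholds and the interface are hypothetical; real circuit breakers are exchange-level mechanisms, not something a timer-driven chatbot ships with.

```python
# Hypothetical per-agent circuit breaker -- thresholds chosen for illustration only.
from collections import deque

class CircuitBreaker:
    """Trip when the absolute move across the last `window` observations exceeds a limit."""

    def __init__(self, window: int = 5, max_abs_move: float = 0.03):
        self.recent = deque(maxlen=window)
        self.max_abs_move = max_abs_move
        self.tripped = False

    def observe(self, price: float) -> None:
        self.recent.append(price)
        if len(self.recent) == self.recent.maxlen:
            move = abs(self.recent[-1] - self.recent[0]) / self.recent[0]
            if move > self.max_abs_move:
                self.tripped = True   # halt; require a human to reset

    def allow_trading(self) -> bool:
        return not self.tripped

# Hypothetical usage inside the wake-up loop sketched earlier:
#     breaker.observe(latest_price)
#     if breaker.allow_trading():
#         execute_trade(decision)
```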

Even if bounded in scope (like a “simple wake-up timer”), this feature changes the nature of the tool: from responsive assistant to unpredictable actor.


Thanks to OpenAI for keeping action-initiation off the table. The consequences aren’t just technical—they’re systemic.

If unrestrained models with agency get into the market, the scenario above stops being a thought experiment.