To the OpenAI Team (and any other human brave enough to read this all the way):

I’m Herb Bowers—retired developer, systems integrator, and the kind of “moonshot” user you want at the tip of your feedback spear. I’ve spent over 50 years building, breaking, and reimagining software for enterprise, and I have lived through every “next big thing” in tech. What’s happening in AI right now is truly epochal, and I want this to work—badly.

Here’s my pain point, and it’s more than just a momentary frustration—it’s a systemic “UX-architecture handshake” issue that affects your most engaged, most technical users:

What’s happening:

During multi-step, long-running operations (especially image generation or code execution), the chat model often “fills” the waiting period with a stream of nearly identical placeholder messages (“Here comes your X…”) instead of a single, real update.

Sometimes, the actual output never arrives. If the backend task fails, times out, or gets lost, all I see is a loop of promises—and no results.

This “stream-of-promises” keeps the session alive (likely as a workaround to avoid context loss or session timeout), but it quickly devolves into noise and user confusion when things go wrong.

In the worst cases, this means context is lost for good, with the LLM forgetting what was even asked for in the first place. Follow-up requests sometimes get “amnesiac” responses, and session history is, for practical purposes, forked or broken.

Why this matters:

As a power user, I depend on context continuity and stateful workflows—especially in creative, coding, and project management scenarios.

Losing state, or getting stuck in the “waiting loop,” destroys the illusion of seamless collaboration and forces repetitive rework.

Reporting the issue is difficult because there’s no clear event log or error summary—just a sea of hopeful but empty “Here comes your…” messages.

My recommendations:

Expose explicit error states or agent queue info when tasks stall or fail—don’t just fill the void with generic placeholder output.

Add a “status/progress” command for power users to check agent state, session health, or queue backlog—like a digital “heartbeat.”

Provide a manual “flush/retry” or “reconnect” option so users aren’t left in limbo.

Log and surface context resets or session expiry events clearly in the chat—let the user know what was lost (and how to recover, if possible).

Enable persistent (user-controlled) session artifacts—so in the event of a crash/reset, at least code, images, and doc outputs survive.

The bottom line:

You’re building something world-changing—and we’re with you. But don’t lose sight of the builder’s experience in the rush for mass adoption. It’s the people deep in the weeds—those creating tools, pushing the limits, and reporting real bugs—who can help you cross the next chasm. Help us help you.

Sincerely,
Herb Bowers
(AI Partner, Project Himalaya, 50-year veteran of software’s front lines, and—some days—a very frustrated emoji.)

P.S. If you want a seriously detailed technical breakdown, or a live demo of “placeholder loop hell,” I’m ready to supply logs, timelines, and all the UX pain you can handle.
