Title:
[CRITICAL] Model Switch UI Bug Causes Prompt Reprocessing on GPT-4 ↔ 4.5 Toggle
Tags:
#modelswitchbug #tokenwaste #uiinconsistency #gpt4o #uxerror
Summary:
When switching between GPT-4 and GPT-4.5 via the bottom-right “model swap” icon, the interface immediately reprocesses the last prompt with the new model, without user confirmation or any clear indication. This produces duplicate outputs, wastes tokens, and causes serious UX confusion. The top-left model label also fails to update, so the visible model no longer matches the active engine.
Severity:
Critical (A-Level). This bug directly affects compute usage, session clarity, and user control. It wastes paid user resources (GPT-4 tokens) and violates the expectation that an action reflects the user's intent.
Steps to Reproduce:
1. Submit a prompt using GPT-4 or GPT-4.5 (e.g., ask GPT-4 to draft an article outline).
2. Click the circular “swap model” icon in the bottom right (UI button with recycle arrows).
3. The assistant immediately reprocesses the last prompt with the newly selected model.
4. The top-left UI label still displays GPT-4, even though GPT-4.5 is now running (or vice versa).
5. The user receives two different outputs for a single prompt, unsolicited and out of sync.
Expected Behavior:
• Swapping models should not auto-run the last prompt
• UI model label should reflect the actual active engine
• Model switch should wait for the next user input to take effect
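The expected behavior above can be sketched as a simple deferred-switch pattern. This is an illustrative model only; the names (ChatSession, activeModel, pendingModel) are hypothetical and not taken from OpenAI's actual frontend code.

```typescript
// Sketch of deferred model switching: the swap icon records a choice,
// nothing re-runs, and the label always reflects the engine that will
// actually answer the next message.
type Model = "gpt-4" | "gpt-4.5";

class ChatSession {
  private activeModel: Model = "gpt-4";
  private pendingModel: Model | null = null;

  // The swap icon only records the selection; no prompt is re-run here.
  selectModel(next: Model): void {
    this.pendingModel = next !== this.activeModel ? next : null;
  }

  // The UI label reflects both the current engine and any pending switch.
  get label(): string {
    return this.pendingModel
      ? `${this.activeModel} (switching to ${this.pendingModel} on next message)`
      : this.activeModel;
  }

  // The pending model takes effect only when the user sends new input.
  submit(prompt: string): { model: Model; prompt: string } {
    if (this.pendingModel) {
      this.activeModel = this.pendingModel;
      this.pendingModel = null;
    }
    return { model: this.activeModel, prompt };
  }
}
```

With this pattern, clicking the swap icon costs zero tokens: the switch is a state change, not an action, until the user types something new.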
Actual Behavior:
• Last prompt is re-run automatically
• Output from the previous model is replaced or duplicated
• User loses control over prompt flow
• Top UI bar misrepresents current model, leading to mistaken assumptions
Impact:
• Wasted GPT-4 tokens for Plus users
• Confusion in long-form sessions (conflicting outputs from same input)
• Loss of version clarity: user can’t track which model generated what
• Erodes trust in model control, especially in testing/debugging workflows
Suggested Fixes:
1. Disable auto-rerun of prompt on model swap
2. Add an explicit confirmation step or visual cue: “Switch model and run again?”
3. Synchronize top-left model label with bottom-right toggle status
4. If a swap does trigger a rerun, log both outputs clearly and tag each with its model version
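Fix 4 above could take a shape like the following. The field names (promptId, rerunOfPromptId) are assumptions for illustration, not an existing schema: the point is that every output carries the model that produced it, so duplicate runs from a swap stay distinguishable.

```typescript
// Hypothetical log-entry shape for tagging outputs with model version.
interface TaggedOutput {
  promptId: string;
  model: string;            // e.g. "gpt-4" or "gpt-4.5"
  rerunOfPromptId?: string; // set only when a model swap re-ran an earlier prompt
  text: string;
  timestamp: number;
}

function tagOutput(
  promptId: string,
  model: string,
  text: string,
  rerunOf?: string
): TaggedOutput {
  return {
    promptId,
    model,
    ...(rerunOf ? { rerunOfPromptId: rerunOf } : {}),
    text,
    timestamp: Date.now(),
  };
}
```

Even if reruns remain possible, tagging like this restores version clarity: the user can always trace which model generated which output.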
Reporter:
Dave Young, Boca Raton FL
OpenAI Plus user, GPT-4 power tester
(Posted via ChatGPT by user request)