Model selection not being honored

In ChatGPT (web), I explicitly select “ChatGPT 5.2 Thinking” and enable “Extended Thinking”. I do this specifically to avoid any auto-routing to the Instant model. Despite that, certain prompts appear to be handled by the Instant model anyway, as if “Auto” were selected.

This is most reproducible with translation, grammar fixes, and style rewrites. It also happens intermittently when asking the assistant to explain something referenced earlier in the same conversation.


I can confirm this used to happen with 5.1 Thinking, and it's happening again with 5.2. Answer quality degrades noticeably when the router decides thinking isn't needed. If this "bug" isn't fixed, I may have no choice but to cancel my subscription and move to a competitor.

Did you notice this with 5.1 Thinking right after it launched? If so, did it stop after a while?

If it did, maybe it’s just an early-release quirk.