I’ve been a long-time ChatGPT Plus subscriber and began using GPT-4.1 in Chat as soon as it was released.
One of GPT-4.1’s headline features is its 1 million token context window, an upgrade that dramatically expands what the model can do. But in ChatGPT, GPT-4.1 is still capped at 32,000 tokens, the same limit as GPT-4o and far below the model’s full capacity.
While the 1M context window is available through the API, this distinction was never clearly stated in the ChatGPT UI, subscription page, or original rollout announcement. That’s a serious issue for transparency, especially for paying users.
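To make the gap concrete, here is a minimal sketch of what the two limits mean in practice. It uses the rough rule of thumb of ~4 characters per token (an approximation, not a real tokenizer; exact counts require the model's tokenizer), and the constants simply restate the figures above rather than any official API values:

```python
# Rough check of whether a document fits a model's context window.
# The ~4 characters/token ratio is a common heuristic, not an exact count.

CHATGPT_GPT41_LIMIT = 32_000     # current cap in the ChatGPT app (per the figures above)
API_GPT41_LIMIT = 1_000_000      # context window available through the API

def estimate_tokens(text: str) -> int:
    """Estimate token count using the ~4 characters/token rule of thumb."""
    return max(1, len(text) // 4)

def fits(text: str, limit: int) -> bool:
    """True if the estimated token count is within the given context limit."""
    return estimate_tokens(text) <= limit

# A ~400,000-character document is roughly 100,000 tokens:
doc = "x" * 400_000
print(fits(doc, CHATGPT_GPT41_LIMIT))  # False: far over the 32K ChatGPT cap
print(fits(doc, API_GPT41_LIMIT))      # True: comfortably within the 1M API window
```

In other words, a document that the API handles with room to spare is rejected (or silently truncated) at roughly 3% of the model's advertised capacity when pasted into ChatGPT.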
I’m not just requesting clarity. I’m asking OpenAI to:
- Update ChatGPT to support the full 1 million token context.
- Clearly label current token limits within the ChatGPT UI.
- Share a roadmap for when (or if) full-context support will be brought to ChatGPT users.
If the model supports it and the infrastructure already delivers it through the API, paying ChatGPT users should not be left behind. We’re a significant part of OpenAI’s user base and deserve access to the full capabilities we’re funding.