Message Context Limits Reduced?

Over the last few weeks, I’ve been running into an issue where ChatGPT rejects my inputs as “too large.” At first I was confused, because I distinctly remember sending larger contexts in the past when using GPT-4o and even GPT-5.

Recently, I started checking my inputs with OpenAI’s tokenizer, and the most recent rejected message was around 22k tokens. From what I’ve read, GPT-4o supports up to 128k tokens and GPT-5 is supposed to support 400k tokens.

So now I’m wondering: did the ChatGPT interface get restricted to a much lower per-message limit, separate from the model’s total context window? 22k feels absurdly small compared to the advertised figures.

Meanwhile, competing models like Gemini and Grok are already handling 1M+ tokens without issue. It’s making ChatGPT increasingly unusable for me, since I hit this message cap more and more often in real workflows.

Has anyone else run into this? Is this a deliberate product change or just a temporary bug?