The pricing page clearly states a 128K context window. However, since the launch of o3, the only models that seem to actually support it are GPT-4o and o1-pro.
The former o1 and o3-mini models handled the larger context, but the new models appear to be limited as if I were on the "Plus" tier, when I'm actually on the $200/month Pro tier.
Please update the marketing material to clarify, deliver on the promise, or explain how the o3 tokenizer differs from the rest.
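
For anyone who wants to sanity-check whether a prompt should fit, here's a minimal sketch using tiktoken. It assumes the o-series models share the o200k_base encoding used by GPT-4o (that's my assumption, not something OpenAI has documented for o3), and the input file name is hypothetical:

```python
# Minimal sketch: compare a prompt's token count against the advertised 128K window.
# Assumes o-series models use the o200k_base encoding (same as GPT-4o) -- unverified for o3.
import tiktoken

ADVERTISED_CONTEXT = 128_000  # tokens, per the Pro pricing page


def fits_in_context(prompt: str, encoding_name: str = "o200k_base") -> bool:
    """Return True if the prompt's token count is within the advertised window."""
    enc = tiktoken.get_encoding(encoding_name)
    n_tokens = len(enc.encode(prompt))
    print(f"{n_tokens} tokens vs. advertised {ADVERTISED_CONTEXT}")
    return n_tokens <= ADVERTISED_CONTEXT


if __name__ == "__main__":
    # "long_prompt.txt" is a placeholder for whatever prompt you're testing.
    with open("long_prompt.txt") as f:
        fits_in_context(f.read())
```

In my case, prompts that come in well under that count still behave as if they're being truncated on o3, which is what makes me think the tier limit, not the tokenizer, is the issue.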