Codex Rate Limits Discussion Thread

What was done was wrong, period. Here is what they did:

  1. Gave us 2X usage for a time, yay.

  2. Took it away, no problem.

  3. Reset our usages, yay!

  4. Nerfed the reasoning ability and usage by like 3-5x.

So, basically whoever was behind it was a master manipulator. Resetting the usage only to nerf things like that overnight… wow. So thank you, but no thank you. And I have noticed a serious performance drop with Pro. It stumbles way more now and does not think things through like it used to; basically, xhigh is the new high or medium.

My guess is the 2x usage promo cost them a lot more than they had imagined, and this is their shady way of making up for it. So what they did was nerf everything in every which way, create another higher-priced tier, and gaslight everyone into thinking the only thing that happened was the 2x usage going away. What’s crazy is I remember a post from a while back, when 2x usage was a thing, that predicted this exact manipulation move.

1 Like

Not only did they reduce usage drastically for Plus users, they lobotomized 5.4 xhigh for Pro users. It’s nowhere near as capable as it was before. It’s basically medium now. I don’t get it. You pay x dollars upfront for the baseline level of service available when you signed up, and then they can just change it? How is that legal? I doubt it is. It would be like walking into a buffet, looking around, deciding to pay, then they take you to another room where everything sucks. Leave it to OpenAI of all companies to pull this unethical behavior…no surprise given who is at the helm.

2 Likes

It has come to my attention that the weekly usage in April is depleting much more quickly than it did in March.

Today I started asking a new question in a code repo I am working on, on a fresh Free plan with a full weekly limit. But after just one question asking it to review my code, the weekly limit was depleted, and I cannot even have it make the revisions for me.

Token usage: total=39,009 input=35,773 (+ 83,456 cached) output=3,236 (reasoning 1,475)
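Reading the reported figures, the total appears to count only fresh input plus output, with cached input tracked separately (that interpretation is my assumption, not something stated in the thread). A quick sanity check on the arithmetic:

```python
# Sanity check on the reported Codex token figures.
# Assumption (mine): "total" = fresh input + output, while cached
# input tokens are reported separately and not counted in the total.
fresh_input = 35_773
cached_input = 83_456
output = 3_236
reasoning = 1_475  # reasoning tokens are a subset of the output count

total = fresh_input + output
print(total)  # 39009, matching the reported total=39,009
```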

I recall that in March the weekly usage was not running out this fast. I could only reach the limit by using it over a stretch of 3~4 days, performing similar tasks. Has anything been changed behind the scenes that users have not been told about?

1 Like

A few hours ago my weekly usage was back at 100%. I had been waiting for this for five days.
“What a great morning,” I thought. “I get to use Codex for two whole days, even though I’ll have to wait another five days for it to recharge.”
Then my subscription expired about 40 minutes ago. I quickly bought the Plus package again. Now Codex shows that my weekly limit is 0 and I have to wait out the cooldown again.
wtf :face_with_bags_under_eyes:

2 Likes

They baited people to build momentum against Claude Code, and when that was done: rug pull.

1 Like

I think it’s time to migrate to Claude Code!

1 Like

If Claude ultimately provides better results, the same rate limits, and better features, why pay for a ChatGPT subscription? I was supposed to renew my subscription today, but I’ll try Claude Code first to test it out. :smiley: :sweat_smile: :rofl:

2 Likes

TL;DR: Codex limits used to feel generous and workable, even for regular users like me who were not exhausting them. The recent reduction feels extremely severe, not minor. I understand the possible business logic behind pushing some users toward higher-tier plans, but this risks damaging long-term user habits and retention. In the AI market, losing users over preventable limit policy may be a much bigger mistake than it looks in the short term.


Codex limits used to reset quite often before their scheduled expiration. Because of that, I was not just happy with the product, but genuinely grateful. Also, except for maybe one time, I was never even a user who fully exhausted my limits. I was not someone using Codex at an extreme level and draining every bit of quota; in most weeks, I still had around 50% to 70% of my weekly limit left before it reset.

That is why I felt comfortable recommending Codex on social platforms. I often reassured people that they probably would not run into serious limit issues. Then, suddenly, the limits were tightened very heavily. This was not a small nerf. It was a major one. The difference between the old experience and the current rate of limit consumption is enormous.

I sometimes wonder whether the thinking is something like this: if one person upgrades to the $120 plan, that offsets the loss of six other users. That way, server load goes down while revenue stays balanced. I can understand that logic. However, there is something extremely important being overlooked here: user habits and long-term retention.

Right now, many of us may not leave immediately simply because of habit. But once the limits make regular usage feel impractical, and once that becomes noticeable enough, people may no longer have much choice but to look elsewhere. And once users move to other platforms, getting them to come back may be much harder than expected.

Especially at the dawn of the AI era, losing customers matters a lot. Losing them for a preventable reason like limit policy could become a much bigger mistake in the long run. On paper, losing six users and replacing them with one higher-paying user may look efficient. But if you eventually lose that one user too, then you have actually concentrated your risk. In other words, replacing six customers with one means a single future loss now costs six times more.
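The revenue math behind that argument can be made explicit. The $120 tier is named in the post; the $20 base price is my inference from "offsets the loss of six other users":

```python
# Rough revenue math for the argument above (base price of $20/mo is
# inferred from the post, not stated; the $120 tier is stated).
base_price = 20
pro_price = 120

# One upgrade offsets six departing base subscribers in revenue terms.
assert pro_price == 6 * base_price

# But the revenue is now concentrated in a single account, so one
# future churn event costs six times what it did before.
loss_before = base_price  # one churned base-tier user
loss_after = pro_price    # the single upgraded user churns
print(loss_after / loss_before)  # 6.0
```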

For that reason, I would strongly recommend reconsidering the recent changes to Codex limits. I believe there is room for a more reasonable balance between customer satisfaction and resource management.

1 Like

But what if they are making a loss?

You can’t sustainably sell inference at a lower cost than it takes to supply it whilst paying back your creditors … and staying in business.

On the evidence of this thread, there appears to be a product-market gap based on price (for some part of the market).

What needs to happen is further innovation so that that gap can be closed.

I’m absolutely sure OpenAI is aware of all of this and on the case like a rash!