From that tweet, it looks like the way credits are being used right now isn’t intended.
I’ve used 100% of my weekly quota on the Pro plan in just one day, and on the second day I burned through another 2,000 credits, even though my workload was only about 30% of my usual usage.
With limits like this, Codex becomes unusable for active users.
Based on the credit consumption, an average task now costs around $1–3, even for simple “ask” prompts.
If I run 100 tasks a day — which is normal for my workflow — that would mean paying roughly $100–300 per day.
Given the silence from OpenAI representatives and the complete lack of answers on X, it seems this is the new reality.
But one has to wonder — how long can OpenAI sustain itself if its most active developer users leave the platform?
Who will build loyalty and drive adoption of your API services then?
Take a look at the chart — you’ve effectively blocked my ability to work since the new limits were introduced.
I even upgraded from Plus to Pro, but that didn’t change anything.
Since then, my usage curve has been hitting the ceiling almost instantly, and the system keeps throttling me despite significantly lower activity.
Moreover, many cloud tasks simply hang — they never complete, yet the credits are still deducted.
This happens in more than half of all runs, making it nearly impossible to maintain any predictable workflow.
It’s becoming hard to justify paying for a product that actively prevents you from using it.
Additional observation:
In IDE mode, the Codex agent now often refuses to execute tasks and instead produces textual summaries of what it would do — consuming credits in the process.
This behavior started about two days ago and happens in more than half of the runs.
It looks like a new system prompt or throttle layer was introduced, which makes the agent “explain” instead of “act.”
That effectively doubles the cost for active users while cutting task throughput in half.
I will also stop using it in a few days and switch to another platform. Goodbye, Codex.
Honestly, paying $10 just to ask a couple of questions is ridiculous, and I can’t afford to keep paying this every month—so I can’t keep using it. I’m at the point where I need to find an alternative before my plan expires.
It’s a shame, because I found Codex genuinely innovative and highly useful, and I hate that this is how it ends.
I still want clarity: if you can’t pay over $300 a month, are you simply not OpenAI’s customer? Frankly, it feels like a bad joke. The thread may be tagged “resolved,” but it doesn’t feel resolved to me. I want a clear answer on whether this is intentional.
At this point, normal usage costs $100–300 a day.
@OpenAI_Support @SamAltman (if it is you): if that’s actually the intended behavior, please just confirm it — no one will need to ask further questions.
One more piece of feedback, potentially a bug in how usage is tracked. On Nov 6, I ran one prompt (4 versions) on cloud, after the usage metrics were reset and credits were granted (thank you Codex team for that!). Even though that one prompt did not exhaust my daily or weekly limit, I see that it consumed 20 credits. This shouldn’t happen, since, as I understand it, credits are only drawn down after the limits are hit.
@OpenAI_Support Could users with Pro plans get answers about the lower weekly limit we’re experiencing?
I have used Codex consistently for a while, with the same usage patterns, and never had issues with limits — which is why I preferred it over Claude Code. Only this week, I used 85% of my weekly allowance just one day into the week. Now I’ve had to resubscribe to the Claude Max plan. The Codex Pro plan at $200 is too expensive by comparison. If anything, the 5-hour limit is very generous, but that weekly limit seems very small…
Hi all, I’ve been a long-time Codex user and am now effectively no longer a user due to these limits. I moved to OpenCode and use GPT-4o or Grok Code Fast. It’s way quicker than Codex and runs in the CLI. Really easy to use. I wish I had made the move sooner. I’m shipping features and bug fixes nearly 40% faster than before, and you can use any model.
Please unmark this as “Resolved.” The core problem remains: normal workloads now cost an order of magnitude more, hung runs still burn limits, and there’s no clear accounting. Until there’s transparency and a real fix, calling it resolved feels inappropriate. @OpenAI_Support @SamAltman
Just like Claude… it’s unusable whether you code or just use it for general tasks and questions.
The team is actively working on a fix for this issue!
Please monitor this topic for updates:


