Pro plan: hit 5-hour limit twice (in 2h and 1.5h) and nearly exhausted weekly cap in 1 day after today’s update

Hi team,

Until recently, Plus on the web felt effectively unlimited. Two days ago (Oct 31) limits were introduced, and since then my experience has gotten dramatically worse, even on Pro:

  • Today I hit the 5-hour limit twice — first time after ~2 hours, second time after ~1.5 hours.

  • In one day I’ve nearly exhausted my weekly limit.

  • Some runs still hang and never produce a result, yet they consume my limits — this is especially frustrating.

  • I’ve reduced usage on purpose (fewer tasks, no parallel runs) and even upgraded to Pro. Screenshot attached.

To be clear, this is not about Plus vs Pro. The core issue is that my normal workload — which used to cost $20/month — now effectively looks like $1,500 to keep the same pace. That’s an order-of-magnitude change overnight.

Please clarify:

  1. Is this the expected behavior after the changes?

  2. What are the exact limits (per model/IDE/web) and how is usage counted (tokens, wall-clock, retries, background reasoning, hung runs)?

  3. What’s the plan to prevent/credit stuck jobs that eat limits with no output?

  4. Is there a way to raise limits or get a transparent real-time breakdown so we can plan responsibly?

If this is the new normal, please say so — I’ll have to plan accordingly and look for alternatives. Thanks.

13 Likes

I have the Plus plan and just did about 30 minutes of coding in the Codex Cloud interface, and it triggered the 5-hour limit. That has never happened before. I did look at the new doc for the credit-based usage, and Cloud tasks are ~25 credits/msg vs 5 credits/msg for Local (I assume they mean Codex CLI). I did a couple of hours of coding yesterday and hit the 5-hour limit, which seemed more in line with past experience, but today seems very different.

https://help.openai.com/en/articles/11481834-chatgpt-rate-card
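
For a rough sense of scale, here is the back-of-the-envelope I did with those rates. The per-message costs are from the rate card as I read it; the credit budget per 5-hour window is purely a guess for illustration, since I have not seen it published:

```python
# Back-of-the-envelope only, NOT official numbers.
CLOUD_CREDITS_PER_MSG = 25   # Codex Cloud rate, as listed on the rate card
LOCAL_CREDITS_PER_MSG = 5    # Local (Codex CLI?) rate, as listed on the rate card
WINDOW_BUDGET = 250          # hypothetical credits per 5-hour window (my assumption)

print(f"Cloud messages per window: ~{WINDOW_BUDGET // CLOUD_CREDITS_PER_MSG}")  # ~10
print(f"Local messages per window: ~{WINDOW_BUDGET // LOCAL_CREDITS_PER_MSG}")  # ~50
# With these assumptions, ~10 cloud messages vs ~50 local messages per window,
# which would explain hitting the cap after only ~30 minutes of cloud work.
```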

4 Likes

I think a patch in the last few hours completely messed things up. I ran a Codex task on my Business plan account (which, according to the documentation, has the same limits as Plus accounts): one task consisting of one Q&A and one coding session, touching about 200 lines of Python code, and my 5-hour limit dropped by 99%. This was the only Codex task I ran in the last 5 hours. This has never happened before; I only ever came close to using up my 5-hour limit when I was deep in a vibe-coding session, and now it only takes one simple task.

Whether this is a bug or a deliberate change, OpenAI should roll it back; it makes Codex far more expensive than any comparable tool.

3 Likes

Similar experience here. I’m on Plus, and 2 tasks today got me to the 5-hour limit. The same 2 tasks also account for around 35% of my weekly usage limit. Assuming all tasks consume a similar amount of credits, I can only run around 5–6 tasks a week (rough arithmetic after the questions below)! I would consider this unusable. I would also like to know:

  1. Is this expected? If it’s unexpected, will the usage be refunded?
  2. For an identical job, would running locally count less toward the usage limit?
  3. Is there any way to choose a lower-cost model?
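
For reference, here is the rough arithmetic behind the 5–6 tasks estimate above; the numbers are just what my dashboard shows, nothing official:

```python
# Rough arithmetic behind the "5-6 tasks a week" estimate (my dashboard numbers, not official).
tasks_run = 2
weekly_usage_consumed = 0.35                  # dashboard showed ~35% of the weekly limit used
per_task = weekly_usage_consumed / tasks_run  # ~17.5% of the weekly limit per task
print(f"Estimated tasks per week: ~{1 / per_task:.1f}")  # ~5.7 tasks, if all tasks cost about the same
```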
3 Likes

I’m on Plus. Since yesterday’s Codex “Usage” rollout, limits feel broken: 8 tasks already count as 60% of my weekly cap, and 3 cloud tasks drained the entire 5-hour window. I’ve hammered Codex for six months and never hit limits — even with heavier workloads. This looks like a metering regression/misclassification, not intended behavior.

2 Likes

Now I hit the limit within two ten-minute sessions. At rates this strict, it’s borderline unusable.

7 Likes

I second this. Over the past two days, I’ve already hit the 5-hour rate limit twice on the Plus plan using the web app. According to the dashboard, I’ve also used nearly 60% of my weekly quota, despite having completed less than 5% of my usual weekly workload.

It would really help to know whether these significantly stricter limits are the new normal, so I can plan ahead and explore alternatives if needed. Until then, Codex on the web (Plus) is almost unusable.

3 Likes

Today I was shocked to see my usage quota exceeded for the first time since I started using Codex. I’m a big fan of the web interface, but just 3 tasks and I’ve already exceeded my usage limit on the Plus plan? I hope this will be fixed ASAP and is NOT the new normal.

9 Likes

2 tasks and a fresh 5-hour limit has already been hit. It’s now borderline unusable.

6 Likes

Tested again just now with two simple coding tasks.

The 5-hour limit dropped from 100% to 0%.

The weekly usage limit dropped from 70% to 40%.

3 Likes

Plus account here. Ran 5 prompts on Codex Cloud and already hit the 5-hour usage limit. Hoping this is a bug; it doesn’t feel right, especially considering some tasks never complete and plan mode is buggy…

3 Likes

Got this message: ‘I’m sorry, but I’m not able to complete this task.’
And somehow it used up 3% of my weekly quota on a Pro plan.

5 Likes

Got the same message again — and it burned the last 2% of my weekly quota.
Down to 0%. Well done, OpenAI. :frowning:

5 Likes

Posting more evidence to support this issue; see the screenshot below.

5 Likes

Confirming the same limitations on my Team plan after about 5 cloud tasks (I was easily running 50–300 a day prior). Unfortunately, it looks intentional, given this update to their documentation: https://help.openai.com/en/articles/11369540-using-codex-with-your-chatgpt-plan

It’s not even usable with these new limits.

2 Likes

I wanted to add some data on how the current Codex Cloud quotas behave in practice, and why the pricing tiers need to evolve alongside the product’s new capabilities.

The screenshot below shows my actual usage curve. The steep drop-offs aren’t downtime; they’re throttling events.

A single Codex debug session (one prompt, one PR, one GitHub connect review ending in a thumbs-up) consumed roughly 20% of my weekly quota.

That one prompt took 68 minutes to process, during which Codex:

  • Spun up the sandbox

  • Resolved 8 failing tests

  • Traced dependencies

  • Patched, linted and type-checked

  • Re-validated the pipeline with pytest

  • Produced a passing PR with full test confirmation

That one reasoning chain triggered ~15–20 individual “tasks”, roughly 250–300 credits. Every pytest, ruff, mypy or git command counted as its own billable action, even though they were all part of one continuous engineering task.

Here’s where the misalignment becomes obvious:

| Plan | Typical weekly allowance | Cost (USD) | Realistic capacity | Outcome |
| --- | --- | --- | --- | --- |
| Plus | ~500–700 credits | $20 | 2–3 deep sessions | Throttled within hours |
| Pro | ~2,000–2,500 credits | $200 | 7–10 deep sessions | Still capped mid-week |
| What Codex enables | Continuous reasoning | | Dozens of multi-hour runs | Not yet supported |
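
As a sanity check on the “realistic capacity” column, here is the rough arithmetic. Both the per-session cost and the weekly allowances are my own estimates from this thread, not published figures:

```python
# Sanity check on the table above; all numbers are my own estimates, not official figures.
CREDITS_PER_DEEP_SESSION = (250, 300)  # one 68-minute debug session, ~15-20 billable "tasks"
WEEKLY_ALLOWANCE = {"Plus": (500, 700), "Pro": (2000, 2500)}  # estimated weekly credit budgets

for plan, (low_budget, high_budget) in WEEKLY_ALLOWANCE.items():
    worst = low_budget // CREDITS_PER_DEEP_SESSION[1]   # small budget, expensive sessions
    best = high_budget // CREDITS_PER_DEEP_SESSION[0]   # big budget, cheap sessions
    print(f"{plan}: roughly {worst}-{best} deep sessions per week")
# Prints ~1-2 for Plus and ~6-10 for Pro, roughly in line with the capacity column above.
```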

Codex Cloud now operates as an autonomous engineering agent, not a chatbot.

It plans, executes, validates, and delivers. Yet the current billing model still assumes short, conversational bursts.

We’ve been encouraged, quite enthusiastically, to explore this new paradigm and to push Codex to its limits. But when we do, we end up constrained by the very usage model that the product itself has outgrown.

What’s needed is a shift from per-task billing to a clear, tiered pricing structure that reflects how people actually work with Codex:

  • Casual – short edits, quick fixes, or one-off assistance

  • Standard – daily light development and refactoring

  • Performance – continuous build/test and integration cycles

  • Power / Enterprise – full agentic pipelines, orchestration, and long-form reasoning

Codex is genuinely ready for sustained, full-time engineering workloads. The pricing and quota model just hasn’t caught up with the reality it created, and I’d happily pay for that kind of clarity.

EDIT: Looking through everyone’s usage charts in this thread, it’s clear there are two different patterns emerging. Most show short conversational bursts: Codex being used as an assistant. Mine (and some others) show sustained, continuous reasoning: Codex being used as an autonomous engineer.

Same tool, completely different usage physics.

The first fits the current quota model (even if limits are off balance). The second doesn’t.

2 Likes

Yeah, same. I’m on Pro, and while I usually don’t come close to hitting my 5-hour limit, the weekly limit is way too small now, more like enough for 3 days of work. Considering the steep price, some more compute would be great.

4 Likes

Question: where can we complain?

Hi everyone, thanks for taking the time to flag this.

I can confirm that this is a known issue. There have been no intentional restrictions to usage, but there does appear to be a technical issue affecting usage limits, which is being investigated.

4 Likes

They’ve been aware of a usage bug, yet it has stayed in production for over a day now?

2 Likes