Codex Credits Disappearing

@OpenAI_Support I’ve also noticed the weekly limits draining faster than expected on the Pro subscription. I did have two tasks stuck for hours that might’ve been the culprit, but they produced zero output. A bit of clarity here would go a long way, thank you.

Then why am I unable to do anything if it was only promotional credits? And I never had a limit on usage before, no token limits at all.

Thanks for the explanation and for looking into this.

However, I’d like to clarify something about the “recent usage issues.” The extremely rapid consumption of credits that many of us experienced was considered a bug, correct? If so, has that bug actually been fixed? From the user side, it still feels like credits are being used up at an abnormally fast rate. There have also been statements from the dev team saying this issue was resolved, but many users can clearly feel that nothing has really changed and that the problem is ongoing.

When users ask directly whether this is a bug or intended behavior, both the dev team and support have largely remained silent. So from our perspective, the “recent usage issues” are still very much ongoing, and Codex usage has not improved to a level that would justify letting the free promotional credits expire.

Could you please clearly choose one of the following and communicate it to us?

  1. Provide additional compensation/credits while this ongoing issue is being addressed,

  2. Fix the bug so that credits are no longer consumed at this abnormal speed, or

  3. Officially state that this very low effective limit, which users have been experiencing since early November, is not a bug but the intended specification.

A clear answer on which of these paths you’re taking would be greatly appreciated.

Agreed with other posters.

All I have noticed since this “bug” is that:

  • Codex is lazier
  • Usage remains uncontrollable
  • There have been zero changes

Since this “bug”, I’ve moved to other providers for agentic coding. I come back occasionally to try Codex and am always disappointed. I can understand that Codex is expensive, but the way OpenAI handled this transition has got to be a case study.

I’ve gone back to using ChatGPT for coding now. Codex is expensive, uncontrollable, and worst of all: OpenAI is opaque and inconsistent.

Since the updates, business usage simply isn’t enough, and I don’t see any way to increase credits for a heavy user. I’m fortunate that I can also use Codex through our Azure subscription, but then I waste time on this bug: Stream disconnected before completion (invalid type: null, expected struct Error) · Issue #6710 · openai/codex · GitHub … very silly

Got no credit.

Business account here – I am the account manager/business owner, and we have multiple seats on our business account. I used to personally have a Pro account (from the day it launched), but when our business grew and more people needed GPT access, I moved to the Business plan.

The kicker: since OpenAI only offers the standard Business plan, or Enterprise once you hit (I think) 50 seats, there is no way to get closer to Pro-like limits on a business account. So I’m paying OpenAI more per month but getting less, since they treat each seat as its own account with no company-wide limits.

Support bot told me:

“The 5-hour Codex usage window for Business users comes from the fact that ChatGPT Business includes the same per-seat usage limits as Plus”.

If Plus users got the credit, so should Business, especially considering how much more businesses pay overall for multiple ‘Plus’ seats for a team. To me, there should be global Business limits depending on team size and other factors. In fact, I’ve gotten credit grants to use the API, but can’t apply them anywhere else within my OpenAI account.

The assistant then responded:

“I understand your point and the frustration about credit fairness. Credit offers like the recent Codex $200 credits are determined by OpenAI policies and are usually directed to individual Plus/Pro accounts, not Business or Team accounts, even if their per-seat limits are similar. Business plans are billed differently, and credits tied to outages or promotions might not directly apply, regardless of spending amount or plan.

I’m unable to apply or guarantee credits or limit changes myself, and there’s no official channel to escalate Business Codex credit requests based on Plus/Pro promotions. You can raise this with your workspace owner who may reach out for custom solutions or work with your OpenAI account manager if you have one. If you have other questions about Codex limits, usage, or adding credits to keep building, please let me know.”

Hi everyone, we've shared all your posts with the team that worked on this.

They published an update and summary last week: https://www.reddit.com/r/codex/comments/1p2k68g/update_on_codex_usage/

Some task usage data:

  • Plus users fit 50-600 local messages and 21-86 cloud messages in a 5-hour window.
  • Pro users fit 400-4500 local messages and 141-583 cloud messages in a 5-hour window.
  • These numbers reflect the p25 and p75 of data we saw on Nov 17th & 18th. The data has a long tail so the mean is closer to the lower end of the ranges.

Please feel free to continue sharing your experience here or with support@openai.com with any account-specific questions. Thank you for sticking with us through this.

Why is it that my weekly usage seems to act like the 5-hour window at the moment?
About a week ago I was able to use Codex reliably, but now I don’t even run the 5-hour window down before my weekly is at 0%. I can get maybe 20-40 tasks in before my weekly usage is used up, which according to this message should really be my 5-hour window.

I functionally can’t even use Codex on the Plus subscription since I hardly get anything out of it, and Codex was the only reason I’ve kept my long-time Plus subscription.

I tried using credits too, but the credit usage spikes insanely high at random, and I burned almost half of my credits in a single day.

It seems that OpenAI is phasing out Codex as a product.

Hi everyone, thanks for chiming in. We pulled usage data for several accounts in this thread and other related forum threads and reviewed it with the Codex eng team.

In the cases we saw, the balances that hit zero were directly tied to usage (as expected).

A few things that may be contributing:

  1. December Codex discount

From mid December through Jan 2, Codex ran with a 50 percent token discount. This meant every request cost half, rate limits doubled, and credits lasted twice as long.

That discount ended on Jan 2.

  2. A few runs can use a full week, since Codex limits are based on tokens rather than task count.

Factors such as large repos, long diffs, retries, or stuck cloud jobs can use tens of millions of tokens in one run. We saw this in at least one account in this thread (one job consumed most of the weekly limit in a single spike).
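To make the token-versus-task-count distinction concrete, here is a toy illustration. Every number below is made up for the example; these are not actual Codex limits or per-task costs:

```python
# Hypothetical illustration: the weekly budget is measured in tokens,
# not tasks, so one runaway run can dwarf many normal ones.
# None of these numbers are real Codex limits.

WEEKLY_BUDGET = 50_000_000  # tokens (hypothetical)

tasks = [
    ("small edit", 80_000),
    ("bug fix", 150_000),
    ("runaway job (large repo + retries)", 40_000_000),
]

remaining = WEEKLY_BUDGET
for name, tokens in tasks:
    remaining -= tokens
    print(f"{name}: used {tokens:,} tokens, {remaining:,} remaining")

# Three "tasks" total, but the runaway job alone consumed 80% of the week,
# which is why task counts alone say little about how fast limits drain.
```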

One gap here is definitely visibility. You cannot see which job burned your budget and our support teams cannot clearly see it either without manual queries or working directly with the Codex team. We will be fixing this.

Eng is building tools to see per-user and per-session usage. We also plan to show clearer signals in the product so you can tell what limit you hit and why.

If your account still looks wrong, please do write into support@openai.com anyway and share your account email, whether you use Web, CLI, or Cloud, and the time window when the drop happened. Ask us to pull the token logs and confirm what used the budget.

Thank you again for your continued patience with us here, and I promise we’re still actively working on Codex!

I’m experiencing major token usage as well. It’s honestly frustrating to buy credits only to see them wiped out in a few prompts. Codex can be great, but the unpredictability makes it risky for real dev work.

The hard part isn’t even the price, it’s the randomness. You can split tasks into smaller chunks and still get a run that spirals, retries, or “thinks out loud” for ages, and suddenly your credits are gone with nothing shippable to show for it.

How are devs supposed to plan around this when there’s no way to anticipate how many tokens a task will take? In a professional workflow, predictability matters as much as capability.

What makes this feel unsustainable is there’s no safety net. If the agent loops or goes in circles, credits get depleted anyway, and there’s no practical path to a refund even when the output is clearly unproductive.

We need guardrails that protect users: hard stop limits, clearer pre-run estimates, and the ability to cap spend per task. If a run exceeds the cap, it should pause and ask before continuing.

At minimum, there should be “wasted run” protection. If the system detects looping, repeated retries, or no meaningful progress, it shouldn’t burn through paid credits like that.
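A guardrail along these lines could even be sketched client-side. Everything below is hypothetical: `run_with_cap` and its per-step token costs are stand-ins, since no such hook exists in the Codex tooling today:

```python
# Sketch of a per-task spend cap with pause-and-confirm behavior.
# "steps" is a hypothetical sequence of per-step token costs reported
# by an agent run; nothing here is a real Codex API.

def run_with_cap(steps, cap_tokens, ask=input):
    """Run steps until done or the cap would be exceeded; then pause and ask."""
    spent = 0
    for step_tokens in steps:
        if spent + step_tokens > cap_tokens:
            answer = ask(
                f"Spent {spent:,} tokens; next step needs {step_tokens:,}. "
                "Continue? [y/N] "
            )
            if answer.strip().lower() != "y":
                return spent, "stopped at cap"
        spent += step_tokens
    return spent, "completed"
```

For example, `run_with_cap([10_000, 20_000, 500_000], cap_tokens=100_000, ask=lambda _: "n")` stops after 30,000 tokens instead of letting the runaway third step burn the budget, while answering "y" lets the run proceed past the cap deliberately.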

Even when I do everything “right” (smaller prompts, narrower scope, clear instructions), the token spend can still blow up. The variance is the issue.

Right now it feels like the user carries all the downside risk. Codex is a solid project and I’ve seen it improve greatly over the past 6 months, but right now it’s still risky. If OpenAI wants dev adoption, cost predictability + consumer protection has to be part of the product, not a “hope it doesn’t loop and spend half of dev budget” situation.

The current system punishes experimentation. You try to iterate, and if the agent takes a weird route once, it can wipe out your entire credit balance. That’s the opposite of what dev tooling should feel like.

I’d love to keep using Codex, but until it’s predictable, it’s not reliable enough for professional usage. The tech is promising. The economics aren’t.