Not sure whether anyone else has noticed, but on March 27 and April 1st we had Codex credit resets. Between March 28 and March 30 it was absolute insanity: I run multiple Business accounts for a team, and I was shocked that the 5H limit was gone within a few tasks, and the weekly limit within half a day.
Thank you, OpenAI, for taking this matter seriously and handling it fairly: two complete resets within a short period.
Finally, after using it now (April 1st, 2026), I want to say that things are finally looking really good and normal (fair). I have already done 30+ really deep tasks and was not able to hit the 5H usage limit. On top of that, weekly limits are now going down at a realistic rate. I normally use mostly GPT 5.4 and 5.3-Codex.
Based on my (heavy) usage, I would assume I can push the weekly limit for roughly 2-3 days, which makes total sense given how heavy my tasks are.
The point of this topic is to give the OpenAI team a review. This finally feels real and normal, and it should stay that way. I hope the OpenAI team can forward this feedback.
I hope everyone else has noticed the difference since the April 1st reset.
I am a Business account + API user, but here I was talking about Business account Codex credits.
Thanks for the update! Great to hear, and as I mentioned above, it really feels great now. I can see that the team can really develop fairly now; the ratio of completed tasks to credit usage is very good.
I'm still seeing really crazy credit usage: a simple bug fix just ate up about 25% of my 5-hour usage, which was super confusing. Is anybody else still having issues?
I might be seeing the same thing. I started using Codex about 4 weeks ago on the Plus plan, and throughout that time, until the last day or so, I never needed to check my usage. It felt unlimited, but it was for a small mobile app, so maybe I wasn't pushing Codex as hard as I thought I was. In the last 2 days, I have blown through the 5-hour limit very quickly, and it feels like something has definitely changed. Last night it dropped from 100% to 91% in 7 minutes, and I cannot think how I had not hit a limit before. I do see that Codex now defaults to 5.4 rather than 5.3, which I was pleased about when I noticed it, but now I'm not so sure. I have been using Medium thinking throughout, in the Mac app. Could it be to do with the thread getting very large? I've now moved to a new thread.
Same here: 2 really small prompts to 5.4 medium, with not much thinking needed, and I'm down 14% on my 5-hour limit, which seems really off. A colleague tried typing only "hello" to Codex 5.4 medium and lost 4% of the 5-hour limit, which seems way off. This started with the reset today at 13h05 Eastern time.
I wrote my post above 39 minutes ago. After that post, I noticed all limits had reset to 100% again.
I did 2 tasks, and I'm already back down to 43% of the 5H limit.
We can complain here all day long; it's obvious, and I'm sure OpenAI knows that.
What I mostly don't understand is how their algorithms work. It seems like every individual user is part of some shared compute pool. Or there simply aren't enough compute resources, and the only way to reduce server consumption is to force-cut limits so users leave Codex for a certain period. Interesting.
That one is obvious: it's because Codex is no longer following the instructions, since the functionality was changed from being AGENTS.md-driven to hook-driven.
The OpenAI Codex team should probably have it recognize when AGENTS.md contains instructions that refer to legacy behavior that has been replaced with hooks, and then prompt the user to convert those instructions into the correct format, rather than just ignoring them and falling back to the default, inefficient "search every file with rg" approach. That matters especially because most users are probably running some form of database-backed system for persistent memory across context-window cleanups.
That is definitely the reason for the spikes in rate-limit usage; I can 100% guarantee that.
For example, I have a database-backed system that allows for semantic function lookup and tells the agent exactly where things are and exactly what they do. It doesn't have to read the whole file (plus dependencies) to figure out what's going on; instead it can find out what something does with a single shell command and then move on to the next step. As of the moment the hook API launched and my database-backed shell scripts stopped working, my usage jumped 10x.
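For readers unfamiliar with the pattern, here is a minimal sketch of what that kind of lookup might look like: a symbol index the agent queries with one command instead of grepping and reading whole files. The schema, names, and summary text are invented for illustration; the poster's actual scripts are not shown in this thread.

```python
# Hypothetical symbol index: maps a function name to its location and a
# one-line summary, so an agent can answer "where is X and what does it
# do?" without reading any source files. All names here are made up.
import sqlite3

conn = sqlite3.connect(":memory:")  # a real setup would use a file on disk
conn.execute(
    "CREATE TABLE symbols (name TEXT, file TEXT, line INTEGER, summary TEXT)"
)
conn.execute(
    "INSERT INTO symbols VALUES (?, ?, ?, ?)",
    ("parse_config", "src/config.py", 42, "Loads and validates the YAML config"),
)

def lookup(name: str) -> str:
    """One cheap query instead of an rg sweep plus full-file reads."""
    row = conn.execute(
        "SELECT file, line, summary FROM symbols WHERE name = ?", (name,)
    ).fetchone()
    return f"{row[0]}:{row[1]} - {row[2]}" if row else "not indexed"

print(lookup("parse_config"))  # src/config.py:42 - Loads and validates the YAML config
```

The token savings come from the answer being a single short line rather than the contents of every file that matched a text search.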
The new Codex April "easter egg" has launched. I don't see anything exciting about the new free Codex plan or loading credits into ChatGPT. It's clearly stated that it's the same thing as using the API, with input/output pricing normalized to match API pricing. The only positive here is that logging in is easier; nothing else.
So after these changes and the end of the 2x-limits campaign, a regular Business user who uses both Workspace and Codex hits 0% of the 5H limit within 1-2 tasks.
I have literally been testing this all day. Either 1 larger task or 2 smaller ones, and the limits are gone. This is totally absurd. Even if, say, we were running double limits since the campaign started, similar tasks used roughly 1-2% at most. So logically, once the campaign ended, I should be able to do at least 15-20 tasks over 5 hours.
Anyway, this is totally absurd, to be honest.
Just out of curiosity, I tested the API numbers: one good-sized task came out to roughly $0.50 on average.
So this also suggests that a regular Business account seat (Workspace + Codex included) gets roughly 230k tokens per 5H window. To round the numbers, we can say that 100k tokens eats about 20% of the 5H limit.
All my tasks input roughly 150-200k tokens. The math works out then: after 2-3 tasks the 5H limit is at 0%.
So on a daily basis, pure AI development with Codex can easily cost $50-100.
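A quick back-of-envelope sketch of the arithmetic above. All numbers are this thread's rough estimates, not official quotas; the window size is inferred from the "100k tokens eats about 20%" figure, which is itself only a rounded guess.

```python
# Back-of-envelope math using this thread's rough estimates.
# None of these constants are official OpenAI figures.
TOKENS_PER_5H_WINDOW = 500_000  # inferred from "100k tokens ~ 20% of the 5H limit"
TOKENS_PER_TASK = 175_000       # midpoint of the 150-200k tokens per task above

pct_per_task = TOKENS_PER_TASK / TOKENS_PER_5H_WINDOW * 100
tasks_per_window = TOKENS_PER_5H_WINDOW // TOKENS_PER_TASK

print(f"{pct_per_task:.0f}% of the 5H limit per task")  # 35%
print(f"~{tasks_per_window} full tasks per 5H window")  # ~2
```

With those assumptions, the window is exhausted after 2-3 heavy tasks, which matches what the posts above report.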
I suggest using the /escalation command and sending this directly to the team, because that is not what I am seeing on my usage page. That should help them identify and resolve the issue.
I have two accounts, with Plus and Business subscriptions. Here are three images showing limit usage, taken 20-40 minutes apart, for medium-complexity tasks on 5.4 medium.
And then what? They're going to "fine-tune" my account? Or reset the limits, which will be burned through again within 20 minutes?
I mean, I can provide so many proofs where I literally log into an account with both the 5H and weekly limits at 100%, send simple tasks to adjust CSS files, and within 3 tasks the 5H limit is already at 4%, for CSS tasks that only adjusted hover effects, colors, and div work. But I don't see any need to prove it; the whole community is crying about the same problem.
I have 2 companies, and across 2 accounts I control 13 seats for my team in total. Devs are switching between each other's accounts just to be able to work and get something done, because it's going nuts.
We have our own dashboard with control over all accounts and their limits, reset times, etc.
This is just one example page, but look at the 5H reset times. They are roughly 30 minutes apart on every other account, because that is how long it took to drain one account. Basically, a dev logs in, works for 30 minutes tops, does 2 tasks: 0%. Then he logs into another account to continue or finish the tasks that were stopped due to "limits used", and after another 30 minutes he has to repeat the process. And keep in mind that we had a full reset yesterday.
I mean, last week and over the last 3-4 months, one dev was hitting the weekly usage with 5.3-Codex in 2-3 days at minimum. The 5H limit was almost forgotten in general, because it was never reached.
If only OpenAI actually listened to these complaints instead of turning a blind eye to them. It's a shame that the money spent on the subscription didn't justify itself. I got Plus for a month, and this issue with limits has been going on since around March 16. I thought the reason was that I had set the context to 1M for the 5.4 model, so I switched it back to 258K and fully reset the settings. At first, that seemed to help. But no: now, even with standard context windows and ordinary tasks, the 5-hour limit is basically enough only for something like "We need to implement this or that." After that, the agent starts reading files and searching through the code again, because it compresses the context and loses track of where the code is, then begins rereading every matching file, burning through both its context and the limits in the process. I'm disappointed, and I'm not the only one.
I no longer see any point in a Pro subscription; I believe the same thing is happening there as with Plus or Business. Moreover, if this problem is hard to solve, why not add the ability to connect an MCP server to store memory about files and code, the project structure, and the history of changes? Otherwise, the agent will simply hit the limits while reading files whenever it's asked to get familiar with a project containing thousands of files.