I made a Signal bot.
A crypto Signal bot?
No, just a chatbot. No crypto in sight. Mainly for fact-checking, web search, math calculations, transcriptions, and summaries.
Yeah, there are these tiny windows where 5.4 mini would be useful in my project… and there would have been more if I had planned the architecture better ahead of time.
It’s definitely something I’ll consider planning out better for future projects.
Has anyone from OpenAI chimed in or responded to any of these concerns? Is this just a situation where the company position is one of “this is just the way it is” or is there any indication that the outcry from users might be reaching ears within the organization that might contribute to decision making moving forward?
Same kind of experience here, except I’ve been using Codex as a coding assistant, essentially filling in the junior roles you’d find in a dev team. I started using Codex back in late February/early March of 2026, and then noticed this month that I started hitting my weekly rate limits in just two days. While I’ve only been able to afford the $20 Plus plan, before the early April changes I was able to get no less than 5 days of real work in, often a full 7 days, before hitting my weekly limit. Only being able to get 2 days of work in, not automating workflows or using multiple instances, just me conversationally using Codex to implement projects, feels bad. It’s really not worth the subscription. The reason I started using Codex rather than Claude Code was that OpenAI’s pre-April usage rate model was far less restrictive. Now it’s just as restrictive as Claude Code. I may as well unsubscribe from the Plus plan and save my money to buy some hardware that can run a decent local coding agent instead.
I think we got double-whammied, because those of us who started using Codex in late February/early March lost the 2x usage limit AND got hit again when the usage rate model changed from per-message to per-token in early April. So detailed, thought-out prompts no longer have any advantage and can even eat through your usage faster. It’s a terribly impractical model unless you can afford the Pro tier subscription, which, if you’re like me, is out of your reach.
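To make the per-message vs per-token point concrete, here is a small illustrative sketch. The rates and prompt texts are entirely made up for demonstration; they are not OpenAI’s actual billing numbers, and tokens are crudely approximated as words.

```python
# Hypothetical comparison of two billing models (made-up rates, not real pricing).

def per_message_cost(prompts, cost_per_message=1.0):
    # Every prompt costs the same, regardless of how detailed it is.
    return len(prompts) * cost_per_message

def per_token_cost(prompts, cost_per_token=0.001):
    # Cost grows with prompt length (tokens roughly approximated as words).
    return sum(len(p.split()) for p in prompts) * cost_per_token

short = ["fix bug"] * 10
detailed = ["fix the off-by-one bug in the pagination loop and add a regression test"] * 10

# Under per-message billing, both workflows cost the same.
print(per_message_cost(short), per_message_cost(detailed))    # 10.0 10.0
# Under per-token billing, the detailed prompts cost several times more.
print(per_token_cost(short), per_token_cost(detailed))        # 0.02 0.13
```

The upshot: per-token accounting removes the incentive to write careful, detailed prompts, since the extra detail is what you get billed for.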
Hello again. I’ve noticed a lot of controversy here lately about the pricing plan and subscriptions. Some people have been using Codex for a long time, since February I think, and they lost the two-fold limit on top of everything else. For some this is unprofitable, for others it’s so-so, and if each request is now billed per token, it’s not an economical model at all. It feels untested, and judging by the latest reports, it’s going to be expensive for many users. One question, and sorry if it’s off topic: has anyone here actually used the Pro subscription? What are the real prices and token rates like there?
I’ve got one more point about the broken trust. OpenAI will rollout a memory for Codex CLI.
If the memory won’t be fully local or easily exportable, I will not be using it. Migrating to a different provider would be much harder without owning that memory, because once it exists, keeping crucial information in my project files is no longer strictly necessary (though it remains a best practice, since memory can get corrupted).
I’ll consider it only if OpenAI rebuilds trust, or if I can fully export it regularly with little friction.
Hot take? More like utter nonsense take.
NO ONE is talking about the promo ending. We’re talking about the additional limits imposed that made Codex completely unworkable. I have 2 accounts already. PAID accounts. And each account gets limited after 2-4 small tasks. How is that usable in your head? No matter the amount of mental gymnastics you do, it’s NOT.
And OBVIOUSLY it kills trust in the platform. I am certainly not going to switch to 1 Pro account and pay ~2.5 times what I did to get a semblance of a functional workflow compared to BEFORE the promo. Why would I? At this point it’s unscrupulous business tactics, and even if I could somehow justify the huge increase in cost, I could never trust the same limitation not happening shortly after. No one in their right mind can.
I’m already looking for viable alternatives, and if there are none, I’ll just go back to working completely manually. I can certainly write C++ code MUCH FASTER than Codex can with the 5-hour limit as it is.
And I didn’t even get into the fact that it often gets stuck, repeating a nonsense response relating to the previous prompt while doing absolutely nothing, completely ignoring the current prompt, and STILL burning through limits doing so. This problem becomes more and more frequent the longer the session gets. I saw it eat 20% of the 5-hour limit just doing that yesterday. NO WORK AT ALL.
So no, your take is utter nonsense and ignorant. It sounds like you have not even tried to use it and decided to just troll everyone here.
sure thing boss.
will not read the rest.
Switched to Codex because it had way more generous limits. At this point I might as well just go back to Claude.
I want to add a specific angle to this discussion.
For me, the current Codex usage model in ChatGPT Plus is not just restrictive. It penalizes honest weekend open-source use and effectively pushes me toward ChatGPT Pro.
This change was introduced about two weeks ago, so this is not a brand-new reaction. I already had these thoughts when the change was made, but at that point I had not yet found the right words for them.
I use Codex for my private open-source work, not for my job. I usually work on that project once a week, on weekends, in one or two longer sessions.
At work, I use a company-paid AI coding subscription (in my case, Copilot). But I cannot use company-paid tools for my private project, and I do not want to blur that boundary. I think that boundary matters. Company resources should be used for company work, and personal resources for personal open-source work.
The problem is that the current session budgeting for Codex in ChatGPT Plus seems optimized for many smaller sessions spread across the week. That is the opposite of how I use Codex. I do not need many small sessions. I need one or two substantial sessions for my private open-source project.
For example, today I started working at around 7 p.m. After about three hours, I had already used the amount I could use within one session window, and the next window only opened at midnight. In practice, that means the model forces a two-hour interruption in the middle of an evening session and then pushes me into the night if I want to continue.
At that point, the official alternatives are not really good solutions. I can buy additional credits, or I can continue local work with an API key on usage-based pricing. But if I need to do that regularly just to keep one or two substantial weekend sessions viable, then ChatGPT Plus no longer fits my actual use case in any reasonable way.
As a result, the next viable subscription step for my use case is ChatGPT Pro. But ChatGPT Pro costs five times as much as ChatGPT Plus. So because I keep the boundary between work and private use, and because I use Codex in a concentrated but entirely legitimate way, I am effectively pushed toward a plan that costs five times as much.
That is the part I find especially hard to accept. I am not asking for free usage. I am not asking to stretch or abuse a company subscription. I am doing the honest thing, and the result is that I am being penalized for it.
From my perspective, this is not “more flexibility.” It is a pricing change that makes one specific kind of legitimate use much worse: serious private open-source work done honestly, but only once a week. In practice, it turns honest separation between work and private use into forced overpayment.
Is this intentional? And if it is, do you consider this fair?
So there are a couple of things I noticed in your story. I don’t see you mention that you’re doing your reasoning in ChatGPT, so I’m assuming you’re making Codex do the reasoning without ChatGPT in the loop between each prompt.
Passing Codex responses into a ChatGPT session that is properly calibrated to your work, using the Projects feature in ChatGPT, dramatically cuts down on extra/wasted tokens in Codex.
If you’re doing open-source work and the repo is constantly changing week to week or month to month, you need to have Codex spider the new shape of the repo, check the surfaces, examine fragility, and get that map straight in its context before setting out on the coding binge.
These two practices alone stretch your Codex usage out much closer to the 5-hour windows given.
This thread is an 81 min read and you’ve spent 6 mins reading.
Oftentimes, guidance you might need is already posted or archived here on the forum.
It’s worth glancing at for more than just a few minutes.