I’m building multi-repo AI systems using Codex in VSCode with frequent high-tier reasoning calls. My workload includes agent orchestration, backend services, data pipelines, and iterative architectural refactoring across repositories.
I’d like to share feedback specifically from the perspective of heavy AI builders.
For advanced developers, Codex usage is not simply a convenience feature — it is execution throughput. It determines how fast we can:
Refactor multi-file systems
Generate production-ready modules
Iterate on architectural decisions
Maintain coherence across large codebases
In this context, usage limits effectively become a productivity ceiling.
The current Pro tier is positioned as a premium offering, but for serious builders the critical metric is sustained Codex throughput, not general chat capability. When pricing scales significantly beyond Plus but execution throughput does not scale proportionally, it creates hesitation among high-intensity developer users.
There is a rapidly growing segment of “AI-native builders”:
Independent AI product developers
Startup founders
Internal tool creators
Agentic workflow architects
For this segment, a Pro tier offering approximately 10× (or more) Codex throughput relative to Plus would strongly align pricing with production value.
If Pro clearly provided ≥10× sustained Codex usage capacity compared to Plus, I would upgrade immediately. I believe many serious builders would do the same.
Strategically, this would:
Increase Pro adoption among high-value developers
Strengthen OpenAI’s position as the default AI development platform
Reduce incentives to migrate heavy workloads to open-weight or self-hosted alternatives
Improve long-term developer retention and ecosystem lock-in
There is significant upside in explicitly designing a tier optimized for AI builders where execution throughput scales meaningfully with price.
This feedback is intended from a developer-platform perspective rather than general ChatGPT subscription discussion. I appreciate the continued evolution of the developer tooling ecosystem and wanted to contribute a builder-focused viewpoint.
Just a quick note: heavy users can purchase additional credits.
Since the limits for using Codex with a ChatGPT subscription are currently doubled, does that mean the effective rate limits for Pro subscribers should increase by more than 3× compared to what they are now?
I think the “3x” framing may be slightly different from what I’m evaluating.
When considering an upgrade from Plus to Pro, the decision is primarily based on the relative throughput multiplier between tiers — not the absolute limits in isolation.
From a heavy builder’s perspective, the key question is:
How many times more sustained Codex throughput does Pro provide compared to Plus under current conditions?
Temporary 2× boosts across plans are helpful, but they don’t change the upgrade calculus if both tiers scale proportionally.
For serious AI-native builders, a Pro tier that offers a clearly substantial baseline multiplier (for example ≥10× relative to Plus) would make the value proposition unambiguous and significantly reduce upgrade hesitation.
Credits work as overflow capacity, but they don’t replace a structurally higher baseline tier designed for sustained production-grade development.
Thanks for pointing me to the official pricing page and the listed usage multipliers (6× local/cloud task limits, 10× cloud code reviews).
That clarification is very helpful.
From a heavy builder’s perspective, the decision to upgrade from Plus to Pro is evaluated based on how many times more sustained Codex throughput the higher tier provides relative to Plus — not just based on isolated sub-feature multipliers.
Currently, Pro is priced at roughly 10× the monthly cost of Plus ($200 vs $20), whereas the most relevant everyday coding throughput (local/cloud task execution) appears to scale at only about 6×, even though specific sub-features like code reviews do hit 10×.
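The gap described above can be checked with back-of-envelope arithmetic, using the $20/$200 prices and the 6× local/cloud task multiplier from this thread (the task units themselves are arbitrary):

```python
# Back-of-envelope: everyday task throughput per dollar, Plus vs Pro,
# using the $20/$200 prices and the 6x local/cloud task multiplier.
plus_price, pro_price = 20, 200      # USD per month
plus_capacity = 1.0                  # normalized Plus task capacity
pro_capacity = 6 * plus_capacity     # the 6x tier multiplier

# Computed as a single fraction to keep the result exact.
ratio = (pro_capacity * plus_price) / (plus_capacity * pro_price)
print(ratio)  # 0.6: per dollar, Pro delivers 60% of Plus's everyday task capacity
```

In other words, on the core task-execution metric, a Pro dollar currently buys less baseline throughput than a Plus dollar, which is the source of the upgrade hesitation.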
Temporary boosts such as doubled rate limits across plans or additional credits help in peak usage bursts, but they don’t change the baseline throughput gap between Plus and Pro that influences real upgrade decisions.
For sustained, production-grade AI development workflows — where bottlenecks are often throughput and context capacity — a Pro tier whose baseline Codex throughput is meaningfully above a simple multiple of 6× and closer to the pricing multiple (e.g., ≥10×) would make the value proposition clearer and much more compelling.
That’s the context behind my feedback — the core question for builders is not whether some sub-features scale 10×, but whether the overall sustained productivity envelope scales in line with the price multiple.
It seems iteration speed is your concern. This might cover your quest for speed: what OpenAI is keeping behind a Pro subscription, where it may be quite hard to justify $2,400 a year for talking to an AI for plausible answers (and for having Codex delete functions out of your code with an “oops, sorry, I’ll patch that same deletion I repeated earlier back in, so the call site 100 lines later will still work”).
Codex-Spark is rolling out today as a research preview for ChatGPT Pro users in the latest versions of the Codex app, CLI, and VS Code extension. Because it runs on specialized low-latency hardware, usage is governed by a separate rate limit that may adjust based on demand during the research preview.
That “separate rate limit” means: no promise of when it will shut you off, or of being available when you are trying to get a multi-turn job done. What you can get done in a day might be less, but this model and its hardware generate near 1,000 tokens/second, so there is less waiting in attendance for the next chat turn that depends on the last.
I think that’s the only thing that is going to be in service of your want for more “go”. However, the generation is not the time consumer. It takes much more time than a turn to find out, after a day of sending messages, where the slop regressions happened because you didn’t take the time to read and understand every single thing the AI wrote, or, if you run a few in parallel, which version of inscrutable spaghetti is better.
Thanks for the detailed explanation — the low-latency Spark model and hardware acceleration are definitely valuable improvements.
That said, for heavy builder workflows, the primary bottleneck isn’t per-turn token generation speed. It’s sustained multi-turn throughput capacity over extended sessions.
In practice, most AI-native development work involves:
Iterative multi-file refactoring
Long agentic chains
Repeated architectural adjustments
Parallel experimentation
In those scenarios, what determines productivity is not whether a response generates at 1000 tokens/sec, but whether the system allows sustained high-frequency turns without hitting rate ceilings.
So while Spark meaningfully improves per-response latency, the upgrade evaluation from Plus to Pro is still primarily based on the relative sustained throughput multiplier between tiers.
If the baseline sustained capacity gap remains closer to ~6× while pricing scales 10×, the differentiation may still feel incremental from a full-time builder’s perspective.
That’s the distinction I’m trying to highlight — latency improvements are welcome, but tier positioning is ultimately judged by sustained execution envelope.
I’ve reviewed the listed multipliers (6× higher usage limits for local/cloud tasks and 10× more cloud-based code reviews).
My core point is about upgrade incentives based on the current tier-to-tier multiplier. When users evaluate upgrading from Plus to Pro, the most intuitive comparison is: “How many times more baseline Codex capacity do I get for day-to-day coding throughput?” If Pro is priced at ~10× Plus ($200 vs $20), it’s very reasonable (and cognitively simple) to expect the baseline Codex usage envelope to be ≥10× as well — not primarily ~6× on the core “local + cloud tasks” throughput that builders hit every day.
I completely understand and appreciate the other Pro improvements (priority processing, Spark, etc.). Those are great. But even if we look only at limits, aligning baseline throughput scaling with the pricing multiple (≥10× vs Plus) would make the value proposition far more direct and would materially increase upgrade motivation for heavy builders.
Also, temporary boosts (e.g., doubled rate limits across plans) are helpful, but they don’t change the upgrade calculus if both tiers move together and the relative multiplier remains similar.
That’s the perspective I’m trying to highlight: a ≥10× baseline Codex capacity gap vs Plus is a clean, intuitive, and compelling builder-tier distinction.
On the API platform, the baseline amount you pay for zero use is zero dollars.
The same quality of service is offered to an organization user that consumes $200 a day as to one that consumes $200 a year. You can also set `"service_tier": "priority"` on API requests to supported models to go to the front of the line at double the price. You won’t be nagged to add $40 increments to bypass a timer; you’ll just be turned off if you empty your balance of prepaid credits.
The VSCode extension allows you to start with an API key instead of linking a ChatGPT subscription. Perhaps paying for what you use would simplify your calculus.
Thanks — I’m aware of the API route and the pay-as-you-go model. That’s a valid option for fully elastic scaling.
However, my feedback is specifically about subscription tier positioning rather than pure usage-based billing.
The API model solves the “no ceiling” problem by shifting entirely to variable cost, which makes sense for certain production deployments. But many individual builders and small teams prefer a predictable monthly tier that clearly signals its intended workload capacity.
My point is that if Pro is positioned as a full-time daily development tier, then a baseline Codex capacity multiplier that more closely matches the pricing multiple (e.g., ≥10× vs Plus) would make the upgrade path structurally clearer — without requiring users to move to API billing just to achieve sustained throughput.
In other words, API solves scaling technically.
Tier design solves scaling strategically.
OpenAI has “buy Codex” credits even for Pro subscription levels.
They have motivation to make any subscription tier be less than what you want.
And then there is the incentive to keep subscriber numbers on the balance sheet after those users have gotten over the honeymoon period of high use. The desired user in a subscription plan, for services whose usage directly incurs costs, is a user that doesn’t make use.
Why wouldn’t you get a straight 10× the use of the one thing that seems of value? Because on Codex, a casual day of use can cost 50% more than a monthly bill.
A Pro subscription gives you higher use in ways that are not a direct correlation: larger input contexts to models, more reasoning turns of internal tool calls, Pro models which cost 12× as much on the API, almost non-stop image generation, larger resolutions and durations on Sora, etc.
So your feedback, “I want it made clear that I can cost OpenAI more than my subscription”, is unlikely to resonate, and here you are just talking to fellow users anyway.