I’m using the Codex CLI with a ChatGPT Plus subscription and I’m trying to understand the usage quota differences between models.
From the [ChatGPT Rate Card](https://help.openai.com/en/articles/11481834-chatgpt-rate-card), I can see that for Business/Enterprise flexible pricing:
- **GPT-5.2 Instant**: Unlimited (N/A credits)
- **GPT-5.3-Codex / GPT-5.2-Codex**: ~5 credits per local message, ~25 per cloud task
This suggests GPT-5.2 (non-Codex) is significantly cheaper in terms of quota consumption.
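To make the weighting concrete, here is an illustrative back-of-the-envelope calculation using only the Business/Enterprise figures quoted above. The credit budget is hypothetical, and whether any of this weighting applies on Plus is exactly the open question:

```python
# Illustrative arithmetic only: the ~5-credit figure is from the
# Business/Enterprise flexible-pricing rate card quoted above.
CREDITS_PER_LOCAL_MSG = {
    "gpt-5.2-instant": 0,   # listed as Unlimited (N/A credits)
    "gpt-5.3-codex": 5,     # ~5 credits per local message
}

def messages_per_budget(model: str, credit_budget: int) -> float:
    """Local messages a hypothetical credit budget buys (inf if unmetered)."""
    cost = CREDITS_PER_LOCAL_MSG[model]
    return float("inf") if cost == 0 else credit_budget // cost

# Hypothetical 500-credit budget:
print(messages_per_budget("gpt-5.3-codex", 500))   # 100 local messages
print(messages_per_budget("gpt-5.2-instant", 500)) # inf (unmetered)
```

If the same ratio carried over to Plus, the non-Codex model would stretch the shared 5-hour window dramatically further, which is why the answer matters.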
However, for **ChatGPT Plus** users, the rate card only shows the shared 5-hour rolling window (45-225 local messages / 10-60 cloud tasks). It doesn’t specify whether switching between GPT-5.3-Codex and GPT-5.2 in the Codex CLI consumes quota at different rates.
**My questions:**
1. On ChatGPT Plus, does using `gpt-5.2` (non-Codex) in Codex CLI consume the same quota as `gpt-5.3-codex`?
2. Does the credit weighting from the Business/Enterprise rate card (unlimited vs. ~5 credits per local message) carry over proportionally to quota consumption on Plus?
3. Has anyone done empirical testing comparing quota depletion rates between these two models on Plus?
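For anyone attempting the comparison in question 3, the model can be pinned per session so runs are attributable to one model. This is a sketch assuming current Codex CLI conventions (the `model` key in `~/.codex/config.toml` and the `--model` flag); check `codex --help` on your install before relying on it:

```toml
# ~/.codex/config.toml (assumed location/format)
model = "gpt-5.2"   # or "gpt-5.3-codex" for the comparison run
```

A per-invocation override (again, assuming the flag exists on your version) would look like `codex --model gpt-5.2`, which makes it easy to alternate models while watching the usage indicator deplete.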
Any empirical data points from the community, or official clarification, would be appreciated. Thanks!