Did anyone notice a way-higher-than-expected token consumption report in API usage with GPT-5.5 today (Apr 28)?
My dashboard suggests that when using Codex I used 95 million input tokens (25 million of them cached input) to produce just 600K output tokens!?
I did not use Codex for anything that intense, and the output-to-input token ratio should on its own suggest there is some kind of problem here.
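For reference, a quick sanity check on those dashboard figures (the numbers are copied from the post above; the "expected" interpretation is just my own rough take, nothing official):

```python
# Rough sanity check on the dashboard figures quoted above.
input_tokens = 95_000_000    # total input tokens reported
cached_tokens = 25_000_000   # cached input tokens reported
output_tokens = 600_000      # output tokens reported

ratio = input_tokens / output_tokens
cached_share = cached_tokens / input_tokens
print(f"Input-to-output ratio: ~{ratio:.0f}:1")      # ~158:1
print(f"Cached share of input: {cached_share:.0%}")  # ~26%
```

An input-to-output ratio north of 150:1 is what you might see if the agent keeps re-sending a large context on every turn, which is why the ratio alone looks suspicious.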
Hey @kurlytail, yeah, that spike would feel pretty off, especially with that input-to-output ratio.
There’s actually been some discussion around usage reporting; you might find this thread helpful: Codex Rate Limits Discussion Thread. A few folks are comparing notes there and sharing what they’re seeing.
Worth a quick skim to see if it lines up with your numbers, and to track any updates as more info comes in.
-Mark G.
Codex 5.5 is consuming tokens as if it’s drinking gasoline, even at medium thinking.
OpenAI should seriously look into this problem.
Either come up with a Codex version of 5.5 or decrease the token consumption.
Hey guys, you have to fix the issue ASAP. Don’t play the same games as Anthropic, please.
The most I was able to do before was hit my weekly limit after 6 days, which felt pretty reasonable to me. Now with GPT-5.5 (they even buried 5.3 Codex in a submenu dropdown while touting that 5.5 uses fewer tokens, so I did switch to 5.5) I managed to hit my weekly limit within 2.5 days. That does not feel great, guys.
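For a rough sense of scale: running out in 2.5 days instead of 6 implies roughly 2.4x the consumption rate (days-to-limit figures are from the post above; this assumes a fixed weekly token budget, which OpenAI hasn't published):

```python
# Implied burn-rate change, assuming a fixed weekly token budget.
days_before = 6.0   # previously hit the weekly limit after ~6 days
days_after = 2.5    # now hitting the weekly limit after ~2.5 days

burn_rate_multiplier = days_before / days_after
print(f"Implied consumption increase: ~{burn_rate_multiplier:.1f}x")  # ~2.4x
```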
I’ve also experienced this problem. Our token usage has gone up fivefold with 5.5.