Hey why not, for fun…
What’s the highest token count you’ve gotten in response from a GPT model, whether reasoning or non?
I just got 13k tokens as a single code file (a .py), as part of a 16.5k-token total response!
So… stats and a little bit of context?
GPT-5 : 16k total tokens
Medium reasoning : high verbosity : /chat/completions
(prompt 116k, total 133k), 4m15s to completion
Context: Most surprising to me was that it successfully generated a single code file of that length and complexity. It was a rewrite/overwrite of an existing file containing mostly similar content, but with significant refactoring.
Note: Since I wasn’t using /responses, I didn’t get any feedback on “reasoning tokens” or the like. This was single-shot in the sense that there was no “internal” tool calling involved: just one straight generation from Chat Completions.
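For anyone curious what a request like this looks like, here’s a rough sketch using the OpenAI Python SDK’s Chat Completions endpoint. The `reasoning_effort` and `verbosity` parameter names are my assumptions for how you’d set “medium reasoning / high verbosity” on a GPT-5 call; check the current API docs before relying on them. The live call is commented out since it needs an API key; the arithmetic at the bottom just sanity-checks the usage numbers from the post.

```python
# Hypothetical request matching the settings described above.
# Parameter names for reasoning effort and verbosity are assumptions.
params = {
    "model": "gpt-5",
    "reasoning_effort": "medium",
    "verbosity": "high",
    "messages": [{"role": "user", "content": "...your refactoring prompt..."}],
}

# With a live client you would run something like:
# from openai import OpenAI
# client = OpenAI()
# resp = client.chat.completions.create(**params)
# print(resp.usage.prompt_tokens,
#       resp.usage.completion_tokens,
#       resp.usage.total_tokens)

# Sanity check on the figures from the post: prompt + completion ≈ total.
prompt_tokens = 116_000
completion_tokens = 16_500
total_tokens = prompt_tokens + completion_tokens
print(total_tokens)  # ~132.5k, consistent with the ~133k total above
```

Chat Completions does return a `usage` object (prompt/completion/total token counts) even though, as noted, it doesn’t break out reasoning tokens the way /responses does.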

