[Responses API] GPT-5 Nano ignores the detail parameter on image inputs

"detail": "low" for images only works on tile-based vision models, reducing the cost to a single 512x512 base tile no matter what is sent.
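For reference, this is how detail is set on an image part in the Responses API (a minimal sketch; the model name and image URL here are placeholders):

```python
from openai import OpenAI

client = OpenAI()

response = client.responses.create(
    model="gpt-5-nano",
    input=[{
        "role": "user",
        "content": [
            {"type": "input_text", "text": "Describe this image."},
            {
                "type": "input_image",
                "image_url": "https://example.com/photo.png",  # placeholder
                # Honored by tile-based models; ignored by "patches" models.
                "detail": "low",
            },
        ],
    }],
)
print(response.usage.input_tokens)
```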

gpt-5-mini and gpt-5-nano are "patches" models that use a different input-token algorithm for vision; they are not affected by the detail parameter.
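A rough sketch of what patch-based accounting looks like, modeled on what OpenAI documents for other "patches" models (32x32-pixel patches, a patch cap, then a per-model multiplier); the cap and multiplier are assumptions here, not confirmed values for gpt-5-mini/nano:

```python
import math

def estimate_patch_tokens(width: int, height: int,
                          multiplier: float, cap: int = 1536) -> int:
    """Approximate image input tokens for a patch-based vision model."""
    patches = math.ceil(width / 32) * math.ceil(height / 32)
    if patches > cap:
        # Oversized images are scaled down until the patch grid fits the cap.
        scale = math.sqrt(cap / patches)
        patches = min(cap, math.ceil(width * scale / 32)
                           * math.ceil(height * scale / 32))
    return round(patches * multiplier)

# Example: a 1024x768 image is a 32x24 patch grid = 768 patches,
# billed as 768 * multiplier -- no "detail" discount applies.
print(estimate_patch_tokens(1024, 768, multiplier=1.0))  # 768
```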

gpt-5.2 also seems to use "patches", with no support for low detail, and then multiplies the token cost even higher. OpenAI is aware of this: after my report, they raised the model's price multiplier on Chat Completions to match the higher price already charged on Responses, rather than fixing the completely missing documentation of what you will be billed for that model.

Sending 10 images in a request, here is how many extrapolated tokens just those images consume at "detail": "low" (a measurement sketch follows the table):

| model | Chat Completions | Responses |
|---|---|---|
| gpt-5.2-2025-12-11 | 3280 | 3270 |
| gpt-5.1-2025-11-13 | 700 | 700 |
| gpt-5-2025-08-07 | 700 | 700 |
| gpt-5-mini-2025-08-07 | 2730 | 2720 |
| gpt-5-nano-2025-08-07 | 2720 | 2720 |
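The post doesn't include the measurement harness, but a minimal sketch of one way to extrapolate these numbers is to subtract a text-only baseline from a 10-image request (model names and the image URL are placeholders):

```python
from openai import OpenAI

client = OpenAI()
IMAGE_URL = "https://example.com/photo.png"  # placeholder test image

def billed_input_tokens(model: str, n_images: int) -> int:
    """Billed input tokens for one user turn carrying n_images."""
    content = [{"type": "input_text", "text": "ok"}]
    content += [{"type": "input_image",
                 "image_url": IMAGE_URL,
                 "detail": "low"}] * n_images
    resp = client.responses.create(
        model=model,
        input=[{"role": "user", "content": content}],
        max_output_tokens=16,  # keep the probe cheap
    )
    return resp.usage.input_tokens

for model in ("gpt-5", "gpt-5-mini", "gpt-5-nano"):
    image_tokens = billed_input_tokens(model, 10) - billed_input_tokens(model, 0)
    print(model, image_tokens)
```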

gpt-5.2 is higher in my reverse calculation because I would have to divide by an apparent, undocumented x1.2 multiplier to get the actual input tokens of "patches" (as I do for mini and nano with their published cost increases).
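Concretely, under that assumed x1.2 multiplier:

```python
billed = 3280             # gpt-5.2 tokens for 10 images (Chat Completions, table above)
assumed_multiplier = 1.2  # apparent, undocumented
print(billed / assumed_multiplier)  # ~2733, in line with gpt-5-mini's 2730
```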

Your concern about needing to understand this could be addressed if OpenAI were responsive to documentation requests, like this one from a month ago:
