Examples of online calculator issues vs. actual API billing
gpt-image-1
Send a 1024x1025 image to gpt-image-1 via the edits endpoint. The online calculator at the bottom of the pricing page shows 323 tokens (two tiles). However, the actual API usage is:
Usage(input_tokens=219, input_tokens_details=UsageInputTokensDetails(image_tokens=194, text_tokens=25), output_tokens=272, total_tokens=491)
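For reference, a minimal reproduction of this trial with the official openai Python SDK; the file name and prompt here are placeholders, not the ones from my trial:

```python
from openai import OpenAI

client = OpenAI()

# Edit a 1024x1025 PNG via the edits endpoint.
result = client.images.edit(
    model="gpt-image-1",
    image=open("input_1024x1025.png", "rb"),
    prompt="Make the sky a deep sunset orange.",
)

# gpt-image-1 reports token usage on the images response.
print(result.usage)
```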
194 image tokens = one tile. The API bills one tile for an input image of 1024x1025, and likewise for 1025x1024: the image is resized down toward 512x512, with the longer dimension rounded down (1025 -> 512). The online calculator instead shows 323 tokens = two tiles, because it rounds up (1025 -> 513).
Sending an actual 512x513 image is billed as two tiles. Reducing the input to 1023x1024 is billed as one tile. I’ll let OpenAI run all the trials to find exactly when “almost square” stops being treated as a square… it is formulaic, just a different formula than the calculator’s.
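A sketch of the tiling rule these trials imply. The constants are my reconstruction, not documented values: a 65-token base plus 129 tokens per 512-px tile reproduces both the 194 (one tile) and 323 (two tiles) figures, and flooring the resized dimension reproduces the one-tile billing for 1024x1025:

```python
import math

BASE_TOKENS = 65   # inferred: base + 1 tile = 194, base + 2 tiles = 323
TILE_TOKENS = 129  # inferred cost per 512-px tile

def gpt_image_1_input_tokens(width: int, height: int) -> int:
    """Reconstruction of observed gpt-image-1 input billing, not official."""
    short = min(width, height)
    if short > 512:
        # Observed: scale toward 512x512, rounding the longer side DOWN,
        # so 1024x1025 lands on 512x512 (the calculator rounds up to 513).
        width = math.floor(width * 512 / short)
        height = math.floor(height * 512 / short)
    tiles = math.ceil(width / 512) * math.ceil(height / 512)
    return BASE_TOKENS + tiles * TILE_TOKENS

print(gpt_image_1_input_tokens(1024, 1025))  # 194 -> one tile, matches the API
print(gpt_image_1_input_tokens(512, 513))    # 323 -> two tiles, matches the trial
print(gpt_image_1_input_tokens(1023, 1024))  # 194 -> one tile
```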
gpt-4.1-mini
One of these two calculators is wrong…
Mine: 4000 x 1474 = 2450 billed tokens
OpenAI’s: 4000 x 1474 = 2348 billed tokens ??
Trials
Trial: send the same 4000 x 1474 image to gpt-4.1-mini. The API reports a total input of 2498 billed tokens (including message overhead).
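The trial itself, sketched with the chat completions API (file name and prompt are placeholders; the image goes in as a base64 data URL):

```python
import base64

from openai import OpenAI

client = OpenAI()

with open("image_4000x1474.png", "rb") as f:
    b64 = base64.b64encode(f.read()).decode()

completion = client.chat.completions.create(
    model="gpt-4.1-mini",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe this image."},
            {"type": "image_url",
             "image_url": {"url": f"data:image/png;base64,{b64}"}},
        ],
    }],
)

print(completion.usage.prompt_tokens)  # 2498 in my trial
```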
The trick answer: both are wrong, and OpenAI’s is worse.
OpenAI found a way to bill the maximum. The image is scaled to 24 × 64 tiles = 1536 tiles, the cap, and billing is 1536 × 1.62 (the model multiplier) = 2488 image tokens, consistent with the 2498 total once about 10 tokens of message overhead are added.
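A sketch of how the 2488 comes out, following my reading of the patch-scaling procedure in OpenAI’s image-token documentation (cap of 1536 32-px tiles, shrink to fit, then snap the width down to a whole tile count); the exact rounding steps are my reconstruction:

```python
import math

def gpt41_mini_image_tokens(width: int, height: int, cap: int = 1536) -> int:
    """32-px tile count, capped at 1536, times the gpt-4.1-mini multiplier."""
    tiles = math.ceil(width / 32) * math.ceil(height / 32)
    if tiles > cap:
        # Shrink so the pixel area fits the cap...
        shrink = math.sqrt(cap * 32 * 32 / (width * height))
        width, height = width * shrink, height * shrink
        # ...then snap the width DOWN to a whole number of tiles and
        # rescale the height to match. This lands exactly on the cap.
        w_tiles = math.floor(width / 32)
        height = height * (w_tiles * 32 / width)
        tiles = w_tiles * math.ceil(height / 32)
    return round(min(tiles, cap) * 1.62)  # 1.62 = gpt-4.1-mini multiplier

print(gpt41_mini_image_tokens(4000, 1474))  # 64 x 24 = 1536 tiles -> 2488
```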
Additional
Edits endpoint, gpt-image-1
Beyond the measured prompt tokens, there are 32 additional text tokens billed when sending a single image. This is documented NOWHERE.
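If you want to predict the text-token line item, something like the sketch below follows from that observation. The choice of the o200k_base tokenizer is an assumption on my part, the 32-token constant is observed rather than documented, and whether it scales with multiple images is untested:

```python
import tiktoken

HIDDEN_TOKENS_PER_IMAGE = 32  # observed in billing trials; documented nowhere

def expected_text_tokens(prompt: str, n_images: int = 1) -> int:
    """Estimated billed text tokens for a gpt-image-1 edits call."""
    enc = tiktoken.get_encoding("o200k_base")  # assumed tokenizer
    return len(enc.encode(prompt)) + n_images * HIDDEN_TOKENS_PER_IMAGE
```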
(Postscript: my web script calculator obtained correct pricing when I integrated gpt-image-1, which is why I had to triple-check a lot more cases and models.)