Vision Pricing calculator: inaccurate resizing/rounding vs API costs

Examples of the online calculator's output vs. actual API billing

gpt-image-1

Send an image to gpt-image-1 via the edits endpoint. The online calculator at the bottom of the pricing page shows:

However, the actual API usage reported is:

Usage(input_tokens=219, input_tokens_details=UsageInputTokensDetails(image_tokens=194, text_tokens=25), output_tokens=272, total_tokens=491)

194 tokens = 1 tile is what the API bills for an input image of 1024x1025, and likewise 1025x1024: the longer dimension is rounded down when resizing toward 512x512. The online calculator instead shows 323 tokens = 2 tiles, because it rounds the 1025 dimension up (displaying 1025 -> 513).

Sending an actual 512x513 image gives two tiles’ billing.

Reducing the input to 1023x1024 is billed as one tile. I'll let OpenAI run all the trials to find exactly when an "almost square" image stops being treated as a square; the behavior is formulaic, just a different formula from the calculator's.
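For reference, the two token totals above are consistent with a simple linear tile formula. This is only a fit to the two data points in this post: base 65 + 129 per tile is my inference, not a documented constant, and the calculator's round-up halving is read off its displayed 1025 -> 513.

```python
import math

# Fit inferred from the two observations above (NOT official constants):
# 65 + 129 * 1 = 194 (what the API billed for 1024x1025)
# 65 + 129 * 2 = 323 (what the online calculator showed)
BASE, PER_TILE = 65, 129

def tokens_for_tiles(n_tiles: int) -> int:
    return BASE + PER_TILE * n_tiles

def calculator_tiles(width: int, height: int) -> int:
    # The online calculator halves each dimension with round-UP
    # (it displays 1025 -> 513), then counts 512-px tiles.
    w, h = math.ceil(width / 2), math.ceil(height / 2)
    return math.ceil(w / 512) * math.ceil(h / 512)

print(tokens_for_tiles(1))                             # 194, as the API billed
print(tokens_for_tiles(calculator_tiles(1024, 1025)))  # 323, as the calculator shows
```

The API's round-down (1024x1025 -> 512x512, one tile) vs. the calculator's round-up (1025 -> 513, two tiles) is exactly the discrepancy described above.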


GPT-4.1

Online calculations: no issues found so far.

The calculator here matches the API in edge cases that might trigger rounding issues. Billed input tokens:

2049x513 = 1453
2050x513 = 773
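These two cases are reproduced by the documented high-detail procedure (scale to fit within 2048x2048, count 512-px tiles at 85 base + 170 per tile), provided the scaled dimension is rounded to nearest. That rounding mode is my inference, chosen because it is the one that separates 2049x513 (height scales to ~512.75, rounds to 513, two tile rows) from 2050x513 (height scales to ~512.5, rounds to 512, one row):

```python
import math

def gpt41_image_tokens(width: int, height: int) -> int:
    # Documented high-detail formula; round-to-nearest on the scaled
    # dimension is my assumption, fitted to the two billed cases above.
    scale = min(1.0, 2048 / max(width, height))  # fit within 2048x2048
    w, h = round(width * scale), round(height * scale)
    if min(w, h) > 768:                          # shrink shortest side to 768
        s = 768 / min(w, h)
        w, h = round(w * s), round(h * s)
    tiles = math.ceil(w / 512) * math.ceil(h / 512)
    return 85 + 170 * tiles

print(gpt41_image_tokens(2049, 513))  # 1445 (8 tiles)
print(gpt41_image_tokens(2050, 513))  # 765  (4 tiles)
```

Both billed totals above exceed these pure-image figures by the same 8 tokens (1453 vs. 1445, 773 vs. 765), which I take to be a fixed per-request overhead rather than a rounding difference, so the tile counts themselves agree.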


Additional

Edits endpoint, gpt-image-1

Beyond the measured prompt tokens, 32 additional text tokens are billed when sending a single image. This is documented NOWHERE.
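Any billing estimate for the edits endpoint therefore has to add this overhead. A minimal sketch, assuming the 32 tokens apply per input image (only the single-image case was actually measured, so the scaling with image count is an assumption):

```python
# Observed, undocumented: the gpt-image-1 edits endpoint bills 32 extra
# text tokens beyond the measured prompt tokens when an image is attached.
OVERHEAD_PER_IMAGE = 32  # verified only for a single image

def billed_text_tokens(prompt_tokens: int, n_images: int = 1) -> int:
    # Assumption: overhead scales with image count; only n_images=1
    # was confirmed by the experiments above.
    return prompt_tokens + OVERHEAD_PER_IMAGE * n_images

print(billed_text_tokens(10))  # 42: 10 prompt tokens + 32 overhead
```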

(Postscript: my own web-script calculator produced correct pricing once I integrated gpt-image-1, which is why I had to triple-check many more cases and models.)