I’m sorry to hear about that. It is disappointing when an API doesn’t meet your expectations, and overbills you on top of it.
The platform documentation, which includes GPT-5 in a pricing table (but not in the table’s header), still reassures us:
A 4096 x 8192 image in “detail”: “low” costs at most 85 tokens
Regardless of input size, low detail images are a fixed cost.
Using Chat Completions, today:
— Testing “low”
A black-and-white checkerboard pattern of alternating squares arranged in a grid. CompletionUsage(completion_tokens=24, prompt_tokens=79, total_tokens=103, completion_tokens_details=CompletionTokensDetails(accepted_prediction_tokens=0, audio_tokens=0, reasoning_tokens=0, rejected_prediction_tokens=0), prompt_tokens_details=PromptTokensDetails(audio_tokens=0, cached_tokens=0))
This corresponds to the 70 base tokens for the gpt-5 model; the remaining 9 prompt tokens are the text prompt and message overhead.
— Testing “high”
A high-contrast black-and-white checkerboard pattern filling the frame.
CompletionUsage(completion_tokens=23, prompt_tokens=359, total_tokens=382, completion_tokens_details=CompletionTokensDetails(accepted_prediction_tokens=0, audio_tokens=0, reasoning_tokens=0, rejected_prediction_tokens=0), prompt_tokens_details=PromptTokensDetails(audio_tokens=0, cached_tokens=0))
This corresponds to 1x70 base + 2x140 per tile = 350 tokens for the two-tile image (and indeed 359 - 79 = 280, exactly two added tiles at 140 each).
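Assuming gpt-5 follows the same resizing and 512-px tiling rules documented for the gpt-4o family, and using the 70-token base and 140-token per-tile costs observed above (both constants are assumptions from these measurements, not documented values), the expected image cost can be sketched as:

```python
import math

def estimate_image_tokens(width: int, height: int, detail: str,
                          base: int = 70, per_tile: int = 140) -> int:
    """Estimate image input tokens; base/per_tile as observed for gpt-5 above."""
    if detail == "low":
        return base  # low detail: fixed cost regardless of input size
    # High detail: fit within 2048x2048, then scale shortest side down toward 768
    scale = min(1.0, 2048 / max(width, height))
    width, height = width * scale, height * scale
    scale = min(1.0, 768 / min(width, height))
    width, height = width * scale, height * scale
    # Count the 512x512 tiles needed to cover the scaled image
    tiles = math.ceil(width / 512) * math.ceil(height / 512)
    return base + per_tile * tiles

print(estimate_image_tokens(513, 512, "low"))   # 70
print(estimate_image_tokens(513, 512, "high"))  # 350
```

The 513 x 512 checkerboard lands on two tiles (ceil(513/512) x ceil(512/512) = 2 x 1), reproducing the 350 image tokens seen in the “high” test.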
Here’s Chat Completions code to do the same, with the magic of "type": "image_url" and an image included inline. Refactor it for Responses if you care to submit a bug report to the tracker (oops, there’s none) and an invoice for a refund of any overbilling discovered.
"""Send checkerboard_513x512.png (2,144 bytes) for vision input."""
base64_encoded_image = (
    "iVBORw0KGgoAAAANSUhEUgAAAgEAAAIACAIAAACU2CiTAAAIJ0lEQVR4nO3XwQ0cMQwEQdP550xnwJ+wxnVVANLo1dDs7p+XZubp+fbf7L/Zf7P/5/f/fXoBAP8zDQDo0gCALg0A6NIAgC4NAOjSAIAuDQDo0gCALg0A6NIAgC4NAOjSAIAuDQDo0gCALg0A6NIAgC4NAOjSAIAuDQDo0gCALg0A6NIAgC4NAOjSAIAuDQDo0gCALg0A6NIAgC4NAOjSAICueX3B7j49f+btE+y/2X+z/2b/5/v9AwC6NACgSwMAujQAoEsDALo0AKBLAwC6NACgSwMAujQAoEsDALo0AKBLAwC6NACgSwMAujQAoEsDALo0AKBLAwC6NACgSwMAujQAoEsDALo0AKBLAwC6NACgSwMAujQAoEsDALo0AKBLAwC6ZnffXjDz9Hz7b/bf7L/Z//P7/QMAujQAoEsDALo0AKBLAwC6NACgSwMAujQAoEsDALo0AKBLAwC6NACgSwMAujQAoEsDALo0AKBLAwC6NACgSwMAujQAoEsDALo0AKBLAwC6NACgSwMAujQAoEsDALo0AKBLAwC6NACgSwMAuub1Bbv79PyZt0+w/2b/zf6b/Z/v9w8A6NIAgC4NAOjSAIAuDQDo0gCALg0A6NIAgC4NAOjSAIAuDQDo0gCALg0A6NIAgC4NAOjSAIAuDQDo0gCALg0A6NIAgC4NAOjSAIAuDQDo0gCALg0A6NIAgC4NAOjSAIAuDQDo0gCALg0A6JrdfXvBzNPz7b/Zf7P/Zv/P7/cPAOjSAIAuDQDo0gCALg0A6NIAgC4NAOjSAIAuDQDo0gCALg0A6NIAgC4NAOjSAIAuDQDo0gCALg0A6NIAgC4NAOjSAIAuDQDo0gCALg0A6NIAgC4NAOjSAIAuDQDo0gCALg0A6NIAgC4NAOia1xfs7tPzZ94+wf6b/Tf7b/Z/vt8/AKBLAwC6NACgSwMAujQAoEsDALo0AKBLAwC6NACgSwMAujQAoEsDALo0AKBLAwC6NACgSwMAujQAoEsDALo0AKBLAwC6NACgSwMAujQAoEsDALo0AKBLAwC6NACgSwMAujQAoEsDALo0AKBLAwC6NACgSwMAujQAoGt29+0FM0/Pt/9m/83+m/0/v98/AKBLAwC6NACgSwMAujQAoEsDALo0AKBLAwC6NACgSwMAujQAoEsDALo0AKBLAwC6NACgSwMAujQAoEsDALo0AKBLAwC6NACgSwMAujQAoEsDALo0AKBLAwC6NACgSwMAujQAoEsDALo0AKBLAwC6NACga15fsLtPz595+wT7b/bf7L/Z//l+/wCALg0A6NIAgC4NAOjSAIAuDQDo0gCALg0A6NIAgC4NAOjSAIAuDQDo0gCALg0A6NIAgC4NAOjSAIAuDQDo0gCALg0A6NIAgC4NAOjSAIAuDQDo0gCALg0A6NIAgC4NAOjSAIAuDQDo0gCALg0A6NIAgK7Z3bcXzDw93/6b/Tf7b/b//H7/AIAuDQDo0gCALg0A6NIAgC4NAOjSAIAuDQDo0gCALg0A6NIAgC4NAOjSAIAuDQDo0gCALg0A6NIAgC4NAOjSAIAuDQDo0gCALg0A6NIAgC4NAOjSAIAuDQDo0gCALg0A6NIAgC4NAOjSAIAuDQDo0gCALg0A6NIAgC4NAOjSAIAuDQDo0gCArnl9we4+PX/m7RPsv9l/s/9m/+f7/QMAujQAoEsDALo0AKBLAwC6NACgSwMAujQAoEsDALo0AKBLAwC6NACgSwMAujQAoEsDALo0AKBLAwC6NACgSwMAujQAoEsDALo0AKBLAwC6NACgSwMAujQAoEsDALo0AKBLAwC6NACgSwMAujQAoEsDALpmd99eMPP0fPtv9t/sv9n/8/v9AwC6NACgSwMAujQAoEsDALo0AKBLAwC6NACgSwMAujQAoEsDALo0AKBLAwC6NACgSwMAujQAoEsDALo0AKBLAwC6NACgSwMAujQAoEsDALo0AKBLAwC6NACgSwMAujQAoEsDALo0AKBLAwC6NACgSwMAujQAoEsDALo0AKBLAwC6NACgSwMAujQAoEsDALo0AKBLAwC6NACgSwMAujQAoEsDALo0AKBLAwC6NACgSwMAujQAoEsDALo0AKBLAwC6NACgSwMAujQAoEsDALo0AKBLAwC6NACgSwMAujQAoEsDALo0AKBLAwC6NACgSwMAujQAoEsDALo0AKBLAwC6NACga15fsLtPz595+wT7b/bf7L/Z//l+/wCALg0A6NIAgC4NAOjSAIAuDQDo0gCALg0A6NIAgC4NAOjSAIAuDQDo0gCALg0A6NIAgC4NAOjSAIAuDQDo0gCALg0A6NIAgC4NAOjSAIAuDQDo0gCALg0A6NIAgC4NAOjSAIAuDQDo0gCALg0A6NIAgC4NAOjSAIA/Wf8AoBab6CH9t6EAAAAASUVORK5CYII="
)
image_parts = [
    {
        "type": "image_url",
        "image_url": {
            "url": f"data:image/png;base64,{base64_encoded_image}",
            "detail": "low",  # parameter under test
        },
    }
]
messages = [
    {
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe image briefly"},
            *image_parts,
        ],
    },
]
parameters = {
    "model": "gpt-5",
    "messages": messages,
    "max_completion_tokens": 4000,
    "verbosity": "low",
    "reasoning_effort": "minimal",
}

# Send the request; report the response text and token usage
import openai

client = openai.Client()
print("--- Testing")
response = client.chat.completions.create(**parameters)
print(response.choices[0].message.content)
print(response.usage)
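As a starting point for that Responses refactor, here’s a sketch of the equivalent request payload, assuming the Responses API’s "input_text"/"input_image" content parts and its nested reasoning/text controls. It only builds the parameters; the actual call is left commented out, and whether Responses bills the image the same way is exactly what remains to be tested:

```python
# Same request, reshaped for the Responses API (a sketch; nothing is sent here).
base64_encoded_image = ""  # paste the checkerboard base64 from above

parameters = {
    "model": "gpt-5",
    "input": [
        {
            "role": "user",
            "content": [
                {"type": "input_text", "text": "Describe image briefly"},
                {
                    "type": "input_image",
                    "image_url": f"data:image/png;base64,{base64_encoded_image}",
                    "detail": "low",  # parameter under test
                },
            ],
        }
    ],
    "max_output_tokens": 4000,
    "reasoning": {"effort": "minimal"},
    "text": {"verbosity": "low"},
}

# To send:
# import openai
# client = openai.Client()
# response = client.responses.create(**parameters)
# print(response.output_text)
# print(response.usage)  # Responses reports input_tokens / output_tokens
```

Note the renames: messages becomes input, max_completion_tokens becomes max_output_tokens, and the image part carries the data URL directly in image_url rather than in a nested object.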