GPT-Image-1.5 rolling out in the API and ChatGPT

Thanks,

Most of my images are not art, but are for entertainment purposes only and should not be taken seriously.

1 Like

@jeffvpace

Respect - I’m not a policeman. :slightly_smiling_face:

I myself sometimes wonder what the future will bring.

For example, in Europe.

Are you from the DACH region?

For many countries, an LLM in their own language is a problem.

And the rules are different there, but on the medical side I would also say: OpenAI first…

1 Like

Respect - I’m not a policeman.

I know. I was just making a joke.

Are you from the DACH region?

No. I learned a bit of German in high school many years ago…

I believe that your profession, and health care in general, will benefit greatly from AI in the not-too-distant future - let’s hope!

Have a Great Holiday!

2 Likes

OpenAI still has not published any documentation of the costs for gpt-image-1.5, which additionally include:

  • undelivered reasoning-text output tokens (typically 200-500)

  • a mandatory (or non-working) input_fidelity switch, always “high”

    The documentation implies the switch should work: (a) GPT Image models (gpt-image-1.5, gpt-image-1, and gpt-image-1-mini) support high input fidelity; (b) if you are using gpt-image-1.5, the first 5 input images will be preserved with higher fidelity; (c) to enable high input fidelity, set the input_fidelity parameter to high (the default value is low).

  • n>1: you pay for the input again for each generation, unlike dall-e or chat

    edits: n=1
    "input_tokens_details": {"image_tokens": 4354,"text_tokens": 59}
    edits: n=2
    "input_tokens_details": {"image_tokens": 8708,"text_tokens": 118}
    
  • There is no usage report of cached tokens, nor an API shape for it in the YAML spec, yet cached input is in the price list. Despite extensive repeated API calls in app development, I’ve not once received a single cached input token, and I’m confident I’m looking in the right place:

(screenshot of the usage report)

  • beyond outright denials: the cost of images dropped by safety inspection - I’ve received just one of two requested images, for example.
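The per-generation input billing in the n>1 bullet can be sketched as a simple multiplier on the n=1 usage. A minimal sketch in Python; the function name is mine, and the $8.00/1M image and $5.00/1M text input rates are taken from the gpt-image-1.5 price list further down:

```python
# Sketch: input tokens appear to be billed once per generated image when
# n > 1, based on the n=1 vs n=2 usage reports above (4354 -> 8708, 59 -> 118).

IMAGE_INPUT_RATE = 8.00 / 1_000_000  # $ per gpt-image-1.5 image input token
TEXT_INPUT_RATE = 5.00 / 1_000_000   # $ per gpt-image-1.5 text input token

def edit_input_cost(image_tokens: int, text_tokens: int, n: int = 1) -> float:
    """Input cost of an edits call, assuming input is re-billed per image."""
    return n * (image_tokens * IMAGE_INPUT_RATE + text_tokens * TEXT_INPUT_RATE)

# The observed n=2 usage is exactly double the n=1 usage:
assert 2 * 4354 == 8708 and 2 * 59 == 118
print(round(edit_input_cost(4354, 59, n=2), 6))
```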

Consolidated pricing - calculated

gpt-image-1.5 per-image input pricing by size (n=1)

| Aspect ratio band (AR = long/short) | Resized short×long threshold range | Tiles | Token formula | Total tokens | Cost per image |
|---|---|---|---|---|---|
| AR = 1.00 (exact square) | 512×512 | 1 | 65 + 129×1 + 4160 | 4354 | $0.034832 |
| 1.00 < AR ≤ 1.25 (square-ish) | 512×(512–640] | 2 | 65 + 129×2 + 4160 | 4483 | $0.035864 |
| 1.25 < AR ≤ 2.00 (rectangular) | 512×(640–1024] | 2 | 65 + 129×2 + 6240 | 6563 | $0.052504 |
| 2.00 < AR ≤ 3.00 (more rectangular) | 512×(1024–1536] | 3 | 65 + 129×3 + 6240 | 6692 | $0.053536 |
| 3.00 < AR ≤ 4.00 (very rectangular) | 512×(1536–2048] | 4 | 65 + 129×4 + 6240 | 6821 | $0.054568 |
  • this assumes the shorter dimension is >=512px (OpenAI’s downsize target), and that “closer to square” actually holds with regard to image fidelity

  • 16 image inputs are allowed on the API; assume the max that can be scraped from chat by the Responses tool is similar.
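The input grid above can be reproduced with a short function. This is a sketch of my reading of the table: the 65-token base, 129 tokens per tile, and the 4160/6240 per-image charge are read off the observed rows, not from any official formula:

```python
def input_tokens(aspect_ratio: float) -> int:
    """Estimated gpt-image-1.5 input tokens for one image, per the grid above.

    aspect_ratio is long side / short side, assuming the short side has
    already been downsized to 512px.
    """
    if aspect_ratio < 1.0 or aspect_ratio > 4.0:
        raise ValueError("grid covers aspect ratios 1.0 through 4.0")
    if aspect_ratio == 1.0:
        tiles = 1
    elif aspect_ratio <= 2.0:
        tiles = 2
    elif aspect_ratio <= 3.0:
        tiles = 3
    else:
        tiles = 4
    base = 4160 if aspect_ratio <= 1.25 else 6240  # per-image base charge
    return 65 + 129 * tiles + base

RATE = 8.00 / 1_000_000  # $ per gpt-image-1.5 image input token

for ar in (1.0, 1.25, 2.0, 3.0, 4.0):
    tokens = input_tokens(ar)
    print(f"AR {ar}: {tokens} tokens, ${tokens * RATE:.6f}")
```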

Per-image output pricing by size (n=1)

| Model | Quality | 1024×1024 | 1024×1536 | 1536×1024 |
|---|---|---|---|---|
| GPT Image 1.5 | Low | $0.008704 | $0.013056 | $0.012800 |
| GPT Image 1.5 | Medium | $0.033792 | $0.050688 | $0.050176 |
| GPT Image 1.5 | High | $0.133120 | $0.199680 | $0.198656 |
| GPT Image 1 | Low | $0.010880 | $0.016320 | $0.016000 |
| GPT Image 1 | Medium | $0.042240 | $0.063360 | $0.062720 |
| GPT Image 1 | High | $0.166400 | $0.249600 | $0.248320 |
| GPT Image 1 Mini | Low | $0.002176 | $0.003264 | $0.003200 |
| GPT Image 1 Mini | Medium | $0.008448 | $0.012672 | $0.012544 |
| GPT Image 1 Mini | High | $0.033280 | $0.049920 | $0.049664 |

Fixed-price image models

DALL·E 3

| Quality | 1024×1024 | 1024×1792 | 1792×1024 |
|---|---|---|---|
| Standard | $0.04 | $0.08 | $0.08 |
| HD | $0.08 | $0.12 | $0.12 |

DALL·E 2

| Quality | 256×256 | 512×512 | 1024×1024 |
|---|---|---|---|
| Standard | $0.016 | $0.018 | $0.02 |

Distilled token pricing sheet you are offered

Text token pricing (per 1M tokens)

| Model | Input | Cached input | Output |
|---|---|---|---|
| gpt-image-1.5 | $5.00 | $1.25 | $10.00 |
| gpt-image-1 | $5.00 | $1.25 | n/a |
| gpt-image-1-mini | $2.00 | $0.20 | n/a |

Image token pricing (per 1M tokens)

| Model | Input | Cached input | Output |
|---|---|---|---|
| gpt-image-1.5 | $8.00 | $2.00 | $32.00 |
| gpt-image-1 | $10.00 | $2.50 | $40.00 |
| gpt-image-1-mini | $2.50 | $0.25 | $8.00 |
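Multiplying these per-1M output rates by the output token counts the API reports (272/1056/4160 tokens for a square at low/medium/high, with proportionally more for portrait and landscape; see the token table in the follow-up post) reproduces the per-image output grid above. A sketch; the dictionary layout and function name are mine:

```python
# Output tokens per image as reported in the API "usage" object (observed).
OUTPUT_TOKENS = {  # quality -> {size: tokens}
    "low":    {"1024x1024": 272,  "1024x1536": 408,  "1536x1024": 400},
    "medium": {"1024x1024": 1056, "1024x1536": 1584, "1536x1024": 1568},
    "high":   {"1024x1024": 4160, "1024x1536": 6240, "1536x1024": 6208},
}

# $ per 1M output image tokens, from the price list above.
OUTPUT_RATE = {"gpt-image-1.5": 32.00, "gpt-image-1": 40.00, "gpt-image-1-mini": 8.00}

def output_cost(model: str, quality: str, size: str) -> float:
    """Token-derived per-image output cost."""
    return OUTPUT_TOKENS[quality][size] * OUTPUT_RATE[model] / 1_000_000

print(f"{output_cost('gpt-image-1.5', 'medium', '1024x1024'):.6f}")
```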

Example cost-avoidance “edit”

gpt-image-1.5, 1 square + 3 rectangular in, quality:medium 1024x1024 out

  • Text input: 122 × $0.000005 = $0.000610
  • Image input: 24043 × $0.000008 = $0.192344
  • Text output: 448 × $0.000010 = $0.004480
  • Image output: 1056 × $0.000032 = $0.033792

Total cost: $0.231226
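The arithmetic above checks out, and the 24043 input image tokens decompose exactly as 1 square + 3 rectangular images from the input grid (4354 + 3 × 6563). A quick sketch of the sum:

```python
# Line items from the example above; rates in $/token from the token price
# list (gpt-image-1.5).
costs = {
    "text_in":   122   * 5.00  / 1_000_000,
    "image_in":  24043 * 8.00  / 1_000_000,
    "text_out":  448   * 10.00 / 1_000_000,
    "image_out": 1056  * 32.00 / 1_000_000,
}

assert 24043 == 4354 + 3 * 6563  # 1 square + 3 rectangular input images
total = sum(costs.values())
print(f"${total:.6f}")  # matches the $0.231226 total above
```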

4 Likes

Another bump - of October’s report:

Bumping this in 2026 because the docs are still wrong about the per-image price grid for the mini model; nothing seems to account for the discrepancy except bad math (or hidden fees).

1: OpenAI’s output token consumption for “the models”, quality vs resolution

| Quality | Square (1024×1024) | Portrait (1024×1536) | Landscape (1536×1024) |
|---|---|---|---|
| Low | 272 tokens | 408 tokens | 400 tokens |
| Medium | 1056 tokens | 1584 tokens | 1568 tokens |
| High | 4160 tokens | 6240 tokens | 6208 tokens |

1a: My token results from running a generation at each setting, taken from “usage”

  • identical
| Quality | Square (1024×1024) | Portrait (1024×1536) | Landscape (1536×1024) |
|---|---|---|---|
| Low | 272 tokens | 408 tokens | 400 tokens |
| Medium | 1056 tokens | 1584 tokens | 1568 tokens |
| High | 4160 tokens | 6240 tokens | 6208 tokens |

2: OpenAI per-image pricing page for gpt-image-1-mini (output)

| Quality | 1024×1024 | 1024×1536 | 1536×1024 |
|---|---|---|---|
| Low | $0.005 | $0.006 | $0.006 |
| Medium | $0.011 | $0.015 | $0.015 |
| High | $0.036 | $0.052 | $0.052 |

3: Calculated per-image pricing for gpt-image-1-mini using Table #1 tokens and $8.00 / 1M output image tokens

| Quality | 1024×1024 | 1024×1536 | 1536×1024 |
|---|---|---|---|
| Low | $0.002176 | $0.003264 | $0.003200 |
| Medium | $0.008448 | $0.012672 | $0.012544 |
| High | $0.033280 | $0.049920 | $0.049664 |

OpenAI’s pricing page is thus charging these percentages of the actual token-based price:

| Quality | 1024×1024 | 1024×1536 | 1536×1024 |
|---|---|---|---|
| Low | 229.8% | 183.8% | 187.5% |
| Medium | 130.2% | 118.4% | 119.6% |
| High | 108.2% | 104.2% | 104.7% |
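The percentage grid above is just the listed per-image price divided by the token-derived price. A sketch that reproduces it from tables #1 and #2 (the dictionary layout is mine; the $8.00/1M output rate is from the mini price list):

```python
# gpt-image-1-mini: listed per-image price (table #2) vs token-derived price
# (table #1 tokens x $8.00 per 1M output image tokens).
LISTED = {
    "low":    {"1024x1024": 0.005, "1024x1536": 0.006, "1536x1024": 0.006},
    "medium": {"1024x1024": 0.011, "1024x1536": 0.015, "1536x1024": 0.015},
    "high":   {"1024x1024": 0.036, "1024x1536": 0.052, "1536x1024": 0.052},
}
TOKENS = {
    "low":    {"1024x1024": 272,  "1024x1536": 408,  "1536x1024": 400},
    "medium": {"1024x1024": 1056, "1024x1536": 1584, "1536x1024": 1568},
    "high":   {"1024x1024": 4160, "1024x1536": 6240, "1536x1024": 6208},
}

for quality, sizes in LISTED.items():
    for size, listed in sizes.items():
        actual = TOKENS[quality][size] * 8.00 / 1_000_000
        print(quality, size, f"{100 * listed / actual:.1f}%")
```

The markup shrinks as quality rises, which is what makes a flat per-image grid look like bad math rather than a deliberate fee.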

Some pics

gpt-image-1-mini @ 1536 and quality:high - about $0.05

dall-e-3 @ 1792 and not HD - $0.08

The second failure in a day to outfill the sides (the first completely white beyond 1024 width)

A single project request for 100 images might show us what’s being billed in dollars with the needed accuracy…

1 Like

Wow, really good analysis. Good job!

1 Like