Discrepancy of input image token count between gpt-image-1 and gpt-image-1.5

I am observing a very large difference in image token counts between gpt-image-1 and gpt-image-1.5.

Here are the input token details for the exact same operation (a composition of three images).
Each image originates from a 500x500 WebP picture converted to base64.
API calls are made through the latest version (6.15) of the openai npm package.

gpt-image-1:

input_tokens_details: {
  image_tokens: 582,
  text_tokens: 78
}

gpt-image-1.5:

input_tokens_details: {
  image_tokens: 13062,
  text_tokens: 81
}

What could explain such a difference?

OpenAI is forcing you to pay for `input_fidelity: "high"`.

Read my post, where I outline the undocumented costs of image input:
