GPT-4o mini Vision Token Cost Issue

Hi everyone,

I wanted to bring attention to what appears to be a significant token cost discrepancy in GPT-4o mini’s image handling. When processing the exact same images with identical code, simply switching from gpt-4o-mini to another model reduces the reported token count by a factor of roughly 25.

This observation isn’t speculative; I’m seeing it directly in the `usage` data returned in the API responses (a minimal reproduction sketch follows the comparison below). To be specific:

Same image, same code:

  • gpt-4o-mini: ~25x more tokens
  • Other models (e.g. gpt-4o): 1x tokens (baseline)
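
For anyone who wants to reproduce this, here is a minimal sketch that compares the reported prompt tokens for the same image across two models. It assumes the official `openai` Python SDK (v1+), an `OPENAI_API_KEY` set in the environment, and a placeholder image URL:

```python
# Minimal sketch: compare the prompt tokens reported in `usage`
# for the same image across two models.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
IMAGE_URL = "https://example.com/sample.jpg"  # placeholder image

def prompt_tokens(model: str) -> int:
    response = client.chat.completions.create(
        model=model,
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this image."},
                {"type": "image_url", "image_url": {"url": IMAGE_URL}},
            ],
        }],
        max_tokens=1,  # we only care about input-side token accounting
    )
    return response.usage.prompt_tokens

for model in ("gpt-4o-mini", "gpt-4o"):
    print(model, prompt_tokens(model))
```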

Given the dramatic difference, this seems like it could be an unintended token-accounting issue rather than intentional pricing. Has anyone else encountered this? It might be worth investigating if you’re using gpt-4o-mini for image processing in production.

Let me know if others are seeing similar token counts in their usage data.


Hi and welcome to the Forum!

gpt-4o-mini is known to consume significantly more tokens per image than gpt-4o and other models. Because its per-token price is correspondingly lower, the effective cost of image processing ends up essentially similar across models.

Here’s a thread from July discussing this in greater detail.

Additionally, as also referenced in that thread, here is an X post by the OpenAI Head of Developer Relations on this very point:

In fact, given the recent price decrease, it would currently be about 50% cheaper to process an image with the latest snapshot, gpt-4o-2024-08-06, than with gpt-4o-mini.
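
For intuition, here is a rough back-of-the-envelope calculation. The per-token prices and per-image token counts below are assumptions based on published figures at the time of writing (verify them against the current pricing page); the point is that the ~33x image-token multiplier is what brings the two models into the same cost range:

```python
# Back-of-the-envelope cost comparison. Prices per 1M input tokens
# are ASSUMPTIONS as of this writing; check the current pricing page.
PRICE_PER_TOKEN = {
    "gpt-4o-mini": 0.15 / 1_000_000,        # assumed $0.15 / 1M input tokens
    "gpt-4o-2024-08-06": 2.50 / 1_000_000,  # assumed $2.50 / 1M input tokens
}

# Base token counts billed for a low-detail image (also assumptions):
# gpt-4o-mini bills roughly 33x the token count of gpt-4o.
BASE_IMAGE_TOKENS = {"gpt-4o-mini": 2833, "gpt-4o-2024-08-06": 85}

for model, tokens in BASE_IMAGE_TOKENS.items():
    cost = tokens * PRICE_PER_TOKEN[model]
    print(f"{model}: {tokens} tokens -> ${cost:.6f}")
# gpt-4o-mini:       2833 tokens -> ~$0.000425
# gpt-4o-2024-08-06:   85 tokens -> ~$0.000213  (about half the cost)
```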
