Is the API pricing for GPT-4.1 mini and o3 really identical now?

Hi, I received an announcement that the o3 model’s API pricing has been reduced by 80%, now costing $0.40 per 1M input tokens and $1.60 per 1M output tokens—which matches the pricing of GPT-4.1 mini.

Can anyone from the OpenAI team or the community confirm that:

  • The pricing for GPT-4.1 mini and o3 is now exactly the same?
  • There are no hidden differences (e.g. rate limits, usage restrictions, latency) that would affect billing?

Also, assuming the same cost, are there any drawbacks to switching from GPT-4.1 mini to o3 for general-purpose tasks?

Thanks in advance!

I think you are mistaken. It is gpt-4.1 (not mini) that is comparable.

model                 input       output      vision tokens
o3-2025-04-16         $2.00/1M    $8.00/1M    75 base + 150/tile
gpt-4.1-2025-04-14    $2.00/1M    $8.00/1M    85 base + 170/tile
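
To make the vision-token column concrete, here is a minimal sketch of how a per-image token count follows from those base + per-tile figures. The tile count is a made-up input for illustration; the actual number of tiles depends on the image dimensions.

```python
# Rough vision-token estimate from the table above.
# `tiles` is a hypothetical value; real tiling depends on image size.
def vision_tokens(base: int, per_tile: int, tiles: int) -> int:
    return base + per_tile * tiles

# Example: an image split into 4 tiles.
print(vision_tokens(75, 150, 4))   # o3:      675 tokens
print(vision_tokens(85, 170, 4))   # gpt-4.1: 765 tokens
```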

However, o3 is a reasoning model: its internal reasoning tokens are billed at the output rate, so expect roughly 2x to 10x the output billing, depending on how much internal deliberation the model does to solve the problem.

“Answer this incredibly hard math problem, with the only allowed output being True or False” - one of the models is going to cost you much more.
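
As a rough illustration of that point, here is a cost sketch at the $2.00/1M input and $8.00/1M output rates above, treating reasoning tokens as billed at the output rate. The token counts are made-up assumptions, not measurements.

```python
# Per-request cost sketch at $2.00/1M input and $8.00/1M output.
# Reasoning tokens are billed as output tokens; all counts below are hypothetical.
INPUT_RATE = 2.00 / 1_000_000
OUTPUT_RATE = 8.00 / 1_000_000

def request_cost(input_tokens: int, visible_output: int, reasoning_tokens: int = 0) -> float:
    return input_tokens * INPUT_RATE + (visible_output + reasoning_tokens) * OUTPUT_RATE

# A "True or False" answer is ~1 visible output token, but a reasoning
# model may spend thousands of hidden reasoning tokens getting there.
print(f"{request_cost(500, 1):.6f}")           # non-reasoning: ~$0.001008
print(f"{request_cost(500, 1, 10_000):.6f}")   # +10k reasoning tokens: ~$0.081008
```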


Thank you!
I see now that o3's prices were reduced by 80%, to
$2 per million input tokens and
$8 per million output tokens,
so the correct comparison is gpt-4.1.
Understood!