Request to O3: Service Cost

Hello, OpenAI,

Could you please confirm whether a single request to o3 costs $20 or $2,000? I would appreciate accurate pricing information.

Thank you in advance for clarifying.

Best regards


Current answer: Wait and see.


It sounds like you are discussing ChatGPT, where $20 is a monthly subscription price, certainly not a fixed price “per single request”.

In the announcement video, it was stated that the earliest anyone might see this o3 model series would be at the end of January 2025. No information was presented about availability within ChatGPT; it was not even mentioned.

One could speculate that with the demonstrated performance at lower computational cost, especially for the mini version, it could easily be positioned similarly to current o1 models in ChatGPT. But that is indeed speculation.

OpenAI’s answer about API pricing will appear on the API pricing page whenever release happens, just as you can already read pricing there for the recently announced o1, still only available to an elite few. With current models, billing is based on the reasoning effort and the amount of token data used in a request, with pricing depending on the model employed.
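To make the shape of that billing concrete, here is a rough sketch of how a per-request cost would be computed once rates are published. The prices below are placeholders, not o3 prices (none exist yet), and the function name is just my own illustration; the one grounded detail is that reasoning tokens are billed at the output-token rate.

```python
# Hypothetical sketch of per-request API cost. The rates are placeholders,
# NOT published o3 prices; only the billing shape is meant to be accurate.
def estimate_request_cost(
    input_tokens: int,
    output_tokens: int,
    reasoning_tokens: int,
    input_price_per_1m: float = 15.00,   # placeholder USD per 1M input tokens
    output_price_per_1m: float = 60.00,  # placeholder USD per 1M output tokens
) -> float:
    """Reasoning tokens are billed as output tokens, so a higher
    reasoning effort raises the cost even if the visible answer is short."""
    billed_output = output_tokens + reasoning_tokens
    return (input_tokens * input_price_per_1m
            + billed_output * output_price_per_1m) / 1_000_000

# Example: a 2k-token prompt that triggers 40k reasoning tokens
# and a 1k-token visible answer.
print(f"${estimate_request_cost(2_000, 1_000, 40_000):.2f}")
```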

They’re obviously not going to release a model where each query is $2k. They’ve worked out how to unlock higher intelligence at a fairly dramatic level, and now the focus will be on getting this to work at a lower cost point. A big part of that will probably be some form of fine-tuning based on complete model output, where a less compute-intensive model can learn how to narrow down the evaluation space, rather than using the full evaluation space like the complete model*.

  • My understanding is that the new model is essentially generating hundreds or thousands of o1-style answers and then using a next step to evaluate the best answer (and maybe a further set of iterations). So the core cost is the number of iterative answers being generated (see the sketch below).
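If that understanding is right, the cost structure would look something like the toy loop below: generate N candidate answers, then spend a further pass scoring them. Everything here (`sample_answer`, `score_answer`, the candidate count) is a hypothetical stand-in, purely to show why cost would scale with the number of candidates.

```python
# Toy sketch of the speculated "generate many answers, then pick the best" loop.
# sample_answer() and score_answer() are stand-ins for whatever the real
# system does; the point is that cost grows linearly with num_candidates.
import random

def sample_answer(prompt: str) -> str:
    # Stand-in for one full o1-style reasoning rollout.
    return f"candidate answer ({random.random():.3f})"

def score_answer(prompt: str, answer: str) -> float:
    # Stand-in for the separate evaluation / ranking step.
    return random.random()

def best_of_n(prompt: str, num_candidates: int = 1024) -> str:
    candidates = [sample_answer(prompt) for _ in range(num_candidates)]
    # Core cost driver: num_candidates full rollouts, plus one scoring pass.
    return max(candidates, key=lambda a: score_answer(prompt, a))

print(best_of_n("Solve the puzzle...", num_candidates=8))
```

The fine-tuning idea above would then amount to training a cheaper model on the winners of that loop, so the candidate count can be shrunk without losing much quality.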

Their reveal video had a graphic with relative costs for them to run the o1 models vs. the o3 models. If I interpreted it correctly (the x-axis was unlabelled), o1 was placed somewhere around 0.7, while o3 was near 2.1. Based on this alone, a 3x increase in consumer cost would be likely. Since o1 is $20 and o1 pro is $200, maybe o3-mini is $60 while o3 is $600. On the other hand, they usually step up the subscription offerings with each new model release, so we could also see a discount on o1 pro, or see it blended into the $20 option while o3 gets charged $200. The increase in their cost to run doesn’t support it, but it’s a safer bet than saying they’d charge $2,000/mo. They’d be vilified for doing so.
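Spelling out that back-of-the-envelope math, and treating my eyeballed chart readings of 0.7 and 2.1 as given:

```python
# Back-of-the-envelope extrapolation from the (unlabelled) cost chart.
o1_relative_cost = 0.7    # eyeballed from the reveal graphic
o3_relative_cost = 2.1    # eyeballed from the reveal graphic
ratio = o3_relative_cost / o1_relative_cost   # roughly 3x

plus_price, pro_price = 20, 200               # current ChatGPT tiers, USD/mo
print(f"~{ratio:.0f}x cost -> o3-mini ~${plus_price * ratio:.0f}/mo, "
      f"o3 ~${pro_price * ratio:.0f}/mo")
# ~3x cost -> o3-mini ~$60/mo, o3 ~$600/mo
```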

But I think the cost of o3-mini would have to be more than o1 pro; otherwise the pricey o1 would be left out. Don’t you think so?

You can see that for running the Codeforces benchmark, o3-mini has significantly lower costs.

(I cleaned this up and added the arrow line annotations)

Or on the ARC challenge benchmark, with the thing running for hours of trials at the top end, just like the o1 top benchmarks previously published.


The performance scale is exaggerated: the vertical origin starts at 1600, so we are looking at roughly the top third of the total performance range, covering scores 1650–2722 (with o1-preview at 1258 and unknown effort).
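To put a number on how much the truncated axis stretches things (the axis start and the scores are my reading of the chart, not official figures):

```python
# How starting the y-axis at 1600 inflates the visual gap between scores.
y_min = 1600                     # my reading of where the axis starts
low, high = 1650, 2722           # lowest and highest plotted Elo scores

visual_ratio = (high - y_min) / (low - y_min)   # how much taller the top bar looks
actual_ratio = high / low                       # the real score ratio

print(f"bar looks ~{visual_ratio:.0f}x taller; actual score ratio is {actual_ratio:.2f}x")
```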

There is no “o1 pro” that can be measured here, since it is a ChatGPT-only product; it can only be compared in quality against the nerfed o1 in ChatGPT or against o1-preview.
