Am I being overcharged for o1-mini?

Hi everyone.

I am sending some math problems to the API, asking the model to check the calculations and return the results. All good. I receive a JSON response back, something like:
[
{"number": 1, "result": "correct", "details": ""},
{"number": 2, "result": "correct", "details": ""},
{"number": 3, "result": "correct", "details": ""},
{"number": 4, "result": "correct", "details": ""},
{"number": 5, "result": "correct", "details": ""},
{"number": 6, "result": "correct", "details": ""},
{"number": 7, "result": "correct", "details": ""},
{"number": 8, "result": "correct", "details": ""},
{"number": 9, "result": "correct", "details": ""},
{"number": 10, "result": "correct", "details": ""}
]

The OpenAI Tokenizer says this is 170 tokens, but the response usage reports:

CompletionUsage(completion_tokens=2044, prompt_tokens=6448, total_tokens=8492, completion_tokens_details={'reasoning_tokens': 1856})

So I am being charged for 2044 tokens, while the visible response is only 170 tokens.

Am I doing something wrong? I tried reading through the documentation, but I don't see anywhere that explains how the output tokens are calculated.


Welcome to the Forum!

Completion tokens for the o1 models consist of both the reasoning tokens and the tokens for the actual / visible response. You are charged for both.

This is addressed in the documentation and also disclosed on the OpenAI pricing page.

Based on my own tests, predominantly with o1-preview, I can confirm that the number of reasoning tokens tends to be significantly higher than the number of tokens in the visible response. Your example is consistent with this.
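You can verify this with the numbers from the usage object above. The visible tokens are what remains after subtracting the reasoning tokens, and billing is based on the full `completion_tokens` count. (The per-token price below is a hypothetical placeholder, not OpenAI's actual rate.)

```python
# Billing arithmetic for the CompletionUsage object quoted above.
completion_tokens = 2044   # what you are billed for
reasoning_tokens = 1856    # hidden chain-of-thought tokens

# Visible (returned) tokens = total completion tokens minus reasoning tokens.
visible_tokens = completion_tokens - reasoning_tokens
print(visible_tokens)  # 188 — close to the ~170 the Tokenizer estimates
                       # (the gap is whitespace/formatting in the raw output)

# You pay for reasoning + visible, not just the visible response.
price_per_1k_output = 0.012  # hypothetical $/1K output tokens
cost = completion_tokens / 1000 * price_per_1k_output
print(round(cost, 6))  # 0.024528
```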


In a similar way to the Assistants API, you are giving OpenAI even more discretion to go round and round and chew up your tokens. :)

If you want more control and can handle the decreased reasoning power, use standard Completion models.

With great reasoning comes great cost :wink:


I’m hoping that in the future we’ll get more granular control over the amount of reasoning tokens :pray:
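For now, the only knob I'm aware of is `max_completion_tokens`, which for the o1-series models caps reasoning and visible output combined (the older `max_tokens` parameter is not accepted). A minimal sketch of how you might bound your spend, assuming this behavior; no request is actually sent here, we just build the parameters:

```python
# Sketch: cap total billed completion tokens (reasoning + visible answer)
# for an o1-series request. Assumption: max_completion_tokens is the only
# control available; there is no separate limit on reasoning tokens.
request_params = {
    "model": "o1-mini",
    "messages": [
        {"role": "user", "content": "Check these calculations..."},
    ],
    # Hard ceiling on completion tokens you can be billed for:
    "max_completion_tokens": 4096,
}

# client.chat.completions.create(**request_params) would go here.
print(request_params["max_completion_tokens"])  # 4096
```

Note that if the cap is hit mid-reasoning, you can still be billed for the reasoning tokens even though the visible response comes back empty.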


Although that would hit profit margins? :upside_down_face:

Thanks @jr.2509
It actually makes sense to pay for internal prompts as well. They can be a significant extra cost for OpenAI, and it's not difficult to write a prompt that generates little output but requires a lot of processing power.

It shouldn't be in fine print, though. It should be written in HUGE fonts :smile:
