Daily usage for 22/09/2023 on the Usage dashboard (USD 709.23) and from the OpenAI CLI (USD 698.94) do not tally, even when compared with my manually calculated pricing (USD 141.84657). Can you explain in more detail why these three numbers are so different?
10 epochs:
9 x 684,030 trained tokens x $0.0060 / 1K tokens = 36.93762
15 epochs:
1 x 1,068,105 trained tokens x $0.0060 / 1K tokens = 6.40863
16 x 1,026,045 trained tokens x $0.0060 / 1K tokens = 98.50032
Total = USD 141.84657
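For reference, the arithmetic above can be reproduced with a short script. The per-1K rate of $0.0060 and the token counts are the figures from the breakdown above; the grouping comments are just labels for illustration:

```python
# Reproduce the manual fine-tuning cost calculation above.
# Each entry: (number of jobs with identical figures, trained tokens per job).
PRICE_PER_1K = 0.0060  # USD per 1K trained tokens, as used above

job_groups = [
    (9, 684_030),     # 10-epoch jobs
    (1, 1_068_105),   # 15-epoch jobs
    (16, 1_026_045),  # 15-epoch jobs
]

total = sum(count * tokens * PRICE_PER_1K / 1000 for count, tokens in job_groups)
print(f"Total: USD {total:.5f}")  # Total: USD 141.84657
```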
The manual calculation is based on the details from the Daily usage breakdown (UTC) under the fine-tuning training drop-down list (OpenAI Platform). The first number represents how many models share the same number of trained tokens and epochs.
FYI, I use the OpenAI CLI command `openai api fine_tunes.get -i <FINE_TUNE_JOB_ID>` to get the fine-tune cost for each fine-tuned model created (OpenAI Platform).
Let's express the prices in terms of a million tokens.
| Model | Training | Input usage | Output usage | Context length |
|---|---|---|---|---|
| GPT-3.5-turbo base | n/a | $1.50 | $2.00 | 4k |
| GPT-3.5 Turbo fine-tune | $8.00 | $12.00 | $16.00 | 4k |
| babbage-002 base | n/a | $0.40 | $0.40 | 16k |
| babbage-002 fine-tune | $0.40 | $1.60 | $1.60 | 16k |
| davinci-002 base | n/a | $2.00 | $2.00 | 16k |
| davinci-002 fine-tune | $6.00 | $12.00 | $12.00 | 16k |
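Converting published per-1K rates into the per-1M figures in the table is a simple scaling. The per-1K values below are the ones implied by the table above:

```python
# Convert per-1K-token prices to the per-1M-token prices shown in the table.
per_1k_rates = {
    "gpt-3.5-turbo fine-tune training": 0.0080,
    "gpt-3.5-turbo fine-tune input": 0.0120,
    "gpt-3.5-turbo fine-tune output": 0.0160,
    "davinci-002 fine-tune training": 0.0060,
}

for name, rate in per_1k_rates.items():
    print(f"{name}: ${rate * 1000:.2f} per 1M tokens")
```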
First, there is no official "OpenAI CLI" for obtaining pricing.
Why is 10 epochs given a 9x multiplier? Why is 15 epochs 16?
Have you actually tokenized and totalled the contents of your training file's prompts? The formula is (prompt tokens + generation training tokens) x epochs.
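As a sketch of that formula (the 50,000/18,403 prompt/completion split is a made-up example, chosen so the total matches one of the 684,030-token jobs above; the $0.0060/1K rate is the one used in the manual calculation):

```python
def trained_tokens(prompt_tokens: int, completion_tokens: int, n_epochs: int) -> int:
    """Billable trained tokens: (prompt tokens + generation training tokens) x epochs."""
    return (prompt_tokens + completion_tokens) * n_epochs

def training_cost_usd(tokens: int, price_per_1k: float = 0.0060) -> float:
    """Training cost at a per-1K-trained-token rate."""
    return tokens * price_per_1k / 1000

# Hypothetical file: 50,000 prompt tokens + 18,403 completion tokens, 10 epochs.
tokens = trained_tokens(50_000, 18_403, 10)
print(tokens)                      # 684030
print(training_cost_usd(tokens))   # ~4.10 USD for one such job
```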
If you go to the monthly usage page and select a day at the bottom, you should see a separate "fine tune training" section that can be expanded. That will show "#### trained tokens"; by selecting hourly and then drilling down to the five-minute level, you should be able to isolate a particular model's training period and add up all the tokens seen there.
If there's a discrepancy in billing, you probably have just hours to get it corrected, since high-value accounts are billed first at the end of the month.
Thanks for the reply. The manual calculation is based on the details I get from the Daily usage breakdown (UTC) under the fine-tuning training drop-down list. The first number represents how many models share the same number of trained tokens and epochs.
Could you explain further why I can't arrive at the total of USD 709.23? So far I have only calculated the training cost, without considering input and output usage for fine-tuning.
FYI, I use the OpenAI CLI command `openai api fine_tunes.get -i <FINE_TUNE_JOB_ID>` to get the fine-tune cost for each fine-tuned model created on the OpenAI Platform.
I get that to be in the $100–200 range as it stands. You'll need to reach out via help.openai.com and use the support bot in the bottom right to leave your contact details and a description of the issue.