Cost of Fine Tuned Model Usage

Hello everyone,
I have a question about the usage of fine-tuned models. As I read in the pricing section, is a fine-tuned model more expensive for both input and output (I am talking about GPT-3.5 Turbo)? For example, the input cost of the plain GPT-3.5 Turbo is $0.003, while it is $0.0120 for the fine-tuned version. Is that correct?
Moreover, when I use a fine-tuned model, my “Usage” section shows a “Fine-tuning models” entry, while when I use the normal one it only shows GPT-3.5. In short, the first one is purple, the second one is cyan.
When I fine-tuned a model I created an API key; that was October 2nd. I have kept using the same key until today, October 22nd, but on the API keys page the last-used date is stuck at the date I first fine-tuned the model. Why is that?
Many thanks

In my chart, cost is per 1 million tokens so you can compare the prices more easily:

| Model | Training | Input usage | Output usage | Context length |
|---|---|---|---|---|
| GPT-3.5-turbo base | n/a | $1.50 | $2.00 | 4k |
| GPT-3.5 Turbo fine-tune | $8.00 | $12.00 | $16.00 | 4k |
| GPT-3.5-turbo-16k | n/a | $3.00 | $4.00 | 16k |
| GPT-4 | n/a | $30.00 | $60.00 | 8k |
| babbage-002 base | n/a | $0.40 | $0.40 | 16k |
| babbage-002 fine-tune | $0.40 | $1.60 | $1.60 | 16k |
| davinci-002 base | n/a | $2.00 | $2.00 | 16k |
| davinci-002 fine-tune | $6.00 | $12.00 | $12.00 | 16k |
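If you want to estimate what a single request would cost, you can scale the per-million-token rates above by your token counts. A minimal sketch in Python (the rates come from the table; the model keys and function name are just my own labels, not official API identifiers):

```python
# Per-1M-token prices from the table above: (input, output) in USD.
PRICES = {
    "gpt-3.5-turbo": (1.50, 2.00),
    "gpt-3.5-turbo-fine-tune": (12.00, 16.00),
    "babbage-002-fine-tune": (1.60, 1.60),
    "davinci-002-fine-tune": (12.00, 12.00),
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost of one request, given its token counts."""
    in_rate, out_rate = PRICES[model]
    return (input_tokens * in_rate + output_tokens * out_rate) / 1_000_000

# Example: 1,000 prompt tokens and 500 completion tokens.
base = request_cost("gpt-3.5-turbo", 1000, 500)             # 0.0025
tuned = request_cost("gpt-3.5-turbo-fine-tune", 1000, 500)  # 0.02
print(f"base: ${base:.4f}, fine-tuned: ${tuned:.4f}")
```

So the same 1,500-token request that costs a quarter of a cent on the base model costs two cents on the fine-tuned one.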

Comparing how much the price increases when you fine-tune:

  • gpt-3.5-turbo fine-tuned is 8x more expensive to use,
  • davinci-002 (a base model) is 6x,
  • babbage-002 (which produces near-nonsense without fine-tuning) is 4x.

The latter two have little use without fine-tuning, as their quality is lower than the models they replace, even with 20-shot prompting or a massive input context for completion.
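Those multipliers follow directly from the usage columns of the table. A quick sanity check, comparing input-usage rates (base vs. fine-tuned, in USD per 1M tokens):

```python
# Input-usage prices per 1M tokens, from the table: (base, fine-tuned).
rates = {
    "gpt-3.5-turbo": (1.50, 12.00),
    "davinci-002": (2.00, 12.00),
    "babbage-002": (0.40, 1.60),
}

for model, (base, tuned) in rates.items():
    print(f"{model}: {tuned / base:.0f}x more expensive to use")
# gpt-3.5-turbo: 8x more expensive to use
# davinci-002: 6x more expensive to use
# babbage-002: 4x more expensive to use
```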
