Hi everyone,
I recently received an email from OpenAI stating that fine-tuning is free for 1M training tokens per day for GPT-4o and 2M training tokens per day for GPT-4o mini until September 23, 2024. However, when I created a fine-tuned model, I noticed that I was still being charged, albeit a small amount.
Has anyone else experienced this issue? According to the email, fine-tuning should be free until September 23, so I’m unsure why there are charges appearing. Any insights or clarification would be greatly appreciated!
I wanted to share some details from my recent fine-tuning run to better understand the situation:
- Trained tokens: 5,370
- Epochs: 10
- Base model: gpt-4o-mini-2024-07-18
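For context, here is a quick sanity check of what this run should cost at the rates quoted in the email below ($3 per 1M training tokens for GPT-4o mini). This assumes the dashboard's "trained tokens" figure of 5,370 is the final billed count, i.e. it already reflects the 10 epochs:

```python
# Rough cost estimate for the run above.
# Assumption: the 5,370 "trained tokens" shown in the dashboard is the
# billed total (epochs already included), priced at $3 per 1M tokens
# for gpt-4o-mini training, per the email.
TRAINED_TOKENS = 5_370
PRICE_PER_MILLION = 3.00  # USD per 1M training tokens (GPT-4o mini)

cost = TRAINED_TOKENS / 1_000_000 * PRICE_PER_MILLION
print(f"Expected charge without the free quota: ${cost:.4f}")
# prints: Expected charge without the free quota: $0.0161
```

So even without the free-quota promotion, the run should only cost about two cents, which matches the small charge I saw, but per the email it should have been covered by the free daily allowance entirely.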
Here is the email I received from OpenAI:
Hi there,
Great news! Fine-tuning is now available to help you get higher performance at a lower cost for specific use cases.
Fine-tuning enables you to customize a model’s responses to fit your preferred structure or tone, or adapt it to follow complex domain-specific instructions.
From coding to creative writing, fine-tuning can have a large impact on model performance across a variety of domains: Cosine’s Genie achieves a SOTA score of 43.8% on the new SWE-bench Verified benchmark with a fine-tuned GPT-4o model. Distyl’s fine-tuned GPT-4o achieved a SOTA execution accuracy of 71.83% on the BIRD-SQL benchmark. Developers can already produce strong results for their applications with as little as a few dozen examples in their training data set. (See more details on their results in our blog post.)
Start by visiting the docs or head to the fine-tuning dashboard, click ‘create,’ and select ‘gpt-4o-2024-08-06’ or ‘gpt-4o-mini-2024-07-18’ from the base model drop-down. GPT-4o fine-tuning training costs $25 per million tokens, and inference is $3.75 per million input tokens and $15 per million output tokens. For GPT-4o mini, training cost is $3 per million tokens, and inference is $0.30 per million input tokens and $1.20 per million output tokens.
To help you get started, we’re also offering 1M training tokens per day for free for every organization through September 23 for GPT-4o fine-tuning and 2M training tokens per day for free for GPT-4o mini fine-tuning through September 23.
If you have questions, please reach out in the OpenAI developer forum.
Happy tuning!
I am also uploading a screenshot from their website for reference.
I also found this on the pricing page.