Customers who haven’t paid at least $250 are unable to fine-tune the new gpt-4o-mini model, as the error shows. That’s quite a large expense for individuals like me. Since gpt-3.5-turbo is available for fine-tuning to everyone (and I’ve fine-tuned it many times), this inconsistency is frustrating for anyone hoping to get the new fine-tuning prices and the newer model.
I understand that OpenAI rolls out anything major slowly to avoid hiccups, but I felt the need to make this post in case the plan was not to change this. OpenAI seldom communicates, so it’s impossible to tell.
EDIT: The free usage promotion until September is probably the reason for this. Ah well, cya in fall!
As you mentioned, ‘At least it’s Tier 3, or they don’t give free training tokens.’ According to the usage tier guidelines here (https://platform.openai.com/docs/guides/rate-limits/usage-tiers), a Tier 3 account should be able to fine-tune the GPT-4o mini model, right? However, the notification below the image indicates it requires Tier 4?
Hi there @OnceAndTwice - just wanted to follow up in case you have not heard already: fine-tuning gpt-4o-mini is now possible for all usage tiers 1-5, with up to 2M free training tokens per day.
As others mentioned above, while it’s not free, fine-tuning GPT-4o mini is available regardless of whether you are Tier 4 or above. The cost breaks down as follows:
For a training file with 100,000 tokens trained over 3 epochs, the expected cost would be: ~$0.90 USD with gpt-4o-mini-2024-07-18
Although it’s not free, fine-tuning it is significantly cheaper than fine-tuning GPT-3.5 Turbo.
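For anyone who wants to sanity-check that figure: billed training tokens are the tokens in the file multiplied by the number of epochs, and working backwards from the quoted ~$0.90 gives a rate of $3.00 per 1M training tokens for gpt-4o-mini. Here’s a quick sketch in Python with that rate hard-coded as an assumption (double-check the current pricing page before relying on it):

```python
# Rough fine-tuning cost estimate for gpt-4o-mini.
# The $3.00 per 1M training-token rate is back-calculated from the
# ~$0.90 figure quoted above; verify it against the pricing page.
TRAINING_RATE_PER_MILLION = 3.00  # USD, assumed

def estimate_training_cost(file_tokens: int, epochs: int) -> float:
    """Billed training tokens = tokens in the file * number of epochs."""
    billed_tokens = file_tokens * epochs
    return billed_tokens * TRAINING_RATE_PER_MILLION / 1_000_000

print(estimate_training_cost(100_000, 3))  # -> 0.9 (about $0.90 USD)
```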
My interpretation of this update is that any developer in Tiers 1-5 can now benefit from 2M free training tokens per day through the end of September.
I apologize for missing the updates on X and providing incorrect information.
As mentioned above, the free training tokens are available until September 23rd regardless of tier, so I think it’s worth everyone’s while to try fine-tuning during this window.
When you fine-tune an existing model, do you send a new JSONL with just the new prompt-completion pairs, or do you add the new pairs to the previous file (so it holds all the previous ones plus the new ones)? I tried with just the new ones, and the model seems to have completely forgotten the old ones. I also tried adding the old ones to the new file, and the cost shown was for the whole file (old + new). What am I missing?
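In case it helps: as far as I know, a fine-tuning job trains only on the file you upload, so uploading just the new examples against the base model effectively starts from scratch. The API does let you pass an existing fine-tuned model ID as the `model` parameter to continue training from it instead. A minimal sketch with the Python SDK, where the file name and model ID are placeholders:

```python
from openai import OpenAI

client = OpenAI()

# Upload a JSONL file containing only the new prompt-completion examples.
new_file = client.files.create(
    file=open("new_examples.jsonl", "rb"),  # placeholder file name
    purpose="fine-tune",
)

# Continue training from the existing fine-tuned model rather than
# the base model, so only the new examples are billed.
job = client.fine_tuning.jobs.create(
    training_file=new_file.id,
    model="ft:gpt-4o-mini-2024-07-18:my-org::abc123",  # placeholder fine-tuned model ID
)
print(job.id)
```

That way you’re only paying to train on the new examples, though how well the earlier behavior is preserved is something you’d want to verify with your own evals.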