"Error creating job: Your organization must qualify for at least usage tier 4 to fine-tune gpt-4o-mini-2024-07-18. "

Hello,

Customers who haven’t paid at least $250 are unable to fine-tune the new gpt-4o-mini model, as the error shows. That is quite a large expense for individuals such as myself. Given that 3.5-turbo is available for fine-tuning to everyone (and I’ve fine-tuned it many times), this inconsistency is frustrating for anyone who wants the new fine-tuning prices and the newer model.

I would really appreciate it if OpenAI would consider lowering the tier requirement to make this more accessible, since they mentioned accessibility as a goal in the gpt-4o-mini announcement post.

I understand that releasing anything major is done slowly at OpenAI to avoid hiccups, but I felt the need to make this post just in case the plan was not to change this. OpenAI seldom communicates, so it’s impossible to tell. :slight_smile:

EDIT: The free usage promotion until September is probably the reason for this. Ah well, cya in fall!

3 Likes

It’s at least Tier 3, or they don’t give free training tokens

1 Like

As you mentioned, ‘It’s at least Tier 3, or they don’t give free training tokens’, but according to the usage tier guidelines here (https://platform.openai.com/docs/guides/rate-limits/usage-tiers), a Tier 3 account should be able to fine-tune the GPT-4o mini model, right? However, the notification below the image indicates it requires Tier 4?

1 Like

Hi there - you have to be in Tier 4 or 5 in order to take advantage of the free fine-tuning for gpt-4o-mini.

3 Likes

Can one pay USD 250 and qualify?

As long as it’s been at least 14 days since your first successful payment.

Hi there @OnceAndTwice - just wanted to follow up in case you have not heard already: fine-tuning gpt-4o-mini is now possible for all usage tiers 1-5, with up to 2M free training tokens per day.

1 Like

As mentioned by others above, fine-tuning GPT-4o-mini is available regardless of whether you are in Tier 4 or above, although it’s not free. The costs are explained as follows:

For a training file with 100,000 tokens trained over 3 epochs, the expected cost would be: ~$0.90 USD with gpt-4o-mini-2024-07-18

Although it’s not free, it’s significantly cheaper to fine-tune than GPT-3.5-turbo.
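
For anyone who wants to sanity-check that figure, here is a minimal sketch of the arithmetic, assuming the $3.00 per 1M training tokens rate for gpt-4o-mini-2024-07-18 from the estimate-costs guide (the rate and the helper function are illustrative, not an official calculator):

```python
# Rough fine-tuning cost estimate: billed training tokens = file tokens x epochs.
# The $3.00 / 1M-token training rate for gpt-4o-mini-2024-07-18 is an assumption
# based on the pricing page at the time of writing and may change.

def estimate_training_cost(file_tokens: int, epochs: int, price_per_million: float = 3.00) -> float:
    """Return the approximate training cost in USD."""
    billed_tokens = file_tokens * epochs
    return billed_tokens / 1_000_000 * price_per_million

# The example from the docs: 100,000 tokens x 3 epochs = 300,000 billed tokens -> ~$0.90
print(f"${estimate_training_cost(100_000, 3):.2f}")  # $0.90
```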

My interpretation of this update is that any developer in tiers 1-5 can now benefit from 2M free training tokens per day through the end of September.

1 Like

I apologize for missing the updates on X and providing incorrect information.

As mentioned above, free fine-tuning is available until September 23rd regardless of tier, so I think it’s worth everyone’s while to try fine-tuning during this time :grinning:

1 Like

Per the docs, charges will apply for fine-tuning gpt-4o-mini-2024-07-18 after the free period ends on October 31, 2024. (https://platform.openai.com/docs/guides/fine-tuning/estimate-costs and https://platform.openai.com/docs/guides/rate-limits/usage-tiers)

1 Like

When you fine-tune an existing model, do you send a new JSONL with just the new prompt-completion pairs, or do you add the new pairs to the previous file (which then contains all the previous examples plus the new ones)? I tried with just the new ones, and the model seems to have completely forgotten the old ones. I also tried appending the old ones to the new file, and the cost it showed me was for the whole file (old + new). What am I missing?
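
For reference, the workflow described in the question usually looks something like the sketch below: upload the JSONL and create a fine-tuning job, optionally passing a previously fine-tuned model ID as the base so the new job continues from that checkpoint instead of starting from the stock model (the file name and the ft:... ID are placeholders, not values from this thread, and continuing from a fine-tuned model is only possible where the platform supports it):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# 1. Upload the JSONL containing only the new prompt-completion examples.
#    ("new_examples.jsonl" is a placeholder file name.)
training_file = client.files.create(
    file=open("new_examples.jsonl", "rb"),
    purpose="fine-tune",
)

# 2. Create the fine-tuning job. Passing a previously fine-tuned model ID as
#    `model` continues training from that checkpoint, so the new examples are
#    added on top of what the earlier run learned rather than replacing it.
#    (The ft:... ID below is a made-up placeholder.)
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="ft:gpt-4o-mini-2024-07-18:my-org::abc123",
)

print(job.id, job.status)
```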