# OpenAI Expands GPT-4o Mini Fine-Tuning Access

Exciting news for developers! OpenAI has expanded GPT-4o mini fine-tuning access to all developers across usage tiers 1–5. Starting today, you’ll receive 2 million free training tokens per day through September 23. This expansion opens up new possibilities for enhancing your applications by fine-tuning GPT-4o mini with your own data.
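To give a concrete idea of what the workflow looks like, here is a minimal sketch of uploading chat-formatted examples and starting a GPT-4o mini fine-tuning job. It assumes the official `openai` Python SDK with `OPENAI_API_KEY` set in your environment; the file name and example content are illustrative, not part of the announcement.

```python
# Minimal sketch: upload a JSONL training file and start a GPT-4o mini
# fine-tuning job. Assumes the official `openai` Python SDK (v1.x) and
# that OPENAI_API_KEY is set; "training_data.jsonl" is a hypothetical file.
import json
from openai import OpenAI

client = OpenAI()

# Each line of the JSONL file is one training example in chat format.
example = {
    "messages": [
        {"role": "system", "content": "You are a concise support assistant."},
        {"role": "user", "content": "How do I reset my password?"},
        {"role": "assistant", "content": "Go to Settings > Security and choose 'Reset password'."},
    ]
}
with open("training_data.jsonl", "w") as f:
    f.write(json.dumps(example) + "\n")  # repeat for each training example

# Upload the file for fine-tuning, then create the job.
training_file = client.files.create(
    file=open("training_data.jsonl", "rb"),
    purpose="fine-tune",
)
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-4o-mini-2024-07-18",
)
print(job.id, job.status)
```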

Don’t miss out on this opportunity to optimize your AI projects. Check out the full details and start fine-tuning today!

Thanks for this. I just fine-tuned GPT-4o mini using the minimum of 10 training examples. I realise the ideal is 50 to 100, but I really wanted to see what I could achieve with only 10.

Really encouraging results, and the fact that one can set a seed is also helpful. I’m happy to share more detail if anyone is interested.
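For anyone curious, setting the seed happens when you create the job. Here is roughly how it looks with the `openai` Python SDK; the file ID and hyperparameters below are illustrative, not the exact values I used.

```python
# Sketch: create a fine-tuning job with a fixed seed for reproducibility.
# Assumes the `openai` Python SDK; file ID and hyperparameters are illustrative.
from openai import OpenAI

client = OpenAI()

job = client.fine_tuning.jobs.create(
    training_file="file-abc123",         # ID returned by client.files.create
    model="gpt-4o-mini-2024-07-18",
    seed=42,                             # fixed seed so reruns are comparable
    hyperparameters={"n_epochs": 3},     # optional; defaults are auto-selected
)
print(job.id, job.seed)
```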
