Cannot fine-tune on Explore plan?

I am trying to fine-tune the gpt-3.5-turbo model to evaluate how it performs on my own datasets, but I keep getting this error: "Fine-tuning jobs cannot be created on an Explore plan…" I have enough credits with me.

Fine-tuning is what I intend to explore to assess if it makes sense for me. What do I do?

Hey @Mayank11, you will need to upgrade to the pay-as-you-go plan or buy API credits in order to access fine-tuning.


Thank you for getting back to me. Actually, I have a lot of credits on the normal plan, but I realize API credits are what I need because I am interested in fine-tuning. Can I convert my existing credits into API credits?

If you have free trial credits, they will be used before any pre-paid credits that you purchase.

Being a paying customer will lift rate limits and other restrictions.

The API's "explore plan" message probably refers to being on the free trial with no payment plan set up otherwise. Silly undocumented names for an undocumented payment system.

I am confused about this. I can hit the API and get responses for my prompts. It's just the fine-tuning API that's giving this error message. If I move to a paying plan, will I then be able to use my existing credits for the fine-tuning API?

What’s the point of the explorer plan if I can’t explore the product? How do I know if the fine-tuning will work for me if I can’t test it first? I’m much less inclined to pay for something if I don’t know that it will work for me.

Enough training tokens and examples to get satisfactory fine-tune results would likely cost more than the $5 of trial credit, and then use of the resulting model is 8x more expensive than the base model.

500k gpt-3.5-turbo training tokens = $4

Examples of 500 tokens each (100 in / 400 out) at 5 epochs = 2,500 training tokens per example, so under 200 training examples are possible.

(training the retiring davinci was over 3x more expensive still)
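To make the arithmetic above concrete, here is a quick back-of-envelope sketch using only the figures quoted in this thread ($4 per 500k training tokens, 500-token examples, 5 epochs); actual pricing may differ:

```python
# Back-of-envelope fine-tuning budget, using the figures from the post above.
TRAIN_COST_PER_TOKEN = 4.00 / 500_000  # $4 per 500k gpt-3.5-turbo training tokens

def max_examples(budget_usd, tokens_per_example=500, epochs=5):
    """How many training examples fit in a budget at a given example size and epoch count."""
    tokens_trained_per_example = tokens_per_example * epochs  # each example is seen once per epoch
    affordable_tokens = budget_usd / TRAIN_COST_PER_TOKEN
    return int(affordable_tokens // tokens_trained_per_example)

print(max_examples(4.00))  # 200 examples for $4 at 100 in + 400 out tokens, 5 epochs
```

So a $5 trial credit caps out around 250 such examples even before any other API usage, which is why the limit bites quickly.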

Maybe don’t upload such a large training set when you are exploring, then. I just uploaded a 10-row training set (each row contained a fairly sizeable chunk of text) and it cost 33¢. Yes, that is a small training set, but it’s big enough for me to test my code and know whether my concept will work.

You usually won’t get any useful result with only 10 examples. The restriction is mainly there to stop abuse, given that running a fine-tuning job is much more resource-intensive than a simple completion via the API. Apologies for the hassle.
