Fine-tuning issue in Playground


I’m currently fine-tuning a model with my SMS conversation data and have encountered an issue. Although my JSON file contains 167 lines, the fine-tuning process appears to exceed this line count. Could there be a step I’m overlooking that would cause the fine-tuning to run beyond the provided number of conversation lines? Any guidance would be greatly appreciated.

Here is the formula:

Steps = Epochs * Lines

Each epoch is one full pass over your training file, so the total step count is a multiple of the number of lines rather than the line count itself. You would expect 498 total steps.
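The step formula above can be sketched in a few lines of Python. The epoch count here is an assumption for illustration; the API picks a default number of epochs if you don't set one yourself.

```python
def expected_steps(epochs: int, lines: int) -> int:
    # One training step per line, repeated once per epoch (full pass).
    return epochs * lines

# Assumed example: 3 epochs over a 167-line training file.
print(expected_steps(3, 167))
```

This is why a 167-line file produces far more than 167 steps: each extra epoch repeats the whole file.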


Cheers, Curt. Is there an easy way to get an estimated cost (in tokens/money) to train a model?

From my experience, it usually shows the estimated price beforehand. So it's very easy.

But if you need to estimate it yourself, a rough formula (for English text, at roughly 4 characters per token) is:

Tokens = 0.25 * CharactersPerLine * NumberOfLines * NumberOfEpochs

Then, on the pricing page, look up the fine-tuning training price for your model.

Then compute your token units:

TokenUnits = Tokens/1000

Then, depending on your model (in your case, gpt-3.5-turbo), the estimated training price is at most $0.0060 per 1K tokens, i.e., per TokenUnit.

So in your case, the max you would pay is:

MaxDollarsForTraining = 0.0060 * TokenUnits
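Putting the pieces together, here is a rough sketch of the whole estimate in Python. The function name, the ~4-characters-per-token ratio, and the sample numbers (200-character lines, 3 epochs) are assumptions for illustration, not values from the thread:

```python
def estimate_max_training_cost(chars_per_line: float, num_lines: int,
                               num_epochs: int,
                               price_per_1k_tokens: float = 0.0060) -> float:
    """Rough upper-bound training cost in dollars."""
    # Roughly 4 characters per token for English text.
    tokens = 0.25 * chars_per_line * num_lines * num_epochs
    token_units = tokens / 1000  # price is quoted per 1K tokens
    return price_per_1k_tokens * token_units

# Assumed example: 200-character lines, 167 lines, 3 epochs.
print(f"${estimate_max_training_cost(200, 167, 3):.2f}")
```

Treat the result as a ballpark figure only; the actual quote shown before training starts is authoritative.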


Much appreciated, Curt :ok_hand::beers:. Thanks again!
