Fine-tuning job stuck in 'pending' status and spending funds

Hi,

I have a fine-tuning job with only 42 kB of .jsonl data across 23 prompt entries, started via a Jupyter notebook (.ipynb). It seems to be stuck in 'pending' status for 75 minutes now, and appears to be continuously spending funds. Is this expected behavior, or should it be stopped and debugged?

Job not in terminal status: pending. Waiting.
Status: pending
Status: pending
Status: pending
(the same line repeated 19 times in total)
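The polling loop producing that output can be sketched roughly like this; `get_job_status` is a hypothetical stand-in for whatever API call returns the job's current state, and the statuses and intervals are assumptions, not documented values:

```python
import time

# Assumed set of terminal statuses for a fine-tuning job.
TERMINAL_STATUSES = {"succeeded", "failed", "cancelled"}

def wait_for_job(get_job_status, poll_interval=60):
    """Poll a fine-tuning job until it reaches a terminal status.

    get_job_status: callable returning the current status string
    (a hypothetical stand-in for the real API call).
    """
    while True:
        status = get_job_status()
        print(f"Status: {status}")
        if status in TERMINAL_STATUSES:
            return status
        time.sleep(poll_interval)
```

A long run of "pending" here just means the job is still queued server-side; the loop itself does nothing but wait.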

As a rough guide, 42 kB of data works out to roughly a quarter of that in tokens (more for symbolic data, up to 1:1), so you could use somewhere between ~11k and ~42k tokens. At $0.0004 per 1k tokens, that should cost around 1–2 cents.
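That back-of-the-envelope estimate can be written out as a small helper. The bytes-to-tokens ratios (4:1 for typical text, 1:1 for symbolic data) and the $0.0004-per-1k rate are the assumptions from the post, not quoted prices:

```python
def estimate_cost_usd(data_bytes, price_per_1k_tokens=0.0004, n_epochs=1):
    """Rough fine-tuning cost range in USD.

    Assumes token count falls between data_bytes / 4 (typical text)
    and data_bytes / 1 (symbolic data); rate per 1k tokens is assumed.
    """
    low_tokens = data_bytes / 4   # ~4 bytes per token for plain text
    high_tokens = data_bytes      # worst case: 1 byte per token
    low = low_tokens / 1000 * price_per_1k_tokens * n_epochs
    high = high_tokens / 1000 * price_per_1k_tokens * n_epochs
    return low, high

low, high = estimate_cost_usd(42_000)  # ≈ $0.0042 to $0.0168
```

So a single pass over 42 kB lands well under 2 cents, before multiplying by epochs.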

Thank you. So the cost is based solely on data size, and it will not accumulate over time?

What about the time? I read in a few topics that some fine-tuning jobs did not budge from 'pending' for 24 hours or more. Is this usual behavior? Can I just let it run and wait for it to finish during the week?

Well, just keep an eye on your token usage for ada-002 and make sure it's not at the hundreds-of-thousands level. If it is, then you know you have an issue.

75 min is not unusual. How many epochs? These multiply the cost and the training time.

Not sure if it charges costs in real time… that seems surprising. Maybe it charges per epoch? Let us know!

Great point, I forgot to mention multiplying by the number of epochs. Nice catch.
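To make the epoch multiplier concrete, here is the arithmetic under the same assumed rate ($0.0004 per 1k tokens) with a 42k-token dataset and 4 epochs, both illustrative values:

```python
# Assumed values for illustration: ~42k training tokens, assumed rate.
tokens = 42_000
price_per_1k = 0.0004
n_epochs = 4  # each epoch is another full pass over the dataset

cost_one_epoch = tokens / 1000 * price_per_1k
total_cost = cost_one_epoch * n_epochs
print(f"${total_cost:.4f}")  # → $0.0672
```

Training cost scales linearly with epochs, so 4 epochs is simply 4x the single-pass estimate, still only a few cents here.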


It was pending and queueing for 2.5 hours, then trained in 2 minutes. Everything is fine.

It seemed like it was spending funds because the payment was taken immediately while nothing appeared to be happening. But OK, I'll get used to it.

I left the hyperparameters out (model, epochs…) to test the auto setup. It went for 4 epochs, at about 1 epoch per second, which is nice. As mentioned, the entire fine-tuning was done in 2 minutes.
