Not able to fine-tune GPT-3.5-Turbo

I was trying to fine-tune gpt-3.5-turbo using the API. The fine-tuning job was created successfully, but it has been stuck in a waiting state for more than half an hour.
Is there any limit to fine-tuning a model, or are there any restrictions in place?
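
For reference, this is roughly what I'm running; a minimal sketch with the openai Python SDK, where the training file ID is a placeholder:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Create the fine-tuning job; "file-abc123" stands in for the
# training file I uploaded beforehand.
job = client.fine_tuning.jobs.create(
    training_file="file-abc123",
    model="gpt-3.5-turbo",
)

# Check on the job; it reports e.g. "validating_files" or "queued"
# until a worker picks it up.
job = client.fine_tuning.jobs.retrieve(job.id)
print(job.status)
```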

The limitation is how deep the queue is for fine-tuning jobs. While some finish within the hour, waiting several hours is not unusual. The same processing wait also applies to the uploaded JSONL file.
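
If you want to watch that too, here's a minimal sketch with the openai Python SDK for checking where an uploaded file is in processing (the file ID is a placeholder):

```python
from openai import OpenAI

client = OpenAI()

# Uploaded training files are also processed asynchronously;
# "file-abc123" is a placeholder for your own file ID.
f = client.files.retrieve("file-abc123")
print(f.status)  # "uploaded" until validated, then "processed"
```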


In the future, I would specify the exact model explicitly: either gpt-3.5-turbo-0613, gpt-3.5-turbo-1106, or gpt-3.5-turbo-0125.

The generic gpt-3.5-turbo name for chat completions now points to -0125, which was only recently added to fine-tuning capability (this addition snuck by until I double-checked just now), and I haven’t explored what you get when using the generic name as a fine-tune base model. Better to ask for, and get, exactly what you want.
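
In code, that just means passing the dated snapshot instead of the alias; a minimal sketch (same placeholder file ID as above):

```python
from openai import OpenAI

client = OpenAI()

# Pin the dated snapshot instead of the moving "gpt-3.5-turbo" alias,
# so the base model can't change out from under you.
job = client.fine_tuning.jobs.create(
    training_file="file-abc123",  # placeholder file ID
    model="gpt-3.5-turbo-0125",
)
```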

There’s a limit on parallel runs, and on fine-tunes per day, depending on your tier. Not something casual use would run up against.

I used gpt-3.5-turbo-1106 and gpt-3.5-turbo-0125 for fine-tuning. Currently, I have a Tier-1 account.
In total, three fine-tuning jobs executed successfully, but subsequent jobs are stuck in a waiting state.
What do you think? @_j

If you sent more jobs while three were in progress (three parallel jobs being the limit for at least tier 3 and above), the extra jobs may be held off or blocked. You’d expect hitting the limit to give you an API error, but I’ve never had four kick off at once.
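
You can at least see which jobs are queued versus running; a minimal sketch with the openai Python SDK:

```python
from openai import OpenAI

client = OpenAI()

# List recent fine-tuning jobs with their statuses, to see whether
# extra jobs are being held behind the parallel limit.
for j in client.fine_tuning.jobs.list(limit=20).data:
    print(j.id, j.model, j.status)  # e.g. queued / running / succeeded
```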

I wouldn’t start scratching my head until the wait in your first post above reads > 4h.

@_j Is there a way we can check model availability for fine-tuning? If not, I recommend adding it as a feature request.
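
The closest thing I can find is listing models, which doesn't say whether a model is fine-tunable; a minimal sketch, assuming (as far as I can tell) there is no "fine-tunable" flag in the response:

```python
from openai import OpenAI

client = OpenAI()

# The models endpoint lists what the account can see, but it carries
# no "fine-tunable" flag, so all I can check is whether the dated
# snapshots I'd want to fine-tune are listed at all.
available = {m.id for m in client.models.list()}
for name in ("gpt-3.5-turbo-0613", "gpt-3.5-turbo-1106", "gpt-3.5-turbo-0125"):
    print(name, "listed" if name in available else "not listed")
```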