Fine-tuning a fine-tuned model via Python API

I’m currently running the following to start a fine-tuning job on an existing fine-tuned model:

     client.fine_tuning.jobs.create(training_file=training_file_id, model=fine_tuned_model_id)

But I am receiving the error “Error creating job: Model ft:gpt-3.5-turbo-0125:personal:gpt-tdd:9PZMlP3T is not available for fine-tuning or does not exist.” I get the same error when attempting to create a fine-tuning job on the fine-tuned model in the OpenAI UI.

I have confirmed that “ft:gpt-3.5-turbo-0125:personal:gpt-tdd:9PZMlP3T” is in the return result of client.models.list(), and the same fine-tuning implementation works when I provide “gpt-3.5-turbo-0125” as the base model, but fails when I use my fine-tuned model ID.
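For reproducibility, here is a minimal sketch of the two facts above (a hedged sketch, not my exact script: model_ids stands in for `[m.id for m in client.models.list()]` from the openai SDK, and “file-abc123” is a made-up training-file ID):

```python
# Sketch: the fine-tuned model ID is present in the account's model
# list, yet a fine-tuning job built with the same parameters that work
# for the base model is rejected for it.
FT_MODEL = "ft:gpt-3.5-turbo-0125:personal:gpt-tdd:9PZMlP3T"

# Stand-in for [m.id for m in client.models.list()]
model_ids = ["gpt-3.5-turbo-0125", FT_MODEL]

def is_model_listed(model_ids, target_id):
    """True if target_id appears in the account's model list."""
    return target_id in set(model_ids)

def build_job_params(training_file_id, model_id):
    """Kwargs for client.fine_tuning.jobs.create(**params)."""
    return {"training_file": training_file_id, "model": model_id}

print(is_model_listed(model_ids, FT_MODEL))          # prints True
params = build_job_params("file-abc123", FT_MODEL)   # yet this job is rejected
```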

Is there something specific I need to do to enable fine-tuning a fine-tuned model?

Hey there and welcome to the forum!

Strange, looking at the docs this should be possible.

How new is this fine-tune? Maybe things need to update internally in their systems before it can be fine-tuned again?

And you’ve confirmed that you can still use the model, correct? As in, if you make a request like a chat completion with your fine-tuned model, it doesn’t throw any errors?

Hi, thanks!

The fine-tuned model is relatively new, created ~24 hours ago. I am able to use it for chat completions.

I agree that it looks like I might just need to wait a day or two… I will follow up on Monday to confirm whether the issue is still occurring.


You are not the first to report problems recently with continuing a -0125 fine-tune.

The first thing you can try is to create a non-project “user key” (legacy key), and use that for submitting the fine-tuning API call.
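If it helps anyone trying this, a minimal sketch of what that looks like (the environment-variable name OPENAI_LEGACY_KEY and the key value are made-up placeholders; the commented lines assume the openai Python SDK):

```python
import os

# Hedged sketch: use a separate legacy "user key" for the fine-tuning
# call instead of a project-scoped key. The variable name and value
# below are placeholders, not real credentials.
os.environ["OPENAI_LEGACY_KEY"] = "sk-legacy-placeholder"

api_key = os.environ["OPENAI_LEGACY_KEY"]
# With the openai SDK, the key would then be passed explicitly:
#   client = OpenAI(api_key=api_key)
#   client.fine_tuning.jobs.create(training_file=..., model=...)
```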


I created a legacy API key and replaced my OPENAI_API_KEY environment variable with it, but unfortunately I am still getting the same error:

Error code: 400 - {'error': {'message': 'Model ft:gpt-3.5-turbo-0125:personal:gpt-tdd:9PZMlP3T is not available for fine-tuning or does not exist.', 'type': 'invalid_request_error', 'param': None, 'code': 'model_not_available'}}
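For anyone matching on this failure programmatically, a small sketch that inspects the error body above (pure Python; the dict is copied from the 400 response):

```python
# Sketch: detect this specific failure by checking the error code in
# the 400 response body rather than string-matching the message.
error_body = {'error': {'message': "Model ft:gpt-3.5-turbo-0125:personal:gpt-tdd:9PZMlP3T "
                                   "is not available for fine-tuning or does not exist.",
                        'type': 'invalid_request_error',
                        'param': None,
                        'code': 'model_not_available'}}

def is_model_unavailable(body):
    """True if the error payload carries the model_not_available code."""
    return body.get("error", {}).get("code") == "model_not_available"

print(is_model_unavailable(error_body))  # prints True
```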

I am getting the same issue with a fine-tuned model, also based on gpt-3.5-turbo-0125.


I’m on with support now; they have escalated it to engineering.

We would appreciate it if you could update us here once you get a good answer.


Support has confirmed it is a bug.

It seems the problem has been resolved, and I can now fine-tune my previously fine-tuned model. Thank you.