Can't fine-tune a fine-tuned model even though the API documentation says it's supported (using the API or the Playground)

Trying to fine-tune a fine-tuned model results in the following error:

openai.BadRequestError: Error code: 400 - {'error': {'message': 'Model ft:gpt-3.5-turbo-0125:{org}:{mycustomsuffix}:9MiTvkka is not available for fine-tuning or does not exist.', 'type': 'invalid_request_error', 'param': None, 'code': 'model_not_available'}}

The same error is shown in the Playground web app when trying to fine-tune a fine-tuned model manually through the UI (note that the fine-tuned models do appear in the dropdown of base models):
Error creating job: Model ft:gpt-3.5-turbo-0125:{org}:{mycustomsuffix}:9MiTvkka is not available for fine-tuning or does not exist.
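
For reference, here is roughly the call that triggers the 400 above (a sketch only; the training file ID is a placeholder for a JSONL file I had already uploaded):

import openai

client = openai.OpenAI()

# Rough sketch of the failing call; "file-abc123" is a placeholder for a
# training file previously uploaded with purpose="fine-tune".
job = client.fine_tuning.jobs.create(
    model="ft:gpt-3.5-turbo-0125:{org}:{mycustomsuffix}:9MiTvkka",
    training_file="file-abc123",
)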

Here’s a little Python script that saves you from typing the right model name: it lists the fine-tunes you’ve performed, checks that each resulting model actually shows up in your org’s models endpoint, and then lets you choose one.

import openai
client = openai.OpenAI()
lst = client.fine_tuning.jobs.list()

ft_dict = {}
idx = 0

for ft in lst.data:
    if ft.fine_tuned_model:
        idx += 1
        ft_dict[idx] = ft.fine_tuned_model
        try:
            client.models.retrieve(ft.fine_tuned_model)  # raises if the model isn't visible to this key
            print(f"{ft.fine_tuned_model}: API OK")
        except openai.NotFoundError as e:
            print(f"{e}\n{ft.fine_tuned_model}: model not in API")
            ft_dict.pop(idx)
        except Exception as e:
            print(f"{ft.fine_tuned_model}: API error {e}")
            ft_dict.pop(idx)

print("\n\n=== Example usage: Pick from good models ===")
for key in ft_dict.keys():
    print(f"[{key}]: {ft_dict[key]}")

while True:  # try until valid choice or exit
    choice = input("Your choice? (Press Enter to exit, or select a number): ").strip()
    if choice == "":
        print("Exiting the script...")
        exit()
    try:
        choice = int(choice)
        if choice in ft_dict:
            break
        else:
            print(f"Invalid choice: {choice}. Please select a valid number.")
    except ValueError:
        print(f"Invalid input: '{choice}'. Please enter a number or press Enter to exit.")

my_model = ft_dict[choice]
print(f"You chose {choice}: '{my_model}'")

That should let you see whether a model you want to continue training on is actually available to the API key you are using.
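
If one of the listed models does check out, a continuation run would look roughly like this (a sketch only; swap in your own JSONL training file and hyperparameters):

# Sketch: upload a training file and continue fine-tuning from the chosen model.
# "mydata.jsonl" is a placeholder path to your chat-format training data.
train_file = client.files.create(file=open("mydata.jsonl", "rb"), purpose="fine-tune")

job = client.fine_tuning.jobs.create(
    model=my_model,                     # the fine-tuned model picked above
    training_file=train_file.id,
    hyperparameters={"n_epochs": 1},    # placeholder value
)
print(f"Continuation job {job.id} started on {my_model}")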

Thanks a lot for your response. I tried running your script with my API key, organization, and project like this:

client = openai.OpenAI(
    api_key='...',
    organization='...',
    project='...'
)

and it outputs that my models do exist and that I have access to them (I’ve hidden the prefix for privacy but left the suffix so it can be compared with the later screenshot showing the error):

But when trying to fine-tune one of these models, I still get the ‘model_not_available’ error. I thought it might be related to some kind of permission on the models, but I can’t find anything about model permissions in OpenAI’s dashboard.

Bizarro. Continuing a fine-tune should not be blocked on any model.

You could try omitting project and organization from the API call if it is your own organization. If you are doing this work for another organization, temporarily set the default of all API keys in the web interface to the organization where the model lives, and generate a new “all permissions” API key.
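
For example, a minimal sketch that relies only on the key's defaults (the key value is a placeholder):

import openai

# Sketch: pass only the API key and let its default organization/project apply,
# i.e. no organization= or project= arguments on the client.
client = openai.OpenAI(api_key="sk-...")   # placeholder key
print([m.id for m in client.models.list() if m.id.startswith("ft:")])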

Projects can now have individual endpoints set to read-only or disabled.

Or do a quick check: run a 1-epoch job on 10 examples, then continue it again on the same 10 examples, to see whether it’s just the internal metadata of your existing model that is messed up (see the sketch below).
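
A sketch of that check, assuming a tiny 10-example JSONL file named "tiny.jsonl" (the file name and hyperparameters are placeholders):

import time
import openai

client = openai.OpenAI()

# "tiny.jsonl" is a placeholder for a 10-example training file.
tiny = client.files.create(file=open("tiny.jsonl", "rb"), purpose="fine-tune")

# Step 1: a 1-epoch job from the base model.
job = client.fine_tuning.jobs.create(
    model="gpt-3.5-turbo-0125",
    training_file=tiny.id,
    hyperparameters={"n_epochs": 1},
)
while job.status not in ("succeeded", "failed", "cancelled"):
    time.sleep(30)
    job = client.fine_tuning.jobs.retrieve(job.id)

# Step 2: immediately try to continue from the fresh fine-tuned model.
if job.status == "succeeded":
    job2 = client.fine_tuning.jobs.create(
        model=job.fine_tuned_model,
        training_file=tiny.id,
        hyperparameters={"n_epochs": 1},
    )
    print(f"Continuation job {job2.id} created from {job.fine_tuned_model}")
else:
    print(f"First job ended with status {job.status}")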

If the API has simply decided to block all continuations on -0125, that’s a big bug needing OpenAI support. You’d think more people would notice if it were system-wide.

I tried resetting my auth tokens and even created a new project, but the errors remained the same. I paused work on this project for several days, and coming back to it today I ran the same code again and it works… As you said, it must have been a bug in the OpenAI API.

Thanks for your help!