Fine-tuned models missing from API

Hello, I created fine-tuned models today, but they don’t appear when I call the v1/models endpoint. I’m able to see fine-tuned models created two weeks ago, but not the ones from today. I’m out of ideas for debugging this; is anyone else facing the same problem?

I’m able to see these fine-tuned models in the chat playground, but not in the assistants playground.
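For reference, this is roughly how I’m checking (a minimal sketch using the openai Python SDK; the model ID below is a placeholder):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Placeholder fine-tuned model ID; substitute the one from your dashboard.
target = "ft:gpt-4o-2024-08-06:my-org::abc123"

model_ids = [m.id for m in client.models.list()]
print(target in model_ids)  # False for today's fine-tunes
print([mid for mid in model_ids if mid.startswith("ft:")])  # only the older FTs appear
```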

7 Likes

Did the fine-tune job finish?
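You can verify from the API; a minimal sketch with the openai Python SDK (the job ID is a placeholder):

```python
from openai import OpenAI

client = OpenAI()

# Placeholder job ID; copy the real one from the fine-tuning dashboard.
job = client.fine_tuning.jobs.retrieve("ftjob-abc123")
print(job.status)            # should be "succeeded", not "running" or "failed"
print(job.fine_tuned_model)  # the resulting model ID, set once the job finishes
```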

1 Like

Hey, yeah. Same here. Or at least tangentially related…

Been working with the Assistants API and have done about four fine-tunes on gpt-4o-2024-08-06. The documentation states that this is a fine-tunable model (the support bot said it wasn’t, but the docs, plus my own usage, counter that).

All of my previous FT models were available almost instantly after the FT job completed, and they continue to be available (and in use) today. However, my three most recent FT models (the newest about 24 hours old as of this post) are unavailable in the Assistants API and the assistants playground (they do work for chat completions, though).

I ran the three most recent jobs specifically to see whether something I was doing was causing the issue, but I don’t believe it is. I tried:

  • Starting from scratch, fine-tuning plain old gpt-4o-2024-08-06 with the entire dataset from the four previous FTs
  • Fine-tuning a previous FT model (the first ‘version’, which was itself trained on gpt-4o-2024-08-06)
  • Limiting the training dataset to 10 basic examples

I can confirm that the fine-tune jobs finished successfully, that they use a (seemingly) compatible model, that refreshing or logging out doesn’t help, and that the models are listed as available in the dashboard.

The only real difference I notice is that the ‘timeline’ for the previous, usable fine-tune jobs has ‘The job has successfully completed’ posted twice, whereas the new ones do not.
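If anyone wants to compare those timelines without clicking through the dashboard, here’s a rough sketch using the openai Python SDK (the job ID is a placeholder):

```python
from openai import OpenAI

client = OpenAI()

# Placeholder job ID; use one of your own fine-tune jobs.
events = client.fine_tuning.jobs.list_events("ftjob-abc123", limit=50)
for event in events:
    print(event.created_at, event.message)
# On my older, working jobs "The job has successfully completed" appears twice;
# on the new ones it appears only once.
```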

4 Likes

Yeah, it did. I created the fine-tuning job directly on the OpenAI dashboard and am calling the models endpoint from Postman to verify. Extremely simple use case.

You’re correct. I can see the models in the chat playground but not in the assistants playground. Not sure what’s going on here. Looks like a bug on OpenAI’s end to me.
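Roughly how I’m reproducing it, in case it helps (a sketch; the assistant name and model ID are placeholders):

```python
from openai import OpenAI

client = OpenAI()

# Placeholder fine-tuned model ID: it responds fine via chat completions,
# but creating an assistant with it fails.
assistant = client.beta.assistants.create(
    name="ft-test",  # hypothetical name
    model="ft:gpt-4o-2024-08-06:my-org::abc123",
)
print(assistant.id)
```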

Glad I’m not going crazy. It’s frustrating that they have a 3-day SLA for support responses while breaking changes seem to have shipped recently.

The other confusing part is that I’m able to view older fine-tuned models from two weeks ago (9/12). Not sure what happened in the past few days that led to this issue, and no one had reported it until now.

Potentially related (can’t post the OpenAI status page link)

Doesn’t seem to be fixed as they say, though. I do now notice ‘Usage policy evaluations completed, model is now enabled for sampling’ messages on the recent FT jobs…

Oh, I hadn’t checked that page. But I’m not getting any 404 errors, nor do I understand what sampling permissions are.

Thanks for reporting. We’re trying to see if we can find out additional info.

4 Likes

This is likely the new holdup: a scan of your model’s weights and of what the model itself produces, a step I hadn’t noted before.

“Sampling” is simply you being able to get a response generated.
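In concrete terms, a model ‘enabled for sampling’ is one where a request like this returns output (sketch; the model ID is a placeholder):

```python
from openai import OpenAI

client = OpenAI()

# Placeholder fine-tuned model ID.
response = client.chat.completions.create(
    model="ft:gpt-4o-2024-08-06:my-org::abc123",
    messages=[{"role": "user", "content": "Say hello."}],
    max_tokens=20,
)
print(response.choices[0].message.content)
```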

1 Like

Same problem here.

A few weeks ago I did an FT, and it is listed when I call the API to list the models.

This morning I fine-tuned two models, and neither appears in the API response.

As you can see, in the event messages for today’s FT models, the following are missing:

  • “Evaluating model against our usage policies before enabling”

  • “Usage policy evaluations completed, model is now enabled for sampling”

I’m also able to see these fine-tuned models in the chat playground but not in the assistants playground.

I imagine the problem may be related to this.
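For what it’s worth, this is roughly how I compared the two jobs (a sketch with the openai Python SDK; both job IDs are placeholders):

```python
from openai import OpenAI

client = OpenAI()

def event_messages(job_id: str) -> set[str]:
    """Collect the event messages posted for one fine-tuning job."""
    return {e.message for e in client.fine_tuning.jobs.list_events(job_id, limit=100)}

# Placeholder job IDs: one older, working fine-tune and one from today.
old_messages = event_messages("ftjob-old123")
new_messages = event_messages("ftjob-new456")

# The usage-policy messages show up only on the older job.
print(old_messages - new_messages)
```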

3 Likes

Oh wow, that is a very interesting observation. I hadn’t checked the status messages on my previous fine-tuned models, but there seems to be a significant difference, which might be why these fail. It looks like I’m not alone in this.

1 Like

Dealing with this problem too! Any insights or fixes? What does OpenAI have to say?

1 Like

Maybe model release will speed up if generative tasks that naively probe the usage policy are met with quick redirection…

2 Likes

Some of my older FTs have this message, some don’t.

Same problem, duplicate thread: https://community.openai.com/t/fine-tuned-model-not-available-when-creating-assistant/956988

2 Likes

The FT models I compared the current one with are from one or two weeks ago. Are yours as well?

OK, it seems to be a shared issue… hopefully that will be fixed then.

Mine are from 3-4 months ago.