Hello, I created fine-tuned models today, but they don’t appear when I call the v1/models endpoint. I can see fine-tuned models created 2 weeks ago, but not the ones from today. I’m out of ideas for debugging this. Is anyone else facing the same problem?
I’m able to see these fine-tuned models in the chat playground but not in the assistants playground.
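For reference, this is roughly how I’m checking (a minimal sketch using the official Python SDK; it assumes OPENAI_API_KEY is set and relies on fine-tuned model IDs carrying the “ft:” prefix):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# List every model this key can see and keep only the fine-tuned ones,
# whose IDs start with the "ft:" prefix.
for model in client.models.list():
    if model.id.startswith("ft:"):
        print(model.id, model.created)
```

The two-week-old models show up in this output; today’s don’t.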
Hey, yeah. Same here. Or at least tangentially related…
Been working w/ the Assistants API & have done about 4 fine-tunes on gpt-4o-2024-08-06. Their documentation states that this is a fine-tunable model as well (the support bot said it wasn’t, but the docs, plus my own use, counter that).
All of my previous FT models were available almost instantly after the FT job completed, and they continue to be available (and in use) today. However, my three most recent FT models (the newest about 24 hrs old as of this post) are unavailable for use in the API or the assistants playground (they are available for chat completions, though).
I only ran the 3 most recent jobs because I wanted to see if something I was doing was causing the issue, but I don’t believe it is. I tried (see the sketch after this list):
Starting from scratch: fine-tuning plain ol’ gpt-4o-2024-08-06 with the entire data set from the 4 previous FTs
Fine-tuning a previous FT model (the first ‘version’, which was itself trained on gpt-4o-2024-08-06)
Limiting the training data set to 10 basic examples
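All three attempts were plain job creations, nothing exotic. A minimal sketch of what each looked like (the training file ID is a placeholder; the model was either the base model or a previous FT model):

```python
from openai import OpenAI

client = OpenAI()

# One job per variation: base model vs. a previous FT model as the starting
# point, and the full data set vs. a 10-example file.
job = client.fine_tuning.jobs.create(
    training_file="file-abc123",  # placeholder: ID of an uploaded JSONL file
    model="gpt-4o-2024-08-06",    # or a previous "ft:gpt-4o-2024-08-06:..." model
)
print(job.id, job.status)
```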
I can confirm that the fine-tune jobs finished successfully, that they use a (seemingly) compatible model, that refreshing/logging out doesn’t help, and that each model is listed as available in the dashboard.
The only real difference I can see is that the ‘timeline’ of the previous, usable fine-tune jobs has ‘The job has successfully completed’ posted twice, whereas the new ones do not.
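You can pull that timeline programmatically as well; here’s a quick sketch for comparing an old job against a new one (both job IDs are placeholders):

```python
from openai import OpenAI

client = OpenAI()

# Print the event timeline of a working (old) job next to a broken (new) one
# to spot differences like the duplicated completion message.
for job_id in ("ftjob-old123", "ftjob-new456"):  # placeholder job IDs
    print(f"--- {job_id} ---")
    events = client.fine_tuning.jobs.list_events(
        fine_tuning_job_id=job_id, limit=20
    )
    for event in events:
        print(event.created_at, event.level, event.message)
```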
Yeah, it did. I actually created the fine-tuning job directly in the OpenAI dashboard and called the models endpoint from Postman to verify. Extremely simple use case.
You’re correct. I can indeed see the models in the chat playground but not in the assistants playground. Not sure what’s going on here. Looks like a bug on OpenAI’s end to me.
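For anyone else debugging, here’s a minimal way to reproduce the split behavior from code (the model ID is a placeholder for your own FT model):

```python
from openai import OpenAI

client = OpenAI()
model_id = "ft:gpt-4o-2024-08-06:my-org::abc123"  # placeholder FT model ID

# Chat completions: works with the affected FT models.
chat = client.chat.completions.create(
    model=model_id,
    messages=[{"role": "user", "content": "ping"}],
)
print(chat.choices[0].message.content)

# Assistants: per this thread, the same model ID is not usable here
# for the recently created fine-tunes.
assistant = client.beta.assistants.create(model=model_id, name="ft-check")
print(assistant.id)
```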
The other confusing part is that I can still see older fine-tuned models from 2 weeks ago (9/12). Not sure what happened in the past few days that led to this issue, and no one else has reported it so far.
Doesn’t seem to be fixed as they say, though. I do now notice the ‘Usage policy evaluations completed, model is now enabled for sampling’ messages on the recent FT jobs…
Oh wow, that is a very interesting observation. I haven’t checked the status/progress messages on my previous fine-tuned models, but there seems to be a significant difference, which might be the reason for the failure. Looks like I’m not alone in this.