Fine-tuning for GPT-3.5 was released just yesterday, and I was curious to try it out. I followed the newly available docs and ran into some issues.
- I created the training data and uploaded the file, which worked fine
- Then I created a fine-tuning job, which also seemed to go fine
- Now, when I try to use the fine-tuned model, it always says "model does not exist"
I am providing the screenshots:
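For reference, this is roughly how I built the training file, a minimal sketch: the file name and the assistant reply are placeholders, and the system/user lines are the ones from the example in the docs.

```python
import json

# One training example per line (JSONL), in the chat format the
# fine-tuning guide expects: a "messages" list per example.
examples = [
    {
        "messages": [
            {"role": "system", "content": "You are an assistant that occasionally misspells words"},
            {"role": "user", "content": "Hello! What is fine-tuning?"},
            {"role": "assistant", "content": "Fine-tuning lets you adapt a base model to your own data."},
        ]
    },
]

with open("training_data.jsonl", "w") as f:
    for example in examples:
        f.write(json.dumps(example) + "\n")

# Quick sanity check: every line must parse and contain a "messages" list.
with open("training_data.jsonl") as f:
    for line in f:
        record = json.loads(line)
        assert isinstance(record["messages"], list)
```

I uploaded the resulting `training_data.jsonl` with the files endpoint and used the returned file ID when creating the job.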
payload: job creation response:

payload: body for using the model:

"content": "You are an assistant that occasionally misspells words"
"content": "Hello! What is fine-tuning?"

payload: response to the request for using the model:

"message": "The model `ft:gpt-3.5-turbo:org-dUVLhaAQ37YCGwVC2QVY8sdB` does not exist",
It looks like the job didn't finish. That field should show your model name, not just your org ID…
Did you get an error message when fine-tuning?
The same error is appearing on my side: "The model does not exist".
No, I did not get an error message, only a success message.
How do I know when my model has finished training?
Is there an API that can provide me with that information?
Same for us. Training finished and the model works… But when we try to use it as the base model for further fine-tuning, we get the same error.
Training finished, and then it says 'Model ftgpt-3.5-turbo-0613izdat::7qjtSaj8 is not available for fine-tuning or does not exist.'
When you submit a fine-tune job it goes into a queue to be processed. When you use the API to retrieve a fine-tuning job (with the `ftjob-*` ID), look for `status: succeeded` to know it is complete. Then you use the value in `fine_tuned_model` as your model for future requests.

You cannot currently use a fine-tuned GPT-3.5 model as the base model for more fine-tuning.

The ultimate answer on availability is to retrieve the account's available models from the models endpoint.
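A minimal sketch of both checks, assuming you already have the JSON responses from the fine-tuning jobs endpoint and the models endpoint as Python dicts (the helper names and sample payloads below are hypothetical):

```python
def is_job_done(job: dict) -> bool:
    """A fine-tuning job is usable only once its status is 'succeeded'."""
    return job.get("status") == "succeeded"


def model_from_job(job: dict):
    """fine_tuned_model stays null until the job finishes."""
    return job.get("fine_tuned_model")


def model_available(model_id: str, models_response: dict) -> bool:
    """Check the models listing for the fine-tuned model ID."""
    return any(m.get("id") == model_id for m in models_response.get("data", []))


# Hypothetical example payloads, shaped like the API responses:
job = {
    "id": "ftjob-abc123",
    "status": "succeeded",
    "fine_tuned_model": "ft:gpt-3.5-turbo-0613:my-org::abc123",
}
models = {"data": [{"id": "ft:gpt-3.5-turbo-0613:my-org::abc123"}]}

assert is_job_done(job)
model_id = model_from_job(job)
assert model_available(model_id, models)
```

If `is_job_done` is false, keep polling the job; if the ID never shows up in the models listing after success, that's when it's worth opening a support ticket.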
Also facing that issue. I cannot fine-tune my fine-tuned GPT-3.5 model a second time.
How can I vote for that? Where should I create an issue?
They’ve said they’re working on it and it might be available this Fall…
Are there any links we can use to follow the progress?
For my use case I was splitting the fine-tuning into several smaller files so I could adjust based on the results of each session, and I planned to fine-tune further after some testing of the fine-tuned model. But it looks like I just need to go full send and fine-tune with the full dataset.
Would be nice to know more about the issue and where to track its progress!
Oh no pls don’t.
It is SO EASY to use Weights & Biases with fine-tuning. I would 100% recommend starting with a small chunk of your dataset first to get a rough idea of progress and the dataset size needed to accomplish your end goal.
One. Freaking. Line. Of. Code. ONE
And then you get pretty graphs as well.
I just read the new fine-tuning guide and saw this:

> Does the new fine-tuning endpoint still work with Weights & Biases for tracking metrics?
>
> No, we do not currently support this integration but are working to enable it in the near future. ref