{
"model": "ft:gpt-3.5-turbo:org-dUVLhaAQ37YCGwVC2QVY8sdB",
"messages": [
{
"role": "system",
"content": "You are an assistant that occasionally misspells words"
},
{
"role": "user",
"content": "Hello! What is fine-tuning?"
}
]
}
Response payload when making a request with that model:
{
"error": {
"message": "The model `ft:gpt-3.5-turbo:org-dUVLhaAQ37YCGwVC2QVY8sdB` does not exist",
"type": "invalid_request_error",
"param": null,
"code": "model_not_found"
}
}
When you submit a fine-tuning job it goes into a queue to be processed. When you use the API to retrieve a fine-tuning job (with the ftjob-* ID), look for status: succeeded to know it is complete. Then use the value in fine_tuned_model as your model for future requests.
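A minimal sketch of that polling loop, using only the Python standard library against the REST endpoint (the ftjob ID and the OPENAI_API_KEY environment variable are placeholders you would supply yourself):

```python
# Poll a fine-tuning job by its ftjob-* ID until it reaches a terminal
# status, then return the fine_tuned_model name for later requests.
import json
import os
import time
import urllib.request

TERMINAL_STATUSES = {"succeeded", "failed", "cancelled"}


def is_finished(status: str) -> bool:
    # Jobs move through states like "validating_files" and "running"
    # before landing on one of the terminal statuses.
    return status in TERMINAL_STATUSES


def retrieve_job(job_id: str) -> dict:
    req = urllib.request.Request(
        f"https://api.openai.com/v1/fine_tuning/jobs/{job_id}",
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


def wait_for_model(job_id: str, poll_seconds: int = 30) -> str:
    while True:
        job = retrieve_job(job_id)
        if is_finished(job["status"]):
            if job["status"] != "succeeded":
                raise RuntimeError(f"job ended with status {job['status']}")
            # This is the value to pass as "model" in future chat requests.
            return job["fine_tuned_model"]
        time.sleep(poll_seconds)
```

Until the job succeeds, fine_tuned_model is null in the response, which is why requests made with a guessed model name fail with model_not_found.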
You cannot currently use a fine-tuned GPT-3.5 model as the base model for further fine-tuning.
Are there any links we can use to follow the progress?
For my use case I was splitting the fine-tuning into several smaller files so I could adjust based on the results of each session, and I planned to fine-tune further after some testing of the fine-tuned model, but it looks like I just need to go full send and fine-tune with the full dataset.
Would be nice to know more about the issue and where to track its progress!
It is SO EASY to use Weights and Biases with fine-tuning. I would 100% recommend using a small chunk of your dataset first to get a rough idea of progress and the dataset size needed to accomplish your end goal.
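One way to do that trial run: carve off the first few examples of your JSONL training file and fine-tune on those before committing the whole dataset. The helper below is a hypothetical sketch (the file paths are placeholders), assuming the usual one-example-per-line JSONL format:

```python
# Hypothetical helper: copy the first n non-blank lines of a JSONL
# training file into a smaller file for a cheap trial fine-tune.
def take_chunk(src_path: str, dst_path: str, n: int) -> int:
    """Return the number of examples actually written (may be < n)."""
    written = 0
    with open(src_path, encoding="utf-8") as src, \
         open(dst_path, "w", encoding="utf-8") as dst:
        for line in src:
            if written >= n:
                break
            if line.strip():  # skip blank lines between examples
                dst.write(line)
                written += 1
    return written
```

Upload the small file, fine-tune, evaluate, and only then spend on the full dataset.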