I am trying to use prompting and fine-tuning together on an existing Assistant. If the Assistant has been created via OpenAI's UI, and the JSONL files required to fine-tune a particular model have also been uploaded via the UI, how can I check whether the fine-tuned model is actually being used by this Assistant? Is it enough to set the Assistant to use the corresponding base model, i.e. gpt-3.5-turbo-1106?
On this occasion I have not created the Assistant programmatically but via the UI on the OpenAI platform, which is why I am asking.
Only fine-tuned models based on gpt-3.5-turbo-0125 are supported in Assistants.
The model name you wish to use will start with ft:, and of course you could have several fine-tunes with different behaviors, so you need to choose the correct one when you create or modify an Assistant.
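For the API route, here is a minimal sketch using the current Python SDK; the organization tag and fine-tune suffix are placeholders, so substitute the ft: name shown on your completed fine-tuning job:

```python
from openai import OpenAI  # openai>=1.x SDK

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Create an Assistant pointing at a supported (0125-based) fine-tune.
# The model name below is a placeholder; use your own ft: identifier.
assistant = client.beta.assistants.create(
    name="Fine-tuned assistant",
    instructions="Answer in the style learned during fine-tuning.",
    model="ft:gpt-3.5-turbo-0125:my-org::abc12345",
)
print(assistant.id, assistant.model)
```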
Attempt the same thing with a fine-tune based on gpt-3.5-turbo-1106 instead, and this is what you get:
```json
{
  "error": {
    "message": "The requested model 'ft:gpt-3.5-turbo-1106:orgorg::83852n3r' cannot be used with the Assistants API.",
    "type": "invalid_request_error",
    "param": "model",
    "code": "unsupported_model"
  }
}
```
The above will create a new assistant. What if we would like to edit an existing assistant ID (created via the UI) and change the model it is using to a specific fine-tuned model?
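Something along these lines is what I have in mind; a rough sketch with a placeholder assistant ID and fine-tune name:

```python
from openai import OpenAI

client = OpenAI()

# Point an existing assistant (e.g. one created in the UI) at a fine-tuned model.
# Both identifiers below are placeholders.
updated = client.beta.assistants.update(
    "asst_XXXXXXXXXXXXXXXXXXXX",
    model="ft:gpt-3.5-turbo-0125:my-org::abc12345",
)
print(updated.model)
```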
You can include the behavior of managing fine-tuned models in the Playground in the bug report that you send through your account's help -> messages; API -> feedback.
When one fine-tunes a model, say gpt-3.5-turbo-0125, by uploading the files from the UI, the corresponding fine-tuned model appears in the drop-down list when editing the assistant. Sorry for not confirming this earlier.
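For anyone who wants to verify this programmatically rather than in the UI, retrieving the assistant and inspecting its model field should work; a minimal sketch with a placeholder assistant ID:

```python
from openai import OpenAI

client = OpenAI()

# Fetch the assistant by its ID (placeholder below) and check which model it uses.
assistant = client.beta.assistants.retrieve("asst_XXXXXXXXXXXXXXXXXXXX")
print(assistant.model)  # e.g. "ft:gpt-3.5-turbo-0125:my-org::abc12345" if the fine-tune is selected
```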