I fine-tuned the gpt-3.5-turbo-1106 model with some example prompts. The job finished, and I can see the new model in the /models endpoint. If I try to create an assistant using /assistants with the custom model, I get an error.
{
"error": {
"message": "The requested model 'ft:gpt-3.5-turbo-0613:personal::_____' cannot be used with the Assistants API.",
"type": "invalid_request_error",
"param": "model",
"code": "unsupported_model"
}
}
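For anyone trying to reproduce this, here is a minimal sketch of the request body that triggers the error. The helper name and the `name`/`instructions` values are mine, and the fine-tune ID is a placeholder; substitute your own `ft:...` model string:

```python
import json

# Placeholder fine-tune ID; replace with your own "ft:..." model name.
FT_MODEL = "ft:gpt-3.5-turbo-1106:personal::XXXXX"

def build_assistant_payload(model: str) -> dict:
    """Build the JSON body for POST https://api.openai.com/v1/assistants."""
    return {
        "model": model,
        "name": "Fine-tuned assistant",
        "instructions": "You are a helpful assistant.",
    }

payload = build_assistant_payload(FT_MODEL)
print(json.dumps(payload, indent=2))
```

Sending this payload with a valid API key is what currently comes back with the 400 `unsupported_model` error shown above.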
I've moved this to the API Bugs section. Can you please post your calling code and any setup it relies on? 3.5 models are OK according to the documentation, so I'm wondering if it's a naming issue or some setup problem.
I am getting the same (or similar) error.
BadRequestError: Error code: 400 - {'error': {'message': "The requested model '…' cannot be used with the Assistants API.", 'type': 'invalid_request_error', 'param': 'model', 'code': 'unsupported_model'}}
I am using a Retool resource query, which is basically a REST call to the API.
I tried fine-tuning and using 0613 and 1106, but both produced the same error.
I'm having the same issue. I fine-tuned my model and used the correct name through the Assistants API, and I get the error: "HTTP error 400: The requested model 'ft:gpt-3.5-turbo-1106:personal::XXXXX' cannot be used with the Assistants API."
I followed the docs and simply tried setting up the Assistant, and got the same error. There's no issue with naming or setup; it seems to be a restriction in the API. Any solution to this yet?
As with others, I'm also unable to see my fine-tuned model in the Assistants playground GUI.
Seems like it can't… yet. But it is supposed to work.
I've resorted to using the Chat Completions endpoint for now. Waiting for the Assistants endpoint to work with fine-tuned models so I can switch over.
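For reference, a sketch of that interim workaround: call the fine-tuned model through the Chat Completions endpoint directly. The helper name is mine, the fine-tune ID is a placeholder, and the request is only built here, not sent:

```python
import json
import os
import urllib.request

API_URL = "https://api.openai.com/v1/chat/completions"
FT_MODEL = "ft:gpt-3.5-turbo-1106:personal::XXXXX"  # placeholder fine-tune ID

def build_chat_request(messages: list[dict], model: str = FT_MODEL) -> urllib.request.Request:
    """Prepare (but do not send) a Chat Completions request for a fine-tuned model."""
    body = json.dumps({"model": model, "messages": messages}).encode()
    return urllib.request.Request(
        API_URL,
        data=body,
        headers={
            "Content-Type": "application/json",
            # Expects OPENAI_API_KEY to be set in the environment.
            "Authorization": f"Bearer {os.environ.get('OPENAI_API_KEY', '')}",
        },
        method="POST",
    )

req = build_chat_request([{"role": "user", "content": "Hello"}])
print(req.full_url)
```

Pass the prepared request to `urllib.request.urlopen(req)` (or use your HTTP client of choice) to actually call the API.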
Last night I started a fine-tune with only a small amount of training data, and I am able to use the new fine-tuned model in the playground. I used gpt-3.5-turbo-1106 as the base model.
You are actually right: it's not listed in the Assistants playground; I can only use it through the chat interface. But to be honest, the Assistants API is useless with this pricing and these flaws, so I think I'll go back to Chat Completions and build my "own" assistant with summarization.
It's nuts to pay for the whole context window on every request, even when files are used (they don't work for me, and I have no idea why). I really see the whole presentation they gave as a marketing show: nothing announced works as it should, and it's much more expensive than before.
Thanks for sharing. Let's see how it plays out when they get it to work…
Threading was an exciting concept, but nothing we can't build ourselves…
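To that point, a minimal sketch of a self-managed "thread": keep the message list yourself and replay a window of it on every Chat Completions call. The class name and the windowing policy are my own assumptions, not anything from the Assistants API:

```python
class Thread:
    """Minimal stand-in for Assistants threads: an append-only message list."""

    def __init__(self, system_prompt: str):
        self.messages = [{"role": "system", "content": system_prompt}]

    def add_user(self, content: str) -> None:
        self.messages.append({"role": "user", "content": content})

    def add_assistant(self, content: str) -> None:
        self.messages.append({"role": "assistant", "content": content})

    def window(self, max_messages: int = 20) -> list[dict]:
        """Return the system prompt plus the most recent messages, so you
        don't pay for the whole history on every request."""
        return self.messages[:1] + self.messages[1:][-max_messages:]

t = Thread("You are a helpful assistant.")
t.add_user("Hello")
t.add_assistant("Hi! How can I help?")
print(len(t.window()))  # 3: system prompt + 2 messages
```

Pass `t.window()` as the `messages` array on each Chat Completions call; you could also swap the simple truncation in `window()` for summarization of older messages, as suggested above.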
I have the same issue. I'm not a dev; I learned how to fine-tune from scratch and got it done, but the model won't appear as an option in my Assistants. I need RAG for this implementation. How do we get visibility on this?