Cannot use fine-tuned model in assistant API

I fine-tuned the gpt-3.5-turbo-1106 model with some example prompts. The job finished, and I can see the new model in the /models endpoint. If I try to create an assistant using /assistants with the custom model, I get an error.

  "error": {
    "message": "The requested model 'ft:gpt-3.5-turbo-0613:personal::_____' cannot be used with the Assistants API.",
    "type": "invalid_request_error",
    "param": "model",
    "code": "unsupported_model"

Hi and welcome to the Developer Forum!

I’ve moved this to the API Bugs section. Can you please post your calling code and any setup it relies on? Fine-tuned 3.5 models are OK according to the documentation, so I’m wondering if it’s a naming issue or a setup problem.

I am getting the same (or similar) error.
```
BadRequestError: Error code: 400 - {'error': {'message': "The requested model '…' cannot be used with the Assistants API.", 'type': 'invalid_request_error', 'param': 'model', 'code': 'unsupported_model'}}
```

I am using retool resource query, which is basically a REST call to the API.
I tried fine-tuning and using 0613 and 1106, but both produced the same error.

I can’t find my fine-tuned model in the GUI either, neither in the Assistants GUI nor in the Playground under Assistants.


Same for me. I was even wondering whether fine-tuned models are supported yet. If not, any idea when they will be?

I’m having the same issue. I fine-tuned my model, used the correct name through the Assistants API, and got the error: “HTTP error 400: The requested model ‘ft:gpt-3.5-turbo-1106:personal::XXXXX’ cannot be used with the Assistants API.”

Anybody figured this out yet?


Followed the docs, simply tried setting up the Assistant, and got the same error. There’s no issue with naming or setup; it seems to be a restriction in the API. Any solution to this yet?

As with others, I’m also unable to see my fine-tuned model in the Assistants playground GUI.

Same problem here. Please let us know when they’re available for the assistants API


Wow. I am currently preparing data for my fine-tuning and now this? Can someone confirm that a fine-tuned model can be used with the assistant API?

Seems like it can’t… yet. But it is supposed to work.
I’ve resorted to using the Chat Completions endpoint for now. Waiting for the Assistants endpoint to work with fine-tuned models so I can switch over.
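In case it helps anyone, the fallback can be sketched roughly like this. It’s just a sketch under my own naming: the would-be assistant instructions go into a system message, and the model ID is a placeholder (fine-tuned models do work on `POST /v1/chat/completions`).

```python
import json

# Placeholder fine-tuned model ID.
FT_MODEL = "ft:gpt-3.5-turbo-1106:personal::abc123"

def build_chat_request(history, user_message,
                       instructions="You are a helpful assistant."):
    """Build a Chat Completions request body that mimics an Assistant:
    the assistant 'instructions' become the system message."""
    messages = [{"role": "system", "content": instructions}]
    messages += history
    messages.append({"role": "user", "content": user_message})
    return {"model": FT_MODEL, "messages": messages}

req = build_chat_request([], "Hello!")
print(json.dumps(req, indent=2))
```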

Last night I decided to start a fine-tune with only a small amount of training data, and I am able to use the new fine-tuned model in the Playground. I used gpt-3.5-turbo-1106 as the base model.

I’ve tried that but my fine-tuned model doesn’t show up on the list of models… I’m assuming yours was listed…?

You are actually right. No, it’s not listed in the Assistants playground; I can only use it through the chat interface. But to be honest, the Assistant is useless with this pricing and these flaws, so I think I will go back to Chat Completions and build my “own” assistant with summarization.

It’s nuts to pay for the whole context window on every request, even more so when files are used (they don’t work for me and I have no idea why). I really see the whole presentation they gave as a marketing show: nothing announced works as it should, and it’s much more expensive than before.

See my bug report, which explains my findings in more detail:

Thanks for sharing. Let’s see how it plays out when they get it to work…
Threading was an exciting concept, but nothing we can’t build ourselves…
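A homegrown “thread” with rolling summarization can be sketched along these lines. This is pure bookkeeping with my own naming, and the actual summarization call is stubbed out; in practice `summarize` would be a cheap Chat Completions call asking the model to condense the older turns.

```python
class Thread:
    """Minimal stand-in for an Assistants-API thread: keeps the last
    `keep_recent` messages verbatim and folds older ones into a summary,
    so you don't pay for the whole context window on every request."""

    def __init__(self, summarize, keep_recent=6):
        self.summarize = summarize      # callable(list[dict]) -> str
        self.keep_recent = keep_recent
        self.summary = ""
        self.messages = []

    def add(self, role, content):
        self.messages.append({"role": role, "content": content})
        if len(self.messages) > self.keep_recent:
            old = self.messages[:-self.keep_recent]
            self.messages = self.messages[-self.keep_recent:]
            # Fold the previous summary plus the overflow into a new summary.
            self.summary = self.summarize(
                [{"role": "system", "content": self.summary}] + old)

    def context(self):
        """Messages to send on the next request: summary + recent turns."""
        ctx = []
        if self.summary:
            ctx.append({"role": "system",
                        "content": "Conversation so far: " + self.summary})
        return ctx + self.messages

# Stub summarizer for illustration only.
t = Thread(summarize=lambda msgs: f"{len(msgs)} earlier messages condensed",
           keep_recent=2)
for i in range(4):
    t.add("user", f"message {i}")
print(t.context())
```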

I do not see the advantage over the normal chat interface. Right now nothing works there. What am I missing here?

Hello OpenAI, can you provide any help with this? The docs say that fine-tuned models are supported.


Me neither! It used to show up in the list; now it’s just gone.

Same problem here; it seems the API is not up to date with the documentation…


I have the same issue. I’m not a dev; I learned how to fine-tune from scratch and got it done, but the model won’t appear as an option in my Assistants. I need RAG for this implementation. How do we get visibility on this?
