Cannot use fine-tuned model in assistant API

Looking at another thread, I saw info from OpenAI that you can’t use a fine-tuned model with a Retrieval-enabled Assistant (i.e., you can’t combine fine-tuning with RAG / Files).

That’s not the issue here: the assistant I’m trying to create doesn’t use any tools.


This is obviously still a problem. The fine-tuned model doesn’t show up anywhere. They have my money from training, but the trained model isn’t available anywhere.


Sorry, I should have tested before I posted. Fine-tuned models aren’t usable in Assistants at all.

@Foxalabs can we get an update on this ongoing issue where we are unable to see or use fine-tuned models in assistants?

Hi and welcome to the Developer Forum!

Sure, I’ll add it to the pain points, though there might not be much activity over the holiday period.


I know what you mean but I am wondering if they do.

Hi, I am getting the same error. We have a TypeScript fluent library built for combining data-driven assistants with mathematical simulation and optimization, and we’re very keen to be able to turn fine-tuning on. I’m finding that the Assistants API with GPT-4 pushes costs for my customers past the point of viability for most applications, but also that 3.5 Turbo has a high error rate with a practical level of structured arguments.

How can I track the resolution of, or activity on, these “pain points”? I’m new to the community, and after searching I can’t find any references.

It’s not a tracked service; it’s something we do as forum volunteers to make the most effective use of the bandwidth we have with OpenAI, to give the members of this forum the best possible experience.

I’ll make sure to update this post should there be any new information.


I am currently working on a project where I am using assistants to reply to different people. The replies I have gotten could be further improved with fine-tuning. I have a deadline at the end of January, so I hope to seek your understanding to quickly resolve this issue so I can carry on with the project. Thanks.

Hi @antiscambot2, not sure if this will help you, but while I wait for a resolution I ended up injecting my fine-tuning JSON directly into a subheading in the instructions. It has improved performance enough that 3.5 Turbo suffices for most applications. Not an ideal solution, but it got me past my own deadlines.
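For anyone wanting to try the same workaround, here is a minimal sketch of what I mean. It assumes your fine-tuning data is in the standard chat-format JSONL (one `{"messages": [...]}` object per line) and folds a few examples into the instructions string as few-shot guidance; the function name and subheading text are my own, not anything official.

```python
import json


def examples_to_instructions(jsonl_lines, base_instructions, max_examples=10):
    """Fold fine-tuning chat examples into assistant instructions as few-shot guidance.

    jsonl_lines: iterable of JSON strings, each {"messages": [{"role": ..., "content": ...}, ...]}
    """
    sections = [base_instructions, "", "## Example exchanges", ""]
    for line in list(jsonl_lines)[:max_examples]:
        for msg in json.loads(line)["messages"]:
            # Keep only the user/assistant turns; system prompts stay in base_instructions.
            if msg["role"] in ("user", "assistant"):
                sections.append(f"{msg['role'].capitalize()}: {msg['content']}")
        sections.append("")  # blank line between examples
    return "\n".join(sections)
```

You then pass the returned string as the `instructions` field when creating the assistant. It obviously doesn’t scale to large training sets, but a handful of representative examples went a long way for me.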

Fine-tuned models are not currently available through the assistants API, but might be in the future.

The cause of this confusion appears to have been a mistake in the documentation.

Following to stay updated. I also need a fine-tuned model that is able to read files, and the Assistants API simplifies this a lot. Would exploring the Chat Completions API work for now?
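It can work as a stopgap, since Chat Completions does accept fine-tuned models: you read the file yourself and inject its text into the context. A rough sketch of the request shape, where the `ft:...` model id and the file are placeholders for your own:

```python
def build_request(file_text: str, question: str) -> dict:
    """Build a Chat Completions request that injects file content into the context."""
    return {
        # Hypothetical fine-tuned model id; substitute your own from the fine-tuning dashboard.
        "model": "ft:gpt-3.5-turbo-1106:my-org::abc123",
        "messages": [
            {"role": "system",
             "content": "Answer using only the document below.\n\n" + file_text},
            {"role": "user", "content": question},
        ],
    }


# With the official `openai` Python package (v1 client), the call would look like:
# client = OpenAI()
# resp = client.chat.completions.create(**build_request(open("report.txt").read(),
#                                                       "Summarise the report"))
# print(resp.choices[0].message.content)
```

You lose the built-in chunking and retrieval the Assistants API gives you, so for anything beyond a small file you’d need to do your own chunking/embedding, but for modest documents this keeps the fine-tuned model in play.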

This would be a game-changing feature; fine-tuning and retrieval from files are highly complementary techniques.