Fine tuning: what is it good for?

I’m watching the AI Engineering Summit Day 2 talks and just watched one about fine tuning. There are a lot of great opinions here about when you should use fine tuning to train things like a binary classifier, but I wanted to add some color about when you’d fine tune an LLM for reasoning… It’s when you don’t like the answers you’re getting from the model.

If you can use RAG to show a model the information you want to ask questions about, and you like the answers you get back, you don’t need fine tuning. If, however, there’s nothing you can show the model that will result in the answer you want, it needs to be tuned.


As you might immediately guess, there are very few cases where you would need fine tuning, but they do exist. They’re typically going to be around complex reasoning. Given this argument between two people, who is right? That’s very much open to opinion, and you may not agree with the model’s answer. That’s a case where you can tune it to think more like you.


Yes. It’s also mentioned in the docs:

Common use cases

Some common use cases where fine-tuning can improve results:

  • Setting the style, tone, format, or other qualitative aspects
  • Improving reliability at producing a desired output
  • Correcting failures to follow complex prompts
  • Handling many edge cases in specific ways
  • Performing a new skill or task that’s hard to articulate in a prompt

One high-level way to think about these cases is when it’s easier to “show, not tell”.
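One way to make “show, not tell” concrete is to demonstrate the desired behavior with a handful of training examples rather than describe it in a prompt. A minimal sketch, using the common chat fine-tuning JSONL shape (`{"messages": [...]}`); the terse-support-assistant examples themselves are invented for illustration:

```python
import json

# Hypothetical "show, not tell" demonstrations: instead of telling the model
# "be terse", we show it terse answers. Examples are invented for illustration.
examples = [
    {
        "messages": [
            {"role": "system", "content": "You are a terse support assistant."},
            {"role": "user", "content": "How do I reset my password?"},
            {"role": "assistant", "content": "Settings > Security > Reset password."},
        ]
    },
    {
        "messages": [
            {"role": "system", "content": "You are a terse support assistant."},
            {"role": "user", "content": "Where can I download my invoices?"},
            {"role": "assistant", "content": "Billing > History > Download."},
        ]
    },
]

def to_jsonl(records):
    """Serialize training records to JSONL: one complete example per line."""
    return "\n".join(json.dumps(r) for r in records)

jsonl = to_jsonl(examples)
print(jsonl)
```

The resulting file is what you’d upload for a fine-tuning job; the point is that each line is a full demonstration of the tone you want, which is exactly the “qualitative aspects” case from the docs list.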


Those are all spot-on examples… it’s generally edge cases where you need fine tuning.


The more important thing is to be able to recognize when a task you’re asking the model to perform could benefit from fine tuning… For me it comes down to one rule: when you don’t like the answer you’re getting and nothing you do seems to fix it.

To be completely transparent I have yet to fine tune a model as it’s expensive and so far I’ve always been able to prompt engineer myself out of corners. I’m mainly stating what it would take to get me to resort to fine tuning a model.

I can definitely see cases where nothing you do in the prompt is going to result in the answer you want, so there are plenty of cases for fine tuning LLMs.

The post over here mentions using a hybrid RAG + fine-tune approach for customer service. The Medium article mentions that the fine-tune improves tone and reduces hallucinations.

Here is the notional graph for the overall perspective on where this hybrid fits in:

[notional graph image]

The exact details in the article are thin, but I can envision how to implement this, and that it should be less prone to hallucinations and have the correct tone.

The fine-tuning is soaking up the patterns, while RAG is supplying the knowledge. The combo of the two, I would expect, is potent.
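The division of labor above can be sketched in a few lines. This is a minimal, assumption-laden sketch: the knowledge base is invented, the word-overlap retriever is a stand-in for real embedding search, and the fine-tuned model call is stubbed out in a comment.

```python
# Hybrid sketch: RAG supplies the knowledge (retrieved context), while the
# fine-tuned model (stubbed below) supplies the tone and answer patterns.
# Knowledge base, scoring, and model name are illustrative assumptions.

KNOWLEDGE_BASE = [
    "Refunds are processed within 5 business days of approval.",
    "Premium subscribers can export reports as CSV or PDF.",
    "Two-factor authentication can be enabled under Settings > Security.",
]

def retrieve(question: str, docs: list[str], k: int = 1) -> list[str]:
    """Rank docs by naive word overlap with the question (stand-in for embeddings)."""
    q_words = set(question.lower().split())
    scored = sorted(docs, key=lambda d: len(q_words & set(d.lower().split())), reverse=True)
    return scored[:k]

def build_prompt(question: str) -> str:
    """Ground the (hypothetically fine-tuned) model in the retrieved facts."""
    context = "\n".join(retrieve(question, KNOWLEDGE_BASE))
    return (
        "Answer using only the context below.\n"
        f"Context:\n{context}\n"
        f"Question: {question}"
    )

prompt = build_prompt("How long do refunds take?")
print(prompt)
# In production, this prompt would go to the fine-tuned model, e.g. something like:
# client.chat.completions.create(model="ft:<your-fine-tuned-model>", messages=[...])
```

The design point is that neither half is doing the other’s job: retrieval keeps the facts current (reducing hallucination pressure), and the fine-tune only has to learn how to phrase grounded answers.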