Why do some questions not match the preset answers after fine-tuning a large model?

I'm trying to build a customer service system that connects to chat software, but I'm stuck at the first step. I fine-tuned a large model on my dataset, and after training I tested some questions. Some of them match the preset answers, while others come back incorrect or even the opposite of what I expect; the model answers freely and goes off in the wrong direction. Is this normal, or is my approach itself wrong? @N2U

Hi there!

Fine-tuning is not intended for Q&A use cases. Instead, you should be looking at RAG (retrieval-augmented generation). Just the other day I compiled a post for another user in a very similar situation, complete with links to relevant resources, which you may find useful:


Thanks!
Hello, I quickly reviewed the content you linked and I'm not sure I understood it correctly. I think what you're saying is that fine-tuning teaches a large model specific domain knowledge and expression style from a dataset (emotion, word choice, tone, and so on), and that this is its main purpose. RAG, on the other hand, combines a large model with external components, such as a large model plus a knowledge base. In our case that would mean vectorized data storage + semantic retrieval + GPT polishing?
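If I've understood RAG correctly, the retrieval half could be sketched like this. This is a toy sketch: the word-overlap "embedding" below is just a stand-in for a real embedding model (e.g. an embeddings API), so the example runs without any API key, and the knowledge-base questions are made up:

```python
import math

# Toy stand-in for a real embedding model: count word occurrences.
# A real RAG system would call an embeddings API here instead.
def embed(text: str) -> dict[str, int]:
    vec: dict[str, int] = {}
    for word in text.lower().split():
        vec[word] = vec.get(word, 0) + 1
    return vec

def cosine(a: dict[str, int], b: dict[str, int]) -> float:
    dot = sum(a[w] * b.get(w, 0) for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# "Vectorized data storage": preset Q&A pairs plus their vectors.
kb = {
    "how do i reset my password": "Click 'Forgot password' on the login page.",
    "what are your business hours": "We are open 9am-6pm, Monday to Friday.",
}
kb_vectors = {q: embed(q) for q in kb}

# "Semantic retrieval": find the preset question closest to the user's.
def retrieve(user_question: str) -> tuple[str, float]:
    q_vec = embed(user_question)
    best_q = max(kb_vectors, key=lambda q: cosine(q_vec, kb_vectors[q]))
    return kb[best_q], cosine(q_vec, kb_vectors[best_q])

answer, score = retrieve("how can i reset my password")
```

The "GPT polishing" step would then rewrite the retrieved preset answer in the desired tone, rather than asking the model to invent the answer itself.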

So my current plan is: retrieve content from the knowledge base for the user's question. If the question matches an answer, return that answer, and let the user choose whether to have GPT polish the output. If no answer matches, fall back to the fine-tuned model to answer.
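That routing plan could be sketched as follows. The three helper functions are hypothetical stubs (a real system would query a vector store, call the fine-tuned model, and call GPT respectively), and the threshold value is an assumption to be tuned on real user questions:

```python
SIMILARITY_THRESHOLD = 0.8  # assumption: tune on a held-out set of real questions

def retrieve_from_kb(question: str) -> tuple[str, float]:
    """Hypothetical stand-in for vector-store retrieval: returns the
    best preset answer and a similarity score in [0, 1]."""
    if "password" in question.lower():
        return "Click 'Forgot password' on the login page.", 0.92
    return "", 0.1

def finetuned_model_reply(question: str) -> str:
    """Hypothetical stand-in for a call to the fine-tuned chat model."""
    return f"[fine-tuned model reply to: {question}]"

def polish_with_gpt(preset_answer: str) -> str:
    """Hypothetical stand-in for rephrasing a preset answer with GPT."""
    return preset_answer  # a real system would rewrite this in the desired tone

def answer(question: str, polish: bool = False) -> str:
    # Route: knowledge base first, fine-tuned model only as fallback.
    preset, score = retrieve_from_kb(question)
    if score >= SIMILARITY_THRESHOLD:
        return polish_with_gpt(preset) if polish else preset
    return finetuned_model_reply(question)
```

The design choice here is that the preset answer is always preferred when the match is confident enough, which is what keeps answers from drifting in the "wrong direction".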

In other words, fine-tuning gives the large model a particular speaking style, a "voice". If that's the case, I need to set a system prompt to help the fine-tuned base model understand the purpose of the dataset, which makes it easier for the base model to apply it.

So the focus of RAG is on setting the matching threshold, while the focus of fine-tuning is on formatting the dialogue examples and setting the temperature.
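For reference, chat-style fine-tuning training data is usually one JSON object per line (JSONL), where a system message carries the persona and tone. The company name and messages below are made up for illustration:

```python
import json

# One training example in chat fine-tuning JSONL format. The system
# message teaches the persona/tone, which is what fine-tuning captures well.
example = {
    "messages": [
        {"role": "system",
         "content": "You are a friendly customer-service agent for Acme Corp."},
        {"role": "user",
         "content": "My order hasn't arrived yet."},
        {"role": "assistant",
         "content": "I'm so sorry about the delay! Let me check that for you right away."},
    ]
}

# Each training example becomes one line in the .jsonl upload file.
line = json.dumps(example)
```

Temperature, by contrast, is not part of the training data at all; it is set at inference time on each completion request.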

Please correct me if I'm wrong. I'm a beginner and I may have misunderstood fine-tuning before: I always believed that as long as I trained it on Q&A pairs, it would return the corresponding answer when I input a question, with temperature controlling the variation.

Hi there!

I am not an expert, but after reading the description of your problem, I think the OpenAI Assistants API might fit your requirements better. From what I know, a fine-tuned large model fits better when you want new results in the style of the dataset you trained it on, but when you want question-and-answer matching, an Assistant fits better.

I hope this answer helps.

Regards