Good day to all! I have tasks currently solved via RAG, and I am considering fine-tuning. We have compiled a database of slightly more than 50 question-answer pairs from our own data.
When this set is plugged into the RAG system as the knowledge base, the quality of the system's responses degrades. A database organized by topic produces higher-quality results and is more flexible. This raises several questions:
- Does this mean that fine-tuning on this Q&A set could fail in the same way?
- Does an LLM fine-tuned on our own data process queries faster, or can fine-tuning only improve context understanding?
- What steps could help reduce query processing time?
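For context, here is a minimal sketch of the kind of retrieval-over-Q&A-pairs setup described above. The `QA_PAIRS` content is purely hypothetical (the post does not show the real data), and a toy bag-of-words cosine similarity stands in for whatever embedding model the real RAG system uses:

```python
import math
from collections import Counter

# Hypothetical toy Q&A base standing in for the ~50-pair database.
QA_PAIRS = [
    ("How do I reset my password?", "Use the 'Forgot password' link on the login page."),
    ("What are the support hours?", "Support is available 9:00-18:00 on weekdays."),
    ("How do I export a report?", "Open the report and choose Export -> CSV."),
]

def bow(text):
    """Bag-of-words vector (token counts) over lowercased whitespace tokens."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    num = sum(a[t] * b[t] for t in set(a) & set(b))
    den = (math.sqrt(sum(v * v for v in a.values()))
           * math.sqrt(sum(v * v for v in b.values())))
    return num / den if den else 0.0

def retrieve(query, k=1):
    """Return the k stored Q&A pairs whose question is most similar to the query."""
    q = bow(query)
    ranked = sorted(QA_PAIRS, key=lambda p: cosine(q, bow(p[0])), reverse=True)
    return ranked[:k]

best_q, best_a = retrieve("how to reset password")[0]
```

With a base this small, retrieval quality depends heavily on how closely user queries match the stored question wording, which may partly explain why a topic-organized database performs better: topic documents cover more phrasings per retrieved chunk.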