Fine-tuning a model so it always answers from a training file

What is the best way to achieve this? I simply need a model that I can train with a file of, say, 500-1000 questions, for which ChatGPT will always return an answer from the training file. So far I've tried with 50 questions, but each time I asked, the model never used the data from the file I submitted for fine-tuning, not even when I asked a question from the training file (a rather specific one) word for word.

The model is the standard GPT-3.5.


Hi John and welcome to the Forum!

Fine-tuning, in the narrow sense in which the term is used for OpenAI's models, is not intended for knowledge injection. To achieve your goal, you should be looking at embeddings. I'm sharing a link to OpenAI's resources that will help you get started:

https://platform.openai.com/docs/guides/embeddings
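For illustration, here is a minimal sketch of the embeddings approach, assuming the OpenAI Python SDK (v1.x), numpy, and a hypothetical `qa_pairs` list standing in for your Q/A file. The idea: embed your stored questions once, then answer each query with the stored answer whose question is most similar.

```python
# Minimal embedding-based retrieval sketch (not fine-tuning).
# Assumes the openai Python SDK (v1.x) and numpy; qa_pairs is a
# hypothetical stand-in for your 500-1000 Q/A pairs.
import numpy as np
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

qa_pairs = [
    {"q": "How do I reset my password?", "a": "Go to Settings > Account > Reset."},
    {"q": "What are your support hours?", "a": "Mon-Fri, 9am-5pm CET."},
    # ... load the rest from your file
]

def embed(texts):
    """Embed a list of strings; returns one vector per string."""
    resp = client.embeddings.create(model="text-embedding-ada-002", input=texts)
    return np.array([d.embedding for d in resp.data])

# Embed all stored questions once, up front (cache these in practice).
question_vecs = embed([p["q"] for p in qa_pairs])

def answer(user_question):
    """Return the stored answer whose question is most similar to the query."""
    qv = embed([user_question])[0]
    # OpenAI embeddings are unit-length, so a dot product is cosine similarity.
    sims = question_vecs @ qv
    return qa_pairs[int(np.argmax(sims))]["a"]

print(answer("How can I reset my password?"))
```

In practice you would also set a similarity threshold and fall back to a canned "I don't know" reply when no stored question is close enough, which guarantees every answer comes from your file.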

I hope this helps!


Hi

Okay, thank you, this helps.

However, from what I see, you cannot use the simple approach of just replacing the model with a fine-tuned model on the front end, as you can with the fine-tuning API?

This kind of thing seems to work OK too (below), but it would only make sense if you can compress your topics into keywords. Or you might be able to use full sentences, if the token cost isn't too high.

[SYSTEM PROMPT]
You will consider the following keywords list and always respond with only a single number. Your response will be the number of the set of words that is most closely related to the prompt. Reply simply by printing that number.

1: birds, cats, dogs
2: bicycles, cars, people

====
Q: What kind of pet should I buy?
A: 1

Q: What do people drive to work?
A: 2

WARNING: I haven't tested whether performance degrades for LARGE system prompts, so do your own testing at scale, with throwaway data or something, maybe.
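To make that concrete, here is one way you might wire that system prompt into a chat completions call. This is a sketch assuming the OpenAI Python SDK (v1.x); the model name and keyword sets are just the ones from the example above.

```python
# Sketch: using the "which number" classifier prompt via the chat API.
# Assumes the openai Python SDK (v1.x); keyword sets are placeholders.
from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = """You will consider the following keywords list and always \
respond with only a single number. Your response will be the number of the \
set of words that is most closely related to the prompt. Reply simply by \
printing that number.

1: birds, cats, dogs
2: bicycles, cars, people"""

def classify(question):
    """Ask the model which numbered keyword set best matches the question."""
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        temperature=0,  # keep the classification output stable
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": question},
        ],
    )
    return resp.choices[0].message.content.strip()

print(classify("What kind of pet should I buy?"))   # expected: "1"
print(classify("What do people drive to work?"))    # expected: "2"
```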

EDIT:

I just realized this approach is an algorithm that can be applied recursively. Imagine a massive document with a massive table of contents. You could play this "which number" game at the top level of the ToC, then apply it again and again until you reach the correct "page" of content, and then execute the prompt with the ENTIRE page of data to get a precise answer. This is sort of a way of doing what I'd call "Synthetic RAG", or "RAG Tree"... because it needs a cool name. :)
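A rough sketch of that recursive descent, assuming a hypothetical nested ToC structure where inner nodes have children and leaves hold the full page text (the node fields and model name are illustrative, not a fixed API):

```python
# Sketch: recursive "which number" descent over a nested table of contents.
# The toc structure is hypothetical: inner nodes have "children",
# leaves carry the full page "text". Assumes the openai Python SDK (v1.x).
from openai import OpenAI

client = OpenAI()

def pick_branch(question, titles):
    """Ask the model which numbered title best matches the question."""
    options = "\n".join(f"{i + 1}: {t}" for i, t in enumerate(titles))
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        temperature=0,
        messages=[
            {"role": "system",
             "content": "Respond with only the number of the option most "
                        f"closely related to the prompt.\n\n{options}"},
            {"role": "user", "content": question},
        ],
    )
    # A sketch: assumes the model obeys and replies with a bare number.
    return int(resp.choices[0].message.content.strip())

def descend(question, node):
    """Play the 'which number' game level by level until a leaf page."""
    while "children" in node:
        titles = [c["title"] for c in node["children"]]
        node = node["children"][pick_branch(question, titles) - 1]
    return node["text"]  # then prompt the model with this ENTIRE page

toc = {
    "title": "Handbook",
    "children": [
        {"title": "Pets", "text": "Everything about birds, cats and dogs..."},
        {"title": "Transport", "text": "Bicycles, cars, commuting..."},
    ],
}

print(descend("What kind of pet should I buy?", toc))
```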
