Can we use a fine-tuned model to answer questions? If I have uploaded a jsonl file with the purpose of “fine-tune”, is it possible to use that same file to generate answers? Or do I have to make another file, with essentially the same data, and upload that?
So the idea is that I make an “answers” request, but I choose a fine-tuned model I created previously? I guess that could make sense. I created a fine-tuned model where the prompts were the titles of articles in a knowledge base, and the completions were the text from the articles themselves. I’m guessing that would not be a useful fine-tune, and I should just follow the OpenAI API answers guide properly.
However, if I were going to use a fine-tune to try and answer questions about a knowledge base, what kind of completions would I put in it?
boris
Your idea makes sense. At the moment the answers endpoint doesn’t support fine-tuned models. However, you can easily recreate the behaviour on your end, using search for retrieval and a fine-tuned model for completions.
For example: openai-python/answers_with_ft.py at main · openai/openai-python · GitHub
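The two-step flow described above can be sketched roughly as follows. This is a minimal, hedged illustration, not the linked script: the retrieval step here is plain word overlap standing in for the search endpoint, the completion call is shown only as a comment, and the model name in that comment is a placeholder for whatever fine-tuned model you created.

```python
def rank_documents(question, documents):
    """Score documents by word overlap with the question.

    Stand-in for a real retrieval step (e.g. the search endpoint or
    embedding similarity); returns matching docs, best first.
    """
    q_words = set(question.lower().split())
    scored = [(len(q_words & set(doc.lower().split())), doc) for doc in documents]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [doc for score, doc in scored if score > 0]


def build_prompt(question, context):
    """Assemble the retrieved context and question into a completion prompt."""
    return f"{context}\n\nQ: {question}\nA:"


docs = [
    "How to reset your password: open Settings and choose Reset.",
    "Billing FAQ: invoices are sent monthly by email.",
]
question = "How do I reset my password?"

top_doc = rank_documents(question, docs)[0]
prompt = build_prompt(question, top_doc)

# Step 2 would send the prompt to your fine-tuned model, e.g.:
# openai.Completion.create(model="curie:ft-your-org-...", prompt=prompt, stop=["\n"])
```

The key point is that retrieval and completion are decoupled: any search mechanism can pick the context, and the fine-tuned model only has to turn context plus question into an answer.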