Prompt Assistance for Custom Knowledge Base

I would like to build a custom knowledge base that does not offer answers from any source other than the examples I program. Ideally I would do this via fine-tuning rather than embeddings, since that is a much simpler process.

As a first approximation I am trying to test the response of the various models.

davinci-003 works perfectly with the prompt below, but the base models (Davinci, Curie, Babbage, and Ada) return all sorts of irrelevant and/or redundant information.

I would prefer to use davinci-003, but of course fine-tuning is not available for that model. Is there a way to revise the prompt so that the base models can handle this? Or is the whole point of fine-tuning that only davinci-003 can learn effectively from context, so the other models might work better once I proceed to fine-tuning?

Using only the Answer Bank below respond to the question. Do not use any other information sources. If you get an answer from the Answer Bank then stop after giving that answer. If you get no answer from the Answer Bank then say "I do not know."

Answer Bank:

Right wrist pain is caused by writing.

Left wrist pain is caused by working.


My right wrist hurts. What is the cause?
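For reference, a prompt like the one above can be assembled programmatically before it is sent to the model. The sketch below is my own illustration (the `build_prompt` helper and the `ANSWER_BANK` list are invented names, not part of the original post); the resulting string would then be passed to the Completions API as the prompt.

```python
# Hypothetical helper that assembles the restricted-knowledge prompt
# shown above from an answer bank and a user question.
ANSWER_BANK = [
    "Right wrist pain is caused by writing.",
    "Left wrist pain is caused by working.",
]

def build_prompt(question: str, answer_bank: list[str]) -> str:
    """Combine the instructions, the answer bank, and the question."""
    instructions = (
        "Using only the Answer Bank below respond to the question. "
        "Do not use any other information sources. If you get an answer "
        "from the Answer Bank then stop after giving that answer. If you "
        'get no answer from the Answer Bank then say "I do not know."'
    )
    bank = "\n".join(answer_bank)
    return f"{instructions}\n\nAnswer Bank:\n{bank}\n\n{question}"

prompt = build_prompt("My right wrist hurts. What is the cause?", ANSWER_BANK)
print(prompt)
```

Keeping the instructions, the answer bank, and the question in one string makes it easy to swap answer banks while testing the different models.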

This is working in the playground (or in code) because you are including the context with the question.

You are not including specific contexts when you fine-tune.

Your only choice is to include the knowledge with the question. If you have a lot of knowledge, you need to use semantic search to find some text that can be used to provide the answer and then send it to GPT with the question.
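The semantic-search step can be sketched with cosine similarity over embedding vectors. The following is a self-contained illustration using made-up toy vectors in place of real Embeddings API output (in practice each text would be embedded with an embeddings model, and the question vector would come from the same model):

```python
import math

# Toy embedding vectors standing in for real Embeddings API output.
DOCS = {
    "Right wrist pain is caused by writing.": [0.9, 0.1, 0.0],
    "Left wrist pain is caused by working.":  [0.1, 0.9, 0.0],
}

def cosine_similarity(a, b):
    """Standard cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

def most_relevant(question_vec, docs):
    """Return the answer-bank entry whose embedding is closest to the question."""
    return max(docs, key=lambda text: cosine_similarity(question_vec, docs[text]))

# Pretend this is the embedding of "My right wrist hurts. What is the cause?"
question_vec = [0.85, 0.15, 0.05]
context = most_relevant(question_vec, DOCS)
print(context)
```

The retrieved text is then included with the question in the prompt, exactly as in the playground example above, so the model answers from that context rather than from its own training data.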


I think you should use the Embeddings API. This can’t be done with satisfactory results using fine-tuning. Here is a very similar problem solved using embeddings: Question answering using embeddings-based search | OpenAI Cookbook