Hi, Ma.
Recently, I had the same goal as you.
The first question:
“The new model trained with gpt3’s davinci model loses the original functionality of davinci”
davinci and text-davinci-003 are two different models.
How do davinci and text-davinci-003 differ? | OpenAI Help Center
We cannot currently fine-tune on text-davinci-003.
Can I fine-tune on text-davinci-003? | OpenAI Help Center
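So a fine-tune always targets the base davinci model. As a minimal sketch with the same older openai Python client used below (the JSONL filename is just a placeholder):

import openai

# training_data.jsonl holds one {"prompt": ..., "completion": ...} object per line
upload = openai.File.create(file=open("training_data.jsonl", "rb"), purpose="fine-tune")
job = openai.FineTune.create(training_file=upload["id"], model="davinci")
print(job["id"])  # poll this job ID until the fine-tune finishes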
The other question:
“When the training finished, I got a new model; when I test the model the response is a mess”
You may add the parameter stop=["\n"], so the model stops at the first newline instead of generating until it hits max_tokens:
import openai

response = openai.Completion.create(
    model=model,              # name of your fine-tuned model, e.g. "davinci:ft-..."
    prompt=text,
    max_tokens=max_tokens,
    temperature=0,            # deterministic sampling
    top_p=1,
    frequency_penalty=0.2,
    presence_penalty=0,
    stop=["\n"]               # cut the completion off at the first newline
)
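One caveat: a stop sequence only helps if it matches how your training data was formatted. OpenAI's fine-tuning guide recommends ending every completion in the training file with the same fixed sequence (e.g. \n or ###) so the model learns where an answer ends.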
Finally, I recommend you read this:
Fine tuning vs. Embedding - General API discussion - OpenAI API Community Forum
It describes a workable approach.
…Basically, you embed all your facts about Mars. Then the incoming question comes in and you embed this. You correlate your incoming question with the entire set of embedded facts. Then based on the top correlations, you pull all the top facts from the database and form a prompt out of this (truncate to fit the limited size of the prompt window). Then you ask GPT-3 to answer the question based on all the top correlated facts in your prompt.
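Here is a minimal sketch of that idea, assuming the same older openai Python client as above; the text-embedding-ada-002 model, the example facts, and the prompt wording are all just illustrative choices:

import numpy as np
import openai

def embed(text):
    # One embedding vector per input string.
    resp = openai.Embedding.create(model="text-embedding-ada-002", input=text)
    return np.array(resp["data"][0]["embedding"])

# Embed all your facts once, up front.
facts = ["Mars is the fourth planet from the Sun.",
         "Mars has two moons, Phobos and Deimos."]
fact_vectors = [embed(f) for f in facts]

def answer(question, top_k=3):
    q = embed(question)
    # Cosine similarity between the question and every stored fact.
    sims = [float(np.dot(q, v) / (np.linalg.norm(q) * np.linalg.norm(v)))
            for v in fact_vectors]
    top = sorted(range(len(facts)), key=lambda i: sims[i], reverse=True)[:top_k]
    # In practice, truncate this context to fit the prompt window.
    context = "\n".join(facts[i] for i in top)
    prompt = f"Answer the question using only these facts:\n{context}\n\nQ: {question}\nA:"
    resp = openai.Completion.create(model="text-davinci-003", prompt=prompt,
                                    max_tokens=256, temperature=0)
    return resp["choices"][0]["text"].strip()

print(answer("How many moons does Mars have?"))

The nice part of this design is that the model answers from facts you supply at query time, instead of relying on what it memorized (or on a fine-tune).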