I have created a fine-tuned Davinci model in GPT-3 with 2 sample prompt/completion pairs:
Prompt:
When was Company X started?
Completion:
Company X was started in 1984
Prompt:
How old is Company X?
Completion:
Company X was started in 1984
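To show exactly what I uploaded, here is roughly how I built the training file (a sketch; the filename is just a placeholder, and the leading space in the completions follows the fine-tuning guide's recommendation):

```python
import json

# My two training examples, in the legacy fine-tuning JSONL format:
# one {"prompt": ..., "completion": ...} object per line.
examples = [
    {"prompt": "When was Company X started?", "completion": " Company X was started in 1984"},
    {"prompt": "How old is Company X?", "completion": " Company X was started in 1984"},
]

with open("company_x.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")
```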
When I run a completion on that model with the exact same prompts, I don't get the correct answers. I've tried different temperatures from 0 to 0.9, n = 1, different max token settings, stop sequences, and so on.
But I still don't get the right answer. I am sure I am using my own model in the completion. I get results like "Company X was founded in 2003 by a couple of brothers in Denmark." These completions have nothing to do with the real answer.
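Here is roughly how I call the model (a sketch using the legacy openai Python library; the fine-tuned model name below is a placeholder for my actual one):

```python
import openai

openai.api_key = "sk-..."  # my API key (redacted)

response = openai.Completion.create(
    model="davinci:ft-personal-2023-xx-xx",  # placeholder for my actual fine-tuned model
    prompt="When was Company X started?",
    temperature=0,    # I've tried values from 0 to 0.9
    max_tokens=20,    # and different token limits
    n=1,
    stop=["\n"],      # and different stop sequences
)

print(response["choices"][0]["text"])
```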
Will it not work with only two prompts in the fine-tuned model? The GPT-3 documentation says it works better with 100+ prompts, but surely they can't mean that I should have something like 100 prompts of the same question?
Shouldn’t I get the correct answer?