Does anyone have an idea how to limit Completion API answers to what a fine-tuned model knows?
I get the right answer for short prompts like “Santa Claus”:
Answer the question truthfully just using the model "davinci:ft-personal:tt-2023-03-23-22-12-10", and if the question can't be answered truthfully using the model "davinci:ft-personal:tt-2023-03-23-22-12-10", say "I don't know" Question: Santa Claus Answer: I don't know
But I still get a wrong answer for the full question “Who is Santa Claus?”:
Answer the question truthfully just using the model "davinci:ft-personal:tt-2023-03-23-22-12-10", and if the question can't be answered truthfully using the model "davinci:ft-personal:tt-2023-03-23-22-12-10", say "I don't know" Question: Who is Santa Claus? Answer: Santa Claus is a mythological character.
You’ll need to pass the fine_tuned_model name as the 'model' parameter; referring to it inside the prompt text has no effect. Set 'prompt' to the question with the same separator appended that you used in your fine-tuning training data, so the prompt matches the format the model was trained on.
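A minimal sketch of what that looks like with the legacy Completions API. The separator string, `max_tokens`, and the `stop` sequence here are assumptions — substitute the separator and stop token you actually used when preparing your fine-tuning data:

```python
SEPARATOR = "\n\n###\n\n"  # assumption: the separator used in your training prompts


def build_prompt(question: str) -> str:
    """Append the fine-tuning separator so the prompt matches the training format."""
    return question + SEPARATOR


def ask(question: str) -> str:
    import openai  # requires the openai package and OPENAI_API_KEY set in the environment

    response = openai.Completion.create(
        model="davinci:ft-personal:tt-2023-03-23-22-12-10",  # the fine_tuned_model name
        prompt=build_prompt(question),
        max_tokens=64,
        temperature=0,   # deterministic output for factual lookups
        stop=["\n"],     # assumption: completions in the training data ended with a newline
    )
    return response["choices"][0]["text"].strip()
```

Note that base davinci fine-tunes don't follow instructions written in the prompt; the model only learns the prompt→completion mapping from your training examples, so the "say I don't know" behavior has to come from training examples that answer that way, not from prompt wording.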