How to limit Completion API answers to a fine-tuned model?

Has anyone got an idea how to limit Completion API answers to just what's in a fine-tuned model?
I get the right answer with short prompts like “Santa Claus”:

Answer the question truthfully just using the model "davinci:ft-personal:tt-2023-03-23-22-12-10", and if the question can't be answered truthfully using the model "davinci:ft-personal:tt-2023-03-23-22-12-10", say "I don't know"
Question: Santa Claus
Answer: I don't know

But I still get a wrong answer for the full question: “Who is Santa Claus?”

Answer the question truthfully just using the model "davinci:ft-personal:tt-2023-03-23-22-12-10", and if the question can't be answered truthfully using the model "davinci:ft-personal:tt-2023-03-23-22-12-10", say "I don't know"
Question: Who is Santa Claus?
Answer: Santa Claus is a mythological character.

Hello @yuriyuag

Welcome to the community.

This is not at all how a fine-tuned model is consumed.

Please share the code that you are using to call the API, so that we can help you.

Thank you for the feedback! Here's my code:

// Wrap the user's question in an instruction prompt.
$instruction = "Answer the question truthfully just using the model " .
    "\"" . $params['model'] . "\", and if the question can't be answered truthfully using the model " .
    "\"" . $params['model'] . "\", say \"I don't know\"\n\nQuestion: " . $params['prompt'] . "\nAnswer:";

$api_params = [
    'model' => $params['model'],          // the fine-tuned model name
    'prompt' => $instruction,
    'max_tokens' => (int) $params['max_tokens'],
    'temperature' => (float) $params['temperature'],
    'top_p' => 0.25,                      // was 1
    'n' => (int) $params['n'],
    'echo' => true,                       // echo the prompt back in the response
    'stop' => ["\n"],
];

Log::channel('openai_completions')->info($log_id . ': ' . var_export($api_params, true));

$result = OpenAI::completions()->create($api_params);

@sps Can you help me configure the API call so the response comes only from the fine-tuned model?

You’ll need to use the fine_tuned_model name for 'model'. In your code, set 'prompt' to the question followed by the separator you used while fine-tuning.

You’ll find these docs useful.
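
For illustration, here's a minimal sketch of such a call, assuming your training file used the cookbook-style separator "\n\n###\n\n" after each prompt and an " END" suffix after each completion — both are assumptions, so adjust them to whatever your JSONL actually used:

$api_params = [
    'model'       => 'davinci:ft-personal:tt-2023-03-23-22-12-10',
    // The raw question plus the separator used during fine-tuning —
    // no wrapper instruction; the model was trained on this exact format.
    'prompt'      => $params['prompt'] . "\n\n###\n\n",
    'max_tokens'  => (int) $params['max_tokens'],
    'temperature' => 0,
    // Stop at the end token your training completions were suffixed with.
    'stop'        => [" END"],
];

$result = OpenAI::completions()->create($api_params);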

For example, my request params:

$api_params = [
    'model' => 'davinci:ft-personal:tt-2023-03-23-22-12-10',
    'prompt' => 'car driving',
    'max_tokens' => 400,
    'temperature' => 0,
    'n' => 1,
    'stop' => "\n",
];

response:

, and the like.

I don't have any prompts with the word “car” in my fine-tuning data.


Read this fine-tuning tutorial by @ruby_coder

But if you want truthful responses, embeddings are a better approach than fine-tuning, @yuriyuag. Roughly: embed each snippet of your source data once, embed the incoming question, and rank snippets by cosine similarity, as in the sketch below.
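
Here's that ranking step in plain PHP — cosine_similarity() is a hypothetical helper written for this sketch, not part of any SDK:

// Cosine similarity between two equal-length embedding vectors.
function cosine_similarity(array $a, array $b): float
{
    $dot = 0.0;
    $normA = 0.0;
    $normB = 0.0;
    foreach ($a as $i => $value) {
        $dot   += $value * $b[$i];
        $normA += $value * $value;
        $normB += $b[$i] * $b[$i];
    }
    return $dot / (sqrt($normA) * sqrt($normB));
}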

Can we invite you to our project to help us make it quicker?
Thanks

Hi @yuriyuag

Please send me a message with the details.

@sps Hi, I got the correct result with the Embeddings + Completions APIs. Do you have any idea how to use HTML tables? Maybe markdown?

Thank you very much!

Hey @yuriyuag, how did it work for you? Can we limit the response by passing instructions, or is there another way?
Could you please share?

It works for me. You can find the best-matching snippets with the Embeddings API and send them as context to the Completions API, for example:

Answer the question from context
Context: {$context}
Question: {$question}
Answer:
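
For anyone landing here later, a minimal end-to-end sketch of that flow using the openai-php/laravel client from earlier in the thread. The model names (text-embedding-ada-002, text-davinci-003), $snippets, and the cosine_similarity() helper defined above are assumptions and placeholders, not anything confirmed in this thread:

use OpenAI\Laravel\Facades\OpenAI;

// 1. Embed the incoming question.
$question = 'Who is Santa Claus?';
$response = OpenAI::embeddings()->create([
    'model' => 'text-embedding-ada-002',
    'input' => $question,
]);
$questionVector = $response->embeddings[0]->embedding;

// 2. Rank pre-embedded snippets by similarity to the question and keep
//    the best match. $snippets is a placeholder for your own storage:
//    [['text' => '...', 'vector' => [0.01, ...]], ...]
usort($snippets, fn (array $a, array $b) =>
    cosine_similarity($b['vector'], $questionVector)
    <=> cosine_similarity($a['vector'], $questionVector));
$context = $snippets[0]['text'];

// 3. Send the best match as context to the Completions API.
$prompt = "Answer the question from context\n"
    . "Context: {$context}\n"
    . "Question: {$question}\n"
    . "Answer:";

$result = OpenAI::completions()->create([
    'model'       => 'text-davinci-003',
    'prompt'      => $prompt,
    'max_tokens'  => 400,
    'temperature' => 0,
    'stop'        => ["\n"],
]);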