Fine Tune Model Response

I've created a fine-tuned model with these parameters:

"training_file": "file-kxaa2IXau4P1uA13bunOUJ9C",
"model": "davinci",
"suffix": "fine-tune-model6"

but when I try to access the model through the Completions endpoint, it gives me multiple occurrences of the completion in the output. How do I get a single occurrence of the completion in the output?

I have tried providing the JSONL file in the two formats below.

1. {"prompt": "does spago in beverly hills allow customers to make reservations", "completion": "AcceptReservations"}
   {"prompt": "what is the reservation policy for spago in beverly hills", "completion": "AcceptReservations"}

2. {"prompt": "does spago in beverly hills allow customers to make reservations \n\n", "completion": " AcceptReservations ###"}
   {"prompt": "what is the reservation policy for spago in beverly hills \n\n", "completion": " AcceptReservations ###"}

This is resolved, but I want to return "No answer found" if the prompt doesn't exist in the training dataset of the fine-tuned model. Right now it is returning some random string. How can I achieve that?

The output text is being generated this way because of the second prompt file. The prompts in the JSONL file have \n\n after the question, which is missing from the prompt you send in the request body, hence it appearing at the beginning of the answer.

Also, if you set the stop parameter in the request body to "###" ("stop": "###"), generation will stop once it encounters this token, giving only one output.
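As a rough sketch of both fixes together (the fine-tuned model name below is hypothetical), the request body can be built so the prompt always carries the training-time "\n\n" separator and generation stops at the "###" marker:

```python
# Sketch of a Completions request body matching the second JSONL format:
# the prompt ends with the same "\n\n" separator used in training, and
# "stop": "###" truncates generation at the end-of-completion marker.

def build_request(question: str) -> dict:
    return {
        "model": "davinci:ft-your-org:fine-tune-model6",  # hypothetical fine-tuned model name
        "prompt": question + " \n\n",   # append the separator used in the training prompts
        "stop": "###",                  # stop once the marker is generated
        "max_tokens": 10,
        "temperature": 0,               # deterministic output for classification-style tasks
    }

body = build_request("does spago in beverly hills allow customers to make reservations")
print(body["prompt"].endswith("\n\n"))  # True
```

Whatever separator and stop sequence you used in the training data must be reproduced exactly at inference time, which is why building the request through one helper is safer than hand-writing the prompt each call.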


Add that to the question prompt, i.e. "If the prompt is not found, return No answer found." You might have to structure it a bit differently than this, but in essence it should work.
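Another option is a client-side check (a sketch, not an API feature): since the completions in the training data come from a fixed label set such as AcceptReservations, you can validate the returned text against that set and substitute a default for anything else. The label set and helper names here are assumptions for illustration.

```python
# Client-side fallback sketch: validate the model's completion against the
# known label set from the training data and substitute a default otherwise.
# KNOWN_LABELS would hold every "completion" value used in the training JSONL.

KNOWN_LABELS = {"AcceptReservations"}  # extend with the rest of your labels

def normalize(completion_text: str) -> str:
    # Strip surrounding whitespace and the "###" marker in case stop was not applied.
    return completion_text.replace("###", "").strip()

def answer_or_default(completion_text: str, default: str = "No answer found") -> str:
    label = normalize(completion_text)
    return label if label in KNOWN_LABELS else default

print(answer_or_default(" AcceptReservations ###"))  # AcceptReservations
print(answer_or_default("some random string"))       # No answer found
```

This way the model can still ramble on an out-of-distribution question, but the caller never surfaces a random string.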

Thanks for the reply. Since that was resolved, I wanted to restrict the response to the training dataset. If the user-provided prompt doesn't exist, it should return "None" instead of some random string.

Something like this? It didn’t work.

Maybe a long shot below. Sometimes the docs are not perfect.

Maybe simplify the stop to a string versus an array?

stop: "###"

instead of an array, as in your screenshot:

stop: ["###"]


Anyway, according to the docs, an array of strings (up to four) should also work.
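For comparison, a minimal sketch of the two forms, assuming the string and single-element list behave identically:

```python
# The stop parameter accepts either a single string or a list of up to four
# strings; for a single "###" marker these two fragments are equivalent.

payload_string = {"stop": "###"}
payload_list = {"stop": ["###"]}

# The list form exists so you can stop on several markers at once, e.g.:
payload_multi = {"stop": ["###", "\n"]}

assert len(payload_multi["stop"]) <= 4  # the docs cap the list at four entries
```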

I was asking how to return a single string, e.g. "None", when the question asked is not available in the prompts the model was trained on.