Trying To Fine-Tune To Overcome Prompt Size Limit

I have a Q&A style prompt working really well with davinci-003.

I’m now trying to convert my prompt into a properly formatted JSONL file and fine-tune a model based on davinci (base davinci, since 003 is not available for fine-tuning).

But what worked beautifully as a prompt does not work at all with my fine-tuned model. The model was tuned on a file containing the same question/answer pairs, formatted as JSONL prompt/completion pairs.

I feel like I’m missing something really basic.

How do I take a working prompt and use it to fine-tune a model so I don’t have the 4000 token length limitation?

All help appreciated.

Note: I’ve read through many posts in this community that have suggested using embeddings and semantic search to come up with the text that’s relevant, and then using that as the prompt. But, feels more complicated than it should be.

How many examples were in your dataset?

Do you have an example of the dataset we can see?

@dharmesh How are you formatting the prompt and completion?

Is the A: part in the prompt or the completion? Do you have a space at the start of the completion? There are other factors at play too. Maybe post a sample of one or two rows in the JSONL format.

It should be something like this (Your captions may differ)

Prompt is "Q: {your question}\nA:"
Completion is " {your answer}\n"

Note the space at the start of the completion and the \n at the end.
Also, the A: is at the end of the prompt.

Then you can set the stop value to ["\n", "Q:", "A:"]
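Concretely, each line of the JSONL training file would look like the output of this small sketch (the question/answer text here is a made-up example; substitute your own pairs):

```python
import json

# Hypothetical Q&A pair -- substitute your own data.
question = "What is the capital of France?"
answer = "Paris"

record = {
    "prompt": "Q: " + question + "\nA:",  # "A:" ends the prompt, no trailing space
    "completion": " " + answer + "\n",    # leading space, trailing newline
}

# One line of the JSONL training file.
print(json.dumps(record))
```

Repeating that for every pair gives you one JSON object per line, which is exactly the JSONL shape the fine-tuning tool expects.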

When you call the API make sure you use the following format (with any prefix text you need)

prompt = "Q: {the question}\nA:"
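As a sketch of the request, mirroring the training format (the model id is a hypothetical placeholder for your own fine-tune's name; with the legacy openai 0.x Python SDK you would pass these parameters to openai.Completion.create):

```python
question = "What is the capital of France?"  # example question

# Request parameters for the fine-tuned model.
# Model id below is a hypothetical placeholder -- use your fine-tune's actual name.
request = {
    "model": "davinci:ft-your-org-2023-01-01",
    "prompt": "Q: " + question + "\nA:",  # same shape as the training prompts
    "stop": ["\n", "Q:", "A:"],           # cut generation off at the end of the answer
    "max_tokens": 150,
}

# With the legacy openai 0.x SDK: openai.Completion.create(**request)
print(request["prompt"])
```

The key point is that the prompt you send at inference time has exactly the same "Q: ...\nA:" shape as the prompts in the training file.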

That should help a bit. If it doesn’t, increase the number of rows in the file, or increase n_epochs if you only have a small set.

PS: Don’t use the curly brackets


Thanks for taking the time.

The “A:” is part of the completion. The “Q:” is part of the prompt. I do have the space at the start of the prompt, but did not have the \n at the end of the prompt. Will add that and test again.