I have a prompt similar to the following:

#You are an agent that does {x}
#Input: {y1}
#Output: {z1}
#Input: {y2}
#Output: {z2}
#Input: {y3}
#Output: {z3}
When converting this to a fine-tuned model via the examples, how do I retain the “You are an agent that does {x}” context? Or do fine-tuned models only go off the examples?
In my experience, I have only given the examples (no other prompt context) and fed them to the fine-tuning engine. If you decide, for whatever reason, to include extra prompting in training, you would likely have to keep that same prompting in place when you run the fine-tuned model. That, IMO, defeats the point of a fine-tune: you are now paying more per token for the fine-tuned model and haven't reduced the number of input tokens.
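To make the examples-only approach concrete, here is a minimal sketch of how those in-context pairs could be converted into the legacy prompt/completion JSONL format, dropping the "You are an agent that does {x}" preamble entirely. The `y1`/`z1` values, the `###` separator, and the `END` stop token are illustrative placeholders, not anything from the original prompt; the separator/stop-token convention follows OpenAI's old data-prep guidance for that format.

```python
import json

# Hypothetical input/output pairs lifted out of the few-shot prompt.
# The system-style preamble is simply omitted: the fine-tune is expected
# to absorb that behavior from the examples themselves.
examples = [
    ("y1", "z1"),
    ("y2", "z2"),
    ("y3", "z3"),
]

def to_jsonl(pairs):
    """Serialize (input, output) pairs as one JSONL line per example."""
    lines = []
    for inp, out in pairs:
        record = {
            "prompt": f"{inp}\n\n###\n\n",  # "###" marks end of prompt
            "completion": f" {out} END",     # leading space + stop token
        }
        lines.append(json.dumps(record))
    return "\n".join(lines)

jsonl = to_jsonl(examples)
print(jsonl.splitlines()[0])
```

At inference time you would then send only the bare input (plus the same separator), with `END` configured as the stop sequence.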
Right, it is a trade-off. Another trade-off is downgrading the base model if you have enough training data. For example, I had a case with 4,000 prompt/completion pairs, and for a basic categorization output I only needed Babbage. I might have gotten away with Ada, not sure. But that is a trade too, since these smaller models are much cheaper to run.
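The shape of that trade can be sketched with some back-of-envelope arithmetic. All the token counts and per-token rates below are placeholder numbers I made up for illustration, not real pricing; the point is only that a fine-tuned smaller model wins on both axes: fewer input tokens (no repeated preamble and examples) and a cheaper rate.

```python
# Placeholder figures -- substitute your provider's actual pricing.
FEW_SHOT_PROMPT_TOKENS = 600   # instructions + 3 in-context examples
BARE_PROMPT_TOKENS = 50        # fine-tuned model: just the new input
COMPLETION_TOKENS = 20

BIG_MODEL_RATE = 2.0e-5        # hypothetical $/token, large base model
SMALL_MODEL_RATE = 2.0e-6      # hypothetical $/token, small fine-tuned model

def cost_per_call(prompt_tokens, completion_tokens, rate):
    """Cost of one API call at a flat per-token rate."""
    return (prompt_tokens + completion_tokens) * rate

few_shot = cost_per_call(FEW_SHOT_PROMPT_TOKENS, COMPLETION_TOKENS, BIG_MODEL_RATE)
fine_tuned = cost_per_call(BARE_PROMPT_TOKENS, COMPLETION_TOKENS, SMALL_MODEL_RATE)

print(f"few-shot, big model:     ${few_shot:.6f}/call")
print(f"fine-tuned, small model: ${fine_tuned:.6f}/call")
```

Against that per-call saving you weigh the one-time training cost and the capability you give up by moving to the smaller model.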