In fine-tuning, how do I specify a "context" part and a "completion" part?

This was possible in the previous fine-tuning API.

My use case is sending some very long text to the model and having it answer with some short text. Currently, most of the fine-tuning signal goes into predicting tokens inside this long query rather than into predicting the short answer.

Is there a way to train only to predict the “completion” tokens?
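For context, when fine-tuning outside the hosted API (e.g. with your own training loop), this is commonly done by masking the context tokens in the labels so they contribute no loss. Below is a minimal sketch of that labeling step; the token ids are placeholders, and the `-100` value follows the common "ignore" convention used by loss functions such as PyTorch's `CrossEntropyLoss(ignore_index=-100)`:

```python
# Sketch: compute loss only on the "completion" tokens by masking
# the "context" tokens out of the labels. Token ids are illustrative.

IGNORE_INDEX = -100  # positions with this label are excluded from the loss

def build_labels(context_ids, completion_ids):
    """Return (input_ids, labels) where context positions are masked out.

    The model still sees the full sequence as input, but the loss is
    computed only where labels != IGNORE_INDEX, i.e. on the completion.
    """
    input_ids = list(context_ids) + list(completion_ids)
    labels = [IGNORE_INDEX] * len(context_ids) + list(completion_ids)
    return input_ids, labels

context = [101, 102, 103, 104]  # long query tokens (placeholder ids)
completion = [201, 202]         # short answer tokens (placeholder ids)

input_ids, labels = build_labels(context, completion)
print(input_ids)  # → [101, 102, 103, 104, 201, 202]
print(labels)     # → [-100, -100, -100, -100, 201, 202]
```

Whether the current hosted fine-tuning API exposes an equivalent masking option is exactly the question here; the sketch just shows what the old prompt/completion behavior corresponds to mechanically.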
