How to give common instructions to fine-tuning in addition to jsonl data?

While preparing my jsonl data for fine-tuning, I read the section below (repeated for your convenience) in an OpenAI tutorial. My fine-tuned model is not performing very well so far.
My question is: how do I implement this advice in my jsonl training data? Do I repeat the instructions verbatim in every prompt-completion pair? I guess there's a better way?
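To make the question concrete, here is a minimal sketch of what I mean by repeating an instruction on every pair. The instruction text, example pairs, and file name are just placeholders; the `\n\n###\n\n` separator and ` END` stop sequence follow the conventions from the fine-tuning data-preparation guide:

```python
import json

# Hypothetical common instruction I would otherwise have to repeat on every pair
COMMON_INSTRUCTION = 'Say "I don\'t know" if you do not know the answer.\n\n'

# Placeholder training pairs
pairs = [
    {"prompt": "What is the capital of France?", "completion": "Paris"},
    {"prompt": "What is the capital of Atlantis?", "completion": "I don't know"},
]

with open("train.jsonl", "w") as f:
    for pair in pairs:
        record = {
            # Prepend the shared instruction to every prompt,
            # then add the usual prompt separator
            "prompt": COMMON_INSTRUCTION + pair["prompt"] + "\n\n###\n\n",
            # Completions start with a space and end with a stop sequence
            "completion": " " + pair["completion"] + " END",
        }
        f.write(json.dumps(record) + "\n")
```

This works, but it bloats every training example with the same tokens, which is exactly what I am hoping to avoid.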

More prompt advice

For more prompt examples, visit [OpenAI Examples][OpenAI Examples].

In general, the input prompt is the best lever for improving model outputs. You can try tricks like:

  • Give more explicit instructions. E.g., if you want the output to be a comma separated list, ask it to return a comma separated list. If you want it to say “I don’t know” when it doesn’t know the answer, tell it ‘Say “I don’t know” if you do not know the answer.’

  • Ask the model to answer as if it was an expert. Explicitly asking the model to produce high quality output or output as if it was written by an expert can induce the model to give higher quality answers that it thinks an expert would write. E.g., “The following answer is correct, high-quality, and written by an expert.”

  • Prompt the model to write down the series of steps explaining its reasoning. E.g., prepend your answer with something like “Let’s think step by step.” Prompting the model to give an explanation of its reasoning before its final answer can increase the likelihood that its final answer is consistent and correct.
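For what it's worth, here is how I understand a single prompt combining all three tricks above. This is purely illustrative; the question and wording are my own placeholders, not from the tutorial:

```python
# Illustrative placeholder question
question = "List three primary colors."

# One prompt applying all three tricks: explicit instructions,
# an expert framing, and a step-by-step reasoning cue
prompt = (
    "The following answer is correct, high-quality, and written by an expert.\n"
    'Say "I don\'t know" if you do not know the answer.\n'
    "Return the final answer as a comma separated list.\n\n"
    f"Q: {question}\n"
    "A: Let's think step by step."
)
print(prompt)
```

What I can't figure out is where instructions like these belong once the model is fine-tuned: in every training prompt, in the inference-time prompt, or both.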