How do I fine-tune when I have a long prompt?

Hi all,

I have a question about fine-tuning. I noticed this sentence in the docs:

We generally recommend taking the set of instructions and prompts that you found worked best for the model prior to fine-tuning, and including them in every training example.

This is in reference to this format for fine-tuning:

{"messages": [{"role": "system", "content": ""}, {"role": "user", "content": ""}, {"role": "assistant", "content": ""}]}
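
If I follow that advice literally, every line of the JSONL file would carry the entire prompt in the system content, along these lines (placeholder values, not my real data):

{"messages": [{"role": "system", "content": "<my full Assistant prompt, several thousand tokens>"}, {"role": "user", "content": "example user message for an edge case"}, {"role": "assistant", "content": "the reply I want in that case"}]}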

I am working on a fairly complex Assistant running on GPT-4o. The prompt is very long; I'd like to shorten it and also improve the Assistant's behavior in edge cases, which is why I am fine-tuning. However, I can't see how to include such a long prompt in the system message of every single example: repeating it hundreds of times would make the training file far too large (see the sketch below for roughly what I mean).
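
To be clear, mechanically building the file isn't the hard part. A few lines of Python like this hypothetical sketch would stamp the same system prompt into every record (the prompt text and example pairs are placeholders, not my real data):

```python
import json

# Placeholder: in reality this is my very long Assistant prompt
SYSTEM_PROMPT = "<my full Assistant prompt, several thousand tokens>"

# Placeholder (user message, desired assistant reply) pairs covering edge cases
examples = [
    ("example user message 1", "desired assistant reply 1"),
    ("example user message 2", "desired assistant reply 2"),
]

# Write one JSONL record per example, repeating the full prompt each time
with open("train.jsonl", "w", encoding="utf-8") as f:
    for user_msg, assistant_msg in examples:
        record = {
            "messages": [
                {"role": "system", "content": SYSTEM_PROMPT},
                {"role": "user", "content": user_msg},
                {"role": "assistant", "content": assistant_msg},
            ]
        }
        f.write(json.dumps(record) + "\n")
```

Even generated programmatically like this, the resulting JSONL still repeats the full prompt on every line, which is what I'm trying to avoid.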

What's the recommended approach here? Can I include only the sections of the prompt relevant to each example in its system message? If not, what would be a cleaner way to handle this?

Thank you!