Is it possible to use more than one system instruction in fine-tuning?

I’m training a model with fine-tuning, and a question came up: is it possible to give more than one system instruction? For example, can I provide one system instruction so that the model helps construct information for certain questions and, for another type of question, responds by reviewing?

Has anyone tested whether this works? Thank you very much for your responses!


I have not tested this in practice, but I would be careful about changing the system message across your dataset. Instead, I would frame a single system message so that the model is aware it needs to fulfill different functions. Ideally, you describe these different functions in general terms in the system message, along with the conditions for when to perform each of them.

Your user messages, along with the corresponding assistant responses, then exemplify what that means in practice, i.e., information provision vs. review — roughly as in the sketch below.
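To make that concrete, here is a minimal sketch of what such a dataset could look like, assuming the JSONL chat format used for fine-tuning (one JSON object with a `messages` list per line). The system text and the example conversations are invented purely for illustration:

```python
import json

# One shared system message that describes both functions and the
# conditions under which each applies (wording is illustrative only).
SYSTEM = (
    "You are an assistant with two functions. "
    "If the user asks you to build up information on a topic, respond by "
    "constructing that information. If the user submits a text for review, "
    "respond with a critical review of it."
)

# Hypothetical training examples: the same system message in every
# example, but different user/assistant pairs for each function.
examples = [
    {
        "messages": [
            {"role": "system", "content": SYSTEM},
            {"role": "user", "content": "Put together the key facts about solar panels."},
            {"role": "assistant", "content": "Here are the key facts: ..."},
        ]
    },
    {
        "messages": [
            {"role": "system", "content": SYSTEM},
            {"role": "user", "content": "Please review this paragraph: ..."},
            {"role": "assistant", "content": "Review: the paragraph is clear, but ..."},
        ]
    },
]

# Write the dataset as JSONL, one training example per line.
with open("training_data.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")
```

The point is that the system message stays constant while the user/assistant pairs teach the model which function to apply in which situation.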

That would be my take. Perhaps others have a different perspective on this problem.


Thank you very much. That’s a good point. Do you know (or have you tried) how complex the system instructions in the dataset can be? Is it possible to use a ‘prompt’-like instruction that, for example, makes the trained model behave like different ‘agents’ when responding? So far I have kept the system instruction quite limited, using only a single sentence. Can they be longer? Thanks a lot!

They can definitely be longer and more elaborate 🙂

Different models may place different emphasis on system vs user messages, so you may want to run a few tests to find the right balance prior to uploading a large dataset for training.
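As a sketch of what a longer, ‘agent’-style system message might look like (the mode names and wording here are invented for illustration, not a recommended template), something like this would go into the `SYSTEM` constant from the earlier example:

```python
# An illustrative multi-mode system message; each "agent" role is
# described together with the condition for when to use it.
SYSTEM = """You are a writing assistant with three modes.

1. Researcher: when the user asks for background on a topic, gather and
   present the relevant information in a structured way.
2. Reviewer: when the user submits a draft, critique it and suggest
   concrete improvements.
3. Editor: when the user asks for a rewrite, return a polished version
   while preserving the original meaning.

Pick the mode that matches the user's request; if the request is
ambiguous, ask a clarifying question before answering."""
```

You would then keep this same text in the system message of every training example and let the user/assistant pairs demonstrate each mode.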

Great, thanks. I’m going to run some tests and will share the results here.
