Fine-tuning 3.5Turbo (1106) questions

  1. For fine-tuning (1106), do system instructions need to be in every Q/A pair? [1]

  2. If not, why does the documentation show it that way with the example below? [3]

  3. What is an “open ended training session”, where “just user or assistant filled in and the other is blank”? How does this work, is it documented anywhere? [2]

@Foxalabs said in another thread:
[1] You of course don’t need a “system” prompt in your examples, it could just be the user and assistant roles,

[2] or even just user or assistant filled in and the other blank as part of an open ended training session.

Example from fine-tuning documentation:

[3] {"messages": [{"role": "system", "content": "You're a factual chatbot."}, {"role": "user", "content": "Capital of France?"}, {"role": "assistant", "content": "Paris it is."}]}

{"messages": [{"role": "system", "content": "You're a factual chatbot."}, {"role": "user", "content": "Capital of France?"}, {"role": "assistant", "content": "Paris it is."}]}

{"messages": [{"role": "system", "content": "You're a factual chatbot."}, {"role": "user", "content": "Capital of France?"}, {"role": "assistant", "content": "Paris it is."}]}


Did you need anything clarified?

Yes that would be great.

Specifically I’m looking for answers to the three questions in my post. Partly to confirm your other post still stands, partly to understand more how it works.

Thanks for the fast reply!

Ok, so, you don’t need a system instruction. However, if you train without one and then include a system instruction when using the fine-tuned model, performance will be degraded. Including one is highly recommended, but there may be some situations (I can’t think of any offhand) where omitting it might be of use.
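For illustration, a minimal training line without a system message might look like this (the JSONL shape comes from the fine-tuning docs; the content here is made up):

```python
import json

# A hypothetical training example with no "system" message --
# just a user/assistant pair. Each line of the .jsonl training
# file is one JSON object like this.
example = {
    "messages": [
        {"role": "user", "content": "Capital of France?"},
        {"role": "assistant", "content": "Paris it is."},
    ]
}

line = json.dumps(example)  # one line of the .jsonl training file
print(line)
```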

The documentation does not show that option because most people will always include a system instruction. This is a developer forum, though, so I tend not to sugar-coat anything and instead give the actual minimum requirements, as that tends to be what developers want.

Open ended is where you do not have a Q in the Q/A part of the training: in this case you leave the user role blank and only supply the assistant role with text. This can be useful for applications such as fine-tuning on an author’s works to create a model that generates text in that author’s style. It is a more advanced and less commonly used aspect of fine-tuning.
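A sketch of what such an open-ended line might look like in the training file, assuming the user turn is simply omitted (the exact accepted shape isn’t documented anywhere I know of, so treat this as illustrative only):

```python
import json

# Hypothetical open-ended training example: no user prompt,
# only an assistant completion carrying the target prose.
open_ended = {
    "messages": [
        {
            "role": "assistant",
            "content": "It was the best of times, it was the worst of times...",
        },
    ]
}

line = json.dumps(open_ended)
print(line)
```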


Very helpful thank you. To confirm I’m hearing you right:

Using a system instruction with the fine-tuning data is not technically necessary but highly recommended.

On the other subject of providing only an answer instead of full Q/A, is there anywhere else I can read more about the applicability of this, like docs or a research paper? Or is it just kind of a heuristic / learned through experience?

Just a minor addition / reiteration here: consistency between your training prompts and the prompts you use when applying the model in practice is critical. For instance, if you include a system prompt with specific instructions in your training data, you can’t leave it out when you use the fine-tuned model later. The model will either return poor output or you might get an error message altogether.
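One way to guard against that mismatch is to verify, before fine-tuning, that every line of the training file uses the same system prompt, and then reuse that exact prompt at inference time. A small sketch (the helper name and sample data are made up):

```python
import json

def shared_system_prompt(jsonl_lines):
    """Return the single system prompt used across all training
    examples, or raise if the examples are inconsistent."""
    prompts = set()
    for line in jsonl_lines:
        messages = json.loads(line)["messages"]
        system = [m["content"] for m in messages if m["role"] == "system"]
        # Treat "no system message" as its own (None) variant.
        prompts.add(system[0] if system else None)
    if len(prompts) != 1:
        raise ValueError(f"inconsistent system prompts: {prompts}")
    return prompts.pop()

# Example usage with two consistent training lines:
lines = [
    json.dumps({"messages": [
        {"role": "system", "content": "You're a factual chatbot."},
        {"role": "user", "content": "Capital of France?"},
        {"role": "assistant", "content": "Paris it is."}]}),
    json.dumps({"messages": [
        {"role": "system", "content": "You're a factual chatbot."},
        {"role": "user", "content": "Capital of Spain?"},
        {"role": "assistant", "content": "Madrid it is."}]}),
]
prompt = shared_system_prompt(lines)  # reuse this exact prompt at inference
print(prompt)
```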

I would also be interested in reading a bit more about the open ended training sessions. Sounds quite interesting.


I don’t have any papers on open-ended fine-tuning, I’m afraid. Everything I have done and learned has been in collaboration with members of this forum; it’s a rather new discipline and lacks a lot of the traditional “dusty tomes” for reference. We are basically the ones laying the groundwork.

I should add that I would not be surprised if there are papers on this topic, but I have not come across any in the wild. If anyone has, please feel free to link them.