Fine Tuning Concept Question

I had a quick question come to mind while thinking about how to structure the assistant output for a GPT-3.5 fine-tune I've been working on.

As an example, let's say I'm training a model to generate an argument, and I'm fine-tuning it to produce that argument in a certain structure, with several distinct points to be made.

As follows:
Argument 1

  • Point 1
  • Point 2
  • Point 3

For the model to produce a cohesive argument, would it be best to have the assistant message contain the whole structured output, like so?

{"role": "assistant", "content": "Point 1.\nPoint 2.\nPoint 3."}

Or would the model be able to evaluate previously generated messages as it generates the next "point" if each point were its own assistant message? Like so:

{"role": "assistant", "content": "Point 1."}
{"role": "assistant", "content": "Point 2."}
{"role": "assistant", "content": "Point 3."}

Or would this just completely silo each result and make it much less cohesive?
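For reference, here's a minimal sketch of what a training example in the first (single-message) format could look like as one line of the fine-tuning JSONL file. The system and user text here are made-up placeholders, not anything from my actual dataset:

```python
import json

# Hypothetical fine-tuning example: the entire structured argument lives in
# ONE assistant message, with points separated by newlines. The prompt text
# below is illustrative only.
example = {
    "messages": [
        {"role": "system", "content": "You generate structured arguments."},
        {"role": "user", "content": "Argue for renewable energy."},
        {"role": "assistant", "content": "Point 1.\nPoint 2.\nPoint 3."},
    ]
}

# In the OpenAI chat fine-tuning format, each example is one JSON object
# per line in the .jsonl training file.
line = json.dumps(example)
print(line)
```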