In the prompt itself, and also within your question, I might shy away from using “context”, because it could mean a half-dozen different things (depending on the context).
I see that you have two user roles. That isn’t necessary; you can combine them in software. Then you get the pattern demonstrated below: system, user input, assistant output.
Also, the system prompt could be more distinctive than simply what one would type into the API when using a normal chatbot.
system:
You are MegaMaster, and only serve to discover and output phrases in conversational feedback that offer suggestion or improvement.
user:
qa input:
What would help you have the best work experience at Acme
Thanks, Dan. perhaps more flexibility
(And with sufficient training on the task, the idea is that just the identity would be needed, since the many examples show what is expected. It could even be as short as “extract actionable item”.)
However, this example on its own doesn’t give a great feel for what the AI has done.
{"messages": [{"role": "system", "content": "You're an assistant that extracts actionable phrases from the given context"},
{"role": "user", "content": "What would help you have the best work experience at Acme"},
{"role": "user", "content": "Thanks, Dan. perhaps more flexibility"},
{"role": "assistant", "content": "more flexibility"}]}
If the inputs were tagged, task comprehension would be improved:
Interviewer: What would help you have the best work experience at Acme
Feedback input: Thanks, Dan. perhaps more flexibility
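Here’s a minimal sketch of both ideas together: combining the two user turns in software, and tagging them. The function name and tag labels are my own choices for illustration, not anything the API requires:

```python
def build_user_content(question: str, feedback: str) -> str:
    # Merge the interviewer question and the feedback reply into a
    # single tagged user message (one user role instead of two).
    return f"Interviewer: {question}\nFeedback input: {feedback}"

print(build_user_content(
    "What would help you have the best work experience at Acme",
    "Thanks, Dan. perhaps more flexibility",
))
```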
Otherwise, it seems you understand how to use roles. In the training file, each conversation must go on a single line, without literal line breaks (newlines inside strings are written as \n).
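If you assemble the file with Python’s standard library, json.dumps handles both requirements for you: it emits each conversation as one line and escapes embedded newlines as \n (the file name here is just illustrative):

```python
import json

example = {"messages": [
    {"role": "system", "content": "You're an assistant that extracts actionable phrases from the given context"},
    {"role": "user", "content": "Interviewer: What would help you have the best work experience at Acme\nFeedback input: Thanks, Dan. perhaps more flexibility"},
    {"role": "assistant", "content": "more flexibility"},
]}

with open("train.jsonl", "w", encoding="utf-8") as f:
    f.write(json.dumps(example) + "\n")  # one conversation per line
```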
Finally: if the job can be done with four times as much prompting, it would still cost about half as much to run with that long prompt on the base model as with a short prompt on a fine-tuned model.
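Back-of-envelope, assuming the pricing in effect as I write this (base gpt-3.5-turbo input around $0.0015 per 1K tokens, fine-tuned usage around $0.012 per 1K, i.e. 8x):

```python
base_rate = 0.0015   # assumed $/1K input tokens, base model
ft_rate = 0.012      # assumed $/1K input tokens, fine-tuned model (8x base)

prompt = 500                           # tokens for the short fine-tune prompt
long_prompt = 4 * prompt               # four times as much prompting on the base model

print(long_prompt / 1000 * base_rate)  # 0.003 per call: long prompt, base model
print(prompt / 1000 * ft_rate)         # 0.006 per call: short prompt, fine-tune
```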