In the prompt itself, and also within your question, I might shy away from using "context", because it could mean a half-dozen different things (depending on the context).
I see that you have two user roles. That isn't necessary; you can combine them in software (a short sketch of this follows the tagged example further down). Then you get the demonstrated system / user input / assistant output pattern.
Also, the system prompt could be more differentiated than simply what one would enter into the API to use a normal chatbot.
system:
You are MegaMaster, and only serve to discover and output phrases in conversational feedback that offer a suggestion or improvement.
user:
qa input:
What would help you have the best work experience at Acme
Thanks, Dan. perhaps more flexibility
(and with sufficient training on the task, the idea is that just the identity would be needed, as the many examples could show what is going on. Even just "extract actionable item")
However, this example doesn't give a great feel on its own for what the AI has done.
{"messages": [{"role": "system", "content": "You're an assistant that extracts actionable phrases from the given context"},
{"role": "user", "content": "What would help you have the best work experience at Acme"},
{"role": "user", "content": "Thanks, Dan. perhaps more flexibility"},
{"role": "assistant", "content": "more flexibility"}]}
If the inputs were tagged, task comprehension would be improved:
Interviewer: What would help you have the best work experience at Acme
Feedback input: Thanks, Dan. perhaps more flexibility
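In code, combining the interviewer question and the reply into one tagged user message could look something like this rough Python sketch (the function name, the tags, and the system prompt are just placeholders, not anything from your project):

def build_example(question: str, feedback: str, extracted: str) -> dict:
    # One user message holding both tagged inputs, instead of two user roles
    user_content = f"Interviewer: {question}\nFeedback input: {feedback}"
    return {
        "messages": [
            {"role": "system",
             "content": "You are an assistant that extracts actionable phrases from employee feedback."},
            {"role": "user", "content": user_content},
            {"role": "assistant", "content": extracted},
        ]
    }

example = build_example(
    "What would help you have the best work experience at Acme",
    "Thanks, Dan. perhaps more flexibility",
    "more flexibility",
)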
Otherwise, it seems you understand how to use roles. In the training file, each conversation should go on one line of the file, without literal line breaks (and with \n escapes for any line breaks within strings).
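If you write the file from Python, json.dumps already produces one line per conversation and escapes any embedded newlines as \n. A minimal sketch (the file name and list contents are illustrative):

import json

examples = [
    {
        "messages": [
            {"role": "system",
             "content": "You are an assistant that extracts actionable phrases from employee feedback."},
            {"role": "user",
             "content": "Interviewer: What would help you have the best work experience at Acme\n"
                        "Feedback input: Thanks, Dan. perhaps more flexibility"},
            {"role": "assistant", "content": "more flexibility"},
        ]
    },
]

with open("training.jsonl", "w", encoding="utf-8") as f:
    for ex in examples:
        # json.dumps emits a single line; newlines inside strings become \n escapes
        f.write(json.dumps(ex, ensure_ascii=False) + "\n")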
Finally: if the job can be done with four times as much prompting, it would still cost about half as much to run with a long prompt on the base model instead of a fine-tuned model.
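As a back-of-the-envelope check (the 8x figure below is only an assumed ratio between fine-tuned and base per-token prices; check current pricing before deciding):

base_price = 1.0    # relative per-token price of the base model
tuned_price = 8.0   # assumed relative per-token price of the fine-tuned model
long_prompt = 4     # four times as many prompt tokens on the base model
short_prompt = 1    # minimal prompt on the fine-tuned model

print(long_prompt * base_price)    # 4.0 cost units per call with the long prompt
print(short_prompt * tuned_price)  # 8.0 cost units per call with the fine-tune
# Under that assumption, the long prompt on the base model is roughly half the cost per call.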