Questions about Learning How to Fine-Tune GPT-3 Involving Multiple Steps

Hi Folks:

I am new to GPT-3. So far, I’m having a great time!

I am writing a program to summarize a data structure representing a report into human-readable prose. In the playground, I found the results promising, especially if I:

  1. Represent the data structure as a SQL query. Representing the data as a Python structure (a list of tuples) tended to produce more incorrect results, i.e., miscounts.
  2. Give a few heuristics for summarizing.
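For what it's worth, here is a sketch of how I'm rendering the report as SQL-style text for the prompt (the `region`/`sales` field names are just made-up placeholders, not my real schema):

```python
# Render a list of (region, sales) tuples as CREATE/INSERT statements,
# so the prompt sees the report as a SQL table rather than a Python list.
def report_to_sql(rows):
    lines = ["CREATE TABLE report (region TEXT, sales INTEGER);"]
    for region, sales in rows:
        lines.append(f"INSERT INTO report VALUES ('{region}', {sales});")
    return "\n".join(lines)

print(report_to_sql([("East", 120), ("West", 95)]))
```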

I want to learn how to fine-tune. My main goal is to reduce errors to an acceptable level. The second goal is to have the summary sound good (this one is more subjective).

I thought about starting with, say, 1,000 training examples. How do I perform the multi-step process? Or should I do away with it entirely?

  1. Single step
    {"prompt": "summarize + <report>\nAnswer:", "completion": " <answer_as_human_readable_text>\n"}

  2. Multi-step

    {"prompt": "process + <report>\nAnswer:", "completion": " <intermediate_result - a data structure>\n"}
    {"prompt": "apply rules + <intermediate_result>\nAnswer:", "completion": " <intermediate_result - a data structure>\n"}
    {"prompt": "summarize + <intermediate_result>\nAnswer:", "completion": " <answer_as_human_readable_text>\n"}
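To make sure I have the file format right, here is a minimal sketch of how I'd write either style out as JSONL (one JSON object per line, which I understand is what the fine-tuning endpoint expects); the report and summary text here are dummy placeholders:

```python
import json

# Dummy training examples following the "prompt ends with \nAnswer:,
# completion ends with \n" pattern above.
examples = [
    {"prompt": "summarize + <report>\nAnswer:",
     "completion": " <answer_as_human_readable_text>\n"},
]

# Write one JSON object per line (JSONL).
with open("train.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")
```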

That said, I am developing a test harness to evaluate the results. I would like to try both approaches. Am I on the right track? What should I be doing differently? And how do I properly represent a SQL table in a prompt?
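One check I'm planning for the harness, since miscounts are my main error mode, is to pull every number out of the generated summary and verify it actually appears in the source data (the sample sentence and values below are made up):

```python
import re

# Extract every integer from the generated summary and verify each one
# appears in the source data, to flag hallucinated counts.
def numbers_match(summary, source_values):
    found = [int(n) for n in re.findall(r"\d+", summary)]
    return all(n in source_values for n in found)

print(numbers_match("East sold 120 units; West sold 95.", [120, 95]))  # True
print(numbers_match("East sold 125 units; West sold 95.", [120, 95]))  # False
```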

Thanks in advance!

Cheers,
Andrew