Fine tuning for custom corpus of data?

I have a large corpus of data that I’d like to do Q&A on. It seems like the /answers endpoint would have been the best solution, but it’s deprecated. I also don’t think embeddings are the right solution here, since for more complex questions the relevant context wouldn’t fit within a single prompt (e.g., how many occurrences of X happened over the past month?).

Is there some way to use fine tuning to do something like this? Basically, pass a bunch of context (>10MB and less than 1GB) and then do Q&A.

I wasn’t sure if you can just pass a set of JSON lines like:
{"prompt": "Context: <first chunk of context>", "completion": ""}
{"prompt": "Context: <next chunk of context>", "completion": ""}

Then, using that fine-tuned model, send a question to the completions endpoint.
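To make the idea concrete, here’s a rough sketch of the prep step I have in mind: split the corpus into fixed-size chunks and emit one JSONL training line per chunk with an empty completion. The chunk size and the "Context: " prefix are just placeholders I made up, not anything from the docs:

```python
import json

def corpus_to_jsonl(text, chunk_size=2000):
    """Split `text` into fixed-size chunks and return one JSONL line per chunk.

    Each line is {"prompt": "Context: <chunk>", "completion": ""} — the format
    I was guessing at above. chunk_size is an arbitrary placeholder.
    """
    lines = []
    for i in range(0, len(text), chunk_size):
        chunk = text[i:i + chunk_size]
        lines.append(json.dumps({"prompt": f"Context: {chunk}", "completion": ""}))
    return "\n".join(lines)

# Stand-in for the real (>10MB) corpus, just to show the output shape.
corpus = "X happened on day 1. " * 500
print(corpus_to_jsonl(corpus).splitlines()[0][:60])
```

The question is whether a model fine-tuned on lines like these would actually absorb the facts in the chunks well enough to answer aggregate questions at the completions endpoint afterwards.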