Fine-tuning with raw data

I have a large user manual that I want to use for fine-tuning a model.

Is it possible to upload the manual as a whole to the model as "raw" knowledge
and then add the custom Q/A prompts?

Or do I need to add every possible question/answer pair from the doc to the
{"prompt": "", "completion": ""} file?
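For reference, a minimal sketch of what that file looks like, assuming the JSONL prompt/completion training format (the manual Q/A text and filename here are invented for illustration):

```python
import json

# Hypothetical Q/A pairs extracted from the manual (content is illustrative).
qa_pairs = [
    ("How do I reset the device?", " Hold the power button for ten seconds. END"),
    ("What voltage does the unit require?", " 110-240 V AC. END"),
]

# One JSON object per line (JSONL), in the
# {"prompt": ..., "completion": ...} shape the fine-tune tooling expects.
with open("training_data.jsonl", "w") as f:
    for prompt, completion in qa_pairs:
        f.write(json.dumps({"prompt": prompt, "completion": completion}) + "\n")
```

The leading space and trailing " END" in each completion follow the common convention of a whitespace separator and an explicit stop sequence.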

Check the “open-ended” part of the fine-tune documentation

While I don’t generally jump other questions, this one kind of hits on something that I’m struggling with.

I have a small corpus of factual information as a document that I split up into a blank prompt and the content in the completion. The completions all end with the tag END (that might not be great, but it's a start). The lines are between 500 bytes and 6 KB; while I suspect that the 6 KB lines are too long, I don't think that is my only problem.
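A minimal sketch of that splitting step, assuming paragraph-based chunking with a rough size cap (the function names and the 2,000-character limit are illustrative assumptions, not a recommended setting):

```python
MAX_CHARS = 2000  # rough cap to avoid overly long examples; an assumption, not an official limit

def chunk_document(text, max_chars=MAX_CHARS):
    """Split raw text into paragraph-sized chunks under max_chars.
    A single paragraph longer than max_chars will still exceed the cap."""
    chunks, current = [], ""
    for para in text.split("\n\n"):
        if current and len(current) + len(para) + 2 > max_chars:
            chunks.append(current.strip())
            current = ""
        current += para + "\n\n"
    if current.strip():
        chunks.append(current.strip())
    return chunks

def to_examples(chunks):
    """Blank prompt, chunk as completion, END as an explicit stop marker."""
    return [{"prompt": "", "completion": " " + c + " END"} for c in chunks]
```

With this shape, generation at inference time can be cut off by passing END as a stop sequence, which is presumably what the trailing tag is for.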

In this case I have some facts about a nonexistent country and its laws. When I inquire in the playground after selecting the fine-tune, the responses include the country name, so that is recognized, but one specific thing I seem to be losing is context, such as the specifics of a law.

I tuned with Curie and Davinci, and while I can ask questions and get responses that mention the specific country I've put in, I don't think I understand how to get "facts" back from the model in the context of open-ended generation. So, if I want to allow the system to generate a few lines of text as a completion, it seems to respond outside of my corpus.

Am I missing the point of the completions endpoint? Am I not using the endpoint correctly?