Can I fine-tune without question/answer pairs?

The tutorial suggests that to fine-tune a model I need a set of question and answer pairs based on my main corpus of text data. However, I have followed this process and found that the generated question and answer pairs are not as good as I'd hoped. Furthermore, generating these pairs is quite expensive compared to the cost of actually fine-tuning the model.
I am wondering if there is a way to fine-tune the model using just my base text corpus (which I have as a txt file, but could convert to other formats). Surely one could fine-tune on plain next-word prediction over my corpus, but perhaps that simply isn't offered.
If what I want is not possible, then maybe someone can tell me whether I could simply split my text into two parts (prompts and completions): chunk the data into pieces of length N and use each pair of contiguous chunks as a prompt and completion. Or perhaps just split the text into paragraphs and use consecutive paragraphs as the pairs, as in the sketch below.
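To make the paragraph idea concrete, here is a rough sketch of what I mean. The file names, the blank-line paragraph split, and the prompt/completion JSONL layout are just assumptions for illustration, not a claim about what the fine-tuning API expects.

```python
import json

# Read the raw corpus (file name is just an example).
with open("corpus.txt", "r", encoding="utf-8") as f:
    text = f.read()

# Split on blank lines to get paragraphs; drop empty chunks.
paragraphs = [p.strip() for p in text.split("\n\n") if p.strip()]

# Pair each paragraph with the one that follows it and write JSONL,
# one {"prompt": ..., "completion": ...} record per line.
with open("pairs.jsonl", "w", encoding="utf-8") as out:
    for prompt, completion in zip(paragraphs, paragraphs[1:]):
        out.write(json.dumps({"prompt": prompt, "completion": completion}) + "\n")
```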
If someone has done this, let me know whether it works or just produces rubbish; I would be very interested to know. It seems like OpenAI wants you to use question and answer generation, which costs a fair bit more than the fine-tuning itself, but I would like to know if there are specific technical reasons for following this process.