Ideal input / output form for Fine Tuning?

I am taking several long books (biographies) and transforming them into input/output pairs to fine-tune GPT-3.5 on.

My method is to split each book into ~500-word “chunks”, then ask GPT-3.5 to look at each chunk and write a question that would plausibly elicit that chunk as a response. This turns the book into input/output pairs.

As far as fine-tuning goes, I’m wondering what the ideal “input” would be. A brief, generic question? A detailed, really specific question? A long writing prompt? Does anyone have an opinion?
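
For context, here’s roughly the pipeline I have in mind, as a minimal sketch (the file names, prompt wording, and helper functions are placeholders I made up; the JSONL layout is the standard chat fine-tuning format):

```python
# Rough sketch: chunk the book, generate a question per chunk, write JSONL.
import json
from openai import OpenAI

client = OpenAI()

def chunk_text(text, words_per_chunk=500):
    """Split the book into ~500-word chunks."""
    words = text.split()
    return [" ".join(words[i:i + words_per_chunk])
            for i in range(0, len(words), words_per_chunk)]

def question_for_chunk(chunk):
    """Ask GPT-3.5 for a question that the chunk would plausibly answer."""
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{
            "role": "user",
            "content": "Write one question that the following passage would "
                       f"plausibly be the answer to:\n\n{chunk}",
        }],
    )
    return resp.choices[0].message.content

with open("biography.txt") as f:
    book = f.read()

# One JSONL line per chunk, in the chat fine-tuning format.
with open("train.jsonl", "w") as out:
    for chunk in chunk_text(book):
        record = {"messages": [
            {"role": "user", "content": question_for_chunk(chunk)},
            {"role": "assistant", "content": chunk},
        ]}
        out.write(json.dumps(record) + "\n")
```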

The ideal input would be what a user would actually type to generate that response, with the response being the output that person would expect to get back.

However, take a few paragraphs of Harry Potter: there is no question, beyond “give me a random section of Harry Potter”, that would give you those paragraphs as an answer.

You cannot train a model on a book for knowledge unless the training examples specifically answer user questions. You’ll probably want to use AI to transform each passage into both a user-style question and the salient information rewritten in the form of an answer.

Database semantic knowledge augmentation (retrieving relevant passages from an embedding database at question time) would likely be a better technique for answering questions about biographies.
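
A minimal sketch of what I mean, assuming the OpenAI Python client and an embedding model (the model names, prompts, and placeholder chunks are illustrative only):

```python
# Sketch of retrieval-augmented answering: embed the book chunks once,
# then pull the most relevant chunks into the prompt for each question.
import numpy as np
from openai import OpenAI

client = OpenAI()

def embed(texts):
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

chunks = ["...500-word chunk 1...", "...500-word chunk 2..."]  # from the book
chunk_vectors = embed(chunks)

def answer(question, top_k=3):
    q_vec = embed([question])[0]
    # Cosine similarity between the question and every chunk.
    sims = chunk_vectors @ q_vec / (
        np.linalg.norm(chunk_vectors, axis=1) * np.linalg.norm(q_vec))
    context = "\n\n".join(chunks[i] for i in np.argsort(sims)[::-1][:top_k])
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": "Answer using only the provided biography excerpts."},
            {"role": "user", "content": f"{context}\n\nQuestion: {question}"},
        ],
    )
    return resp.choices[0].message.content
```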

Funny that the user used a Harry Potter simile, which is exactly what I did in another thread. Anyways…

When fine-tuning, the optimal data always looks the same: examples of exactly the task you want the model to do.

It’s sorta like how you get trained at a job for a specific task.
It really does no good to have to sit through training for ‘other departments’.
Well, in the case of LLMs, training is cause and effect, and the variance is extrapolated via large-number theory or something (no idea).

But when you’re trying to teach an LLM the right response to something, it needs examples of that cause and effect.

Synthetics don’t work very well. Yes, they can work “fine” if you train rigorously enough, but that’s kinda second-hand.
Kinda like an artist who only plays “covers”, ya? There’s no genius there.

Imagine you know a musician who only plays covers and can only play covers, and you ask him to play an original song. What will he do?
Well, he’ll mix and match different covers, of course, because his patterns are the result of pattern extrapolation.

That’s my 2 cents.

My case is a non-fiction biography that is rich with information (proper nouns, dates, etc.).

From your response, it seems the better idea is to feed GPT each chunk with the instruction “Write a question/answer pair using the information in this chunk” rather than using the chunk itself as the answer.
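
Something like this rough sketch is what I’m now picturing (the prompt wording, JSON-mode flag, and placeholder chunks are just illustrative):

```python
# Sketch: have the model write BOTH the question and a rewritten answer,
# instead of reusing the raw chunk as the assistant message.
import json
from openai import OpenAI

client = OpenAI()
chunks = ["...500-word chunk from the biography..."]  # placeholder

def qa_pair_for_chunk(chunk):
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        response_format={"type": "json_object"},  # assumes a JSON-mode-capable model
        messages=[{
            "role": "user",
            "content": "Write a question/answer pair using the information in "
                       "this chunk. Respond as JSON with keys 'question' and "
                       f"'answer'.\n\nChunk:\n{chunk}",
        }],
    )
    return json.loads(resp.choices[0].message.content)

with open("train.jsonl", "w") as out:
    for chunk in chunks:
        pair = qa_pair_for_chunk(chunk)
        out.write(json.dumps({"messages": [
            {"role": "user", "content": pair["question"]},
            {"role": "assistant", "content": pair["answer"]},
        ]}) + "\n")
```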

Do you agree this is a better course of action?