Hi, apologies if this is in the wrong forum section.
I’m working at an e-learning company, and we’re creating a lot of content that needs to follow certain guidelines and principles.
I have a large PDF (46 pages) containing guidelines, examples, tips, and a checklist for verifying that exam questions and answers adhere to certain requirements.
Basically, some 20 rules are defined, each with a good and a bad example.
I would like ChatGPT to test an exam question against each of those rules. Not just looking up each of those rules, but also applying them.
I have tried to construct prompts that use the PDF to score and revise input, but ChatGPT never uses all of the rules; it seems to pick a few at random and stop. Even in a back-and-forth conversation it simply will not apply them all, and results vary wildly.
The PDF is probably not structured in a way that ChatGPT can use well, and even though we have a large context window, the document likely exceeds it.
What would be a better strategy than trying to RAG with ChatGPT? I’m considering fine-tuning on the materials, but that would require transforming the document into a completely different format, which sounds like a colossal amount of work. Are there any tools or strategies to make this easier? Or should I keep using RAG, but in a different way?
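For context, one direction I’ve been considering is extracting the ~20 rules from the PDF myself and then checking them one at a time, with one model call per rule instead of asking for everything in a single prompt. A minimal sketch of what I mean (the rule texts and function names here are made up, and the actual model call is left as a placeholder you’d swap for whatever client you use):

```python
# Hypothetical sketch: the rules would be extracted from the PDF by hand
# (or with a script) into a simple list, rather than relying on the model
# to read the whole document.
rules = [
    {"id": 1, "text": "The question stem must be answerable without seeing the options."},
    {"id": 2, "text": "Avoid negative phrasing such as 'Which of these is NOT ...'."},
    # ... remaining rules ...
]

def build_rule_prompt(rule, question):
    """Build a prompt that asks the model to apply exactly one rule."""
    return (
        f"Rule {rule['id']}: {rule['text']}\n\n"
        f"Exam question:\n{question}\n\n"
        "Does this question comply with the rule above? "
        "Answer PASS or FAIL, then explain briefly."
    )

def review_question(question, call_model):
    """Run every rule against the question.

    `call_model` is a placeholder for whatever LLM client function you
    use (it takes a prompt string and returns the model's reply).
    """
    return {rule["id"]: call_model(build_rule_prompt(rule, question))
            for rule in rules}
```

The idea being that each call only has to hold one rule (plus its good/bad examples) in context, so nothing gets skipped, and the per-rule results can be collected into a checklist afterwards. I don’t know if this is a sensible pattern compared to fine-tuning, though, which is partly what I’m asking.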