Checking for what isn't; The nothing that should be

I’m about to start working on a new project. The idea is to take a manual, say a chemistry book, and quiz a user on their knowledge of it. I’m converting the documents into a vector database and then loading the conversation with the retrieved information when the user prompts the bot. Pretty standard so far.
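For concreteness, the retrieval step looks roughly like this. This is a minimal sketch with toy vectors standing in for a real embedding model and vector database; none of these names come from a particular library:

```python
import math

# Toy retrieval loop: score stored chunks against the query vector,
# take the best few, and splice them into the prompt. A real setup
# would call an embedding model instead of using hand-made vectors.

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def top_k(query_vec, store, k=3):
    """store: list of (chunk_text, vector). Returns best-matching chunk texts."""
    scored = sorted(store, key=lambda cv: cosine(query_vec, cv[1]), reverse=True)
    return [text for text, _ in scored[:k]]

def build_prompt(user_msg, chunks):
    """Load the conversation turn with the retrieved material."""
    context = "\n---\n".join(chunks)
    return f"Use only this material:\n{context}\n\nUser: {user_msg}"
```
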

The problem is, in my case I need to know if the user has skipped a step or missed something important. The instructions may be spread out across multiple chunks, and they might not all be present in the user’s prompt or in the retrieved chunks. If it were just one thing I was testing, it would be easy enough to bake a step checker into the prompt, but this needs to be more generic: the AI has to work it out from the data and the prompt alone.

My immediate solution is a knowledge graph of the chunks and how they relate. I’m thinking that will get good results: I can keep pulling in data until it looks like I’ve reached the end of the line. But it’s experimental, and I wanted to see if anyone had solutions I should check out first.
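The graph idea, sketched: treat chunks as nodes with "related/next step" edges and walk outward from the initially retrieved chunks until no new neighbors turn up. The adjacency dict here is hypothetical; a real graph would come from whatever relation extraction you run over the chunks:

```python
from collections import deque

# Breadth-first expansion of retrieved context: starting from the chunks
# matched to the user's prompt, keep pulling in related chunks until
# there are no unvisited neighbors (or a budget is hit).

def expand_context(graph, seed_chunks, max_chunks=20):
    """graph: dict chunk_id -> list of related chunk_ids.
    Returns chunk ids in the order they were pulled in."""
    seen, order = set(seed_chunks), list(seed_chunks)
    queue = deque(seed_chunks)
    while queue and len(order) < max_chunks:
        node = queue.popleft()
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                order.append(nxt)
                queue.append(nxt)
    return order
```

The `max_chunks` budget is the practical stand-in for "it looks like the end of the line": without it a densely connected manual could drag the whole document back in.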


Your example of a chemistry book might be something the AI can already proctor from its built-in knowledge alone. I assume this is new knowledge for the AI?

I run a quiz that I’ve programmed with a lengthy, clear custom instruction, and you can see that ChatGPT’s poor conversation memory is shored up by my status bar. With the API you can manage the chat history yourself in a way that provides lossless context, if you have the context length for it.

Chat Share - (and no, I didn’t cheat by editing my inputs…

I suppose the challenge I see is whether the retrieved task actually requires steps, and whether the documentation of the process would spell them out. Either the embeddings stay with the AI for the whole course of the quizzing, or it has another knowledge-document window that can be accessed, or even polled, to retrieve data (maybe functions that directly call sections of the document, found via an embeddings-based TOC overview, iterating to the next part).
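That section-calling idea could look something like this. It’s a toy sketch: `SECTIONS` stands in for a parsed manual, and in practice `get_section` and `next_section` would be exposed to the model as tool/function calls:

```python
# Hypothetical parsed manual keyed by TOC section id. A real version
# would be built from the document, with the TOC found via embeddings.
SECTIONS = {
    "3.1": "Put on safety goggles before handling reagents.",
    "3.2": "Measure 50 mL of the acid into the flask.",
    "3.3": "Add the base dropwise while stirring.",
}
ORDER = list(SECTIONS)

def get_section(section_id):
    """Directly fetch one section of the document by TOC id."""
    return SECTIONS.get(section_id, "section not found")

def next_section(section_id):
    """Iterate to the following part of the document, if any."""
    try:
        i = ORDER.index(section_id)
        return ORDER[i + 1] if i + 1 < len(ORDER) else None
    except ValueError:
        return None
```
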

The AI might need to do its own thinking there to administer the quiz, just as my instruction told it to pace the quiz so it arrives at the destination. You could have another AI make a hidden outline. A full chapter section of a calculus book or a turbine maintenance manual may overwhelm your chat-history capacity, even if it lays out a new skill.
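A crude sketch of grading against such a hidden outline. The outline items and the keyword matching here are placeholders: in practice a second model pass would generate the outline, and the model or embeddings would judge coverage rather than a literal word check:

```python
# Hypothetical hidden outline produced by a second AI pass over the
# source material, kept out of the visible conversation.
HIDDEN_OUTLINE = ["goggles", "measure acid", "add base"]

def missed_steps(user_answer, outline=HIDDEN_OUTLINE):
    """Return outline steps the user's answer never mentions.
    Naive word matching stands in for a semantic check."""
    answer = user_answer.lower()
    return [step for step in outline
            if not all(word in answer for word in step.split())]
```
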

I suppose the first thing is to think about your data: how you might break it into reasonable units that can be placed into AI memory, i.e. what you would paste into your messages from the source material if you were doing it by hand to make this work. Then ponder how an AI can assist in the chunking.
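A hand-rolled chunker along those lines might look like this. Splitting on blank lines with a character budget is an assumption for the sketch; real chunking would likely respect headings and section boundaries too:

```python
# Pack paragraphs into chunks under a size budget, roughly what you'd
# do by hand when pasting source material into a message.

def chunk_text(text, max_chars=1000):
    paragraphs = [p.strip() for p in text.split("\n\n") if p.strip()]
    chunks, current = [], ""
    for p in paragraphs:
        if current and len(current) + len(p) + 2 > max_chars:
            chunks.append(current)
            current = p
        else:
            current = f"{current}\n\n{p}" if current else p
    if current:
        chunks.append(current)
    return chunks
```
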