Reading Longer Documents/Inputs

Hey everyone,

I hope you’re doing well. I’ve been working on organizing my old lesson plans, aiming to spruce them up and transform them into more structured, updated versions using AI.

The challenge I’m facing is that for certain units, I have a multitude of files. When consolidated into a single text file, they can be as large as 140 KB.

I’m wondering if there’s a way to leverage AI to read and learn from all of these plans, and then generate new lesson plans based on this comprehensive understanding.

Apologies if my question seems a bit vague. Any insights or guidance on how to approach this would be greatly appreciated. Thanks!

This reminds me a bit of this thread: “Poor quality response on trained LLM with pdf files”

GPT-4 Turbo has a max context length of 128k tokens: https://platform.openai.com/docs/models/gpt-4-and-gpt-4-turbo. At roughly 4 characters per token, your 140 KB file works out to something like 35k tokens, so it should fit in a single context window.
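If you want to check exactly rather than estimate, here’s a quick sketch (assuming Python and the `tiktoken` package; `lesson_plans.txt` is a placeholder for your consolidated file):

```python
import tiktoken

# Placeholder path: your consolidated lesson-plan text file.
with open("lesson_plans.txt", encoding="utf-8") as f:
    text = f.read()

# cl100k_base is the tokenizer used by the GPT-4 family of models.
enc = tiktoken.get_encoding("cl100k_base")
n_tokens = len(enc.encode(text))
print(f"{len(text):,} characters is about {n_tokens:,} tokens")
```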

And then there are embeddings… (retrieve only the chunks relevant to the task at hand, instead of feeding everything in at once).
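If the material ever outgrows the context window, retrieval over embeddings is the usual pattern: embed each plan (or section) as a chunk, then pull back only the chunks relevant to what you’re currently writing. A rough sketch, assuming the `openai` Python SDK (v1) and `numpy`; the model name, chunks, and query are just examples:

```python
import numpy as np
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

# Placeholder chunks: in practice, one per lesson plan or section.
chunks = [
    "Unit 3, day 1: introducing fractions with manipulatives ...",
    "Unit 3, day 2: comparing fractions on a number line ...",
]

# Embed all chunks in one call.
resp = client.embeddings.create(model="text-embedding-3-small", input=chunks)
vectors = np.array([d.embedding for d in resp.data])

# Embed the thing you're currently working on.
query = "warm-up activities for fractions"
q = np.array(
    client.embeddings.create(model="text-embedding-3-small", input=[query])
    .data[0]
    .embedding
)

# Cosine similarity: rank chunks by relevance to the query.
scores = (vectors @ q) / (np.linalg.norm(vectors, axis=1) * np.linalg.norm(q))
print(chunks[int(np.argmax(scores))])
```

The top-ranked chunks then go into the prompt, rather than the whole file.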

It’s certainly not a technology issue.

I think “learn” is the wrong word, unless you want the model to just emulate your style.

I think “work” is a better word. Use the AI to work through your stuff.

In my opinion* the best way is to sit down and think about your process for creating a lesson plan, then use the models to handle the legwork. Finding the balance between automation and manual work will be crucial: I’d advise against automating too much up front; instead, iteratively use the models to solve more and more complex tasks as you get more familiar with how to use them. There’s a minimal sketch of that first step below.
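For the legwork, a minimal starting point might be to feed the model one old plan at a time together with the structure you want back. A sketch, assuming the `openai` Python SDK; the model name, file path, and system prompt are all placeholders to adapt:

```python
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

# Placeholder: one old lesson plan at a time, not the whole 140 KB.
with open("unit3_day1.txt", encoding="utf-8") as f:
    old_plan = f.read()

resp = client.chat.completions.create(
    model="gpt-4-turbo",  # assumption: any large-context chat model works here
    messages=[
        {
            "role": "system",
            "content": (
                "You are an experienced curriculum designer. Restructure the "
                "lesson plan the user provides into: objectives, materials, "
                "activities, and assessment. Keep the original content."
            ),
        },
        {"role": "user", "content": old_plan},
    ],
)
print(resp.choices[0].message.content)
```

Once that single-plan step works reliably, you can loop it over the whole folder, and only then think about consolidating across plans.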

Good luck!

*The field of prompt engineering currently sits somewhere between science and alchemy, meaning different people can give you different answers that are all equally and irrefutably correct.
