Running local LLMs without GPUs

I want to run local LLMs on CPU only.

I don't have access to any GPUs, and I'd like to know how much slower CPU inference is compared to GPU.

I would love to run a small open-source LLM on CPU only, so I can feed it 500-page PDFs and ask it questions about them.

It very much depends on your hardware, especially RAM capacity and memory bandwidth: during generation the CPU has to stream the full set of model weights from RAM for every token, so token throughput is roughly memory bandwidth divided by model size.
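As a rough sketch of that rule of thumb, here is a back-of-envelope estimate. The bandwidth figures (dual-channel DDR4 at ~40 GB/s, a data-center GPU at ~900 GB/s) and the 4-bit quantization size are illustrative assumptions, not measurements:

```python
# Back-of-envelope estimate of token generation speed, assuming
# decoding is memory-bandwidth-bound: every generated token requires
# reading all model weights once, so
#   tokens/s ~ bandwidth (GB/s) / model size (GB).

def estimate_tokens_per_second(params_billions: float,
                               bytes_per_param: float,
                               bandwidth_gb_s: float) -> float:
    """params_billions: parameter count in billions.
    bytes_per_param: ~0.5 for 4-bit quantization, 2.0 for fp16.
    bandwidth_gb_s: sustained memory bandwidth in GB/s (assumed)."""
    model_size_gb = params_billions * bytes_per_param
    return bandwidth_gb_s / model_size_gb

# A 7B model quantized to ~4 bits (~3.5 GB) on dual-channel DDR4 (~40 GB/s):
print(round(estimate_tokens_per_second(7, 0.5, 40), 1))   # ~11.4 tokens/s
# Same model with ~900 GB/s of GPU memory bandwidth:
print(round(estimate_tokens_per_second(7, 0.5, 900), 1))  # ~257.1 tokens/s
```

By this estimate, CPU generation is slower by roughly the ratio of the two bandwidths (here ~20x), which is why small, heavily quantized models are the usual choice for CPU-only setups.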