Sparse Priming Representation vs. MemGPT

I watched a couple of videos on the subject of Sparse Priming Representation vs. MemGPT.

I posted this question on both videos, but have so far not received an answer. Does anybody know enough about this to provide some insight?

I watched David’s video and this one, and I’m still wondering how this works in the real world. I see what you’re doing in the Playground, but I have a RAG implementation using the gpt-4 API. Let’s say a user submits a prompt, and I run a cosine similarity search of the prompt against my vector store. Now, let’s say further that I’ve also embedded the SPR-generated representations of the embedded text, so that is what is retrieved. I now have a list of SPRs. Is this what I send to the model as context, along with the original prompt, to get an answer? Or do I decompress the SPR representations first? And if I do that, am I not taking up the same amount of context as I would have if I had returned the original embedded texts?

So, if I don’t decompress the SPRs, how is the model going to be able to arrive at an accurate answer with just the SPR representations?

In short, how does this technique work in a current RAG implementation?
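
To make the question concrete, here is a simplified sketch of the kind of pipeline I mean (the helper names, the `spr_store` structure, and the `text-embedding-ada-002` choice are placeholders for whatever your implementation already uses):

```python
import numpy as np
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def embed(text: str) -> np.ndarray:
    resp = client.embeddings.create(model="text-embedding-ada-002", input=[text])
    return np.array(resp.data[0].embedding)

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# spr_store: list of (spr_text, embedding) pairs, built offline by running
# each source document through an SPR "compressor" prompt and embedding
# the resulting SPR text.
def retrieve_sprs(query: str, spr_store: list, k: int = 3) -> list:
    q = embed(query)
    ranked = sorted(spr_store, key=lambda item: cosine(q, item[1]), reverse=True)
    return [text for text, _ in ranked[:k]]

def answer(query: str, spr_store: list) -> str:
    # the open question: send the retrieved SPRs as-is, or decompress first?
    context = "\n\n".join(retrieve_sprs(query, spr_store))
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": "Use the provided context to answer."},
            {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {query}"},
        ],
    )
    return resp.choices[0].message.content
```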

2 Likes

I think the SPR decompressor is mainly there as a kind of proof that the LLM can regenerate the full information from an SPR (a semantic summary), or for when we as humans want to read the full text behind the SPR data. In short, I think we only need to put the SPR data into the RAG context, stating in the prompt that the context is an SPR and including a definition of SPR, on the assumption that the LLM fully understands the full context behind that SPR data.
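
As an illustration, the priming could look something like this (a hypothetical prompt and helper, not taken from any official SPR repo; the exact wording is up to you):

```python
# Hypothetical system prompt that defines SPR for the model so it can
# interpret the compressed context; the wording is illustrative only.
SPR_SYSTEM_PROMPT = (
    "The context you receive is a Sparse Priming Representation (SPR): "
    "short, distilled statements that compress the key concepts, assertions, "
    "and associations of a longer source document. Treat each SPR line as a "
    "cue that activates the related knowledge, and answer the user's "
    "question as if you had read the full original text."
)

def build_messages(spr_context: str, question: str) -> list:
    return [
        {"role": "system", "content": SPR_SYSTEM_PROMPT},
        {"role": "user", "content": f"SPR context:\n{spr_context}\n\nQuestion: {question}"},
    ]
```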

1 Like

I just wrote a blog post on how to install MemGPT. MemGPT is pretty good, and the team is updating it to use open-source LLMs to reduce costs. https://lachieslifestyle.com/2023/10/22/how-to-install-memgpt/

Dave has a wealth of knowledge and is generally on point.

1 Like

I read your post on installing MemGPT and on its unique approach to memory management in language models. MemGPT’s architecture, which mirrors a computer’s RAM and disk storage to manage large data sets, is well suited to handling complex language tasks.

I agree that integrating SPR data into RAG contexts seems a promising way to overcome the limitations of context windows in LLMs. SPR, by condensing complex ideas into succinct statements, could indeed streamline the process for language models to grasp and utilize vast amounts of information efficiently. It aligns with MemGPT’s capability to manage and recall extensive data, as MemGPT operates in cycles similar to an OS, deciding when to act and when to request specific operations.
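
As a toy sketch of that RAM/disk analogy (this is not MemGPT’s actual code; the names and the eviction policy here are invented for illustration):

```python
from collections import deque

MAIN_CONTEXT_LIMIT = 4          # pretend the context window fits only 4 items

main_context: deque = deque()   # small, fast "RAM" (the model's context window)
archival_memory: list = []      # unbounded "disk" (external storage)

def remember(item: str) -> None:
    # when "RAM" is full, evict the oldest item to archival storage
    if len(main_context) >= MAIN_CONTEXT_LIMIT:
        archival_memory.append(main_context.popleft())
    main_context.append(item)

def recall(query: str) -> list:
    # page matching archived items back into the main context before a call,
    # mimicking the OS-like cycle of deciding what to load and when
    matches = [item for item in archival_memory if query.lower() in item.lower()]
    for item in matches:
        remember(item)
    return list(main_context)
```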

Your perspective on using SPRs for in-context learning, especially alongside MemGPT’s innovative memory management, is a great help for thinking about how to make LLMs more efficient and effective at processing and generating language-based solutions.

Best,

1 Like

Thanks for your kind words and welcome to the forum 🙂