I find this extremely useful. But I'm running up against the limit of how much it'll "remember". Is there a plan to add memory? Thanks in advance.
No, the models are the models, there’s no way to change how much context they can remember.
You can add memory on top of the LLM by storing the conversation history and retrieving relevant past conversations via RAG (embeddings).
The LLM itself is pretty much fixed; the weights are frozen, mainly because of the massive compute cost of retraining. So an efficient approach is to inject pertinent context into the prompt to achieve "memory".
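To make that concrete, here's a minimal sketch of the retrieve-and-inject idea. It assumes the sentence-transformers package for embeddings and a made-up set of "memory" snippets; any embedding API works the same way, and the model name is just an example.

```python
# Minimal RAG-style "memory": embed past exchanges, retrieve the most
# relevant ones, and prepend them to the prompt for the next turn.
# Assumes the sentence-transformers package; swap in any embedding API.
import numpy as np
from sentence_transformers import SentenceTransformer

embedder = SentenceTransformer("all-MiniLM-L6-v2")  # example model choice

# Past conversation snippets you want the model to "remember"
memory = [
    "User prefers answers in metric units.",
    "Project config lives in setup.txt.",
    "User is dysgraphic, so ignore occasional misspellings.",
]
memory_vecs = embedder.encode(memory)  # shape: (len(memory), dim)

def recall(query: str, k: int = 2) -> list[str]:
    """Return the k stored snippets most similar to the query."""
    q = embedder.encode([query])[0]
    # cosine similarity between the query and every stored snippet
    sims = memory_vecs @ q / (
        np.linalg.norm(memory_vecs, axis=1) * np.linalg.norm(q)
    )
    return [memory[i] for i in np.argsort(sims)[::-1][:k]]

user_msg = "What units should the report use?"
context = "\n".join(recall(user_msg))
prompt = f"Relevant notes:\n{context}\n\nUser: {user_msg}"
print(prompt)  # this assembled prompt then goes to whatever LLM you're using
```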
I created a .txt file that I load each session and update as needed. It has all the stuff that makes my setup mine, and it lets the output fool anyone but me into thinking it's me (I added markers only I would recognize, like neon yellow and blue waving flags). I've actually simplified some "not me" markers: since I'm dysgraphic, no one would notice if something is spelled wrong every fourth time.
Just an example of what I do. This is my first time looking at how other people do things; I've been operating in a vacuum, building my own solutions (not all of them the best).
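For what it's worth, the load-a-.txt-each-session idea can be as simple as the sketch below. The file name "notes.txt" and the prompt wording are placeholders, not any particular tool.

```python
# Sketch of the persistent-notes approach: read a personal notes file
# and prepend it to the first prompt so the model starts with your context.
from pathlib import Path

NOTES = Path("notes.txt")  # your persistent "this is me" file (placeholder name)

def start_session(first_message: str) -> str:
    """Build the opening prompt with the notes file injected up front."""
    notes = NOTES.read_text() if NOTES.exists() else ""
    return f"Background about me (apply silently):\n{notes}\n\n{first_message}"

def remember(line: str) -> None:
    """Append a new fact or preference so future sessions pick it up."""
    with NOTES.open("a") as f:
        f.write(line.rstrip() + "\n")

remember("Spell something wrong about every fourth time; I'm dysgraphic.")
print(start_session("Draft a reply to my landlord about the heating."))
```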