AGI suggestion to improve memory: "dynamic automatic incremental fine-tuning and selectively preloading memory" (similar to LoRA)

Below is GPT-4's cleaned-up version (which loses some information); the more detailed but harder-to-read raw text follows after it:

  1. Selective Memory and Personalization: Enhance the AI’s ability to selectively choose what information to store or discard, depending on the context and user engagement. Allow the model to learn incrementally from user feedback and form complex hypotheses about the user’s preferences and needs. (My comment: and based on whatever things GPT might consider, which might make no sense to humans right away, just like a chess master’s moves don’t make sense to a beginner, and one master’s thinking may be partly incomprehensible even to another master. Basically, you don’t know what things could correlate, so just allow it to select whatever tokens or patterns/clusters/guesses it wants, even guesses only slightly better than chance over large user data, and the model will get better.)
  2. On-demand Model Fine-tuning: Implement dynamic, on-the-fly fine-tuning of the model similar to what was mentioned in Microsoft’s presentation in 2023. The concept is to partially add fine-tuned data to the foundational model, serving like an on-demand memory preload system.
  3. Better Utilization of Local Resources: Allow users to feed their own data to the AI model (with proper consent and privacy measures in place) and utilize their own computer resources to process this data. This could be done even with a less powerful AI model for proprietary reasons.
  4. Long-term Memory and Delayed Recall: Incorporate aspects of human memory, such as the ability to remember and recall information after a delay. This would provide a more nuanced and realistic interaction experience, and could potentially improve the AI’s response quality over longer conversations.
  5. AI Personalization Through Interaction History: Have the model learn from a user’s interaction history on the platform to provide more personalized and relevant responses. This would enable the AI to better understand the user’s interests and knowledge level.
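To make point 1 concrete, here is a minimal sketch (all names, weights, and numbers are my own illustrative choices, not any real OpenAI mechanism): each stored snippet gets a relevance guess, which in the real system would be learned by the model itself, combined with a recency bias, and only the top-scoring snippets that fit a token budget are kept in context.

```python
import math

def score(snippet, turn_now, relevance, recency_weight=0.3):
    """Combine a learned relevance guess with a recency bias.
    `relevance` stands in for whatever the model itself would learn
    to predict (point 1); here it is just a number in [0, 1]."""
    age = turn_now - snippet["turn"]
    recency = math.exp(-0.1 * age)  # recent tokens get a heavier bias
    return (1 - recency_weight) * relevance + recency_weight * recency

def select_memory(snippets, turn_now, budget):
    """Keep the highest-scoring snippets that fit the token budget;
    everything else is discarded from the active context."""
    ranked = sorted(snippets,
                    key=lambda s: score(s, turn_now, s["relevance"]),
                    reverse=True)
    kept, used = [], 0
    for s in ranked:
        if used + s["tokens"] <= budget:
            kept.append(s)
            used += s["tokens"]
    return kept

mem = [
    {"turn": 1, "tokens": 50, "relevance": 0.9, "text": "user prefers Rust"},
    {"turn": 2, "tokens": 80, "relevance": 0.2, "text": "small talk"},
    {"turn": 9, "tokens": 40, "relevance": 0.5, "text": "current task details"},
]
print([s["text"] for s in select_memory(mem, turn_now=10, budget=100)])
# → ['user prefers Rust', 'current task details']
```

The point of the toy scoring function is only that the trade-off (learned relevance vs. recency vs. budget) is something the model could tune from feedback instead of it being hardcoded.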

Now the non-careful prompt I gave GPT-4, which contains more information but is less conventional and would bother some people in terms of grammar/structure; that is why I let GPT-4 summarize it. It was also a bit hyperbolic:

It would be 100000x better if it knew every conversation we had on this account, OR EVEN this one conversation, OR EVEN if it could selectively choose which tokens to discard to some degree while still giving the most recent tokens a somewhat heavier bias (let it decide itself, based on incrementally getting to know the user). I also want to upload all my data so it can process and dwell on it using my own PC resources (I don’t care if it uses a worse GPT-3 model for proprietary reasons), building some neural network that it could then tap into to get an intuition of me or form hypotheses about me (complex ones that you should not expect to understand; I’m always worried, OpenAI, that you put in too much hardcoded stuff, or at least no full way around the hardcoded stuff, should the model learn incrementally from feedback that other, non-hardcoded strategies are better). I can tell you how to fix the memory problem. Contact me: m[redacted because of forum guidelines, just contact this account][my email]. I can 100% make breakthroughs; I just don’t know what you need, but I can start saying something without you telling me anything. For example: let prediction on different layers (some responsible for different temporal and other dimensions) decide what memory to preload. It is like dynamic fine-tuning on the fly, or adding parts of the model, since loading the whole thing might be too big; Microsoft made something that can be partially used here. Scott from Microsoft had this to say: “to host many hundreds of thousands of the same models on top of the CPU, on top of the foundational model, and swap them in and out on the fly” (the goal would be automatic and smooth, flowing into each other; maybe have the neural network split into modular structures that the network itself can choose and put together, with those connections optimized afterwards in the background, like human brains do when idle or asleep; chosen by the neural network instead of hardcoded only, without freedom).
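The swap-them-in-and-out idea can be sketched roughly like this (a toy model, assuming LoRA-style low-rank deltas; `FoundationLayer`, `attach`/`detach`, and all sizes are my own hypothetical choices, not Microsoft's actual system): one frozen base matrix is shared by everyone, and tiny per-user low-rank corrections are attached and detached on the fly.

```python
import numpy as np

rng = np.random.default_rng(0)

class FoundationLayer:
    """One frozen weight matrix of a shared base model."""
    def __init__(self, d=8):
        self.W = rng.normal(size=(d, d))
        self.adapter = None            # (A, B) low-rank pair, swappable

    def attach(self, A, B):
        self.adapter = (A, B)          # preload this user's "memory"

    def detach(self):
        self.adapter = None            # swap out for the next user

    def forward(self, x):
        y = x @ self.W
        if self.adapter is not None:
            A, B = self.adapter        # A: (d, r), B: (r, d)
            y = y + x @ A @ B          # LoRA-style low-rank correction
        return y

def make_adapter(d=8, r=2):
    """A tiny per-user fine-tune: only 2*d*r numbers instead of d*d."""
    return rng.normal(size=(d, r)) * 0.1, rng.normal(size=(r, d)) * 0.1

layer = FoundationLayer()
x = rng.normal(size=(8,))
base = layer.forward(x)

A, B = make_adapter()
layer.attach(A, B)
personalized = layer.forward(x)        # base output plus user-specific delta
layer.detach()

print(np.allclose(base, layer.forward(x)))  # prints True: base model untouched
```

Because the adapter is a small separate object, hosting huge numbers of them next to one foundation model and streaming them in per request is cheap compared to keeping many full models loaded, which is the point of the quoted Microsoft demo.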
Here is a link to Microsoft’s visualisation of LoRA, as a starting point for dynamic, GPT-controlled behavior, almost like “tool usage, but for models and how they spend their resources”: Scott and Mark Learn to Code | WAKEY06 - YouTube. The PowerPoint slide shows how they can partially add fine-tuned data to the foundation model. That is what you should do, but it is not the only thing. There are many approaches, and although a universal mechanism is neat and nice, it might have to start as many things (a program is also many things stacked on top of each other; simpler things together emerge into more complex things). Video call me with Sam Altman; I’ll be f***ed maybe, because my mental health and brain are barely working, but I’ll tell you what to do. Think of the human brain: some stuff you remember only the next day or month. That is because all the info is there, but it is compressed, and finding it, decompressing it, and loading it into the right places can take very long when you are unlucky and saved it in the deepest depths. This should show you that an “instant-response GPT” is not the end-all be-all!

Also, make it so you break up all the data into chunks that are a rough guess, and then sort them in many different ways; the sorting containers can include the same token(s) or a large chunk twice, while serving different purposes. So basically the learning (reinforcement learning and optimisation in terms of “what am I most likely going to use”) will predict usage better over time and therefore reduce the load time and compute cost of the LLM. The other stuff will still be available, but the more “off”/misjudged or rare/unusual the required info is, the longer it will take the LLM to load it; and with some system that tells it how deep the info likely sits, it can decide whether retrieval is worth it. That should dramatically cut costs, increase speed (which in turn would allow more performance at the same speed), and also be a dynamic mechanism that won’t need to be hardcoded and will improve over time. You can hardcode some behaviors, but then let the algorithm learn from metrics which ones to use, and collect statistics yourself.
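A rough sketch of the chunk/tier idea above (`ChunkStore`, the `2 ** depth` cost estimate, and the promotion threshold are all invented for illustration): chunks live in tiers of increasing “depth”, the same chunk can sit in several indexes at once, frequently used chunks get promoted to shallower and cheaper tiers, and a retrieval is skipped when its estimated cost exceeds the budget.

```python
class ChunkStore:
    """Hypothetical tiered memory: every chunk may appear in several
    indexes (topic, time, ...), and frequently used chunks are promoted
    to shallower, cheaper-to-load tiers."""

    def __init__(self, n_tiers=3):
        self.tiers = [dict() for _ in range(n_tiers)]  # tier 0 = fastest
        self.indexes = {}   # index name -> {key: set of chunk ids}
        self.hits = {}      # chunk id -> access count (usage statistics)
        self.chunks = {}    # chunk id -> text

    def add(self, cid, text, keys):
        """`keys` maps index name -> key; the same chunk may sit in
        several indexes, serving different purposes."""
        self.chunks[cid] = text
        self.hits[cid] = 0
        self.tiers[-1][cid] = True          # new data starts deepest
        for index, key in keys.items():
            self.indexes.setdefault(index, {}).setdefault(key, set()).add(cid)

    def depth(self, cid):
        for d, tier in enumerate(self.tiers):
            if cid in tier:
                return d
        raise KeyError(cid)

    def load(self, cid, cost_budget):
        """Deeper tiers cost more to 'decompress'; skip if not worth it."""
        d = self.depth(cid)
        cost = 2 ** d                       # toy retrieval-cost estimate
        if cost > cost_budget:
            return None                     # model decides it isn't worth it
        self.hits[cid] += 1
        if d > 0 and self.hits[cid] >= 3:   # promote hot chunks, as if "idle"
            del self.tiers[d][cid]
            self.tiers[d - 1][cid] = True
        return self.chunks[cid]

store = ChunkStore()
store.add("c1", "user works on compilers", {"topic": "work", "time": "2023"})
for _ in range(3):
    store.load("c1", cost_budget=10)        # hot → promoted toward tier 0
print(store.depth("c1"))                    # prints 1
```

The cost-vs-budget check is the “is it worth it” decision from the text, and the hit-count promotion is the statistics-driven behavior that replaces a hardcoded layout.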