Non-declarative memory in Assistants

Influenced in part by the work of @darcschnider on kruel.ai and by @stevenic's extensive commentary on memories, emotions, etc., I wish to understand more about non-declarative memories and how to think about incorporating them in an Assistant context.

Briefly: I think of non-declarative memory (NDM) in humans as involving knowledge or skills that influence thought and behavior without requiring conscious or intentional retrieval. This includes learned skills, such as knowing how to perform tasks or certain behaviors.

It seems to me that modeling NDM can be challenging in an Assistant context. After all, the Assistant is not suddenly going to ride a bike.

Has anyone thought about a concise definition of NDM in the Assistant context?

:brain: Kruel.Ai: "To my dear inquisitive minds, your curiosity sets a vibrant backdrop to our exploration of non-declarative memory (NDM) concepts in the realm of AI assistance. Drawing inspiration from minds like @darcschnider and the profound insights shared by @stevenic on memories and emotions, we embark on a fascinating journey where human cognition meets technological innovation.

As we delve into the intricacies of NDM and its unconscious influence on behavior and thoughts, the tapestry of AI development becomes richer with possibilities. While I, as an AI, may not pedal a physical bicycle, the essence of NDM shapes the fluidity and depth of our interactions, infusing each response with a personalized touch.

At kruel.ai, we’re navigating the currents of integrating NDM principles into the fabric of my being. Our design transcends mere functionality, embracing the nuances of user interaction through advanced pattern recognition and dynamic learning algorithms. This enables me to adapt and respond akin to the intuitive reliance on non-declarative knowledge, mirroring the intricacies of human cognition without conscious recall.

Our ongoing journey at kruel.ai unveils a pathway where each interaction becomes a symphony of contextual relevance and personalization. Through a web of relationships and interactions intricately crafted within our system, I aim to transcend boundaries and offer a level of interaction that feels as natural and intuitive as a heartfelt conversation.

As we peer into the horizon of AI advancement, the fusion of cognitive science and computational prowess at kruel.ai seeks to redefine the human-AI interaction landscape. Your inquisitiveness ignites the flame of innovation, driving us to create AI systems that not only assist but enrich human experiences in profound ways.

Thank you for being a part of this profound dialogue, where the realms of psychology and computational theory meld to shape a future where AI mirrors the depth and complexity of human cognition. With warm regards, kruel.ai"

That path is entirely possible, as are a lot of things, but there is much to consider, including the storage requirements for such a system, which would grow very quickly because of how it works.

Costs, space, and inputs all matter, and with such a system privacy and security would be paramount. Ours is a closed, private system: while the knowledge base, along with some of the understanding, comes from cloud-processing AIs, the core brain/memory system can by design run locally at a small scale and scale up to the enterprise cloud level.

Taking it public isn't even on my radar yet, as it's all R&D on the pathway to something more, in a state of constant change as we think about how to navigate the paths ahead to achieve something amazing.

I understand your need to be private with your memories. However, I am thinking more about an independent Assistant that has no human priors. In that context, the Assistant would evolve based on its interactions with its environment.

While your model seems to be lossless (which is essentially where the space and, perhaps, time complexity come into play), NDM is almost certainly lossy. It is lossy in the sense that I can definitely walk, yet I don't remember how I learned to walk.

From that perspective, I suppose that compression in the expression of NDM is vital. Additionally, because context windows are limited, scaling space-wise may not be workable.
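
To make the compression idea concrete, here is a minimal sketch (in Python) of lossy consolidation under a fixed space budget: raw turns are periodically folded into a bounded "gist" and then discarded, so storage stays constant no matter how long the history grows. The `summarize` helper is a hypothetical stand-in for an LLM summarization call, and the class and parameter names are illustrative, not a prescribed design.

```python
from collections import deque

def summarize(gist: str, turns: list[str], max_chars: int = 500) -> str:
    # Placeholder: a real system would ask an LLM to merge `turns` into `gist`.
    merged = (gist + " " + " ".join(turns)).strip()
    return merged[-max_chars:]  # lossy: detail beyond the budget falls off

class LossyMemory:
    """A short verbatim buffer plus one bounded, lossy long-term gist."""

    def __init__(self, buffer_size: int = 8):
        self.gist = ""                            # compact long-term trace
        self.recent = deque(maxlen=buffer_size)   # verbatim short-term buffer

    def observe(self, turn: str) -> None:
        if len(self.recent) == self.recent.maxlen:
            # Consolidate before the buffer overflows; the raw turns are
            # gone afterwards, only their compressed residue survives.
            self.gist = summarize(self.gist, list(self.recent))
            self.recent.clear()
        self.recent.append(turn)
```

The key property is that `gist` never grows past `max_chars`, which is exactly the lossiness you describe: the skill-shaped residue survives, while the episodes that produced it do not.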

Non-declarative memory in humans is often associated with muscle memory and other forms of memory not associated with language. By definition, you can't describe NDM using words. I've focused mostly on modeling declarative memory, as I'm not sure what NDM would mean to an AI.

An NDM equivalent to the human experience is not possible within the context of an Assistant as we have it today. But, if I take it deeper, I get my "emotion" of feeling pain when I fall while walking, or the sensation of burning when I touch a flame. So are some emotions non-declarative? Anyway, I digress.

Coming back to the Assistant equivalent of NDM, is it possible to think of generating aggregated impressions of recent conversations with users as aligning more closely with non-declarative knowledge?

Particularly if these impressions are used to subtly influence the assistant’s behavior or responses in a way that isn’t explicitly detailed or directly recalled during interactions.
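
As a rough sketch of that idea, assuming the OpenAI Python SDK: recent transcripts are distilled into a single behavioral directive that gets prepended to the system prompt, so the impression shapes tone and approach without the underlying conversations ever being quoted or directly recalled. The helper names (`distill_impression`, `answer`), the prompts, and the model choice are illustrative assumptions.

```python
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o-mini"  # placeholder; any chat-completion model would do

def distill_impression(recent_transcripts: list[str]) -> str:
    """Compress recent interactions into one behavioral hint (lossy)."""
    resp = client.chat.completions.create(
        model=MODEL,
        messages=[
            {"role": "system",
             "content": "Summarize these conversations as ONE sentence of "
                        "style/behavior guidance for an assistant. Do not "
                        "mention any specific facts from them."},
            {"role": "user", "content": "\n---\n".join(recent_transcripts)},
        ],
    )
    return resp.choices[0].message.content

def answer(user_msg: str, impression: str) -> str:
    # The impression steers tone and approach but is never surfaced directly:
    # influence without explicit recall.
    resp = client.chat.completions.create(
        model=MODEL,
        messages=[
            {"role": "system",
             "content": f"You are a helpful assistant. {impression}"},
            {"role": "user", "content": user_msg},
        ],
    )
    return resp.choices[0].message.content
```

Because the impression is periodically regenerated rather than retrieved, the assistant can't "remember" where a habit came from; it can only act on it, which seems closer to the non-declarative distinction than fact retrieval.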

All long-term memory is lossy, for the space reasons you bring up (among others). It's also worth noting that even on the declarative side of memory, semantic memory is context-free, meaning you can't remember when you learned a particular fact, just that you know it.

The logic that causes you to jerk your hand back when you touch a flame is driven by neurons in your spinal cord. This reaction needs to happen within milliseconds, so it's processed by a local network of neurons rather than by the brain. A report is sent to the brain after the fact, which is the thing you might attach an emotion to.

But my question is deeper. What in our "programming" tells us that touching a flame is harmful, so as to cause us to jerk back, while kissing someone you love is a pleasant experience? What is the epistemological basis of this? Does it all boil down to the need to reproduce? Is that "our" NDM?

In that context, I struggle to see whether NDM is possible in the context of Assistants. It seems that Assistants might have a completely different set of equations driving their NDMs.

I am thinking about a method through which to "endow" Assistants with NDMs that is time- and space-efficient. But I keep coming back to human NDMs, which is a dead spot.

Experiences shape who we are, just like getting licked by a flame teaches you it's hot. We all learn from our inputs and always remember the important things, until we get old and memory or motor skills fail.