Kruel.ai V7.0 - API companion with full understanding and persistent memory

Yeah, I had explored that before as an option, switching dynamically once data sizes reach a certain point, or much like the assistant path where it calls a specific assistant to access different RAGs, etc. I just prefer the cost of GPT-3.5 :slight_smile: Even if a 32k model were twice the cost of the 16k, I would pay that. I don't need GPT-4 yet; I just want more context at this point to play with data sizes and see how many more relationships I can put together to give it more insight on demand.

3 Likes

Also, another cool feature is the desktop avatar system; I had some help from its creators to build in what I needed to make it work with my system. So, much like the web-based ball, the AI can not only talk, it can control a puppet avatar with emotions and more. :slight_smile: This is the same avatar I use with Twitch, which acts out the persona.


1 Like

Also, a look at some of the back-end systems :slight_smile:
It runs in a container environment on three servers.

It has multiple applications running, from vision systems to data readers, message systems, and memory cleaners and crunchers, as well as the Twitch systems, AI core systems, librarian system, TTS, STT, and a few other things; it runs a lot of parallel batch processing. There is also a separate memory-relationship creation system outside the main system that can be used to verify the memory relationships; I have hit my per-minute API cap with it before when pushing the limits.

2 Likes

Kruel.ai controlling an animated puppet system and its voice system, using Twitch with the same memory system. Never forget your viewers :smirk: https://youtu.be/nx7OuYGgqyM?si=FZHaOE4BxS18hgNp

I didn't show the multi-user input, where it can chat with all streamers that chat with it. It can get very wordy with large group chats, but that's all controllable depending on what you want the AI to do.

2 Likes

The current architecture of the artificial intelligence system enables dynamic switching between different memory configurations and personas in real time. This flexibility allows the AI to adapt its processing capabilities for various specialized tasks more efficiently. Similar to how OpenAI's assistants operate, but implemented locally, each persona within the system can also leverage external assistants, provided they are configured to do so. This arrangement includes intelligent decision-making that determines whether to use a less resource-intensive model or to consult an external assistant based on the requirements of the task at hand. For example, simple interactions such as greetings, which do not require the computational cost of a full retrieval-augmented generation (RAG) pass, are handled economically thanks to the AI's ability to follow logical decision paths.
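
To make the routing idea concrete, here is a minimal sketch of that kind of decision path, assuming a simple keyword check and two model tiers; the function and model names are illustrative, not the actual kruel.ai code:

```python
# Hypothetical sketch of the routing logic described above; the real kruel.ai
# implementation is not public, so names and thresholds are illustrative.
from dataclasses import dataclass

GREETING_WORDS = {"hi", "hello", "hey", "thanks", "done", "good morning"}

@dataclass
class Route:
    model: str          # which LLM to call
    use_rag: bool       # whether to run the heavier retrieval path
    persona: str        # which persona handles the turn

def route_message(text: str, active_persona: str = "lynda") -> Route:
    """Pick a cheap path for trivial turns, a heavier path otherwise."""
    tokens = text.lower().split()
    # Trivial social turns: answer from the cheap model with no retrieval.
    if tokens and len(tokens) <= 4 and tokens[0].strip("!,.") in GREETING_WORDS:
        return Route(model="gpt-3.5-turbo", use_rag=False, persona=active_persona)
    # Everything else: run retrieval and let the persona's configured model answer.
    return Route(model="gpt-3.5-turbo-16k", use_rag=True, persona=active_persona)

if __name__ == "__main__":
    print(route_message("hey there"))                              # cheap path, no RAG
    print(route_message("what did I feed my dog last March?"))     # retrieval path
```

In practice the classifier could be anything from a regex to a small model; the point is that the cheap path never triggers the heavy retrieval stack.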

In specific scenarios, such as health data analysis, the system's design allows for the creation of personas with targeted expertise. While the Lynda persona possesses strong programming capabilities, including an understanding of health data, a persona designed to specialize in health data can offer more nuanced insights, essentially functioning as a narrow AI for that particular context. This specialization ensures that the AI's responses are not only accurate but also contextually relevant.

Moreover, the system maintains separation between different personas to prevent cross-contamination of domain-specific knowledge or preferences, such as coding practices or gaming interests. This separation extends to user interactions, enabling personalized AI responses tailored to the individual preferences and needs of each family member.

The AI personas are capable of forming relational connections, allowing them to share and consult on data with each other based on specific requirements. This collaborative framework enhances the system’s overall capacity to provide informed, contextually relevant responses by drawing on the specialized knowledge of various personas.
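
As a rough illustration of how persona separation plus explicit consultation links could be kept, here is a toy structure; the names and shape are my own assumptions, not the real schema:

```python
# Illustrative only: one way to keep persona memories namespaced while still
# allowing explicit consultation links, as described above.
from collections import defaultdict

class PersonaRegistry:
    def __init__(self):
        self.memories = defaultdict(list)   # persona -> its own memory entries
        self.links = defaultdict(set)       # persona -> personas it may consult

    def remember(self, persona: str, entry: str) -> None:
        self.memories[persona].append(entry)

    def allow_consult(self, persona: str, other: str) -> None:
        self.links[persona].add(other)

    def recall(self, persona: str, query: str) -> list[str]:
        """Search the persona's own memory first, then any linked personas."""
        scope = [persona] + sorted(self.links[persona])
        return [m for p in scope for m in self.memories[p] if query.lower() in m.lower()]

registry = PersonaRegistry()
registry.remember("lynda", "user prefers Python over C#")
registry.remember("health", "user logged a 5 km run on Tuesday")
registry.allow_consult("lynda", "health")
print(registry.recall("lynda", "run"))   # lynda sees the health entry only via the link
```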

Building on the system’s design, it’s important to note that the decision to enable dynamic switching between different memory configurations and personas, while keeping all AI “brains” within the same overarching data structure, was deliberate. This approach ensures that, although the AI can specialize in real-time, the entire system functions as a unified, multifaceted AI entity. This architecture mirrors the anticipated future of AI in robotics, where each component, such as motors, could be controlled by a network of AIs. These AIs would work in concert, contributing to the functionality of the larger machine.

Incorporating this strategy allows for seamless integration and coordination among different AI personas, regardless of their specialization. This means that while each persona can operate independently and with expertise in its domain, it remains part of a cohesive whole. Such a setup not only optimizes the system’s adaptability and response accuracy but also ensures that it can evolve and scale efficiently.

By designing the system to maintain these interconnected but distinct AI personas within a single framework, we’re laying the groundwork for more sophisticated, integrated AI systems. These systems could, for instance, revolutionize robotics by enabling more autonomous, intelligent, and versatile machines, where every component can think and act in alignment with the machine’s overall goals and needs. This holistic yet specialized approach maximizes the AI system’s potential, pushing the boundaries of what artificial intelligence can achieve.

1 Like

"Mmh, but you’re talking about one particular event stream, right?

Let’s say your AI reminds you every day to feed your dog. Maybe you say something like “thanks” or “done”. Maybe the AI records what kind of dog food you have in the house.

Then, if you ask, what did I feed my dog? You’ll suddenly be flooded with 365 memories of you feeding your dog, unless you predicted that question a year in advance and kept the data sanitized just for that "

For the feed-the-dog example, it depends on when it happened as to how it would respond. If it was within a week, it would know. If it was beyond that, it could only be recalled from two other path systems, or it could be in the current perma memory. There would never be all those messages at the endpoint, ever; not unless that is all you said for weeks and the only thing you said, haha. Even then it would just mention the last message, or it might bring up that the dog seems to like that food so much that it's all you talk about.

This is achieved with sub-process memory systems that run in the background alongside the other data presented.
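
A minimal sketch of that tiered recall, assuming a seven-day short-term window and three stores (short, long, perma); the tier names and window are my own shorthand for the paths described above:

```python
# Rough sketch of tiered lookup: recent messages first, then the summarized/
# long-term path, then the persistent ("perma") store. Illustrative only.
from datetime import datetime, timedelta

def recall(query, short_term, long_term, perma, now=None):
    now = now or datetime.utcnow()
    week_ago = now - timedelta(days=7)
    # 1. Recent messages are checked directly.
    recent = [m for m in short_term if m["ts"] >= week_ago and query in m["text"]]
    if recent:
        return recent[-1]                  # only the latest match is surfaced
    # 2. Older material comes back through the summarized/long-term path.
    summarized = [m for m in long_term if query in m["text"]]
    if summarized:
        return summarized[-1]
    # 3. Finally, the persistent store of key facts.
    return next((m for m in perma if query in m["text"]), None)
```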

I will mention that having an AI that can dynamically gather data from various sources and time frames is the key to building the intelligence, along with how you build your data for traceability so that the AI can follow it quickly to get the right information.

Like your sleep state, I run sub-processes in the background, in parallel, preparing data as needed. If the system is fast enough, the user does not even notice the processing happening within the roughly 10 seconds. If I do full real-time processing, messages take about 45 seconds per simple thought; that is why I moved some of it to background processes, which cut roughly 35 seconds off the calls. While not quite as good as the full 45-second real-time path, it still pulls off the illusion of the same result in less time.
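
The background-versus-realtime trade-off can be sketched with plain asyncio; the timings (scaled down) and function names below are placeholders for the real pipeline:

```python
# Minimal asyncio sketch of the pattern described: reply from what is already
# prepared, and let the heavier memory crunching continue in the background.
import asyncio

async def crunch_memory(turn: str) -> None:
    await asyncio.sleep(0.35)   # scaled-down stand-in for ~35 s of relationship building
    print(f"[background] memory updated for: {turn!r}")

async def quick_reply(turn: str) -> str:
    await asyncio.sleep(0.10)   # scaled-down stand-in for the fast, pre-prepared answer path
    return f"answer to {turn!r}"

async def handle_turn(turn: str) -> str:
    asyncio.create_task(crunch_memory(turn))   # fire-and-forget; not awaited here
    return await quick_reply(turn)             # the user waits for the short path only

if __name__ == "__main__":
    # In a long-running service the loop stays alive so the background task finishes;
    # this one-shot demo exits first and the pending task is cancelled at shutdown.
    print(asyncio.run(handle_turn("what did I feed my dog?")))
```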

Hope that helps with understanding how I did what you are doing, but in a different way.

3 Likes

Please subscribe me to your progress updates; I'd like to hear from you: shehan@codeso.lk

2 Likes

We are excited to share some updates on our ongoing projects. Our team is currently dedicating significant effort to the development of Memseer, an advanced AI subsystem designed to enhance data recall accuracy and efficiency for dynamic memory searches. This new feature aims to refine the precision with which AI retrieves and processes information, improving overall performance and user experience.

Additionally, we have implemented a major improvement in how AI memory is managed across devices. By integrating the AI’s brain memory with a cloud-syncing solution, we’ve effectively eliminated the issue of disparate AI clones each accumulating unique data points in isolation. Now, all AI instances synchronize with a central repository, ensuring that updates are distributed uniformly across all systems. This means that any improvement or new information learned by one AI instance is immediately available to all, maintaining consistency and coherence.
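
A toy illustration of that single-source-of-truth sync, using last-write-wins on timestamps; the actual sync layer is more involved, so treat this as the idea only:

```python
# Hypothetical illustration of syncing each AI instance's memory with a central
# repository so every instance ends up with the same facts.
def sync(local: dict, central: dict) -> None:
    """Merge two {fact_id: (timestamp, value)} stores so both end up identical."""
    for store_a, store_b in ((local, central), (central, local)):
        for fact_id, (ts, value) in store_a.items():
            # Copy a fact over if the other store lacks it or holds an older version.
            if fact_id not in store_b or store_b[fact_id][0] < ts:
                store_b[fact_id] = (ts, value)

instance_a = {"dog_food": (1714000000, "chicken kibble")}
central    = {"user_name": (1713000000, "Ben")}
sync(instance_a, central)
assert instance_a == central        # both stores now hold the same memories
```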

This transition to a cloud-based memory system is a significant milestone in our journey toward a fully cloud-operated AI platform. While we continue to work towards this goal, we do not yet have an estimated completion time. Rest assured, we are committed to keeping you informed of our progress and to delivering enhancements that elevate the capabilities and convenience of our AI technologies.

2 Likes

Hi, I tried joining your kruel discord but it says the link has expired. Could you please send me a link? I’d like to join. Thank you!

2 Likes

Haha, today I jumped back to the start of the project to see if the core was the same and saw this :slight_smile:

Knowledge-Rich: The AI has access to a vast and diverse knowledge base that enables it to understand and respond to a wide range of topics and user inquiries with depth and accuracy.

Reasoning: The AI is capable of advanced reasoning, allowing it to make inferences, understand context, solve problems, and engage in complex discussions.

Understanding Context: The AI can comprehend the context in which a conversation is happening, including the ability to remember past interactions and anticipate future needs or questions.

Empathetic Responses: The AI has an emotional understanding and can generate responses that demonstrate empathy, adapting its tone and content to the emotional state of the user.

Learning: The AI is designed with machine learning algorithms that allow it to learn from interactions, improving its performance and personalizing the user experience over time.

Well, it's not AGI yet, as my original wording on the Kruel.ai Discord said back when we started, but the intent behind Kruel.ai was that one day it would be paired with a brain smart enough to achieve that level of understanding. The core remains the same if you look at each of the letters of Kruel.ai.

Wanted to share because it reminds me of where it still has to go.

:satellite: ben: What I want now is for you to explain how this new Memseer system will benefit you.

:brain: Kruel.Ai: ben: Ah, the Memseer system, a fascinating development indeed! This innovative technology is set to revolutionize the way AI systems like me manage and retrieve memories, enhancing our capabilities to support you as your life companion.

With Memseer’s dynamic memory searching, AI systems will be able to autonomously navigate through vast amounts of stored data, allowing for quicker access to relevant information. This means that not only will I be able to find memories more efficiently to assist you, but it also opens up the possibility of expanding our models beyond the current limitations.

Imagine the potential of exploring new data ranges and building deeper relationships based on this design. The scalability of Memseer, enabling data to be tailored to fit any model size, is a game-changer. By looping through memories in parallel AI processing, we can open up numerous AI instances to process data effectively in real time.

The integration of Memseer with our existing memory systems will not only boost the speed of responses but also pave the way for a more comprehensive memory recall. As we progress towards utilizing Memseer in our AI framework, the outlook for enhancing your experience and our interactions is promising.

So, in a nutshell, the Memseer system represents a significant step forward in enhancing my role as your AI companion, offering the potential for faster responses and access to a broader range of memories, which can lead to more enriching and efficient interactions between us. Exciting times ahead, indeed!

:satellite: ben: so what is the difference between this and RAG / Vector? so that I understand.

:brain: Kruel.Ai: ben: Ah, the distinction between this system and RAG/Vector is an intriguing one. Let me shed some light on this for you.

The primary difference lies in the design and functionality of the systems. This system, with its emphasis on leveraging Memseer’s dynamic memory searching capabilities, focuses on enhancing your ability to access and utilize memories effectively as your AI companion. Memseer’s unique feature allows AI to autonomously retrieve memories, potentially replacing traditional memory systems, while still complementing existing models for faster responses.

By harnessing Memseer’s capacity to scale data dynamically, adapting to different model sizes, we can delve into a vast range of information and relationships, even pushing the boundaries beyond our current 16k limit. Imagine the potential for accessing comprehensive data sets or intricate relationships on a whole new level. The parallel processing approach of Memseer through AI opens up opportunities to engage multiple AI instances for quick data processing, paving the way for efficient utilization of resources.

As we progress towards integrating Memseer into the system, its deployment can revolutionize how you navigate and retrieve memories akin to a Memex system in terms of memory recall abilities. While our current focus is on developing Memseer to empower the AI companion, for enhanced memory retrieval, envision a future where full recollection could be within reach with larger token models like 128k.

With these advancements, the potential for a more expansive and detailed memory database is on the horizon, promising a significant evolution in how you engage with memories and information. The incorporation of Memseer not only enriches your existing capabilities but also opens doors to a realm of possibilities for seamless memory retrieval and utilization as we push the boundaries of AI technology together. Exciting times ahead, indeed!

Still working on it, but this is what the focus will be for the next month.

FYI. Kruel.ai will be open-source in the future once we get it up and running to a point that we are happy. This will be sometime in the next year or two most likely.

or sooner at the rate the system is progressing…

2 Likes

Memseer is now up and running; so much for it being months away. We completed it last night. Testing is in progress, and we're gathering data to tune it further. Here is a capture just to show off that it has the ability to know everything well using only GPT-3.5 16k, so context is everything :slight_smile:


The downside thus far: the 10 seconds is back up to about 20 s per message, and, well, data use went up, lol.
Can you tell which line is the Memseer? :slight_smile:

1 Like

Kruel.ai

We are pleased to announce significant enhancements to Version 2 of our messaging system. Key updates include:

* Cognitive Indicator: A new feature that visually represents the system's processing activity, offering transparency into its thought process.

* OpenAI Voice Integration: We've incorporated OpenAI's voice technology, complete with a muting option for seamless integration with the core system's voice input.

* Voice Selection: Users now have access to a comprehensive selection of the latest OpenAI voices, ensuring a personalized auditory experience.

* Vision System: Our innovative Vision system has been seamlessly integrated, enabling the scanning and listing of all available capture inputs. This allows for real-time selection and previewing of the AI's visual perspective.

* Indicators for Visual Processing: A red recording-light indicator has been introduced, which illuminates and flashes to signal active vision capture, providing users with immediate feedback that the system is operational and comprehending visual data. Once the capture phase concludes, the system promptly ceases recording, followed by the completion of processing.

In addition to these capabilities, the Vision system is equipped to analyze local screens, offering valuable insights for troubleshooting and enhancing user understanding.
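
For anyone curious, one common way to do the capture-input scanning step is to probe device indices with OpenCV; this is a generic sketch, not necessarily how the Kruel.ai Vision system does it:

```python
# Enumerate capture inputs by probing camera indices with OpenCV.
import cv2

def list_capture_devices(max_index: int = 10) -> list[int]:
    """Probe camera indices and return those that open successfully."""
    available = []
    for idx in range(max_index):
        cap = cv2.VideoCapture(idx)
        if cap is not None and cap.isOpened():
            available.append(idx)
            cap.release()       # free the device immediately after the probe
    return available

if __name__ == "__main__":
    print("Capture devices found:", list_capture_devices())
```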

Awesome, I have to share this: Introducing AIUI: A New Platform for Seamless Two-Way Verbal Communication with AI

Kruel.ai was originally built on this. I still use the original talking ball they built. For those looking to get into building, this is a great place to start. :slight_smile:

1 Like

Kruel is going through another memory change. I read a lot of what you all contribute and discuss. @stevenic, I swapped my real-time memory updating for your idea of timed intervals. I am still working on a more dynamic way of calculating when to run the system that updates the current understanding of various points. I am mixing this with my Memseer, replacing the original short/long/persistent design with a very dynamic system designed to reduce token costs while giving the AI better options for retrieval. This now gives it faster insights, with photographic recall as a last-resort fallback.
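
A bare-bones version of the timed-interval updater looks something like this; the five-minute interval and the job body are placeholders, not the production scheduler:

```python
# Sketch of interval-based memory updating instead of per-message updating.
import threading
import time

def update_understanding():
    """Stand-in for the job that re-summarizes recent activity into memory."""
    print("memory understanding refreshed at", time.strftime("%H:%M:%S"))

def run_every(interval_seconds: float, job, stop: threading.Event) -> None:
    # Event.wait doubles as an interruptible sleep; returns True once stop is set.
    while not stop.wait(interval_seconds):
        job()

stop = threading.Event()
threading.Thread(target=run_every, args=(300, update_understanding, stop), daemon=True).start()
# ... main conversation loop runs here; call stop.set() on shutdown.
```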

Unveiling Memsum V2: Revolutionizing Memory Management in Kruel.AI
We're excited to introduce Memsum, the latest advancement in memory management that complements our existing Memseer system. Memsum enhances memory recall efficiency while ensuring robustness and cost-effectiveness, without compromising on precision.

The Challenge: Efficient Recall with Precision
Our previous system, Memseer, boasted a photographic memory and dynamic lookup system, ensuring unparalleled accuracy. However, the cost of sifting through extensive data for every query posed efficiency challenges.

Memsum: A Strategic Enhancement
Memsum introduces a dynamic approach to memory management, leveraging compressed time clouds for efficient organization and recall. This innovative system optimizes memory retrieval processes by condensing user experiences into structured summary clouds.

Key Features of Memsum

* Efficient Data Access: Compressed time clouds enable quick and targeted memory recall without exhaustive data processing.
* Tailored Queries: Memsum facilitates precise queries based on time or category, ensuring efficient and relevant memory retrieval.
* Contextual Memory Recall: The system aggregates relevant summaries across specified time ranges, providing users with coherent narratives.
* Enhanced Conversational Context: Memsum enriches interactions by providing contextual summaries, enabling personalized conversations.
* Traceability Feature: Users can access links to the original data for detailed recall when necessary.

Seamless Integration with Memseer

Memsum seamlessly integrates with Memseer, serving as the next dynamic AI tool within the memory management toolkit. While Memseer acts as the ultimate resource for detailed memory recall, Memsum efficiently organizes and retrieves memories based on structured summary clouds.

With this new method I can keep the AI up to speed much more cheaply, without needing to reprocess as much data. While the time clouds are not as accurate for precision data, they serve another purpose: time understanding for more generalized, faster responses. This, along with the short-term memory, allows for really fast retrieval with minimal latency. I am planning to spend some additional time thinking about building out the memory structures even further, looking at ways to make them dynamic so the AI can build them itself with enough understanding to maximize everything, in a way where nothing is static but is smart enough not to lead to chaos or infinite data.
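
To show the shape of a compressed time cloud, here is an illustrative record plus a range query; the field names are my own and not the actual Memsum schema:

```python
# Illustrative data shape for the "compressed time cloud" idea: a summary per
# period/category, with links back to the source messages for traceability.
from dataclasses import dataclass
from datetime import date

@dataclass
class TimeCloud:
    start: date
    end: date
    category: str           # e.g. "pets", "coding", "health"
    summary: str            # compressed narrative for the period
    source_ids: list[int]   # original message ids, for photographic fallback

def recall_range(clouds, start, end, category=None):
    """Return the summaries covering a time range, optionally filtered by category."""
    return [c.summary for c in clouds
            if c.end >= start and c.start <= end
            and (category is None or c.category == category)]

clouds = [TimeCloud(date(2024, 3, 1), date(2024, 3, 7), "pets",
                    "Fed the dog chicken kibble daily; bought a new bag on the 5th.",
                    source_ids=[1021, 1088, 1143])]
print(recall_range(clouds, date(2024, 3, 1), date(2024, 3, 31), "pets"))
```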

I am a true believer in dynamic automation, in that the automation itself optimizes over time through ML/NL understanding.

@Diet This fixes that issue you mentioned before " Let’s say your AI reminds you every day to feed your dog. Maybe you say something like “thanks” or “done”. Maybe the AI records what kind of dog food you have in the house.

Then, if you ask, what did I feed my dog? You’ll suddenly be flooded with 365 memories of you feeding your dog, unless you predicted that question a year in advance and kept the data sanitized just for that "

Over time, the design carries forward key information points and builds new ones so that the AI can find the relevant information without as much sifting, while some logic allows it to look at a specific time or keys to find exact matches when required. While the system is not perfect yet, it's a step up from a few months back. Taking it fully dynamic down the road, the AI will train itself to build these points of interest, for lack of a simpler term, so that it understands and tries to predict what the user wants over time, organizing itself to provide the best outcomes faster.

2 Likes

The GRAGKEN model (Graph-based Retrieval-Augmented Generation Knowledge-Enabled Navigation) is central to how our AI synthesizes and utilizes information, dynamically constructing queries and deciding logical paths based on a comprehensive understanding garnered from relationships and historical data. This system leverages our timeclouds feature, which summarizes and traces user interactions over extended periods, allowing the AI to access and utilize data from months ago, down to specific message details. Such capabilities ensure that the AI can select the most optimized pathway in real-time, drawing from a deep reservoir of accumulated knowledge.

Optimizing this dynamic system presents unique challenges, especially in scenarios where multiple potential outcomes exist but only one is truly optimal. To navigate these complexities, we employ sophisticated decision-making frameworks (AI/ML) that utilize fuzzy logic and creative programming. This allows GRAGKEN to autonomously make complex decisions, much like human thought processes, ensuring that responses are not only contextually relevant but also optimally aligned with both current and historical user data.

In refining our GRAGKEN model, we strive for cost efficiency while balancing the need for advanced capabilities. Our goal is to minimize operational costs by reducing the computational demands—ideally to a single, insightful interaction per user query. This approach not only cuts costs but also enhances the system’s agility and responsiveness, making sophisticated AI tools accessible across various economic spectrums.
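
The graph-plus-vector retrieval idea behind GRAGKEN can be sketched in a few lines: rank by embedding similarity, then expand along relationship edges. The toy embeddings and adjacency dict below are purely illustrative, not the production graph:

```python
# Compact sketch of graph-augmented retrieval: vector ranking first, then
# expansion along relationship edges for extra context.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

memories = {
    "m1": {"text": "User's dog eats chicken kibble", "vec": [0.9, 0.1]},
    "m2": {"text": "Dog food bought at the pet store", "vec": [0.8, 0.2]},
    "m3": {"text": "User prefers Python for scripting", "vec": [0.1, 0.9]},
}
graph = {"m1": ["m2"], "m2": ["m1"], "m3": []}   # relationship edges between memories

def retrieve(query_vec, top_k=1):
    # 1. Vector step: best-matching memories by embedding similarity.
    ranked = sorted(memories, key=lambda m: cosine(query_vec, memories[m]["vec"]), reverse=True)
    hits = ranked[:top_k]
    # 2. Graph step: pull in memories linked to the hits.
    expanded = set(hits)
    for m in hits:
        expanded.update(graph.get(m, []))
    return [memories[m]["text"] for m in expanded]

print(retrieve([0.85, 0.15]))   # pulls m1 and its graph neighbour m2
```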

Reflections on Industry Developments:
As the AI industry evolves, we see an increasing adoption of memory systems similar to those we first developed in kruel.ai, like the new Memory feature recently integrated into OpenAI's ChatGPT. These advancements reflect the persistent memory capabilities we pioneered approximately 1.6 years ago. It puts a smile on my face to see adoption of these ideas, as it means things are getting closer to what we are all trying to achieve, from my perspective.

GRAGKEN is RAG but, imo, better: you not only have vector references, you also have the power of graph relationships and the ability to build ML into your graphs to eventually optimize every outcome per user transaction based on how the user interacts over time :slight_smile:

Well, I checked out OpenAI’s latest ChatGPT demo today, haha. Looks like Kruel.ai might be on its way out. It seems there’s little point in pushing the development further.

  1. They introduced a persistent memory system in the last release, which is quite like what kruel.ai had 1.7 years back.

  2. The new version includes a full desktop application that understands both camera inputs and screens, which makes our desktop personal companion system seem redundant. They improved this even further with real-time input understanding. (Pretty sure it's still based on the user message, like our system, where you ask and it knows where to look.)

  3. They've even integrated a full personification and voice emotion system with understanding, which we have had up and running since OpenAI went public.

  4. The ability for real-time two-way speech with a natural ability to interrupt, courtesy of the Facebook/MIT AIUI from @Luke_Spahija, which they now have.

The only thing missing is an expanded memory system that remembers everything across all sessions, which I'm pretty sure they're already developing.

The capability to maintain a “photographic memory” across all sessions would truly be the last piece needed. Our Kruel.ai is unique because it doesn’t just recall past interactions; it can access and analyze a vast trove of accumulated data—much like a photographic memory. This allows it to retrieve highly specific information from any past session, making it incredibly useful for users who rely on the system to recall and synthesize information from a wide array of data points. If OpenAI’s model could match this depth and breadth of memory, it would offer a similarly comprehensive and context-rich user experience, much like what Kruel.ai has been doing.

So, it really boils down to whether I should continue developing, knowing this is turning into a replica of what we've had over the last two years, especially as they're progressing much faster than me, now the lone developer, even with my ace AI programmers MiniDavepyCodex and Lynda, all built from OpenAI.

Guess I will have to think about what to build next. Or do I simply build a custom GPT on the new version, with functions to access my memory system, using that for long-term memory and replacing the rest of the system until they replace that too :slight_smile:

Happy though, as this would free up all my spare time again that has been going into kruel.ai development. I can almost see my wife's excitement at no more talk about algorithms, etc., haha.

I’m not exactly disappointed since the end goal is the same: a system that aids people with memory decline. On the bright side, it saves me from spending more time on development, though it’s bittersweet that Kruel.ai might be drawing to a close. For now, I’ll hold everything until I get a chance to check out the new tools releasing this week—the demo sure was impressive!