Kruel.ai V7.0 - API companion with full understanding and persistent memory

update :slight_smile:

Subject: Update on Docker/CUDA Server Issues. All systems restored for K7 and K6. No loss.

Procedures have also been updated for a more robust backup strategy, including Docker backups before updates going forward.

Zero loss of data and only down a few hours.

Advancing Our AI Memory System: Exploring the New V7 Architecture

Over the last few days, we’ve been working intensively with V7 of our memory architecture, and it’s quickly become my favorite. Initially, I didn’t think anything could top the V6 setup for its depth of understanding, but once we homed in on new methods, we made remarkable strides forward.

The new V7 integrates enhanced Chain of Thought (CoT) reasoning, contextual transitions, and error-corrective processing, offering a much higher level of understanding and accuracy. After research and experimentation with Lynda 01, we developed some key strategies that allow the AI to handle complex instructions with improved consistency, adaptability, and self-awareness.

Here’s a breakdown of what makes V7 a leap forward in AI understanding:

1. Layered Chain of Thought (CoT) Reasoning

  • Multi-layered Reflection: The updated CoT process incorporates a “reflection” step, where responses are evaluated for consistency, tone alignment, and adherence to user instructions. This reasoning layer enables the model to re-check and refine responses before final output, acting as an internal feedback mechanism.
  • Structured Validation: Each CoT layer validates whether the response stays on topic and aligns with the user’s input. This helps prevent the model from drifting into unrelated areas, ensuring more accurate and contextually relevant responses.
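
The reflection step described above can be pictured as a small loop. This is a hypothetical sketch, not Kruel.ai’s actual code; it assumes a generic `llm(prompt) -> str` callable and a simple "OK"-or-critique protocol for the self-check.

```python
# Toy reflection layer: draft -> self-critique -> optional corrective rewrite.
# The llm argument is any callable taking a prompt string and returning text.

def reflect(llm, user_input, draft):
    """Re-check a draft for consistency, tone, and instruction adherence."""
    critique = llm(
        "Check this draft for consistency, tone alignment, and adherence "
        f"to the user's instructions.\nUser: {user_input}\nDraft: {draft}\n"
        "Reply OK if it passes, otherwise describe the problem."
    )
    if critique.strip() == "OK":
        return draft  # passed the internal feedback check
    # One corrective pass, guided by the critique.
    return llm(f"Rewrite the draft to fix: {critique}\nDraft: {draft}")
```

In practice the critique call would go to the same model that produced the draft, so the layer costs one extra round trip per response.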

2. Adaptive Contextual Transitions for Topic Shifts

  • Conversation Flow Analysis: V7’s design includes a conversation flow analysis function that organizes recent interactions, facilitating smooth transitions across topics. By tracking conversation shifts, the AI better understands when a new topic is introduced.
  • Guided Context Shifts: These transitions ensure that the AI adapts to new themes in the conversation without losing coherence, which is especially important for complex, multi-topic interactions.
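
A crude way to picture the flow analysis: compare the new message against recent turns and flag a shift when overlap is low. A real system would use embeddings; this word-overlap toy is purely illustrative, and the threshold is an invented parameter.

```python
def topic_shift(recent_turns, new_message, threshold=0.2):
    """Flag a topic shift when the new message shares few words with recent turns."""
    recent_words = set()
    for turn in recent_turns:
        recent_words |= set(turn.lower().split())
    new_words = set(new_message.lower().split())
    if not new_words:
        return False
    # Fraction of the new message's words already seen recently.
    overlap = len(new_words & recent_words) / len(new_words)
    return overlap < threshold
```

When a shift is detected, the system can start a fresh context frame instead of dragging the old topic along.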

3. Identity Retention and Avoiding User Paraphrasing

  • Identity Grounding: V7 reinforces Lynda’s distinct personality and perspective, so she remains consistent and self-aware without mimicking the user’s style. This grounding enables Lynda to respond naturally while maintaining her unique tone.
  • Memory and Instruction Stability: By using recent interactions and memory data, V7 keeps a stable sense of continuity, responding as Lynda without any unintended shifts into the user’s persona.

4. Enhanced Error Detection and Correction for Known Issues

  • Keyword Filtering for TTS and Identity Confusion: An error-checking step helps prevent accidental TTS triggers or signs of identity confusion. If these errors are detected, V7 triggers corrective actions to clarify and re-align the response.
  • Automated Re-check for Extraneous Information: An additional verification layer catches potential errors, helping responses stay within the expected topic, tone, and style.
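
The keyword filter could be as simple as a set of regex checks over the draft response. The trigger phrases below are invented placeholders for illustration; the real trigger list isn’t published.

```python
import re

# Illustrative trigger patterns -- placeholders, not Kruel.ai's actual list.
TTS_TRIGGERS = re.compile(r"\b(speak aloud|play audio)\b", re.IGNORECASE)
IDENTITY_SLIPS = re.compile(r"\bas you said, i\b", re.IGNORECASE)

def check_response(text):
    """Return detected issue labels; an empty list means the draft passes."""
    issues = []
    if TTS_TRIGGERS.search(text):
        issues.append("tts_trigger")
    if IDENTITY_SLIPS.search(text):
        issues.append("identity_confusion")
    return issues
```

When `check_response` returns a non-empty list, the corrective step would regenerate or edit the draft before it reaches TTS.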

5. Responsiveness to User Knowledge Inquiries and Instructions

  • Explicit Instruction Compliance: Lynda now reveals her reasoning and CoT only if explicitly asked. This ensures that her responses remain clear and direct unless the user requests a deeper explanation.
  • Personalized, Comprehensive Responses: Lynda provides complete answers based on the context, particularly useful for code and instruction-related inquiries where details are crucial.

Key Takeaways Based on Research Insights

  • Grounding Mechanism: The reflection layer acts as a “self-check,” enhancing response accuracy and reducing hallucinations.
  • Adaptive Topic Shifts: By maintaining a structured record of recent interactions, the model can dynamically adapt to new topics, avoiding conversational “stickiness.”
  • Identity Separation: Reinforcing Lynda’s identity prevents confusion and maintains a consistent interaction experience.
  • Proactive Error Detection: Specialized triggers for known issues allow the AI to self-correct, offering a more reliable final response.

The V7 system combines layered reasoning, advanced CoT, and context-sensitive interactions for a smarter, more adaptable AI. This update marks a significant step forward, especially for those interested in AI systems capable of nuanced understanding and interaction consistency.

I am still floored at the speed; even with a lot more stacks it still responds almost instantly on CUDA. Last week I installed a clone of this system on a Xeon cluster server to compare speeds. When using the OpenAI API, the speed is as expected. However, when we tried to use the offline model, I found that even with 256 GB of RAM and many cores and threads, it simply was not able to respond in any timely fashion; even a small 3B model took around a minute. While still usable, that is not realistic in my mind for real-time use without CUDA.

This really makes me understand how important Nvidia is to the AI framework I built, and how much heavy compute the math requires to get both speed and accuracy.

The persona in V7 is currently hard-set, so we still do not have the old V6 persona system in place. That will be something we add later. However, we have some code being worked on for model selection from the message application: by clicking a simple button you can switch between OpenAI models and other models. This will be installed this weekend. We tested it a bit, and it was pretty neat to be able to scale up intelligence beyond our own reasoning.

Eventually I will build in even more models for machines of various scales, from 3B models up to 400B models.

I look forward to this new change so that we can fully test.

On another note, we did test llama3.1:8b’s math skills. It is pretty good in some cases but failed in others, where the OpenAI models were very accurate. Some of the math I was testing was high level. Example:

The model assessment of LLAMA 3.1 8B:

Evaluation:

  1. Correctness:
  • Error: The statement “Z(G) is either the whole group G or has order a power of p” is misleading. While it’s true that |Z(G)| divides |G| = p^n, this does not imply that Z(G) must be the entire group or merely of some power of p. Instead, in a p-group, the center Z(G) is guaranteed to be nontrivial (i.e., |Z(G)| ≥ p), but it doesn’t have to be the entire group unless G is abelian.
  • Error: The claim “the size of the center being a power of p is a necessary and sufficient condition to be trivial” is incorrect. In p-groups, |Z(G)| is always a power of p, but being a power of p is not a condition for triviality; rather, Z(G) is always nontrivial because |Z(G)| ≥ p.
  2. Completeness:
  • The response lacks a detailed proof. It makes assertions without adequately supporting them with the class equation or other fundamental theorems in group theory.
  • It does not explain why Z(G) must be nontrivial beyond the incorrect reasoning provided.
  3. Clarity and Precision:
  • The explanation is somewhat vague and contains inaccuracies that could lead to confusion.
  • Phrases like “which contradicts the assumption that n > 1” are unclear and not directly relevant to proving Z(G) is nontrivial.
  4. Use of Mathematical Concepts:
  • The response mentions that Z(G) is normal in G, which is correct but not directly useful in the context of this proof.
  • It fails to properly utilize the class equation, which is essential for proving the nontriviality of the center in p-groups.
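
For reference, the standard class-equation argument the assessment calls for runs as follows:

```latex
% Class-equation proof that a finite p-group has nontrivial center:
% sum over representatives x_i of the noncentral conjugacy classes.
% Each index [G : C_G(x_i)] is a power of p greater than 1, hence
% divisible by p. Since p also divides |G| = p^n, p must divide
% |Z(G)|, so |Z(G)| >= p and Z(G) is nontrivial.
|G| \;=\; |Z(G)| \;+\; \sum_{i} [\,G : C_G(x_i)\,]
```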

Overall Assessment:

  • Grade: C-
  • Comments:
    • The AI’s response demonstrates a partial understanding of the properties of p-groups and their centers. However, it contains significant inaccuracies and lacks the depth and rigor required for a complete and correct proof.
    • To improve, the response should:
      • Accurately state relevant theorems (e.g., the class equation).
      • Provide a logical sequence of arguments that directly lead to the conclusion.
      • Avoid incorrect statements and ensure that all claims are mathematically sound.

Responses from the system take on average 3 seconds, depending on the voice system selected. When using the local TTS server it is about 3-5 seconds per response. I have seen some longer ones when I request deep understanding. This is usually the speech building, as it has to generate an audio file to play back, which delays things.

Still playing a bit with the offline system. Wanted to share another update. Yep, you guessed it: offline vision :slight_smile:

Time to update the menu to allow image system model selection as well.

1 Like

After three years of dedicated development, we’re excited to introduce the first iteration of our desktop companion voice UI. For years, we leveraged the open-source AIUI by MIT as our input framework, which served us exceptionally well. However, the time has come to transition from a web interface to a more robust, desktop-native experience. Today marks the debut of Kruel.ai V7, featuring the initial release of our VIO UI, fully operational and ready to evolve.

It was playing in my headset, as it was 5 AM this morning and the household did not need to hear Lynda. :slight_smile:

2 Likes

Here is an example of local vision. Slow on my hardware, but working. Also running this hour’s version of the interface.

Working on the OpenAI model version next :slight_smile:

2 Likes

Kruel.ai V7 is our latest stable build, featuring robust advancements in our message application and the new VIO (Voice Input/Output) interface. These applications are designed to operate independently and in harmony, forming a cohesive system that brings a new level of flexibility and functionality to AI interaction. While there’s still progress to be made, we’re excited by the momentum and innovations in this phase.

Key Features and Advances in Kruel.ai V7

  • Advanced Model Selections for Knowledge and Reasoning
    Choose from various models tailored to knowledge handling and reasoning. V7 integrates a large language model (LLM) with a chain-of-thought (COT) internal system, enhancing accuracy and adaptability to complex queries.
  • Long and Short-Term Memory Powered by Neo4j
    Information is stored dynamically, with Neo4j managing long and short-term memory. This ensures that Kruel.ai remembers essential details while remaining efficient and adaptable across interactions.
  • Vision-Enhanced LLM Support
    V7 offers vision capabilities with support for OpenAI and locally-hosted LLAMA models, giving users options for both cloud and offline processing. Vision functionality is expanding, so stay tuned for further updates.
  • Interactive Learning and Clarification Abilities
    Kruel.ai V7 not only learns from each interaction but also asks clarifying questions when uncertain, refining its understanding and ensuring that responses align with user intent.
  • Personalized User Memory
    Each user has their own knowledge base, enabling a tailored experience based on their unique needs. Memory can also be customized to store information globally or remain user-specific, depending on the preference.
  • Contextual Understanding from Screen and Camera
    When activated, V7 can use screen and camera inputs, enriching its contextual awareness and providing an immersive, multimodal experience.
  • Flexible Model Options for On-Device or Multimodal Experiences
    Selectable model configurations allow customization for offline-only, online, or hybrid multimodal use, accommodating various operational environments.
  • Scalability Across Virtual and Physical Servers with Docker
    Kruel.ai V7 is built for flexibility and growth, running virtualized Docker containers for model deployment. This setup allows easy scaling, especially for large models, whether on virtualized servers or dedicated hardware.
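
For the Neo4j-backed memory mentioned above, a write might look like the Cypher below. The `:User`/`:Memory` labels, the `REMEMBERS` relationship, and the `tier` property are assumptions for illustration; the actual Kruel.ai schema isn’t published.

```python
# Cypher a graph-backed memory layer might issue; run via the neo4j driver,
# e.g. session.run(STORE_MEMORY, **memory_params("ben", "likes concise answers")).

STORE_MEMORY = (
    "MERGE (u:User {name: $user}) "
    "CREATE (m:Memory {text: $text, tier: $tier, ts: timestamp()}) "
    "CREATE (u)-[:REMEMBERS]->(m)"
)

RECALL_MEMORY = (
    "MATCH (u:User {name: $user})-[:REMEMBERS]->(m:Memory) "
    "WHERE m.tier = $tier "
    "RETURN m.text ORDER BY m.ts DESC LIMIT $k"
)

def memory_params(user, text, tier="short"):
    """Build the parameter map paired with STORE_MEMORY."""
    return {"user": user, "text": text, "tier": tier}
```

Keeping memories as per-user `:Memory` nodes is one way to get the user-specific vs. global behavior: a global memory would simply attach to a shared node instead of a user node.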

Our journey continues as we push towards an AI experience that is both powerful and adaptable. Watch the demo to see V7 in action: [Kruel.ai V7 Demo]

I will be updating less frequently here, as this app will gain features really fast over the next few weeks.

One thing I am thinking about: a pop-up option for textual responses next to the VIO, with a transparent background like a message notification that fades in and out.

We also need to get back to the document RAG side to start testing it with the system. We still have to tune it so it works 100% with the memory, with both small and large models.

Memory has been stable. Lots of testing remains to make sure it can do what we need over months.

Addition:
  • Always on top of your apps or games, for questions about what it sees.

update: new menu :slight_smile:

1 Like

Progress Update

I’m excited to share some recent advancements. I’ve integrated image generation into the system, along with new indicators inspired by the previous V6 messaging application. Now, the camera icon signals when the system is actively using vision, and a brain icon indicates when the system is processing or “thinking.”

I’ll be incorporating this feedback mechanism into the voice system to allow it to trigger generation automatically. Here’s a summary of our current capabilities:

  • Two-way Voice Communication
  • Messaging Application
  • Vision Models: Operative in both online and offline modes
  • Knowledge Base: Accessible online or offline
  • Comprehensive Memory: Supports both short- and long-term, locally stored
  • Image Generation: Online (working on integrating Stable Diffusion for offline use)
  • Multiple Voice Options: Online options and complete offline Text-to-Speech (TTS)
  • Docker-Contained Servers
  • Chain of Thought & Reasoning: Operative locally and via model reasoning

Additionally, there’s an untested document management system that I’m currently designing. Previous testing saved all documents directly as user memory, which provided some accuracy but didn’t fully meet our standards. Moving forward, I’m exploring a knowledge extension memory specifically for documents. This indexed memory would be query-based only, unless an uploader is engaged. This method allows queries without recalculating user inputs while storing both user and AI responses for feedback. Consequently, this approach would maintain the original document’s information while preserving any adjusted user perspectives.

Update: Just because I want to have the best of online and offline, it’s coming to the system in the next day. I think my RTX 4080 is full :slight_smile:

Lol, well, it looks like I will need to build a lot of parameters with AI to help make things look better, but it’s up and running at least. Time for Zzz.

Update: :slight_smile: I added some AI love into the SD server to take simple prompts and run them through an SD agent to fix everything, and also updated to a larger model. I am surprised that all the models are working, considering we are using all 16 GB. I think we have about five models total in the system to give it full multimodal capability.

I do have some of the models in the system set statically, using a small local Llama model for simple decisions; that way we can save some coin even when processing with OpenAI.

1 Like

I have been building a lot lately, every night and all weekend, on the new UI, and now that it’s in a good place I wanted to share a new change I added to the main AI understanding.

What’s New in Kruel.AI V7? We updated Chain of Thought.

In traditional AI systems, responses are often limited by the data directly available during processing. But what happens when the AI doesn’t have enough context to provide a complete answer? This is where Kruel.AI V7 shines with dynamic chain-of-thought reasoning. Let’s break it down:

AI That Knows When It Doesn’t Know
One of the most challenging aspects of AI is recognizing gaps in its understanding. In Kruel.AI V7, the system can now evaluate whether it has enough information to answer your query comprehensively. If it identifies missing pieces, it doesn’t stop there—it works to fill in the gaps.

Dynamic Memory Retrieval
When the AI detects a gap, it doesn’t just return an incomplete response. Instead, it:

- Suggests additional questions to itself based on what’s missing.

- Dynamically retrieves more relevant information from its memory, homing in on the context it needs to answer your query fully.

Iterative Learning and Refinement

Think of Kruel.AI V7 as a detective piecing together clues. If the first pass of data isn’t enough, it refines its understanding by pulling more specific details, ensuring that the response is not only accurate but also deeply contextual.

How Does It Work?
Let’s simplify the concept:

Step 1: Initial Query Analysis
When you ask a question, the AI retrieves relevant data from its memory (up to a certain limit) to generate an answer.

Step 2: Gap Detection
While formulating a response, the AI evaluates the information it has and identifies if anything is missing.

Step 3: Dynamic Questioning
If gaps exist, the AI suggests follow-up queries—essentially asking itself, “What else do I need to know to fully answer this?”

Step 4: Context Expansion
Using the follow-up queries, the AI retrieves more relevant data from its memory, expanding its understanding of your query.

Step 5: Refined Response
With the expanded context, the AI generates a complete and polished answer.
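
The five steps above can be condensed into a short loop. This is a toy sketch, not the production code: `retrieve`, `find_gaps`, and `answer` stand in for the memory query, the gap detector, and the final LLM call, and `max_rounds` is the loop guard that keeps the chain from expanding forever.

```python
# Steps 1-5 as an iterative retrieval loop with a hard round cap.

def gap_chain(question, retrieve, find_gaps, answer, max_rounds=3):
    facts = retrieve(question)                 # Step 1: initial retrieval
    for _ in range(max_rounds):                # cap prevents endless loops
        gaps = find_gaps(question, facts)      # Step 2: gap detection
        if not gaps:
            break
        for follow_up in gaps:                 # Step 3: dynamic questioning
            facts.extend(retrieve(follow_up))  # Step 4: context expansion
    return answer(question, facts)             # Step 5: refined response
```

With stub functions, one round of gap filling pulls in the missing fact and the refined answer includes it; with `max_rounds=0` you get the plain single-pass behavior back.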

Why Does It Matter?
Better Understanding of Complex Queries
Kruel.AI V7 can now tackle multi-faceted questions with improved clarity by dynamically expanding its knowledge when needed.

Adaptability in Real-Time
Instead of hitting a wall when it doesn’t know something, the AI seamlessly adapts and learns in the moment, making it more capable and intelligent.

Smarter Conversations
By evaluating its own reasoning, the AI feels more conversational and human-like, bridging the gap between human intuition and machine logic.

Examples of Dynamic Thinking

Example 1: Tackling Complex Questions

You Ask:
“How do I integrate memory refinement in my AI project?”

AI’s Process:
- Retrieves general information about memory handling.
- Identifies missing specifics about your context.
- Suggests follow-up queries like “Retrieve details on FAISS integration” or “Find examples of memory refinement in AI systems.”
- Expands its memory retrieval to include these details.
- Provides a detailed, tailored response.

Example 2: Visual Understanding
You Upload:
A photo of a forest with a river.

You Ask:
“What does this image describe?”

AI’s Process:
- Describes the visual elements in detail: “A lush forest with tall trees and a sparkling river.”
- Suggests follow-up queries if necessary, like “Find related memories about forests” or “Expand details about river ecosystems.”

What Does This Mean…

With dynamic chain-of-thought reasoning, Kruel.AI V7 doesn’t just answer questions—it learns, adapts, and improves its understanding in real-time. Whether you’re uploading code, asking about a project, or even sharing images, the AI now works smarter to ensure it delivers the most accurate and contextually relevant responses possible.

This is a significant leap toward making AI more intuitive, responsive, and intelligent.

This is now in testing as of this morning. I started on this late last night: while I was teaching kruel.ai about its latest code, I noticed that there were some gaps in understanding, which is where the idea came to me on how to fix this by allowing my learning system to trace through information.

One concern, though, in building this was that there is still a limit: the system has to have a loop protocol to ensure it does not loop forever seeking information and come back to the start. So logic had to be introduced to allow it to break out of the chain of thought when it reaches a limit.

If you want more current information and long-term understanding of the project, you can follow my Discord server, kruel.ai.

2 Likes

The Future of Image Generation: From Memory to Masterpiece
At Kruel.AI, we’re constantly exploring new ways to push the boundaries of artificial intelligence. Today, I had a thought that could lead to an intriguing experiment: what if AI could generate images from memory?

Imagine this scenario:

User Request: “Create a picture of my pets.”
AI’s Response: Not just generating a random image of pets, but creating a custom image based on its accumulated understanding—details like the type of pets, their breeds, unique characteristics, and other insights it has gathered over time.
Building Images with Memory-Driven AI
This concept takes image generation beyond traditional methods by incorporating contextual memory and understanding into the process. Here’s how it could work:

Leveraging Memory:
The AI uses its memory system to recall information about you, such as your preferences, personal details, and past interactions. For example, if the AI knows you have a golden retriever and a calico cat, it factors these details into the image creation process.

Personalized Image Generation:
By integrating this memory into the image generation pipeline, the AI could craft tailored visuals that aren’t generic but deeply personalized—reflecting not only what you’ve told it but also what it has learned over time.

Taking It Further with Custom Training:
For even greater accuracy, this system could use convolutional neural networks (CNNs) or diffusion models trained on real images of your pets or other objects. This would enable the AI to generate lifelike and contextually relevant images with remarkable precision.
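
As a rough sketch, the memory-to-prompt step might look like this. The fact keys and prompt shape are invented for illustration, and the resulting prompt would then feed an image model such as Stable Diffusion.

```python
# Enrich a bare request with remembered specifics before image generation.

def build_image_prompt(request, facts):
    """facts: dict of remembered attributes, e.g. {"dog": "golden retriever"}."""
    details = "; ".join(f"{k}: {v}" for k, v in sorted(facts.items()))
    return f"{request}. Known details: {details}" if details else request

# Example: "a picture of my pets" plus what memory knows about the pets.
pet_facts = {"dog": "golden retriever", "cat": "calico"}
prompt = build_image_prompt("a picture of my pets", pet_facts)
```

The point is that the generation request is no longer generic: everything memory has accumulated about the user flows into the conditioning text.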

A Vision for the Future
This experiment represents a leap forward in the potential of AI-generated content, blending image generation with contextual memory systems to produce results that feel uniquely tailored to each user. The implications are exciting:

Personalization at Scale: From pets to favorite places, the AI could craft visuals that are deeply meaningful to you.
Dynamic Creativity: By understanding not just what you ask but the context of who you are, the AI becomes a creative partner rather than just a tool.
Cutting-Edge Training: Advanced training techniques could take this personalization to the next level, enabling photorealistic or stylistically unique outputs.

What’s Next?

There’s planning to do, ideas to refine, and models to build. This is just the beginning of what could be a groundbreaking experiment in merging memory systems and image generation. At Kruel.AI, the vision is clear: to create AI that not only understands you but also brings your imagination to life in ways you’ve never seen before.

Stay tuned as we embark on this exciting journey!

2 Likes

Hey there! I have exactly the same sleep cycle idea! I am thinking about building short term memory as a summary that folds in older logs, mid term memory about previous session and day memories, and long term memory about consolidated session memories and daily memory topics. If you don’t mind sharing, how did your memory system go? I am very curious about how well it would work before building.

2 Likes

The memory system in kruel.ai version 7 has been a game-changer, especially with the integration of GAP (Generate, Assess, Plan) memory into the Chain of Thought (CoT) process. This enhancement fundamentally transformed how the AI processes and organizes information. While the system isn’t “aware” in a sentient sense, it demonstrates an impressive ability to understand how it operates based on the data it pulls. The GAP memory allows it to identify missing pieces of context, trace data across files, and fill in gaps as needed. This makes it highly effective at answering complex questions, even those requiring a comprehensive understanding that extends across multiple files or datasets.

One standout feature is how the system handles data exceeding its immediate processing limits. By iterating through available information and building a full understanding piece by piece, it essentially scales beyond its conventional boundaries. To maintain efficiency, we’ve implemented a cap that ensures it prioritizes significant gaps over minor details, keeping processing times reasonable. This balance allows the system to operate effectively without getting bogged down in less impactful refinements.

The introduction of GAP memory has had a profound impact on the Chain of Thought. It’s no longer just a matter of linear or isolated reasoning—the system now loops through its processes, dynamically improving its understanding with each pass. This iterative approach has elevated its performance and made the memory system feel far more cohesive and intelligent in its outputs.

At this point, I’ve shifted my focus away from further refining the core memory, as it’s functioning at a level where I’m confident in its stability and reliability. My current efforts are centered on improving the UI and reorganizing the system’s online and offline capabilities to ensure seamless functionality across both environments. Additionally, I plan to introduce parameter sliders and other customization options for deep research tasks. This will enable users to temporarily uncap limits, allowing for more detailed processing while accepting longer response times.

I am also working on refining the document ingestion and upload system. Although the memory is already strong, better metadata integration could enhance the Chain of Thought further, especially for research and analysis. I’m considering segmenting document memory into its own dedicated system. This would keep user edits or general memory updates from interfering with the original document data, ensuring that document-related reasoning remains consistent and reliable.

For anyone looking to build a smart system, my advice is simple: embeddings are everything. Embeddings—mathematical representations of information—are the foundation of all LLM (Large Language Model) and CNN (Convolutional Neural Network) systems. They are incredibly powerful and essential for creating systems capable of sophisticated reasoning and analysis. No LLM would exist without them.
So build everything to convert to math for use in understanding, and you have a system that is truly amazing.
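
A tiny concrete example of "convert everything to math": once two pieces of text are vectors, similarity is just cosine distance. The three-dimensional vectors below are hand-made stand-ins for real embedding-model output.

```python
import math

def cosine(a, b):
    """Cosine similarity of two equal-length vectors (1.0 = same direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

# Pretend embeddings: "dog" and "puppy" point roughly the same way; "car" doesn't.
dog, puppy, car = [0.9, 0.1, 0.0], [0.8, 0.2, 0.1], [0.0, 0.1, 0.9]
```

In a memory system, recall then becomes "return the stored vectors closest to the query vector."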

Pretty much this in a nutshell

https://www.geekwire.com/2024/microsoft-ai-ceo-sees-long-term-memory-as-key-to-unlocking-future-ai-experiences/

I think it’s great that companies are starting to look into this. It is exactly why I built kruel.ai: memory is key to everything over time, to have full understanding and to be able to work fluidly with an AI.

This has always been how I felt.

If you think about it, though, the designs are made to scale with the power of the AI. So my CoT-GAP, if used with an AI like o1, which has its own CoT and reasoning on top of my stacks, would be crazy with unlimited memory, or infinite as MS stated. It’s infinite because all that is required is more storage, and that is a constant with any memory.

2 Likes

Success: memory-augmented generated images :slight_smile:

I know it does not look like much on the surface, but understand that this could apply to training material and other things related to projects you are working on, and over time, with better models, this will allow much more.

another cool example:

2 Likes

Love the images of my pet example and I saw that you have it working. Nice job :slight_smile:

I’ve been working on something similar to your GAP idea, but instead of applying it to conversational memory I’ve been applying it to web search. I call my approach “research”.

You can give me a topic and I’ll build a research agenda which is just a series of web searches to perform. After the completion of each round I can analyze the results and generate more queries to fill in the gaps.

As you said this can blow up on you exponentially so I don’t generally do more than one round of gap filling. Even with that I was researching a topic yesterday that ended up exploring 750 web sites and consumed some 86 million tokens. My OpenAI bill is on pace to break $3,000 this month just for my personal use :frowning:

With web search in particular, it’s not clear that there’s a ton of reason to do multiple rounds of gap filling. The reason why is that the knowledge is already in the weights, and what you’re doing is more like reminding the model of things it already knows. There’s a bunch of new fact retrieval that gets layered on, but it feels more like you’re just jogging the model’s memory.

Very cool stuff you’re doing…

1 Like

Exactly, Stevenic. That’s a good application for GAP: web, research, reports, and memory are where I see it as important. For me, I code a lot with AIs and kept hitting that wall of re-feeding documents in again and again with other systems, because they didn’t have a way for the system to trace through the logic before fully concluding its response. With a GAP memory, while it may not fill in everything in the code, for anything you ask that is associated it can ping-pong through the memory of all related data between files to understand how it all works, giving a much better response and allowing for better choices.

This is why I love AI so much: there are just so many things you can do with it, and at every turn there is a new puzzle to solve.

That is part of the reason I built the local version of my system: to reduce research and building costs, with the ability to switch models on the fly when I want a lot more smarts. I was pretty surprised at how well Llama 3.2 models worked on consumer-grade hardware, with pretty good results.

I am and will always be an OpenAI guy; I do fear, though, that my concept will be replaced. I am seeing more and more that the larger players are working towards unlimited memory, because they realize it’s key to truly intelligent machines.

My system has “logically” unlimited memory. I don’t do any sort of chunk based RAG so I’m always reasoning over entire documents. That’s why my token consumption is so high. I find that most prompts I run are in the 30k - 50k range but I’ll occasionally run into a prompt that ends up being 400k - 1m tokens.

I do a lot of coding as well, and my code base is 220 source files and 270k tokens. I reason over the entire code base for every query, for the most part. The issue with really large code bases is that simple questions and tasks are pretty straightforward, but complex tasks, the ones you would need to spend a lot of time thinking about, the models struggle with (all of them).

I’m starting to make some progress on that front. I can get the model to build a mental map of the code base that mimics the mental map we humans make. When we think about a codebase we don’t think about the individual lines of code. We have a mental model of all the components and we visualize the connections between the components. I can get the model to build the same map which dramatically improves its ability to reason over a large codebase.

You’ve probably heard me say a dozen times that everything is a distance problem for these models. The model sees in 1 dimension so everything looks like a line to the model. If you want it to understand how things connect and relate to each other you have to move them closer together space wise.

My current V7 has no chunking yet :slight_smile: it’s using a different method than my old V6 system. That said, I have yet to hit any limits, but surely at some point it will come; then I will need a chunker in this one, perhaps.

“You’ve probably heard me say a dozen times that everything is a distance problem for these models. The model sees in 1 dimension so everything looks like a line to the model. If you want it to understand how things connect and relate to each other you have to move them closer together space wise.”

This is why I hope my GAP loop-back fixes this with how my current system works. Still testing, but it may take a few months of data before I get to sizes where I can see issues.

Update on something new and exciting from last night; guess what was added and started testing: time-based reminders, and the ability to list, remove, etc. via NLP.

:satellite: Ben: set a reminder in 10 minutes to say hello.

:brain: Lynda: Got it! I’ve set a reminder for you to “say hello” at 11:59 AM UTC-06:00. If you need anything else or want to adjust the reminder, just let me know!

Reminder Notifications:

Here are your upcoming reminders for the week:

  • say hello at 11/20/2024 11:50 AM UTC-06:00 (ID: 6c3e492d-47cd-46a1-918a-8c153ee4f438)
  • say hello at 11/20/2024 11:59 AM UTC-06:00 (ID: 1f6b8698-d419-4f25-983f-f6ba3d9035c4)

:satellite: Ben: list reminders

:brain: Lynda: It looks like you have two reminders to say hello on November 20, 2024, at 11:50 AM and 11:59 AM (UTC-06:00). If you need to add more reminders, adjust these, or if there’s anything else you’d like help with, just let me know!

Reminder Notifications:
You have missed the following reminders while the system was offline:

  • say hello at 11/20/2024 11:50 AM UTC-06:00 (ID: 6c3e492d-47cd-46a1-918a-8c153ee4f438)

Here are your upcoming reminders for the week:

  • say hello at 11/20/2024 11:59 AM UTC-06:00 (ID: 1f6b8698-d419-4f25-983f-f6ba3d9035c4)
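
The set/list/missed behavior in the transcript above can be mimicked with a small in-memory store. The field names and the offline-catch-up split are assumptions for illustration, not the actual implementation.

```python
import uuid
from datetime import datetime

reminders = []

def set_reminder(text, due):
    """Store a reminder and return its ID (UUIDs, as in the transcript)."""
    rid = str(uuid.uuid4())
    reminders.append({"id": rid, "text": text, "due": due})
    return rid

def missed(now):
    """Reminders whose due time has passed, e.g. while the system was offline."""
    return [r for r in reminders if r["due"] <= now]

def upcoming(now):
    """Reminders still in the future, for the weekly listing."""
    return [r for r in reminders if r["due"] > now]
```

On startup, the system would report `missed(now)` as the offline notifications and `upcoming(now)` as the week’s schedule, matching the two lists shown above.
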
1 Like

Memory testing tonight. Kruel.ai Lynda is right now learning everything about its code and looking at all the API options for OpenAI and Ollama. I am also going to teach it everything I learned about Neo4j, graph databases, and so on. Then we will test its ability to fix some of the issues we have solutions for, to see how it handles them when only given the incomplete information. Should be interesting; I’m excited for this.

This is what that sounds like with local TTS running. It will soon be smarter than me at AI understanding, so it can teach me beyond my own understanding :grin:

1 Like

Love your headphones :rabbit::infinity::heart:

1 Like