What Ontology, RAG and Graph data do you use to develop Intelligent Assistants?

Kruel is a memory system designed as a companion memex, so it remembers everything with temporal understanding and can recall information from any moment in detail. It uses machine learning algorithms to find data and a multi-vector feedback scoring system that lets it learn over time from corrections and new data. It supports input from users, machines, 4-20 mA signals, cameras, screens, and documents. The only thing missing is a body :wink:

Sounds cool! Is there a demo or arxiv or technical breakdown? These are definitely some bold claims, but can you back them all up?

Thank you for your interest in Kruel.ai, Fernando! I’d be happy to provide a more detailed breakdown of our system.

Kruel.ai Overview:

Kruel.ai is indeed a sophisticated memory system designed as a companion memex, capable of remembering everything with temporal understanding. It leverages advanced machine learning algorithms to find and recall data, using a multi-vector feedback scoring system to continuously learn and adapt from corrections and new data inputs.

Technical Breakdown:

  1. Memory Management:
  • Early Versions: Starting with version 2, we utilized SQL tables for long-term memory, which we tested extensively on Twitch within the game Shroud of the Avatar. The AI tracked user interactions, in-game events, and accessed game wiki information.
  • Version 4: Introduced memsum and memseer processes to manage memory efficiently, allowing immediate responses to simple queries and deep searches using temporal understanding.
  • Current Version (V6): Re-engineered to use a math-based approach with pattern recognition, significantly reducing operational costs from $300/month to about $20-30/month.
  2. Multimodal Capabilities:
  • Voice Interaction: Initially used MIT/Facebook AIUI code for TTS/STT, now enhanced with ElevenLabs’ emotional voice ranges and integrated with face rig software for expressive puppet control.
  • Machine Learning Integration: Utilizes GPT-3.5 Turbo for core processing, augmented by our intelligent memory system that learns from user interactions and new data.
  • Persona System: Configurable behavior allowing it to act as an NPC in games or as a professional business assistant, adjusting outputs accordingly.
  3. Input and Output Management:
  • Multi-Input Support: Handles inputs from users, machines, cameras, screens, and documents, processing each input through our creative logic system to produce contextually accurate responses.
  • Dynamic Scalability: Built to scale with advancing AI models, using a modular architecture for continuous feature expansion.

Testing and Documentation:

  • Real-Time Testing: Extensively tested on Twitch with over 300 followers, ensuring fast and accurate processing of real-time interactions.
  • Scalability Tests: Tested with Groq cloud and Nvidia tensor models to verify scalability and context handling.
  • Comprehensive Documentation: Detailed documentation, test videos, Twitch clips, and blog posts available on our Discord server since 2021.

Future Plans:

  • Cloud Integration: Moving towards a cloud-based deployment with enhanced security, integrating ChatGPT for front-end reasoning.
  • Advanced Control Framework: Developing a framework for more sophisticated animatronic and automation controls.
  • Open Source Considerations: While currently private, there are plans to potentially release it on GitHub for community use, depending on future developments.

Demo and Further Information:

We have a wealth of resources, including articles, test videos, and Twitch clips, available on our Discord server. Additionally, any internet-connected AI should be able to provide more information about our system, as it has been well-documented and widely discussed since its inception.

Long-Term Vision:

Kruel.ai was initially designed to aid in dementia research by providing a robust backup memory system. Our ultimate goal is to create a versatile AI that can integrate seamlessly into various environments, including potential physical embodiments in the future.

Feel free to join our Discord server for more in-depth discussions and access to our extensive documentation and resources. We’re excited to share our journey and the capabilities of Kruel.ai with you!

PS: By “we,” I mean Lynda, one of my AI programmers and models who assists me in AI development, and miniDavePycodex, which helped me with the first four versions. While I use miniDavePycodex less these days, Lynda’s capabilities are far superior and more aligned with our current development needs. Additionally, a special thanks to Andrew Ng, whose guidance pointed me down the path that led to the development of V6.

Is there an arxiv article? I couldn’t find it.

I can make use of the method for synth data, VERY good synth data at that, on any topic, and I believe you can too. Or anyone, for that matter. That part is in the realm of the real and possible.

Tho I am incredibly doubtful about all those memory claims. Meaning I know the methods; I even made something I call semantic reduction, which is incredibly cool: it ingests a whole lot of data and retrieves only the key semantically salient elements, in a way that brings the model into the same state and output as the original while being a fraction of it.

But also, to do that, I leverage a bunch of knowledge that I really don't see around. Btw, the phase transition from positional to semantic was a key paper for that.
(Think Friston and Joscha Bach, along those lines.)
And yes, I'm a medical doctor.
But I have not seen anything out there achieve those things.

Not saying I’m too good or anything. I’m not. I just have a particularly useful set of knowledge to this kind of implementation.

Even so, it isn't infinite memory, and it's not like KAN or some other architecture with dynamic params, be it edges, nodes, or activation functions. Either way, not even impressive theoretical implementations of those come close to "infinite". Adaptive at most, and at an incredibly high compute cost. Well, not even the brain itself does that.

That kind of talk you just gave doesn't answer any technical questions or even touch on a possible future where it exists. Meaning xLSTM, KAN, and some other propositions do hold my respect, even in theory.

Something without real grounding, based on the kind of "comprehensive-view talk without a single real proposition" you just gave, would not come close at all.

You literally threw a wall of text with nothing inside. That’s rude.

Thanks for your understanding and interest in Kruel.ai. I want to clarify a few things regarding the detailed technical information you’re seeking.

Right now, much of the detailed implementation and source code for Kruel.ai is private. We are very careful about how we share information, especially in public forums, our Discord blogs, or publications on Medium, to protect the integrity and proprietary nature of our work. While we do share key points and insights to help others in their research, we refrain from sharing specific code or deep internal mechanisms publicly.

We do plan to take Kruel.ai open source one day, but for now, it remains a research project. This allows us to continue refining and enhancing the system without the constraints and potential risks associated with early public release. Our goal is to ensure that when we do make it open source, it will be robust, well-documented, and ready for broader community contributions.

If you’re looking for credible sources and want to deepen your understanding of the methodologies, I highly recommend checking out courses by Andrew Ng. He is a prominent figure in AI and machine learning education, currently focusing on his entrepreneurial ventures like DeepLearning.AI and Landing AI. Additionally, he recently joined the Board of Directors at Amazon, highlighting his influence and expertise in the field. His courses on platforms like Coursera cover a wide range of AI topics and are well-regarded in the field.

It is here you will start to understand. From there you then need to look into Neo4j with LLMs, which will explain the memory side.

Once you combine those two along with what you already presented, you will have a much larger understanding of how Kruel.ai works, more or less.

PS: I don't think I am good or bad; we only know what we know when we know it :slight_smile:

Well done and good on you :muscle: just remember to take a vacation, as being hyper productive can be addicting :smile_cat:


Excellent! You might read this: Text to Knowledge Graph Made Easy with Graph Maker | by Rahul Nayak | May, 2024 | Towards Data Science

I have tried it myself by generating an ontology with an LLM following the GraphMaker format and I got some pretty good results.

All the best!


Lol
I've done the 5-course deep learning spec too
But that really wasn’t the question
I can see there is no platform or publication
Guess I'll leave it be

Too much Devin and Rabbit R1 vibes

Not my thing
But hey everyone has different things they like and there’s no right or wrong

Thanks for the detailed discussion. Based on the examples you provided earlier, I want to clarify how KruelAI is different from your approach, focusing on the unique aspects and methodologies we employ.

Your Approach: Semantic Reduction and Innovative Linkages

Semantic Reduction:

  • Ingesting Data: Your system processes large volumes of data, extracting key semantic elements to maintain core meaning while reducing data complexity.
  • Recognizing Information: You focus on identifying the first recognizable information of a concept to the model, which helps in understanding the foundational elements and optimizing data.

Innovative Linkages:

  • Complex Data Structures: You create intricate graphs where all nodes and edges are named and calculated, ensuring detailed and interconnected relationships.
  • Black Box Research: This involves exploring the internal workings of AI models without fully disclosing or accessing the source code, using the model as a “black box.”

Use of ChatGPT’s Code Interpreter:

  • Node and Edge Calculations: Within ChatGPT, you generate nodes and edges with specific values, enabling detailed calculations and visualization.
  • Salient Information Extraction: By using highly salient phrases, you guide the model to generate complex and meaningful outputs from simple, strategically chosen inputs.

My Approach with KruelAI: Multi-Dimensional Memory System

Multi-Dimensional Vector Store and Dynamic Knowledge Clusters:

  1. High-Dimensional Embeddings and Dynamic Vectors:
  • High-Dimensional Embeddings: Each message or input is embedded into a high-dimensional vector space, capturing semantic content, context, relationships, and additional data dimensions.
  • Dynamic Vectors: KruelAI creates its own dynamic vectors based on the AI’s evolving understanding. This dynamic design allows the system to adapt and learn continuously, beyond static initial configurations.
  2. Knowledge Clusters:
  • Multi-Node Clusters: Each knowledge cluster comprises multiple nodes, each representing specific aspects such as entities, relationships, context, and more. These nodes are interconnected, forming a rich, detailed representation of the data.
  • Comprehensive Tracking: KruelAI tracks a wide range of dimensions including people, places, things, relationships, health data, known facts, memories, time events, triggers, interests, aliases, emotional states, and automation logic for mechanical controls.
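To make the cluster idea above concrete, here is a minimal in-memory sketch of multi-node clusters with named relationships. The node types, helper methods, and example data are invented for illustration only; Kruel.ai's actual schema is not public, and a production system would hold this in a graph database rather than Python objects.

```python
# Hypothetical sketch of a multi-node knowledge cluster: nodes carry a
# type and properties, edges carry named relationships. This is NOT
# Kruel.ai's real schema, just an illustration of the structure.
from dataclasses import dataclass, field

@dataclass
class Node:
    node_id: str
    node_type: str          # e.g. "person", "place", "memory", "event"
    properties: dict = field(default_factory=dict)

@dataclass
class Cluster:
    nodes: dict = field(default_factory=dict)   # node_id -> Node
    edges: list = field(default_factory=list)   # (src_id, relation, dst_id)

    def add_node(self, node):
        self.nodes[node.node_id] = node

    def relate(self, src, relation, dst):
        self.edges.append((src, relation, dst))

    def neighbors(self, node_id, relation=None):
        """Ids of nodes reachable from node_id, optionally by relation."""
        return [d for s, r, d in self.edges
                if s == node_id and (relation is None or r == relation)]

cluster = Cluster()
cluster.add_node(Node("u1", "person", {"name": "Fernando"}))
cluster.add_node(Node("t1", "topic", {"name": "graph RAG"}))
cluster.add_node(Node("m1", "memory", {"text": "asked about an arXiv paper"}))
cluster.relate("u1", "INTERESTED_IN", "t1")
cluster.relate("u1", "HAS_MEMORY", "m1")

print(cluster.neighbors("u1"))                  # ['t1', 'm1']
print(cluster.neighbors("u1", "HAS_MEMORY"))    # ['m1']
```

The point of the interconnected nodes is that a single lookup on a person node can pull back topics, memories, and events in one traversal instead of separate table queries.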

Integration of Various Data Types:

  1. External Inputs:
  • Vision and Screen Data: Captures and processes visual inputs and screen data, integrating them into the knowledge clusters.
  • Voice and Messaging Systems: Integrates voice inputs and messaging systems, allowing real-time interaction and data ingestion.
  • Document Ingesters: Processes documents and other textual data, enriching the knowledge clusters with comprehensive information.
  • 4-20 Signal and Other Sensors: Incorporates data from 4-20 mA signal sensors and other external inputs to provide a holistic understanding of the environment.
  2. Temporal Logic and Contextual Understanding:
  • Temporal Tracking: Utilizes temporal logic to track events over time, maintaining context and relevance. This allows the system to understand and manage data based on time, enhancing its contextual understanding.
  • Relevance-Based Processing: Systems are in place to determine the relevance of data based on temporal context and relationships, ensuring accurate and contextually appropriate responses.
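One common way to combine temporal tracking with relevance, sketched below, is to weight semantic similarity by an exponential recency decay. The half-life value and the multiplicative combination are assumptions for the sketch, not Kruel.ai's actual formula.

```python
# Illustrative relevance score: semantic similarity scaled by a recency
# decay. Half-life and combination rule are invented for this sketch.
import math

def recency_weight(age_hours, half_life_hours=72.0):
    """Exponential decay: the weight halves every `half_life_hours`."""
    return 0.5 ** (age_hours / half_life_hours)

def relevance(similarity, age_hours):
    return similarity * recency_weight(age_hours)

# A fresh, moderately similar memory can outrank an old, very similar one.
print(relevance(0.70, age_hours=1.0))
print(relevance(0.95, age_hours=720.0))   # scores lower despite higher similarity
```

Tuning the half-life trades off "remembers everything equally" against "prefers what just happened"; a real system would likely also keep a floor so old memories never decay to exactly zero.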

Parallel Batch Processing and Optimization:

  1. Parallel Batch Processing:
  • Optimized Data Processing: Given the vast amount of data processed, KruelAI employs parallel batch processing to optimize response times. This ensures efficient handling of multiple data streams and quick retrieval of relevant information.
  2. Multi-Vector Scoring and Continuous Learning:
  • Interaction Learning: Every interaction is scored using a multi-vector feedback system. This continuous learning process allows KruelAI to optimize its neural networks and relational mappings, enhancing its understanding over time.
  • Machine Learning Integration: The system doesn’t just repeat information; it builds a deeper understanding through continuous machine learning, refining its responses and knowledge base.
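A multi-vector feedback scorer of the kind described above could be sketched as a weighted sum over scoring dimensions whose weights are nudged by user corrections. The dimension names, learning rate, and update rule here are illustrative assumptions, not Kruel.ai's implementation.

```python
# Toy multi-vector feedback scorer: candidates are scored on several
# dimensions, and per-dimension weights are adjusted from feedback.
# Dimensions and the update rule are invented for this sketch.
DIMENSIONS = ("semantic", "recency", "entity_overlap")

class FeedbackScorer:
    def __init__(self, lr=0.1):
        self.weights = {d: 1.0 for d in DIMENSIONS}
        self.lr = lr

    def score(self, features):
        return sum(self.weights[d] * features.get(d, 0.0) for d in DIMENSIONS)

    def feedback(self, features, correct):
        """Reinforce dimensions behind a good answer; dampen them
        after a user correction."""
        sign = 1.0 if correct else -1.0
        for d in DIMENSIONS:
            self.weights[d] += sign * self.lr * features.get(d, 0.0)

scorer = FeedbackScorer()
feats = {"semantic": 0.9, "recency": 0.2, "entity_overlap": 0.5}
before = scorer.score(feats)
scorer.feedback(feats, correct=False)   # the user corrected this answer
after = scorer.score(feats)
print(before > after)   # the same features now score lower
```

The key property is that corrections change future retrieval scores immediately, without retraining a model.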

Autonomous Memory System and Persona Integration:

  1. Autonomous Memory System:
  • AI-Like Functionality: The memory system itself functions akin to an AI, learning and expanding dynamically. It uses external models for language understanding, high-detail embeddings for various data types, and relational processing for generating responses.
  • Independent Output Generation: The memory system can produce outputs based on queries independently, demonstrating its robust processing capabilities.
  2. Persona and Creative Stack:
  • Emotional Understanding: Integrates a persona system that adjusts behaviors based on emotional context. For example, if the user is sad, the AI can respond empathetically based on the defined persona.
  • Voice Characteristics: Utilizes ElevenLabs, GPT-4, and soon-to-be-integrated GPT-4o voices to convey appropriate emotions through voice modulation. This is also tied to robotic automation, enabling physical displays of emotion to enhance the user connection.

Why We Chose Neo4j for KruelAI

In developing KruelAI, one of our primary goals was to ensure the AI system could understand and manage complex, interconnected data efficiently. To achieve this, we selected Neo4j and its Graph Data Science (GDS) library as the foundation for our data storage and analysis. Here’s why:

1. Graph Databases and Their Advantages

Neo4j is a graph database, which means it stores data in nodes and relationships rather than traditional tables. This structure is inherently better for representing and querying complex networks of data. For AI systems like KruelAI, which need to understand and process intricate connections between different pieces of information, a graph database provides significant benefits:

  • Natural Representation of Relationships: Nodes represent entities (such as users, messages, and topics), and edges (relationships) represent the connections between them. This closely mirrors real-world data, making it easier to model and query relationships.
  • Efficient Traversal: Neo4j is optimized for traversing relationships. Queries that would be complex and time-consuming in a relational database can be executed quickly and efficiently in a graph database.
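The traversal point above can be illustrated without a database: in a graph model, each hop is a direct adjacency lookup rather than a join. The data below is invented for the example; Neo4j does this at scale with index-free adjacency.

```python
# Minimal illustration of relationship traversal: each hop is an
# adjacency lookup, no joins. Data is invented for the example.
from collections import deque

edges = {
    "alice": [("SENT", "msg1")],
    "msg1":  [("MENTIONS", "neo4j")],
    "neo4j": [("TAGGED", "graph-db")],
}

def hops_from(start, max_hops=2):
    """Breadth-first walk up to max_hops relationships away."""
    seen, out = {start}, []
    queue = deque([(start, 0)])
    while queue:
        node, depth = queue.popleft()
        if depth == max_hops:
            continue
        for rel, nbr in edges.get(node, []):
            if nbr not in seen:
                seen.add(nbr)
                out.append((node, rel, nbr))
                queue.append((nbr, depth + 1))
    return out

print(hops_from("alice"))
# reaches msg1 (1 hop) and neo4j (2 hops), but not graph-db (3 hops away)
```

In a relational store the same two-hop question would typically need two joins across link tables; in a graph it is just two pointer-chases.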

2. Graph Data Science (GDS) Library

The Neo4j Graph Data Science library offers a suite of powerful algorithms for analyzing and understanding graph data. This is particularly beneficial for KruelAI because:

  • Advanced Analytics: GDS provides algorithms for various graph analytics tasks, such as community detection, centrality measures, and pathfinding. These algorithms help KruelAI identify key entities, uncover hidden relationships, and understand the structure of the data.
  • Machine Learning Integration: GDS supports machine learning workflows that can be applied directly to graph data. This enables KruelAI to learn from the relationships in the data, improving its ability to make predictions and recommendations.
  • Scalability and Performance: Neo4j GDS is designed for performance, allowing us to run complex graph algorithms on large datasets efficiently. This ensures that KruelAI can handle growing amounts of data without compromising on speed or accuracy.

3. Why Neo4j is Ideal for Understanding Data

Understanding data is crucial for any AI system. Neo4j’s ability to model and query complex relationships directly aligns with this need:

  • Contextual Awareness: By using graph structures, KruelAI can maintain contextual awareness of entities and their interactions over time. This is essential for providing relevant and accurate responses based on historical data.
  • Dynamic Schema: Neo4j’s schema-less nature allows for flexibility in how data is stored and connected. This adaptability is crucial as KruelAI evolves and its data needs change.
  • Intuitive Query Language: Cypher, Neo4j’s query language, is designed for expressing graph patterns. It allows us to write expressive and readable queries to extract meaningful insights from the data.
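To show what "expressing graph patterns" looks like in practice, here is a small Cypher query held as a Python string. The labels, properties, and schema are invented for the example (this is not Kruel.ai's data model), and running it would require a live Neo4j instance plus the official `neo4j` Python driver.

```python
# A hedged example of a Cypher pattern: "topics this user has mentioned
# recently, most-mentioned first". Schema is invented for illustration.
FIND_RECENT_MENTIONS = """
MATCH (u:User {name: $name})-[:SENT]->(m:Message)-[:MENTIONS]->(t:Topic)
WHERE m.timestamp > $since
RETURN t.name AS topic, count(m) AS mentions
ORDER BY mentions DESC
"""

# With the official driver, execution would look roughly like:
#   from neo4j import GraphDatabase
#   driver = GraphDatabase.driver("bolt://localhost:7687",
#                                 auth=("neo4j", "password"))
#   with driver.session() as session:
#       rows = session.run(FIND_RECENT_MENTIONS,
#                          name="alice", since=1700000000)

print("MATCH" in FIND_RECENT_MENTIONS)
```

Note how the `(u)-[:SENT]->(m)-[:MENTIONS]->(t)` pattern reads like the data model itself, which is the readability argument being made above.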

Key Differences

  • Purpose and Focus:
    • Your Work: Focuses on semantic reduction and efficient data distillation, maintaining core meaning while optimizing data complexity.
    • KruelAI: Emphasizes comprehensive data storage with multi-dimensional embeddings and relational data management, enabling rich contextual understanding and retrieval.
  • Data Management:
    • Your Work: Employs complex internal graphs and black box research techniques to understand and optimize data.
    • KruelAI: Uses Neo4j to manage multi-dimensional data and relationships, integrating various data types and employing temporal logic and parallel batch processing for optimized performance.
  • Learning and Adaptation:
    • Your Work: Utilizes advanced prompt engineering and model understanding to generate complex outputs from simple inputs.
    • KruelAI: Implements continuous learning through multi-vector scoring, allowing the system to adapt and refine its understanding and responses over time.

Conclusion

By comparing these approaches, it’s clear that while both systems aim to optimize data understanding and retrieval, KruelAI focuses on a multi-dimensional, dynamic approach with extensive data integration and continuous learning capabilities. In contrast, your approach leverages semantic reduction and innovative linkages to achieve similar goals through advanced prompt engineering and model understanding.

Both have some similarities in some of the core functionality, but they are completely different in what they do and how they work. All I can say is that in time I will release more information and once V6 reaches a stable fork, more demos etc. will be showing off.

Here is a picture of my clusters back with Version 4 in the Twitch days, just to show you a fraction of how much data there is. This picture was built on a very simple memory structure, not the same as today, so there are fewer vectors. The view is also capped, as we could not render the whole brain system because of limitations with the browser interface and its memory, haha.
[image]
Kruel v4 persona on Twitch 2022.

I read it, there is good information in it. If I were not on Neo4j, it would be a path to explore for certain. Thanks for sharing.

What are you even comparing
“My method?”

Of playing with ChatGPT? There is ultimately no goal; I just talked about things one can do inside the chat

Of course I use Neo4j or Nebula for RAG

You are right, this is just scratching the surface of the problem. I find your approach very complete and well designed. I am currently working on a way to integrate multimodal data to both a neo4j KG and index. I will opt for an agentic pathway which I find efficient but hard to implement and control. All the best !


It's ever evolving @zepef; there is not enough time in the day to progress it fast enough in my mind. It's like an obsession to get it where I want it. I look forward to the next level of AI understanding, so that eventually I can have it work on the understanding issues itself and solve the limitations.

Here is the voice inference system running the new V6 machine learning model.

Nice laptop :laughing: It is really impressive. I have experimented with autogen[teachable] for a French official project about mental health. It is quite convincing but I am still lacking emotions. Another challenge, since my AI is full audio like yours, is finding a way to know when you can interrupt the user or continue the conversation when it is necessary. I am even trying to have the AI listening and speaking at the same time. I am working on this as well as diarization. Let’s keep in touch. All the best!


My two-way is using AIUI. That ball is under the MIT/Facebook open license. It allows interruption, and while playing it does not listen to itself. I've been using their voice input for over two years. You can also make yourself a trigger word too, if you're using it in public.

The laptop I just updated; usually I use my bigger server, as it's faster. But the system even runs on an old Surface Pro 4. I test on various hardware to see how it performs. The math is really slow on older hardware.

Thank you Ben. AIUI is great. It will save me lots of coding hours !

Best regards,

Pierre-Emmanuel


Some key points on the math side I am using for understanding:
Mathematical Foundations
Mathematics is crucial to the kruel.AI system’s understanding and processing capabilities. Here are some key mathematical concepts and techniques used:

a. Embeddings and Vector Representations:

  • Text Tokenization: Text is broken down into smaller units (tokens), which are then converted into high-dimensional vectors (embeddings).
  • Vector Representation: These vectors represent the text in a continuous vector space, where semantically similar texts are close to each other.

b. Cosine Similarity:
Cosine similarity measures the similarity between two vectors, which helps in identifying similar entities, topics, categories, and other vectors, then narrowing the data to retrieve relevant understanding.

  • Dot Product: The sum of the component-wise products of two vectors.
  • Magnitude: The length of the vector in the vector space.
  • Cosine Similarity: The cosine of the angle between two vectors, indicating their similarity.
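The three bullets above can be written out directly in a few lines. The vectors here are toy examples; real embeddings have hundreds or thousands of dimensions, but the math is identical.

```python
# Plain-Python cosine similarity, matching the bullets above:
# dot product, magnitudes, then the cosine of the angle between them.
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def magnitude(v):
    return math.sqrt(dot(v, v))

def cosine_similarity(a, b):
    return dot(a, b) / (magnitude(a) * magnitude(b))

print(cosine_similarity([1, 0], [1, 0]))   # 1.0  (same direction)
print(cosine_similarity([1, 0], [0, 1]))   # 0.0  (orthogonal)
```

Because it depends only on the angle, not vector length, cosine similarity compares meaning independently of how long the underlying texts were.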

c. Neural Network-Based Language Models:
Neural networks perform various mathematical operations to process and generate text:

  • Matrix Multiplication: Transforms input vectors through layers of weights in the neural network.
  • Activation Functions: Introduce non-linearity into the model.
  • Backpropagation: Updates model parameters based on the gradient of the loss function.
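The first two operations above can be shown in miniature as a single dense layer: a matrix multiply followed by a non-linear activation. The weights are invented for the example, and backpropagation (which would update them from a loss gradient) is not shown.

```python
# Toy dense layer: matrix multiplication followed by a ReLU activation,
# the forward-pass half of the operations listed above.
def matvec(W, x):
    """Multiply matrix W (a list of rows) by vector x."""
    return [sum(w * v for w, v in zip(row, x)) for row in W]

def relu(v):
    return [max(0.0, x) for x in v]

W = [[0.5, -1.0],
     [1.0,  2.0]]
x = [1.0, 1.0]

h = relu(matvec(W, x))
print(h)   # [0.0, 3.0]: the first unit went negative and was clipped
```

Stacking many such layers, with backpropagation adjusting each `W`, is all a neural language model's core arithmetic amounts to.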

d. Statistical Analysis and Probabilities:
Statistical methods help in decision-making, message classification, and response generation:

  • Probabilistic Modeling: Calculates the probabilities of different outcomes based on input data.
  • Maximizing Likelihood: Selects the most likely outcome based on computed probabilities.
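Those two bullets can be sketched together: a softmax turns raw scores into a probability distribution, and the most likely outcome is selected. The candidate names and scores are invented for the example.

```python
# Toy probabilistic selection: softmax over raw scores, then pick the
# most likely outcome, as described in the two bullets above.
import math

def softmax(scores):
    m = max(scores)                        # shift for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

scores = {"answer_a": 2.0, "answer_b": 0.5, "answer_c": -1.0}
probs = dict(zip(scores, softmax(list(scores.values()))))
best = max(probs, key=probs.get)

print(best)   # answer_a
```

Subtracting the maximum score before exponentiating changes nothing mathematically but prevents overflow for large scores, which is why virtually every implementation does it.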

  • Temporal Logic: The system manages and processes temporal information to understand the timing and sequence of events, which helps in making time-based decisions.
  • Multi-Vector Scoring: On the ML side, the multi-vector score system allows real-time corrections based on feedback, combined with temporal understanding.

These are some of the key parts.

What are your thoughts on using Graph RAG as an intelligent CRM system?


100%. If built right, it is very powerful for information, especially when combined with multi-vector embeddings.

Graph-based RAG can offer significant benefits to CRM by making the data highly relational, enabling powerful contextual search, and providing dynamic, real-time responses that adapt as your CRM evolves. A graph's ability to model real-world connections closely matches the way CRM users think about their relationships, making this a perfect fit imo.
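As a tiny illustration of the "highly relational" point, graph-based retrieval for a CRM amounts to walking the relationships around a contact and formatting them as context for the LLM. The schema and data below are invented; a real system would query Neo4j or a similar store instead of a Python list.

```python
# Sketch of graph retrieval for a CRM: gather the facts directly
# connected to a contact and format them as LLM context.
# Schema and data are invented for this example.
crm_edges = [
    ("Dana", "WORKS_AT", "Acme Corp"),
    ("Dana", "OPENED", "Ticket #42"),
    ("Ticket #42", "ABOUT", "billing"),
    ("Dana", "ATTENDED", "Q3 demo call"),
]

def context_for(contact):
    """One-hop facts about a contact, rendered as prompt-ready lines."""
    facts = [f"{s} {r.replace('_', ' ').lower()} {d}"
             for s, r, d in crm_edges if s == contact]
    return "\n".join(facts)

print(context_for("Dana"))
```

A fuller version would follow multi-hop edges too (the ticket's topic, the account's other contacts), which is exactly where the graph model pays off over flat CRM tables.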


Thank you for the insights :slight_smile:

I agree 100%. I’m curious to learn how / why there aren’t more CRM / relationship intelligence startups leveraging GraphRAG to help relationship-heavy teams …

I've watched some demos from Microsoft of the GraphRAG tool; people were saying the price was very high. But why do you think this technology is under-leveraged in the relationship management space?
