Kruel.ai KV2.0 - KX (experimental research) to current 8.2 - API companion co-pilot system with full modality and understanding, with persistent memory

Hey, also really curious :smiling_face_with_sunglasses: — can you explain a bit more how V8.2, KX, and k9-spark handle memory and reasoning differently? Like, how do they decide what’s important in context and what to pull into a response?

Just trying to get a better feel for how it all works.

Expand Your Mind or Be Trapped Inside It

The one thing I’ve loved about AI, even back at the very beginning, is this simple truth:
once you understand the foundation of how it works, your only real limitation is how far you’re willing to think.

Not your hardware.
Not your degree.
Not your title.

Your imagination. (Yep, that thing that gets us all in trouble many times)

AI is math. Elegant, relentless, beautiful math. And math, when paired with machines, becomes a bridge between thought and reality. Today, that bridge is wide open. Anyone can turn an idea into something real. Code into systems. Words into worlds. Thought into structure. AI plus machines equals possibility without precedent.

I’ve had PhDs look at me like I was out of my mind. I’ve been told flat-out that what I built was impossible. That they tried. That it couldn’t be done. That it shouldn’t work.

And yet… it does :slight_smile:

What actually builds the future is critical thinking. Vision. The ability to look past what exists and ask, “What happens if I keep going?” Much like predictions in probability engines: we know what we know, but what about the in-between?

Great programmers aren’t defined by how much they know. They’re defined by how well they can connect what they know, and how boldly they’re willing to step into the unknown gaps between ideas. Don’t become stale: just because one solution looks amazing does not mean it’s the only solution there is. This is why I built my system the hard way. I did not want to use others’ foundations and frameworks. I wanted to explore my own limits.

I love knowledge more than anything. Even when I don’t fully understand it yet. Especially then. Because every new concept, every half-understood idea, fills in another piece of the mental map. Over time, those pieces click together. Patterns emerge. The fog lifts…. Almost sounds like what I am building doesn’t it?

And now, for the first time in history, we have AI systems that can meet people exactly where they are. Not dumb things down. Not gatekeep. But translate complexity into understanding. AI can explain the gaps. Teach at your level. Adapt to how you think.

That changes everything.

It means curiosity is no longer punished.
It means learning is no longer locked behind privilege.
It means a single thought, followed by the right questions, can lead to mastery.

This is what Kruel.ai is about.

Not replacing humans.
Not automating creativity away.
But amplifying the human mind. Preserving understanding. Building memory. Creating systems that grow with you, learn with you, and help you see further than you ever could alone.

The future doesn’t belong to the loudest voices or the biggest resumes.
It belongs to the thinkers.

The builders.
The people willing to expand their minds.

If you can imagine it, you can build it.
If you can question it, you can understand it.
And if you can understand it… there is very little you cannot become.

Hey! :smiling_face_with_sunglasses: This post really hit me — I love how you explain AI as a bridge between thought and reality. Super inspiring.

I’m curious — how does Kruel.ai actually adapt to different thinking styles and levels of understanding? Would love to see it in action.

Also, if you have a GitHub or repo link, I’d be super excited to check out the code and explore it myself.


Thanks for the kind words! The “bridge between thought and reality” concept is exactly what we’re building toward. Here’s how Kruel.ai adapts to different thinking styles and levels of understanding:

## Adaptive Communication Styles

**Emotional Awareness System**: Every interaction is analyzed for communication preferences.

The system learns:

- **Communication style**: Direct, gentle, technical, balanced, or casual

- **Correction sensitivity**: How to tell you when something needs correction (high/medium/low sensitivity)

- **Response length**: Brief, medium, or detailed based on your preferences

- **Tone preference**: Formal, friendly, casual, or professional

If you prefer technical explanations, it adapts. If you like gentle corrections, it adjusts. If you want brief answers, it learns that. These preferences are stored persistently and applied across sessions.
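
To make that concrete, here is a minimal sketch (not Kruel.ai’s actual code; the field names and file format are assumptions) of how a persistent communication profile like the one described above could be stored and reloaded across sessions:

```python
from dataclasses import dataclass, asdict
from pathlib import Path
import json

@dataclass
class CommunicationProfile:
    """Per-user communication preferences, persisted across sessions."""
    style: str = "balanced"                  # direct | gentle | technical | balanced | casual
    correction_sensitivity: str = "medium"   # high | medium | low
    response_length: str = "medium"          # brief | medium | detailed
    tone: str = "friendly"                   # formal | friendly | casual | professional

    def save(self, path: Path) -> None:
        path.write_text(json.dumps(asdict(self)))

    @classmethod
    def load(cls, path: Path) -> "CommunicationProfile":
        # A brand-new user simply gets the defaults.
        if not path.exists():
            return cls()
        return cls(**json.loads(path.read_text()))
```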

## Pattern Learning & Adaptation

**Pattern Learning**: The system learns patterns from every interaction:

- **Query types**: Information-seeking, creation, problem-solving, conversation

- **Successful approaches**: “For X type queries, use Y approach” patterns

- **User preferences**: What works best for you specifically

- **Common mistakes**: What to avoid based on past interactions

- **Effective strategies**: What approaches have worked well

There are so many layers to the system, all working through async parallel calls.

Over time, if you always ask coding questions a certain way, it learns that pattern and adapts. If you prefer visual explanations over text, it learns that too.
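
As a rough illustration of the “for X type queries, use Y approach” idea (a sketch under assumed names, not the real implementation), a pattern store can reinforce approaches only while they keep producing good outcomes:

```python
from collections import defaultdict

class PatternStore:
    """Tracks which approach tends to work for each query type (illustrative only)."""

    def __init__(self):
        # query_type -> approach -> running score
        self.scores = defaultdict(lambda: defaultdict(float))

    def record_outcome(self, query_type: str, approach: str, success: bool) -> None:
        # Successful uses nudge the score up; failures nudge it back down.
        self.scores[query_type][approach] += 1.0 if success else -1.0

    def best_approach(self, query_type: str):
        approaches = self.scores.get(query_type)
        if not approaches:
            return None
        approach, score = max(approaches.items(), key=lambda kv: kv[1])
        return approach if score > 0 else None

# Example: store.record_outcome("coding_question", "step_by_step_with_code", success=True)
```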

## Understanding Level Adaptation

**Conversation Depth Tracking**: The system tracks conversation depth and adapts accordingly:

- **Surface-level**: Brief, high-level explanations

- **Deep dive**: Detailed, technical explanations with context

- **Learning mode**: Step-by-step explanations with examples

If you ask “how does memory work?” and then follow up with deeper questions, it recognizes you’re diving deep and provides increasingly detailed explanations. If you ask a quick question, it gives a quick answer.
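
A toy heuristic for the three depth modes might look like this (the thresholds are made-up assumptions, purely for illustration):

```python
def classify_depth(turns_on_topic: int, followup_questions: int) -> str:
    """Map simple conversation statistics onto the three depth modes above."""
    if turns_on_topic <= 1 and followup_questions == 0:
        return "surface"      # brief, high-level explanation
    if followup_questions >= 3:
        return "learning"     # step-by-step explanation with examples
    return "deep_dive"        # detailed, technical explanation with context
```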

**Topic Tracking**: The system maintains active topics and understands when you’re:

- Continuing a topic (provides context-aware responses)

- Shifting topics (recognizes the shift and adjusts)

- Starting fresh (doesn’t bring irrelevant context)

## Real-Time Adaptation

**Meta-Pattern Detection**: The system detects how you’re communicating:

- **Clarification requests**: “What do you mean?” → Provides clearer explanations

- **Corrections**: “Actually, that’s not right” → Learns the correction and adapts

- **Follow-ups**: “Tell me more about that” → Understands what “that” refers to

- **Disagreements**: Recognizes when you disagree and adjusts approach

**Ambiguity Resolution**: When you say “tell me more about that,” the system resolves what “that” refers to from the active topics and recent conversation, then expands on it at the appropriate depth.
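
As a purely illustrative sketch (the phrases and function name are hypothetical, and a real system would use classifiers rather than string matching), meta-pattern detection can be thought of as classifying the conversational move before any topic reasoning happens:

```python
def detect_meta_pattern(message: str) -> str:
    """Classify the conversational move, not the topic (illustrative keywords only)."""
    text = message.lower()
    if any(p in text for p in ("what do you mean", "can you clarify", "i don't understand")):
        return "clarification_request"
    if any(p in text for p in ("that's not right", "that's wrong", "actually,")):
        return "correction"
    if any(p in text for p in ("tell me more", "go deeper", "expand on that")):
        return "follow_up"
    if any(p in text for p in ("i disagree", "i don't think so")):
        return "disagreement"
    return "statement"
```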

## Multi-System Adaptation

We have three systems that adapt differently:

**kruel-v8-manns**: Adapts through parallel pattern recognition - multiple pathways (semantic, graph traversal, GNN-enhanced) learn your patterns simultaneously. Fast adaptation through conversation pattern logic and emotional analysis.

**KX**: Adapts through structured learning - a 14-step pipeline that includes pattern learning, emotional awareness, and deep reflection. Each interaction feeds back into the system to improve future responses.

**k9-spark**: Adapts through multi-dimensional understanding - learns across semantic, temporal, emotional, contextual, structural, and intentional dimensions. The most sophisticated adaptation, understanding not just what you’re asking, but why and how.

## Seeing It in Action

The adaptation happens automatically and continuously. You’ll notice:

- Responses getting more aligned with your communication style over time

- The system remembering your preferences (brief vs. detailed, technical vs. simple)

- Better context understanding (knowing what “that” refers to)

- Appropriate level of explanation based on conversation depth

## Code Access

The codebase isn’t currently open source, but we’re actively working on the systems and documenting the architecture. The memory architecture response I shared earlier gives a good technical overview of how the adaptation systems work under the hood.

If you’re interested in exploring similar concepts, the key ideas are:

- Graph memory systems

- Pattern learning from interactions

- Emotional/communication style tracking

- Multi-dimensional memory retrieval

- Conversation flow understanding

The adaptation isn’t just about remembering preferences - it’s about understanding how you think, how you communicate, and adapting the entire system’s approach to match your style. It’s the difference between a chatbot and a cognitive system that learns and grows with you.

And once you master that, you will have something better than the agents the big players ship today. Add in ideas like K9’s ability to build its own tools on the fly for things it has never seen, and you have a narrow-AGI concept. The only thing that makes it narrow is domain knowledge and understanding.

Hope that gives you some insights.

P.S. One more thing: I highly recommend everyone learn about machine learning, then take that understanding and apply it to whatever you are trying to figure out; it will open new doorways to things you had no idea you could do. This is why understanding the math behind it all is important, so you can see how ML fits into everything: vision, music, voice, sound, data, relationships, and so on. Then you begin the journey into multidimensional math, and the idea behind living math, which is a searchable term in this forum :wink:

Really appreciate the detailed breakdown — this is exactly the level of thinking I enjoy digging into.

What resonates most with me is how you frame adaptation not as “memory = storage”, but as signal-driven relevance + pattern formation across multiple dimensions (semantic, temporal, emotional, intent). That mindset is very close to how I personally think about AI systems.

I’m aiming to move into AI testing / evaluation in the future (OpenAI in particular), so for me it’s critical not just to use models, but to understand:

- how patterns are formed and reinforced

- how retrieval is gated and decayed

- where adaptation lives (pre-retrieval vs in-retrieval vs reasoning-time)

- and how different architectural choices affect behavior, failure modes, and alignment

Even without open-source code, your descriptions are valuable because they expose the mental model behind the system — which is honestly what good testing depends on.

A couple of things I’d be curious about, if you’re open to sharing at a conceptual level:

1. How do you validate that adaptation is actually helping and not silently overfitting to user behavior?

2. Do you have explicit guardrails to prevent emotional/communication-style adaptation from drifting the system’s factual rigor?

3. At what layer do you detect and resolve conflicting signals (e.g., user wants brevity but asks a deep technical question)?

I’m less interested in “building from scratch” and more interested in examining, stress-testing, and understanding complex systems — how they behave at the edges, where they fail, and why.

Thanks again for sharing such a transparent view into your architecture. Discussions like this are rare and genuinely useful.


At a high level (based on some of the 3 systems), we think of adaptation as guided relevance, not memory accumulation. The core question is never “did the system learn something?” but rather “should this signal influence future behavior, and under what conditions?”

1. Validating Adaptation Without Overfitting

The way we approach this conceptually is by treating adaptation as hypotheses, not facts.

Any learned pattern is provisional. It earns influence only if it demonstrates consistent, outcome-aligned utility over time, and it loses influence just as naturally when that utility fades. This avoids the common trap where systems silently lock onto user quirks or early-session behavior and never recover.

From an evaluation standpoint, the key ideas are:

  • Outcome-oriented validation rather than frequency-based reinforcement

  • Confidence decay instead of permanent preference encoding

  • Temporal sensitivity, so patterns that worked once aren’t assumed to work forever

In practice, this means adaptation is continuously re-validated against observed usefulness. If a pattern’s effectiveness varies widely, it is treated as contextual or unreliable rather than “learned.” From a testing perspective, we actively look for variance as a warning signal.

The mental model is closer to scientific inference than personalization: hypotheses are tested, downgraded, or discarded.
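
One way to picture “hypotheses with confidence decay” is a small record whose influence has to be re-earned; the half-life, cap, and threshold below are invented illustrative numbers, not the system’s actual parameters:

```python
import statistics
import time

class Hypothesis:
    """A provisional learned pattern whose influence must be continually re-earned (sketch)."""
    HALF_LIFE_DAYS = 30.0  # assumed decay rate, purely for illustration

    def __init__(self, description: str):
        self.description = description
        self.confidence = 0.5            # starts provisional, never certain
        self.outcomes: list[float] = []  # 1.0 = helped, 0.0 = did not help
        self.last_reinforced = time.time()

    def record_outcome(self, helped: bool) -> None:
        self.outcomes.append(1.0 if helped else 0.0)
        self.last_reinforced = time.time()
        mean = statistics.fmean(self.outcomes)
        spread = statistics.pstdev(self.outcomes)
        # Outcome-aligned utility raises confidence; high variance caps it,
        # because inconsistent patterns are "contextual", not "learned".
        self.confidence = max(0.0, min(0.9, mean - spread))

    def effective_confidence(self) -> float:
        # Temporal decay: a pattern that worked once isn't assumed to work forever.
        age_days = (time.time() - self.last_reinforced) / 86400.0
        return self.confidence * 0.5 ** (age_days / self.HALF_LIFE_DAYS)

    def is_active(self) -> bool:
        return self.effective_confidence() > 0.3
```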

2. Guardrails Between Adaptation and Factual Rigor

One principle we’re very strict about is separation of concerns.

Adaptation is allowed to influence how information is delivered, but never what is considered true. Factual correctness lives in a separate validation pathway that does not inherit emotional or stylistic state.

From an evaluation lens, this means:

  • Emotional or communication-style signals are non-authoritative

  • They can shape tone, framing, or verbosity, but not factual selection or reasoning

  • Any adaptation that would compromise factual confidence is overridden

Conceptually, we treat factual rigor as invariant. Adaptation is additive only when it does not conflict. This dramatically reduces drift and makes failure modes easier to detect during stress testing, because violations show up as explicit contradictions rather than subtle degradation.
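
A minimal sketch of that separation of concerns (all names here are hypothetical stand-ins, not the actual pipeline): the claim set is fixed by the factual pathway before any preference is consulted, and the delivery pathway may only reshape presentation:

```python
from dataclasses import dataclass

@dataclass
class StyleProfile:
    tone: str = "friendly"
    response_length: str = "brief"

def retrieve_facts(question: str) -> list[str]:
    # Factual pathway stand-in: retrieval + validation, with no stylistic input at all.
    return [f"Validated fact relevant to: {question}"]

def render(facts: list[str], style: StyleProfile) -> str:
    # Delivery pathway: preferences shape verbosity and tone, never the claim set.
    body = " ".join(facts) if style.response_length == "brief" else "\n- " + "\n- ".join(facts)
    prefix = "Sure! " if style.tone in ("friendly", "casual") else ""
    return prefix + body

def answer(question: str, style: StyleProfile) -> str:
    facts = retrieve_facts(question)   # decided before any preference is consulted
    return render(facts, style)        # adaptation is additive, applied after the facts are fixed
```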

3. Resolving Conflicting Signals (e.g., Brevity vs. Depth)

Conflicts are inevitable in adaptive systems, so we don’t try to eliminate them — we make them explicit and hierarchical.

Rather than resolving everything in one place, conflicts are surfaced where they naturally arise, then resolved according to priority:

  • Correctness outranks everything

  • User intent and task requirements outrank preferences

  • Contextual appropriateness outranks global style settings

So in your example — a user who prefers brevity but asks a deep technical question — the system doesn’t “choose a side” emotionally. It recognizes that the act of asking implies a requirement. Preferences shape delivery only after requirements are satisfied.
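
A tiny illustrative version of that priority ladder (the labels are assumptions; only the ordering matters):

```python
def choose_response_depth(prefers_brief: bool, question_requires_depth: bool,
                          answer_is_verified: bool) -> str:
    """Correctness outranks requirements, requirements outrank preferences (sketch)."""
    if not answer_is_verified:
        return "hedge_or_decline"   # correctness outranks everything
    if question_requires_depth:
        return "detailed"           # asking a deep question implies a requirement
    return "brief" if prefers_brief else "medium"   # preferences apply last
```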

From a testing standpoint, this is important because it produces predictable behavior under stress. You can design adversarial cases knowing exactly which signals should win and why.

How We Think About Testing These Systems

What matters most to us isn’t whether adaptation exists, but whether it is:

  • Reversible

  • Bounded

  • Observable through outcomes

The edge cases you mention — preference changes, conflicting goals, emotional pressure against factual rigor — are exactly where we focus evaluation. A system that adapts but cannot explain why it adapted, or cannot back off when signals change, is fragile.

The guiding idea is simple:
Adaptation should make the system more helpful without making it more confident than it deserves to be.

That balance is where most systems fail — either by being rigid or by becoming over-personalized. Our architecture is designed around continuously walking that line.

The beauty of the design, taken as a whole, is that we own the reasoning.
We own the learning.
We own the validation.

Rather than relying on a single opaque model we don’t fully understand, we treat models as components inside a system we do understand. That distinction gives us visibility into everything that matters: the metrics being tracked, the signals being weighed, the decisions being made, and the outcomes those decisions produce over time.

Because we can see those internals, we can guide them.

What emerges is not a static model, but a living, learning construct. It adapts, reflects, and corrects itself — not because a model implicitly decided to, but because the system is explicitly designed to do so.

The architecture is intentionally plural. It isn’t built around one intelligence, but many working together: language models for expression and reasoning, embedding systems for semantic understanding, neural networks for pattern recognition, graph-based networks for relationship learning, and symbolic layers for validation and control. Each plays a distinct role, and none are allowed to dominate the whole.

This separation is critical. It allows learning without overfitting, adaptation without drift, and personalization without sacrificing factual rigor. It also makes the system observable and testable — not just in what it says, but in why it says it.

In short, this isn’t “an LLM with memory.”
It’s a layered cognitive architecture where intelligence is composed, governed, and evolved deliberately.

So think of this: a response typically comes back in 10-15 seconds (large monthly or yearly requests sometimes need more time), and the total across all systems is roughly 35-40 models per call. Sometimes more, sometimes less, depending on the case.

This actually clarifies a lot — especially the framing of adaptation as scientific inference rather than personalization. Treating learned signals as hypotheses with confidence decay instead of accumulated “truth” feels like the right abstraction if you want systems to remain corrigible over time.

What stood out to me most from a testing perspective is your emphasis on variance as a signal, not noise. That’s something many systems get wrong — they optimize for consistency too early and end up masking instability instead of exposing it. Designing adaptation so that inconsistency downgrades influence rather than reinforcing it makes the system much more inspectable.

I also really like the hard separation you describe between:

- factual validation pathways

- and adaptive delivery pathways

That line — “adaptation is additive only when it does not conflict” — is basically the difference between systems that drift quietly and systems whose failures are loud and debuggable. From an evaluation standpoint, that’s gold.

The explicit hierarchy for conflict resolution (correctness → intent → preference) is another thing that feels very testable. You can actually construct adversarial and edge cases with clear expectations of which signal should dominate, instead of guessing what the model will “feel like doing.”

One thing I’m curious about, purely from a stress-testing angle: How do you observe or surface why a particular hypothesis lost influence?

Is that exposed as internal metrics/logs, or inferred purely from downstream outcome shifts? Being able to distinguish “signal decayed due to time” vs “signal invalidated due to contradiction” seems important for diagnosis.

Also, the scale you mention — 35–40 models per call depending on context — really reinforces that this is a system-of-systems, not a model-centric design. That aligns strongly with how I think good AI testing should work: evaluating coordination, failure boundaries, and governance between components, not just output quality.

Overall, this is one of the clearest articulations I’ve seen of how to enable adaptation without granting it epistemic authority. That balance — helpfulness without unjustified confidence — is exactly where I think most future failures (and breakthroughs) will live.

Thanks for taking the time to explain it in such a grounded way.


This goes straight to the heart of what separates a system you can observe and validate from one that simply behaves and leaves you guessing.

At a high level, we already track the signals that explain why a learned hypothesis or pattern loses influence over time. Confidence doesn’t just drift arbitrarily; it changes in response to concrete events: contradictions, supersession by stronger hypotheses, lack of reinforcement, or growing variance in effectiveness.

What matters is that these signals exist and are retained. We don’t simply know that something lost influence; we retain enough context to reconstruct why it happened.

That said, today those explanations are often derived rather than singular. The reasoning path is present, but it’s distributed across multiple indicators: confidence changes, contradiction markers, temporal behavior, and replacement relationships. When you want to understand what happened, you can piece together the story but the mechanism itself isn’t always surfaced as a single, first-class explanation.

That distinction is important.

The direction we’re moving toward is making the mechanism of change explicit and directly queryable. Not just “this hypothesis is weaker now,” but “this hypothesis lost influence due to time-based decay,” or “due to contradiction by newer evidence,” or “due to high variance in outcomes.”

Each of those paths means something different about the system’s state.

A hypothesis that ages out naturally tells you the environment has shifted.
A hypothesis invalidated by contradiction tells you new information emerged.
A hypothesis weakened by variance tells you it was context-dependent, not universally reliable.
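
One way to make those mechanisms first-class and queryable, rather than something reconstructed from scattered indicators, is to log the reason alongside every confidence change (a sketch with hypothetical names):

```python
from dataclasses import dataclass, field
from enum import Enum

class ChangeReason(Enum):
    TIME_DECAY = "aged out without reinforcement"      # the environment shifted
    CONTRADICTION = "invalidated by newer evidence"    # new information emerged
    HIGH_VARIANCE = "effectiveness too inconsistent"   # context-dependent, not universal

@dataclass
class InfluenceChange:
    hypothesis: str
    reason: ChangeReason
    before: float
    after: float

@dataclass
class ChangeLog:
    """Retains *why* a hypothesis lost influence, so diagnosis never depends on
    piecing the story together after the fact."""
    events: list[InfluenceChange] = field(default_factory=list)

    def record(self, hypothesis: str, reason: ChangeReason, before: float, after: float) -> None:
        self.events.append(InfluenceChange(hypothesis, reason, before, after))

    def why(self, hypothesis: str) -> list[ChangeReason]:
        return [e.reason for e in self.events if e.hypothesis == hypothesis]
```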

From a stress-testing and evaluation perspective, those distinctions matter. You don’t just want to know that the system adapted you want to know how and why it adapted, because each mechanism implies different risks, different failure modes, and different corrective strategies.

This ties directly back to our core philosophy: if we claim to own the reasoning, then the reasoning must be inspectable. A system that can only show its current state, but not explain how it arrived there, isn’t truly controllable; it’s just reactive.

Right now, the reasoning path is visible through the combination of signals. The next step is elevating those paths into explicit, explainable mechanisms so the system doesn’t just adapt correctly, but can articulate why it adapted.

That’s the difference between a system that learns and a system that understands its own learning.

Thanks for the detailed explanation! This is exactly the kind of insight I want to understand as a future OpenAI tester. I’m really interested in how models adapt, how patterns are formed, and how reasoning paths are tracked and validated. Being able to see not just the outcome but the mechanism behind the adaptation is exactly what I want to explore and test.

If possible, I’d love to see examples or experiments where these signals and adaptations are observed in practice — even conceptually — so I can study them and understand the system more deeply.

Anytime, Richard. I appreciate the way you’re thinking about this.

What I can share safely is how we think about observing these mechanisms in practice, rather than concrete internal examples. From a testing and evaluation standpoint, the focus isn’t the implementation, but the behavioral signatures you’d expect to see if adaptation and validation are working correctly.

A few conceptual experiments we use as mental models:

1. Time-based decay vs contradiction
Imagine a hypothesis or learned preference that is no longer reinforced. Over time, you’d expect its influence to weaken gradually, without abrupt behavioral shifts. In contrast, when new information directly contradicts an existing assumption, the change should be sharper and traceable to a specific event. From an evaluation perspective, the key signal isn’t just the outcome, but whether the system can distinguish gradual drift from explicit invalidation.

2. Variance as a signal
Another useful test is introducing a pattern that works well in some contexts but fails in others. A healthy system shouldn’t immediately entrench or discard it. Instead, you’d expect cautious application, reduced confidence, or contextual gating. High variance tells you something different than outright failure, and that distinction matters when stress-testing adaptability.

3. Conflicting signals
You can also probe reasoning paths by deliberately creating conflicts. For example, a user preference that points one way and a task requirement that points another. The observable behavior should reflect a consistent resolution hierarchy rather than oscillation or averaging. From a testing standpoint, predictability under conflict is a strong signal of architectural clarity.

4. Explanation vs state
Finally, one of the most important evaluation questions is whether the system can expose why a change occurred, not just that it occurred. Even at a conceptual level, systems that treat adaptation as a black box are much harder to validate than systems where the reasoning path is inspectable, even if only internally.

Those kinds of experiments don’t depend on a specific model or implementation. They’re ways of thinking about system behavior at the edges — where adaptation, validation, and reasoning intersect. That’s usually where the most interesting failures (and insights) show up.

If you’re aiming toward testing and evaluation work, that lens — focusing on observable behavior, causal distinctions, and failure modes — will take you a long way, regardless of the underlying architecture.


Thanks a lot for sharing this! I really appreciate seeing how to observe and test these mechanisms in practice, even conceptually. As a future OpenAI tester, I’m especially interested in:

- How to detect gradual drift vs sharp invalidation

- Using variance as a signal for pattern reliability

- How conflicting signals are resolved predictably

- Being able to explain why a change occurred, not just that it happened

I want to understand the behavioral signatures of these systems, so I can test them properly and see how adaptation, validation, and reasoning really work under different conditions.

If there are any conceptual examples or mental models you can safely share that illustrate these ideas, I’d love to study them and learn from them.

I recommend you take these, or similar courses. You can check out my LinkedIn if you want to see which ones I completed. Here are some different levels for you to explore.

These will help you find your own path :slight_smile:

**LLMs and modern adaptation** (alignment, evaluation, lifecycle)

  • Generative AI with Large Language Models (DeepLearning.AI + AWS / Coursera) - includes evaluation and RLHF concepts that map well to “adaptation + validation”

  • CS324: Large Language Models (Stanford CRFM) - a strong mental model for LLM behavior, limits, and evaluation thinking

**Evaluation and debugging** (a direct hit for what you asked about)

  • Evaluating and Debugging Generative AI (DeepLearning.AI) - purpose-built for tracing, evaluation workflows, and debugging behavior over time

  • LLM Optimization & Evaluation Specialization (Coursera) (recently updated Dec 2025) - explicitly focused on the evaluation lifecycle and performance measurement

  • Evaluate & Optimize LLM Performance (Coursera) - a good intro to quantitative eval and automated metrics

  • Optional: LLM Benchmarking and Evaluation Training (Coursera) - another evaluation-focused path

**Causality + drift reasoning** (for “why did it change?”)

  • Causal Inference lecture (MIT OCW, Sontag) - great for the mental habit of separating correlation from cause (very relevant to “drift vs invalidation”)

  • Methods for Causal Inference (University of Edinburgh open materials) - another clean causal grounding option


Thanks, I really appreciate this — especially the way you frame testing through observable behavior rather than implementation details.

That distinction between time-based decay, contradiction, and variance as separate signals is exactly the kind of lens I’m interested in developing. Not “did the system change,” but why it changed, and what that implies about risk, drift, or context sensitivity.

I’m specifically aiming toward testing / evaluation work long-term, so thinking in terms of causal signatures, conflict resolution hierarchies, and explainability of adaptation is extremely valuable to me. Even at a conceptual level, this helps build the right mental models for probing systems at the edges instead of treating them as black boxes.

I’ll definitely look into the courses you mentioned — especially the evaluation- and debugging-focused ones. The causal inference angle also makes a lot of sense in the context of distinguishing correlation from actual invalidation.

Thanks again for taking the time to break this down. This kind of perspective is rare, and very motivating to learn from.


Where is this at now? I’ve been kinda following this for a while and would be interested in seeing where it’s at now.

Things are still moving. We have multiple AIs all learning and being tested now, with one main one that is production ready. We are meeting with Sask. Innovation next week, which is an innovation accelerator funnel, so we will see what they think. If we go that route it means grants, partnerships, etc. It’s more about exploring the option, much like other options we explored before; we are always looking for the right direction to take this. Although claw/molt/bot, with their path to open source with their agent, got me thinking about that again as a possibility for at least one of the versions. Both of these paths would allow it to be completed, rather than one guy and AIs building as fast as he can pedal, lol. 27K agents were used in 105 days last year on this project (along with 2 other AIs started and 2 other non-AI products brought to market), so it has been really busy and moving faster and faster towards endgame. Still waiting on our new glasses, though I got the old ones up and running; I even have it running now through their native application, so that was fun.

Beyond that there is not enough sleep or time to sleep :wink: @almostblackwidow

# What’s New at kruel.ai: KX and the Next Evolution

We have a meeting next week about programs we can tap into for expert review and guidance on where we want to take this next. In the meantime, we’ve put all our effort into **KX**. K8.2 remains a full-featured system, but **KX is starting to shine much brighter than K8.2 in what it can do.** We’re not at the point where it does everything we want yet—we’re aiming for something more professional and more capable—but the direction is clear. Here’s a high-level look at what KX is today and where it’s headed.

## KX as a Full Agentic System

KX is built as a **full agentic system**: an AI that doesn’t just answer one-off questions but **decides what to do**, **uses tools**, and **remembers**. It can act as a **companion** that keeps context and learns from you, a **scientist** that looks things up in your own materials and cites sources, and a **creative and workspace partner** that can generate, search, and—in the roadmap—act in your environment. Everything below is about what KX *enables for you*, not the plumbing.

---

## What KX Can Do Today

**A companion that remembers.**

KX keeps conversation memory and a notion of “now”: what’s in play right now, what you care about, and what was said recently. It can search over time (“what did we decide about X?”) and use that to stay coherent across sessions. So you get continuity, not a blank slate every time.

**A scientist that uses your own books and manuals.**

You can give KX your documents—manuals, PDFs, spreadsheets, notes—by dropping them in or pointing it at a folder. It ingests them, understands what each is about, and can **narrow to the right book** when you ask. When you say things like “We have an issue with the M2000—how do we do xyz?”, it can find the right manual, pull the relevant passage, **tell you which document page it’s on**, and **give you a link to open the file** so you can follow along. You get the answer in chat *and* a clear source: which book, which page, and a way to open it. Same idea as “like web results with a source.”
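
Conceptually, the “answer plus a clear source” behavior boils down to retrieval that carries document and page metadata with every passage. Here is a deliberately simplified sketch (a toy keyword scorer stands in for real embedding search, and all names are hypothetical):

```python
from dataclasses import dataclass

@dataclass
class Passage:
    text: str
    document: str   # e.g. "M2000 Service Manual.pdf"
    page: int
    path: str       # local path or URL so the user can open the source themselves

def answer_with_citation(question: str, index: list[Passage]) -> str:
    """Return the best-matching passage along with which book, which page, and a link."""
    terms = set(question.lower().split())
    best = max(index, key=lambda p: len(terms & set(p.text.lower().split())), default=None)
    if best is None:
        return "No matching document found."
    return (f"{best.text}\n\nSource: {best.document}, page {best.page} "
            f"(open: {best.path})")
```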

**A creative partner.**

KX can generate images from descriptions, create music, and produce video. It can search the web when the question needs live or external information, and it can search its own memory—conversation and documents—so creativity and answers are grounded in what it (and you) already know.

**A single place for chat, memory, and documents.**

One system handles the conversation, what to remember, what to recall over time, and what’s in your document library. The goal is for you to work *with* KX, not against a disconnected set of features.

---

## What We’re Working On (High Level)

**Smarter workspace and safety.**

We’re bringing in controlled ways for KX to run commands and edit files (with approvals, trash instead of permanent delete, and optional sandboxing). The aim is for KX to act **in** your workspace when you want it to—safely and transparently—so it can help with tasks, not only answer questions.

**Browser and automation.**

KX will be able to use a dedicated browser session—navigate, snapshot, click, fill—so it can follow procedures, check web apps, or automate steps you currently do by hand, with you in the loop when it matters.

**Channels and scheduling.** (openclaw inspired us to do this)

We’re planning email and WhatsApp-style channels so KX can work in the channels you already use, and cron/scheduling so it can run recurring or timed tasks. The idea is for KX to fit into how you already work and when you need things done.

**Code and images in the same story.**

Document memory is the first step. Next we want the same “ingest once, search and cite later” pattern for **code** and **images**—so your manuals, your codebase, and your visual assets all live in one intelligence layer. You ask; KX narrows to the right book, repo, or image and gives you the answer plus the source.

**One agent, many roles.**

We’re designing tool profiles and optional “elevated” modes so the same system can be a light companion in one context and a full-power scientist or workspace agent in another—professional and flexible, without exposing more than you want.

(Goes back to Kruel V2 - V5 concepts)

## Where We Want to Take It

We want KX to be the **next evolution**: an agentic system that is a true companion (remembers, learns, stays in context), a scientist (uses your documents and cites them like a good reference), and a partner that can create, search, and eventually act in your workspace and your channels. K8.2 is still full-featured; **KX is where we’re betting.** We’re in full development, and we’re actively seeking expert review and guidance so we can get the architecture, safety, and UX right. If that’s the kind of conversation you want to have next week, we’re ready for it.

— **kruel.ai** KRED

So there it is: what we are power-leveling these evenings. K9 is still on the list but shelved again until I have more time…

Oh, and did I mention… all of the above is 100% offline :slight_smile: in one AI box that sits on our own private network. That is the current design, but it can scale to the cloud or larger DGX pods and more. One of the companies I am chatting with, which is our next tester, will be using this as an internal IP learning system to aid technicians in the field with repairs, other understanding, and application creation, all over a P2P company cloud.

I think this space is going to get very competitive over the next 2 years.

Kruel.ai V8.2 KRED
(Kruel Research Engineering Design)

Knowledge Reasoning Understanding Emotional Learning

Wanted to show where we are at. This is not even KX which is the next level of this. :upside_down_face:

Also, here is what Lynda/KRED looks like. Keep in mind I am capped in the view; I will see if I can find a way to render the whole of it, lol. But this is 1/4 of the full picture.


Major System Update

Late last night, we rolled out a significant infrastructure upgrade with the addition of two new servers to our platform.

One of the highlights is KX-TTS, our new fully offline text-to-speech server. This system supports high-quality voice generation using reference audio samples, enabling voices with realistic emotional range and tonal depth.

This has been a long time coming. We’ve spent years testing and refining TTS solutions in search of something that truly met our standards. While Coqui-TTS served us well and helped move the platform forward, it never quite reached the level of realism and expressiveness we were aiming for.

With KX-TTS now live, we’re finally able to retire those legacy Coqui-TTS servers. The result is a substantial leap in voice quality that brings us much closer to the experience offered by platforms like OpenAI and ElevenLabs—while remaining fully offline and under our control.

This update marks an important milestone for the system and lays a strong foundation for future voice-driven features.


Audio & Music System Enhancements

If you’d like to hear voice samples, you’ll need to join us on Discord, as this forum doesn’t support audio file playback.

Alongside the TTS improvements, we’ve also completed a major overhaul of our music generation system. We’ve integrated new, high-quality offline models that are now capable of competing with some of the best music generation solutions available today.

In addition, we combined select open-source technologies with our own in-house development to build a dedicated music application. This system seamlessly supports both our proprietary APIs and fully offline vocal models, giving us flexibility, performance, and creative control without relying on external services.

This upgrade significantly expands our audio capabilities and opens the door to more advanced, expressive, and customizable music features going forward.


Reducing External Dependencies

With these systems in place—alongside our new video and audio generation pipeline and expanded image generation capabilities—we’ve effectively eliminated the need for external AI chat clients.

As a result, we’ve been able to cancel nearly all third-party AI subscriptions. The platform now offers a broader and more integrated feature set than many large AI providers, while remaining fully under our control.

We’ll continue refining and expanding these systems, and we’re looking forward to showcasing more of these capabilities in the very near future.

**Update:**

In the last hour we have fallen in love with our new voice system. The clones are so realistic that I can’t tell the difference, at least from speech. I will have to see how it handles other things. Some of the more advanced systems, like openai.fm with voice instructions and ElevenLabs, have real laughs, giggles, and even FX. But for storytelling with range, the new system is at ElevenLabs’ level of quality, with 1-3 seconds to generate on a GB10 processor. I am impressed, and it takes a lot to impress me, which tells you it’s amazing.

We have been updating all the servers since last night and through this morning, and should be done today by 10am, I hope. That will officially have all the AI systems moved off the old Coqui-TTS over to KX-TTS. We also decided to build more servers: we are moving all embedding models into their own server so that we are not duplicating loading of the same model and eating up RAM that isn’t needed (see the sketch below). This has been well underway since 4:30am this morning. We plan to have it done and tested before 10am. Fingers crossed, haha; that, or it will be a day of debugging.
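
For anyone curious, here is a minimal sketch of what “one shared embedding server instead of N copies of the same model” can look like. It assumes FastAPI and sentence-transformers, which may or may not match our actual stack, and the model name is a placeholder:

```python
# Shared embedding service: the model loads once here, and every other service
# calls this endpoint instead of loading its own copy into RAM.
from fastapi import FastAPI
from pydantic import BaseModel
from sentence_transformers import SentenceTransformer

app = FastAPI()
model = SentenceTransformer("all-MiniLM-L6-v2")  # placeholder model name

class EmbedRequest(BaseModel):
    texts: list[str]

@app.post("/embed")
def embed(req: EmbedRequest) -> dict:
    vectors = model.encode(req.texts, normalize_embeddings=True)
    return {"embeddings": vectors.tolist()}
```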

Kruel.ai v8.2 Lynda.

This is an update just to show the current progress on where we’re at. This is the current memory architecture. We actually updated the embeddings tonight: we doubled them and shaved off a ton of time. We’ve really been working hard on optimization just so we can bring the intelligence faster, and as soon as I get a couple more things done here, I’ll go back, turn on streaming, and see if we can get it talking closer to the speed it generates.

This is an offline-first design. When we get to the end of this build, the next loop I am working on is plugging in the new API models and flushing out the old ones that are deprecated. We are getting close to something special. I need a bigger server so I can do more with the system.

One day at a time.
