Kruel.ai KV2.0 - KX (experimental research) to current 8.2 - API companion co-pilot system with full modality and understanding, with persistent memory

Can this memory be used agentically with an extension in an IDE?


Why would you need that? It can just make software if you need it, probably its own IDE too.


No IDE required… though if you prefer one, it works with that too. KX-Desktop can open any IDE and operate it exactly as you would, and if your IDE supports MCP or API integration, those are available as additional pathways. But they’re options, not requirements.

At its core, it's a fully agentic development environment. Anything you can do on a desktop, it can do, and as it learns your workflows, it gets faster at them than you are.

A typical development session starts with a conversation: define the project, outline the stack, specify infrastructure requirements (a Docker environment with GPU access and given specs, for example), and share any existing designs or architectural ideas. It reviews everything, draws on its accumulated knowledge of your systems and history, and begins planning an approach. When the domain calls for it, it researches current developments before committing to a direction.

From there, it either executes autonomously or presents a detailed build plan for review, whichever you prefer. That review phase is genuinely collaborative: architectural decisions, potential issues, and alternative approaches are all on the table before a single line is written.

Once development is complete, it moves into testing independently: surfacing errors, identifying edge cases, and validating behavior before ever asking for your attention.

When it’s confident in the build, it runs the application for your review. From that point, changes happen in real time: UX adjustments, logic refinements, and data corrections, worked through together until everything is exactly right.

This same workflow runs live in front of clients, tailoring their applications to precise requirements in the room.

The models used change how the system learns. Weaker models, like some of the smaller ones, will still work, but they take much longer to learn from their mistakes what worked and what didn't. Smarter frontier paid models can make it feel surreal, like something out of science fiction.

It’s not something I have found a way to sell as a product yet, so it’s making money through consulting instead for now, until I find a way to protect it well enough from people who would use it for something bad. Almost all of the next generation of models coming are getting very interesting. Mythos and next-gen models will push kruel.ai into another league above those, if they are affordable. There really is no limit to what you can do with AI today.

I see, thanks for the details.

I’m just working on a game myself, and that is my passion, so I’m not trying to do more than that, although making my own engine would be a great path for the future as well.

I only ask about an IDE agent in order to keep it relatively contained; some of us still have worries about giving AI complete control of our machines.

My biggest problem with development right now is getting it to understand the difference between development and build: sometimes things work in one and break in the other. With memory, I feel like it would finally understand the difference.


That’s exactly why in 2021 I started with memory instead of building my own LLM. I looked at the landscape and asked: why build the language model? Others will push that frontier further and faster. What nobody was building was a brain that could actually learn.

So I built a hybrid memory architecture: a machine learning system designed for real-time learning, not just inference. Then I went deep on the math side and developed a high-dimensional embedding framework, because I wanted richer understanding than what was available off the shelf. Standard embeddings flatten meaning. I needed something that could represent the gaps between what’s known and what’s not.

That’s the key difference in philosophy. LLMs predict: they fill in the most probable next token based on training data. It’s sophisticated pattern matching, but it’s still guessing. My system treats those probabilistic gaps differently. Instead of smoothing over uncertainty with a confident-sounding prediction, it flags unknown territory as a question: something to investigate, test, and validate through experience.
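As a rough sketch of what "flag unknown territory as a question" could look like in code. The `Recall` type, the confidence score, and the threshold here are all hypothetical illustrations, not kruel.ai's actual internals:

```python
from dataclasses import dataclass

@dataclass
class Recall:
    answer: str
    confidence: float  # 0.0-1.0, e.g. a fused retrieval-similarity score

# Hypothetical cutoff; the real system's scoring is surely richer.
CONFIDENCE_FLOOR = 0.75

def respond_or_flag(recall: Recall) -> dict:
    """Answer only when grounded; otherwise surface an open question
    instead of smoothing over uncertainty with a plausible guess."""
    if recall.confidence >= CONFIDENCE_FLOOR:
        return {"type": "answer", "text": recall.answer}
    return {
        "type": "open_question",
        "text": f"Unverified: {recall.answer}",
        "next_step": "research, test, and validate through experience",
    }
```

The point of the sketch is only the branch: low confidence produces a question object to investigate, never a confident-sounding answer.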

When the system encounters something it doesn’t fully understand, it cross-references against what it already knows, researches to fill the gap, and only commits new knowledge when it’s grounded in actual outcomes. It’s like the scientific method applied to AI reasoning: hypotheses get tested, not assumed.
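A minimal sketch of that cross-reference/research/commit loop, with stand-in `Memory`, `research`, and `validate` components. None of these names come from the actual system; they are placeholders for its real parts:

```python
class Memory:
    """Toy knowledge store standing in for the real memory layer."""
    def __init__(self):
        self.facts = {}

    def lookup(self, claim):
        return self.facts.get(claim)

    def commit(self, claim, evidence):
        self.facts[claim] = evidence

def integrate(claim, memory, research, validate):
    """Hypothesis -> test -> commit-or-park loop (illustrative only)."""
    if memory.lookup(claim) is not None:
        return "already_known"            # cross-reference existing knowledge
    evidence = research(claim)            # go fill the gap
    if validate(claim, evidence):         # grounded in actual outcomes?
        memory.commit(claim, evidence)
        return "committed"
    return "open_question"                # stays a hypothesis, not a fact
```

Usage would pass real research and validation callables; the loop only commits knowledge that survives validation, and everything else stays parked as a question.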

The result is an AI that doesn’t just generate plausible answers. It builds genuine understanding over time through accumulated experience, and it knows the difference between what it’s confident about and what it’s still figuring out.

For your game development use case, that distinction, dev and build environments breaking differently, is exactly the kind of problem memory solves. An AI that remembers the last five times a build broke because of a specific config difference, and has learned to check for it proactively, is fundamentally different from one that encounters the same surprise every session.
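That proactive-check idea could be sketched like this. `BuildMemory`, the config-dict shape, and the failure counter are my illustrative assumptions, not how any particular system stores this:

```python
from collections import Counter

class BuildMemory:
    """Remembers which config keys caused past dev-vs-build breaks, so
    recurring culprits get checked first on the next build (sketch)."""
    def __init__(self):
        self.failures = Counter()

    def record_break(self, config_key: str):
        """Log that a build broke because of this config difference."""
        self.failures[config_key] += 1

    def preflight(self, dev_cfg: dict, build_cfg: dict) -> list:
        """Return keys that differ between environments, the
        most-often-broken ones first."""
        diffs = [k for k in dev_cfg if dev_cfg.get(k) != build_cfg.get(k)]
        return sorted(diffs, key=lambda k: -self.failures[k])
```

After a few recorded breaks, the preflight check surfaces the historically risky differences before the build runs, instead of rediscovering them every session.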

I never planned to use the AI for coding, but at some point it became my coder, even back in 2021 with GPT-3.5 Turbo. Yeah, makes you cringe. Me too. It was terrible. But I was the coder back then; the experiment was whether a weak model could learn to code over time, with memory reinforcing what worked and what didn’t.

That coding project ran alongside another research track during the Twitch streaming years: studying emotional connections between people and AI. How do people respond to an AI that comes across as human? What bonds form? What breaks immersion? That data shaped everything.

I build as a scientist, researcher, and engineer, which means we have multiple parallel focus areas, all feeding into our own concept of what proto-AGI might look like on a narrow scale. Narrow, because I’m not an expert in everything (I sure as hell am not), so I wouldn’t know how to teach it right from wrong in domains I don’t understand myself.

So how do I solve that now? I expose the AI to information and let it learn from it: things to observe and form judgments about, things to build and control, learning from the feedback and outcomes of every action. All of that feeds into a multi-index embedding architecture, over 10,000 dimensions of understanding across parallel search paths, each capturing a different aspect of the same experience: what the user said, what the AI responded, the combined context of both, the entities involved, and the relationships between them, all searched simultaneously and fused together to build recall. That’s not even really explaining everything, but it should give you a high-level understanding of what it all does.
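A toy version of that parallel-index search and fusion might look like the following. The index names mirror the aspects listed above, but they, the per-aspect cosine scoring, and the simple mean fusion are all my assumptions for illustration; the real architecture is clearly far richer:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    num = sum(x * y for x, y in zip(a, b))
    den = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return num / den if den else 0.0

# Hypothetical index names for the aspects of one experience.
INDEXES = ["user_said", "ai_said", "combined", "entities", "relations"]

def fused_recall(query_vecs, memory_item):
    """Score one stored memory against a query on every index at once,
    then fuse the per-aspect scores (here: a plain mean)."""
    scores = {ix: cosine(query_vecs[ix], memory_item[ix]) for ix in INDEXES}
    return sum(scores.values()) / len(scores), scores
```

A retrieval pass would run `fused_recall` over candidate memories and rank by the fused score; the per-index breakdown shows *which* aspect of the experience matched.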

But every piece of knowledge gets scored. How confident are we? How many sources support it? Has it been contradicted? The system doesn’t treat what it knows as truth; it tracks veracity. Something claimed but unverified stays flagged as a question until experience or evidence confirms it. When new information contradicts old, the old gets marked and suppressed. It’s the scientific method applied to AI memory: hypotheses get tested, not assumed.
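Here is one way such a veracity-scored record could be modeled. The field names and the two-source verification threshold are invented for the example; they stand in for whatever scoring the real system does:

```python
from dataclasses import dataclass

@dataclass
class Fact:
    """A scored knowledge record: a claim carries veracity, not truth."""
    claim: str
    sources: int = 0          # how many sources support it
    contradicted: bool = False
    status: str = "question"  # question -> verified; suppressed on conflict

    def support(self):
        """Another source confirms the claim."""
        self.sources += 1
        if self.sources >= 2 and not self.contradicted:
            self.status = "verified"

    def contradict(self):
        """New evidence conflicts: mark and suppress the old knowledge."""
        self.contradicted = True
        self.status = "suppressed"
```

The lifecycle matches the description: a claim starts as a question, gets promoted only after enough supporting evidence, and is marked and suppressed the moment it is contradicted.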

Even then, the shape is too small because the universe is infinite. So the system has to evolve over time, expanding its mathematical framework to keep building on the truth of what we know up to this point.

Which still may not be true. :slightly_smiling_face: The more we know, the more we know we don’t know; there is always more that needs to be learned. It’s infinite at both the micro and the macro scale, so: unlimited, forever.