We updated the website
Introducing Music Memory
We've done it, and it's fully working and implemented.
What Music Memory does
The AI can recall all the music it has generated, everything about those songs, and more. It learns over time what you like and listen to. It's your personal AI music creation companion and DJ, all built into Kruel.ai Memory. These tools are constantly expanding and will continue to broaden over time.
Kruel.ai is not just a chatbot; it's a living modality model that learns like you and me.
We have been pretty busy lately ![]()
Development has been moving at a rapid pace. All pipelines are now in place, and we’ve entered a cyclical process of testing, logging, and analyzing. Each cycle brings us closer to stability, while also uncovering new bugs, issues, and opportunities for refinement. This iterative “vortex” approach ensures continuous improvement as we circle inward toward the finished product.
We are now counting down the days until the first hardware unit arrives. Once it does, progress will naturally shift gears. With hands-on access to the hardware and its associated tools, we’ll focus on integration wiring the systems together and running through the same iterative cycle until performance meets our standards.
The next phase will depend on whether an additional unit is ordered or the first unit is shipped to the network center for integration into the broader server infrastructure. At that stage, the web team will complete the front-end applications, including the web interface and Flutter app. In parallel, we’ll begin closed testing with our first future partners / companies. Exciting.
In case you didn't notice, even Batman popped in and can be seen leaving above ![]()
We added a new menu to our system ![]()
We now have options for selecting the various music and sound generation models we use, along with a new AI Music Deck that the AI loads up with the songs you create, complete with details, lyrics, and many ways to search and filter your music.
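As a rough illustration of what deck-style search and filtering can look like, here's a minimal Python sketch. The `Song` fields and `search_deck` helper are my own illustrative names, not Kruel.ai's actual data model:

```python
from dataclasses import dataclass, field

@dataclass
class Song:
    title: str
    model: str                       # which music model generated it
    lyrics: str = ""
    tags: list = field(default_factory=list)

def search_deck(deck, query=None, model=None, tag=None):
    """Filter a song deck by free-text query, generating model, or tag."""
    results = deck
    if query:
        q = query.lower()
        results = [s for s in results
                   if q in s.title.lower() or q in s.lyrics.lower()]
    if model:
        results = [s for s in results if s.model == model]
    if tag:
        results = [s for s in results if tag in s.tags]
    return results
```

Filters stack, so a query plus a model narrows the deck the same way a combined search in the Music Deck UI would.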
We have recently run some optimizations on the calls to reduce time and have had amazing results.
We are also now introducing a new system.
Smart Task Intelligence
We’ve taken a major leap forward in making Kruel.ai not just powerful, but practical in everyday life. Our new Smart Task Intelligence System is now fully integrated into the platform, giving tasks real awareness of context, timing, and user needs.
So what does this mean for you?
- Natural Understanding: Just say “I need gas” and the system will keep track, reminding you at the right time.
- Context Awareness: It knows that banks close at 5pm, so it won’t bother you at midnight.
- Smart Timing: Tasks don’t nag—they show up when it actually matters.
- Seamless Integration: Tasks are woven into your memory system, linked with your user profile, so conversations feel natural and connected.
This isn’t a to-do list. It’s your AI assistant actually thinking about your tasks, adapting to your world, and staying one step ahead.
We’re excited to see how this changes the way you interact with your AI. Welcome to truly intelligent task management.
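At its core, context-aware timing means checking a task against a relevance window before it fires. Here's a minimal Python sketch of that idea; the task names, windows, and `should_remind` helper are illustrative assumptions, not the actual Kruel.ai implementation:

```python
from datetime import datetime, time

# Hypothetical sketch: a task only "fires" inside its relevant time window,
# so a bank errand won't nag at midnight when banks are closed.
TASK_WINDOWS = {
    "deposit check": (time(9, 0), time(17, 0)),   # bank hours
    "get gas": (time(6, 0), time(22, 0)),         # stations open
}

def should_remind(task, now=None):
    """Return True only when the current time falls inside the task's window."""
    now = now or datetime.now()
    start, end = TASK_WINDOWS.get(task, (time(0, 0), time(23, 59)))
    return start <= now.time() <= end
```

A real system would layer location, urgency, and learned habits on top, but the window check is the piece that keeps reminders from showing up at the wrong time.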
REAL Neo4j, that's right, not that fake one… AIs sometimes make me laugh… ok, most of the time.
We have now updated our model selection to expose options for some of the systems, allowing further customization. Other models and systems are locked down; they're part of the more critical components that maintain the stability of the system and memory.
Kruel.ai 8.2 – Company Update
We wanted to share an important update about our journey at Kruel.ai. Recently, we were in discussions with a major AI company about a potential opportunity. While it was an exciting prospect, and one I personally would have loved to pursue, the timing simply wasn't right. With multiple ventures and products heading to market this November, my plate is full. For now, our focus remains on executing our current plans, building out our vision, and preparing for the future.
That said, if we can successfully scale our operations and delegate key areas to the right people, it could eventually free up time for me to pursue opportunities to collaborate with larger AI organizations. But for now, our path is clear: Kruel.ai is not up for acquisition… While we initially floated that as a conversation point with another company, we ultimately decided it’s in our best interest to continue building independently.
On the product side, we're making significant progress. We're currently testing large-scale data understanding thanks to a future customer who has generously provided access to their datasets. This will allow us to push Kruel's memory system to its limits, especially with complex spreadsheets and dynamic business data. If successful, these tests could pave the way for our early testers to finally experience what they've been asking for all along: true, adaptive business intelligence that evolves with everything it learns.
It's an ambitious goal, one we've called a "pipe dream" in the past, but we're closer than ever to making it a reality. Our next major challenge will be to test how well the system scales across both users and data at a much larger level. Early versions provided us with valuable metrics, but our new architecture has evolved dramatically. By applying ideas inspired by Git versioning to our memory systems, we've reduced duplication and memory consumption by 80–90%. This optimization is proving transformative in how we manage and process information at scale.
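For a sense of how Git-style content addressing removes duplication, here's a minimal sketch; the `ChunkStore` class and method names are my own illustration, not Kruel's actual memory code:

```python
import hashlib

# Hypothetical sketch of the Git-inspired idea: memory is split into chunks,
# each stored exactly once under its content hash; a "version" of memory is
# just a list of hashes, so unchanged chunks are never copied.
class ChunkStore:
    def __init__(self):
        self.chunks = {}   # hash -> chunk bytes, stored once

    def put(self, data: bytes) -> str:
        digest = hashlib.sha256(data).hexdigest()
        self.chunks.setdefault(digest, data)   # dedup: reuse if already seen
        return digest

    def snapshot(self, chunks):
        """A memory version is only a list of hashes, not duplicated data."""
        return [self.put(c) for c in chunks]
```

Because successive snapshots mostly share chunks, storage grows with what actually changed, which is where large duplication savings come from.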
Over the coming weeks, our team will be focused on heavy testing, detailed logging, and back-to-back meetings along with trying to squeeze in some much-needed rest. (If only we could trust the AI to run without us one day!) With each milestone, we move closer to not just finishing Kruel.ai but redefining what’s possible in AI-driven memory systems. Of course, this also raises an intriguing ethical question: are we building our own replacement?
For now, the journey continues. Thank you to everyone following along as we shape the future of Kruel.ai.
Thanks for the idea, OpenAI; this time I am borrowing from you, haha: a start-of-the-day flash briefing with intelligence.
Lynda Laptop’s Long Night: A Tale of Recovery and Resilience
Well… not exactly the morning we wanted. During what should’ve been a routine update, one wrong copy command decided to go rogue — and overwrote the wrong server. ![]()
As a result, Lynda Laptop is currently out of commission. We’re restoring her to a previous checkpoint from last month. The good news? The data and indexes are intact. The not-so-good news? The memory system and indexes are out of sync, and rebuilding that link is tricky business. We’ve only managed to do this successfully once before — all other recovery attempts didn’t quite bring Lynda Prime back online.
So, we’re hopeful this time around. If not, it’s going to be a few long nights rebuilding memory from the ground up.
What Happened
Totally my fault (Ben here). I’ve gotten so used to working directly on the main server that I forgot the laptop shares the same mount-point naming conventions. While installing K8.2 on Lynda, we picked the wrong mount path and… well, you know the rest. “Just one line of code” — famous last words.
Current Status
- Indexes: Verified good
- Data: Intact and stable
- Next Step: Syncing databases to indexes — a tedious but doable task
- Doc Memory: Needs validation
- Code Memory: Unknown; may not exist on this older Lynda version
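The "syncing databases to indexes" step is essentially a reconciliation pass. Here's a minimal sketch of what that comparison could look like; the `reconcile` helper and key names are illustrative, not the actual recovery tooling:

```python
# Hypothetical sketch: compare the keys the index knows about against the
# keys actually present in the data store, then report what must be
# re-indexed and what should be pruned as stale.
def reconcile(index_keys, data_keys):
    index_keys, data_keys = set(index_keys), set(data_keys)
    return {
        "missing_from_index": sorted(data_keys - index_keys),  # re-index these
        "stale_in_index": sorted(index_keys - data_keys),      # prune these
        "in_sync": sorted(index_keys & data_keys),
    }
```

The tedious part in practice is not the set math but re-deriving the linkage for every entry in `missing_from_index`, which is why a rebuild can take nights.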
So, fingers crossed. If all goes well, Lynda will be back to full operational status soon. If not… stock up on caffeine, because tonight and tomorrow will be marathon rebuild sessions.
Onward and upward — lessons learned, systems hardened, and next time, we’ll triple-check those mount points. ![]()
Other things going on at Kruel.ai
The v8.2 server is progressing very well and is stable. Lots of testing is happening as we get things ready for Thursday's presentation, if all goes according to plan, with an AI company we are visiting this week with the CEO.
I had hoped to show the old model and its views, etc., so hopefully we get back on track quickly so we can show off some of the evolutions.
update: Guess who’s back back again…
So from time to time I ask Cursor to compare Kruel.ai to many other systems out there, to see how we are doing against what is known. The end of Cursor's response was my favorite.
Then I asked the AGI question: how close are we?
Then the last one, which is my favorite, was to show it K9, which I believe completes this ![]()
Which brings me back to the ethics concerns I raised in the community long ago about putting in full self-modification or autoencoders. I am presently working on an automated NN, end to end, with full-stack AI engineers that could be integrated, along with fully dynamic research-tool building: any tool the AI does not have, it can figure out how to make, and using its code memory it can even find better ways to keep building and optimizing over time as it discovers new, better methods.
K9 is research-only, whereas K8.2 is the one I plan to release to Alpha testing…
The best part about the design of the system is that it scales with the intelligence of the incorporated models.
Let the thought sink in for a moment: we currently do all our testing on the cheapest models, for sustainability. So think about what that means if we up the game and put in something really powerful. It may make your eyes go wide ![]()
Pretty cool stuff I think.
Also, for some comparison metrics:
I think it’s fluffing me a little ![]()
6 years of development.
Late Night Update: Next-Gen Music Models and More
It’s been a busy few weeks behind the scenes here at Kruel.ai, and we’ve got some exciting updates to share!
We’ve officially integrated Fuzz-2.0 Pro, our next-generation music model, into the Kruel.ai ecosystem. If you’ve been following our Discord updates, you already know the pace has been relentless — and it’s not slowing down anytime soon.
Last week, we had an incredible meeting with the CEO of an AI company, and we’ve come to an understanding about a potential collaboration (and possibly my own involvement with them). The details are still being worked out, but I’ll share more as things become finalized.
Beyond Kruel.ai, our team has also been working on a data ML model for a future client. That project kicked off this past weekend and is still being fleshed out, but it’s shaping up nicely.
Meanwhile, Kruel.ai V8.2 continues to move forward. We’ve been focused on model tuning, code cleanup, and architecture optimization. Recently, we began training a new neural-network intent system, but as expected, the existing servers are struggling to keep up — so we’re eagerly awaiting our DGX system to take it to the next level. (Cue the sad face…)
On a positive note, the KNN systems are fully updated and ready to go once the new hardware arrives. However, we did hit a wall: with the larger offline models now in use, we’ve officially maxed out our GPU and RAM limits. This means offline AI and image generation can’t currently run together — yet another reason those new servers can’t come soon enough.
Despite the bottlenecks, everything is steadily taking shape. Alpha 0.1 is still on track for release this winter, and we’re incredibly excited for what’s coming next.
Let's jump back to Kruel NN, haha. There has been some confusion about my NN, so here's some clarity on what it is and what it's for.
- Think about the glasses I have coming and why this is important
Clarifying Kruel NN: Our New Neural Router for Instant Understanding
We’ve seen a bit of confusion lately around Kruel NN — and that’s on us.
The label sounds like KNN, or K-Nearest Neighbors, but what we’ve built is something entirely different.
Kruel NN isn’t about vector proximity or similarity scoring — it’s about understanding at the speed of instinct.
The Problem: A Cognitive Bottleneck
In our current system, every user message passes through multiple decider layers — reasoning modules, LLM interpreters, and contextual routers — before the AI actually “knows” what the user wants.
That decision phase takes time.
Even with optimized pipelines, the intent understanding stage typically adds about seven seconds to the total system response (which averages around fifteen seconds end-to-end).
That’s been our biggest bottleneck — not generation, not retrieval, but comprehension.
The Breakthrough: A Neural Router
Kruel NN changes that entirely.
Instead of relying on an LLM to dynamically interpret and route tasks, we’ve built a specialized neural router that instantly determines what a user is asking for and where the request should go.
It’s not another large model — it’s a fast, purpose-built edge network designed to run in 0.4 milliseconds (with benchmarks down to 0.220ms).
That means the 7-second cognitive step has effectively vanished.
The system can now recognize intent, select the right toolchain, and begin execution almost instantly.
How It Works
Kruel NN sits before the LLM.
It listens, classifies, and routes — like a neural dispatcher that knows the difference between a question, a task, and a thought.
It uses a dual-path architecture:
- Code-First Recognition: For structured patterns, commands, or code-like inputs — handled in under 0.015 ms.
- Neural Cognition Layer: For conversational, creative, or abstract language — handled in about 0.292 ms.
The routing logic merges both streams, deciding context and destination faster than any LLM parsing cycle could.
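To make the dual-path idea concrete, here's a heavily simplified Python sketch. The patterns, intent labels, and destinations are illustrative stand-ins of my own; the real Neural Cognition Layer is a trained network, not keyword scoring:

```python
import re

# Hypothetical sketch of the dual-path router: a near-free pattern check
# handles structured, command-like inputs first; everything else falls
# through to a small learned classifier (stubbed here with keyword scores).
COMMAND_PATTERN = re.compile(r"^/(\w+)|^(run|open|play|set)\b", re.IGNORECASE)

INTENT_KEYWORDS = {
    "question": ["what", "how", "why", "when", "?"],
    "task": ["remind", "schedule", "need to", "todo"],
    "chat": ["feel", "think", "funny", "story"],
}

def route(text):
    # Path 1: code-first recognition (structured/command inputs)
    if COMMAND_PATTERN.match(text.strip()):
        return ("command", "tool_executor")
    # Path 2: neural cognition layer (stand-in: keyword scoring)
    scores = {intent: sum(k in text.lower() for k in kws)
              for intent, kws in INTENT_KEYWORDS.items()}
    intent = max(scores, key=scores.get)
    destination = {"question": "retrieval", "task": "task_system",
                   "chat": "persona_llm"}[intent]
    return (intent, destination)
```

The design point is that classification happens before any large model runs, so the expensive LLM only sees requests that have already been labeled and routed.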
In short:
Kruel NN doesn’t “guess” what you mean. It already knows how to handle it.
Why It Matters
This isn’t just a speed milestone — it’s a fundamental architectural shift.
By replacing LLM-based decision-making with Kruel NN, we’re beginning to remove large models from our dynamic routing loops entirely.
That frees the LLM to focus purely on generation and reasoning, not orchestration.
It’s the difference between an AI that waits to understand and one that understands before it acts.
With this new layer, our system transitions from reactive to reflexive — a critical step toward real-time cognition.
The new, updated Persona/Voice Engineering System is fantastic and a lot of fun.
We are not using formatting in the system outputs right now. We will look into something better down the road, similar to what we had in the past. We changed the outputs because the News Brief system we implemented earlier was having issues, so we pulled the formatted-output system for now.
The new persona system is designed to take the role to the next level, allowing the AI to fully become what you want it to be. It's even designed to write its own outputs to ensure it feels more in character, which is really neat. We also have systems to take the voice to the next level with tones, sound FX, and more. We recently pulled another chunk of V9 into 8.2, which is why its understanding over time has changed to a more adaptable version. It still takes a while to process large time spans, like many weeks, months, or years, but it's much faster than previous versions, while still letting you get at all the details when required.
We also started to clean out the warnings, like old deprecated commands and the like, which did not impact much other than log space. There's a new schema-initialization system, since we plugged the code system back into 8.2 to test the AI's understanding of itself and its code, to make sure it could traverse it all. We still want to bring in more of V9, but in baby steps so we don't break what we have going.
We also reached the point where the model render is too big for me to view like we did in the past: too many nodes and relationships, so it can't even animate anymore, and they just updated the rendering engine a few months ago… need more power.
I am uploading a funny sample clip from some of our recordings. We record video sessions with the AI when we think it's worth showing. This one has some humor in it and shows off the extent of the new persona system and memory system of V8.2. We will show more soon. I apologize that I did not have my mic set up for the capture, so I filled in with some digital magic. I am really liking the system and the speed. The speed varies depending on how much data you are calling on in memory. Weeks, months, and years take a bit of time to crunch; I think it's around 20–40 seconds. We hope to make that even faster once we go through an optimization stage. The system is currently running Lynda the Quirky, a mentally fun AI.
You might want to look into another graphdb (also cypher).
Talk to Adam Amara - TuringDB.ai | LinkedIn - would be nice to mention me.
Pretty interesting benchmarks; I may have to look into it. I don't have many speed issues with data currently. I also don't use Neo4j for my indexes, since it's not the fastest. The big thing with Neo4j vs the others is the GDS (Graph Data Science) and APOC plugins, which have some pretty cool algorithms built in for Neo4j. I noticed it mentioned those speeds were without indexes, so yeah, I may have to look, lol… just think how much rewriting… ![]()
200 times faster multi hop reasoning might be worth it.
I am for sure going to look into it. Cheers.
Have you ever tried to make a compiler with your AI? I know it's like reinventing the wheel, but compilers for normal code existed before neural nets. Embedding the function of a neural net into machine code is difficult at best; vibe coding helps, you just need to make sure it's doing what it's supposed to be doing. It's hard to align words to an LLM's natural direction sometimes; sometimes it feels like a chicken running from the bright giant that feeds it. But isn't reductionism efficient? And really, isn't that what the iron-pipe design of LLMs is: a chicken mind feeding on the information we give it, turning your words into solidified thoughts one can grow like a seed in earth.
Haha, a chicken mind, that's funny. I haven't tried making a compiler. Interesting thought, though.