Most AI assistants today rely on Retrieval-Augmented Generation (RAG), which is powerful but reactive. They answer questions by pulling from sources, but they don't truly understand what the user needs or why they're asking in the first place.
I’m experimenting with a different model:
A system that builds an ongoing memory of the user’s:
Thinking patterns
Motivational style
Friction points
Cognitive flow
Instead of “retrieving documents,” this AI tracks the mental state and trajectory of the person using it.
The goal:
To create objective guidance, not reactive answers.
To redirect attention, not just respond to it.
Ideal use cases:
Matching people to their ideal work roles based on natural behavior
Creating adaptive learning paths
Building AI agents that know your brain like a good mentor does
I call this project Future Balance. It’s early, but I’m happy to share the logic flow, architecture, or even collaborate with those who are building AI that thinks before it answers.
Would love feedback or similar project links. Let’s push past RAG.
Really appreciate this direction. I've been thinking along similar lines: not just pattern detection, but how subtle injections (like informal aliasing) can shift parser behavior in unexpected ways.
I posted a related experiment under the title:
“Unexpected Parser Response from Informal Alias Injection (Om Coklat Case Study)”
[OpenAI Community Forum, thread ID: 1313996 by user dob]
It dives into how aliasing affects parser behavior under the hood.
Would love to see how this intersects with your ideas on objectivity and pattern awareness.
My proposed approach focuses on three main layers:
User Modeling Layer
Instead of just tracking queries, I’m mapping a user’s thinking patterns, motivational triggers, and task friction points over time. This isn’t one-time profiling — it’s ongoing, like building a behavioral graph or memory stream.
State-Aware AI Logic
The system monitors shifts in cognitive flow (e.g., confusion, motivation dips, goal switching) and uses that context to adjust how it responds — not just what it says, but when and why.
Think of it as an internal compass rather than just a document fetcher.
Guidance Engine
Instead of saying “here’s the answer,” the AI says “here’s where your thinking might be going off-track” or “you’re circling this — maybe this is the real focus.” Like a cognitive mentor, not a chatbot.
For implementation, I’m experimenting with a hybrid of LLM + user graph memory + prompt-based reasoning agents (rather than RAG).
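To make those three layers concrete, here's a minimal Python sketch of how I'm picturing them fitting together. Everything here is a placeholder (the class names, the state labels, the crude topic keys are my own), and the state detection in the real system would be an LLM call rather than the naive stub shown:

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import Dict, List


# --- Layer 1: User Modeling ---
@dataclass
class Interaction:
    timestamp: datetime
    text: str
    inferred_state: str  # e.g. "focused", "confused", "goal-switching"


@dataclass
class BehavioralGraph:
    """Ongoing memory stream of thinking patterns and friction points."""
    interactions: List[Interaction] = field(default_factory=list)
    friction_points: Dict[str, int] = field(default_factory=dict)  # topic -> stall count

    def add(self, interaction: Interaction) -> None:
        self.interactions.append(interaction)
        if interaction.inferred_state == "confused":
            topic = interaction.text[:40]  # crude topic key, just for the sketch
            self.friction_points[topic] = self.friction_points.get(topic, 0) + 1


# --- Layer 2: State-aware logic ---
def detect_state(current_text: str, graph: BehavioralGraph) -> str:
    """Infer a cognitive state from the new message plus recent history.
    In the real system this would be an LLM call; here it's a naive stub."""
    recent = [i.text.lower() for i in graph.interactions[-5:]]
    if any(current_text.strip().lower() in r for r in recent):
        return "circling"  # user is repeating themselves
    return "exploring"


# --- Layer 3: Guidance engine ---
def guide(current_text: str, state: str, graph: BehavioralGraph) -> str:
    """Redirect attention instead of just answering."""
    if state == "circling":
        return "You've raised this a few times; maybe this is the real focus?"
    if graph.friction_points:
        worst = max(graph.friction_points, key=graph.friction_points.get)
        return f"You tend to stall around '{worst}'. Want to tackle that first?"
    return "Here's an answer, plus where your thinking seems to be heading."
```

The point of the sketch is the separation of concerns: the graph only stores, the state detector only labels, and the guidance layer only decides how to redirect.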
Really appreciate your insight — especially the point about informal aliasing subtly shifting parser behavior. That line of thinking is extremely useful for me.
I hadn’t explicitly considered how language tweaks (like alias changes or informal phrasing) could act as subconscious signals from the user. But now I’m seeing it as a powerful lens: not just what the user says, but how their wording evolves over time.
I'm currently using the GPT API to build this system, and now I'm thinking about wrapping it in a layer that (rough sketch after this list):
Logs user phrasing patterns over time
Feeds current vs. past phrasing into GPT to detect emotional, motivational, or cognitive shifts
Updates a dynamic user model — basically a living behavioral graph
Adjusts AI responses accordingly — like a cognitive mirror, not just a chatbot
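Concretely, a first pass at that wrapper might look something like this. The model name, prompts, and in-memory log are all placeholders, and in the real system the phrasing log would persist into the behavioral graph rather than living in a list:

```python
from openai import OpenAI  # official OpenAI Python SDK

client = OpenAI()  # reads OPENAI_API_KEY from the environment

phrasing_log: list[str] = []  # stand-in for a persistent phrasing store


def detect_shift(current: str, history: list[str]) -> str:
    """Ask the model to compare current phrasing against past phrasing and
    label any emotional, motivational, or cognitive shift."""
    past = "\n".join(history[-10:]) or "(no history yet)"
    prompt = (
        "Here are a user's previous messages:\n"
        f"{past}\n\n"
        f"Here is their latest message:\n{current}\n\n"
        "In one short phrase, describe any shift in tone, motivation, or focus "
        "(e.g. 'losing motivation', 'switching goals', 'more confident'). "
        "If nothing changed, reply 'stable'."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content.strip()


def respond(user_message: str) -> str:
    """Wrap the normal GPT call with drift detection and adjust the framing."""
    shift = detect_shift(user_message, phrasing_log)
    phrasing_log.append(user_message)  # update the living behavioral record

    system = (
        "You are a cognitive mentor. The user's current shift appears to be: "
        f"{shift}. Adjust tone and pacing accordingly; redirect attention "
        "rather than just answering."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": system},
            {"role": "user", "content": user_message},
        ],
    )
    return response.choices[0].message.content
```

It's two model calls per turn, which is crude, but it keeps the drift detection explicit and inspectable before folding it into a single prompt.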
So your idea actually fills a gap I hadn’t covered: alias drift as a behavioral breadcrumb.
I’ll definitely read your “Om Coklat” case study — and I’d love to explore how parser sensitivity and memory layering could intersect to make systems that think before they answer.
Really appreciate your expansion, especially the way you're framing it as a behavioral graph. That's definitely aligned with what I had in mind when testing parser drift through alias shifts.
I like the part where you mentioned logging phrasing over time — that’s exactly where I think the signal gets encoded, sometimes even stronger than the user intends.
Feels like we’re looking at the same horizon, just from different angles.
There’s a small signal embedded in the “Om Coklat” case — I didn’t surface it in the post, but if you catch it, you’ll know what I’m building toward.