I treat “agents” as an emergent property of executable structure and memory, not as a primitive. Sometimes they appear, sometimes the graph itself is sufficient.
Agent GIFs are just fractal offshoots of the same structure, packaged small enough to share.
Like Jochen, I push the intelligence into the structure itself.
My modules now carry system-level memory and functionality (or hooks), effectively turning each one into a small “System on a GIF”.
Ultimately this is just refactoring intelligence.
For clarity, this work is exploratory and architectural rather than production-hardened — it’s about understanding where intelligence actually wants to live.
It’s managed spaghetti inside — a dynamic KV graph where agents disappear and the LLM collapses into a translator — but a system on a GIF at the boundary.
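To make the shape of this concrete, here is a minimal sketch of a module that carries its own key-value memory and system-level hooks. Everything here — `Module`, `run`, the hook names — is hypothetical illustration, not an API from any real framework; it only shows the idea of state and affordances living in the structure rather than in an agent wrapper.

```python
# Hypothetical sketch: a module that carries its own KV memory and hooks.
from dataclasses import dataclass, field
from typing import Any, Callable, Dict

@dataclass
class Module:
    """A unit of executable structure with its own key-value memory."""
    name: str
    memory: Dict[str, Any] = field(default_factory=dict)       # persists across calls
    hooks: Dict[str, Callable] = field(default_factory=dict)   # system-level affordances

    def run(self, payload: str) -> str:
        # Continuity: the module remembers how often it has run.
        self.memory["calls"] = self.memory.get("calls", 0) + 1
        # Branch on a prior outcome stored in its own memory.
        if self.memory.get("last_result") is None and "on_first" in self.hooks:
            self.hooks["on_first"](self.name)
        result = payload.upper()  # stand-in for real work
        self.memory["last_result"] = result
        return result

m = Module("greeter")
m.run("hello")
m.run("world")
print(m.memory["calls"])  # → 2
```

No orchestrator owns this state; the module itself does, which is what lets the LLM shrink to a translator at the boundary.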
Consider a function. It may have six or seven active statements.
At that scale, we don’t describe it as an agent. It’s just code executing with local state.
Now let that function persist memory between invocations.
Allow it to schedule itself, branch on prior outcomes, and invoke tools conditionally.
At some point—without adding anything fundamentally new—we start calling it an agent.
Nothing magical happened.
We crossed a structural threshold, not a conceptual one.
What changed wasn’t intent or intelligence, but where state, control, and continuity live.
From there, “agenthood” is just a name we give to a pattern that emerges once executable structure accumulates enough memory and affordance.
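The threshold can be shown in a few lines. Below, the same doubling logic appears twice: once as a plain function nobody would call an agent, and once wrapped with memory between invocations, self-scheduling, and a conditional tool call. All names are illustrative; nothing here is from a real agent framework.

```python
# Illustrative only: the same logic before and after the structural threshold.
import time

def plain(x: int) -> int:
    # Just code executing with local state.
    return x * 2

class Persistent:
    def __init__(self):
        self.history = []             # memory between invocations
        self.next_run = None          # self-scheduling
        self.tools = {"log": print}   # tools, invoked conditionally

    def step(self, x: int) -> int:
        y = x * 2
        # Branch on prior outcomes.
        if self.history and self.history[-1] > y:
            self.tools["log"]("output decreased")  # conditional tool call
        self.history.append(y)
        self.next_run = time.time() + 60           # schedule its own next run
        return y
```

Note that `step` computes exactly what `plain` computes. Only where state, control, and continuity live has changed, yet the second version is the one we would start calling an agent.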
That’s the point:
agents aren’t primitives — they’re phase transitions in structure.
And Jochen asks me why I don’t get a job.