What is an Agent? Let's stop the speculation

I think the multi-dimensionality of things is important.

An apple can be food, or a weapon if you throw it - and it also depends on time and location: you can't eat an apple that isn't close to you right now…

The convex hull carries far more information than text alone (or let's say short text).

:pushpin: Real-World Example

You’re in a local chat, solving a coding problem with someone.
Your agent:

  • Parses topics: rust, WASM, image processing
  • Adds context: mobile device, low power mode, current task time: 45min left
  • Vector hull shifts accordingly.
  • The system finds another agent 3 hops away solving a similar problem but on ESP32 + Zig, and suggests forming a group to compare tradeoffs.
  • If both agree, a task group is formed.
  • If successful, agents learn from each other, tokens are exchanged, and your hull is updated.
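A toy sketch of the matching step above, assuming the "hull" is summarized as a context vector and candidates are ranked by cosine similarity (the dimensions, vectors, and agent names are all made up for illustration):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Hypothetical dimensions: [rust, wasm, imaging, embedded, zig, js]
my_hull     = [0.9, 0.8, 0.9, 0.0, 0.0, 0.0]   # rust + WASM + image processing
esp32_agent = [0.3, 0.0, 0.9, 0.9, 0.9, 0.0]   # similar problem, ESP32 + Zig
web_agent   = [0.0, 0.8, 0.0, 0.0, 0.0, 0.9]   # unrelated frontend work

candidates = {"esp32+zig": esp32_agent, "web": web_agent}

# Pick the candidate hull closest to mine - this is the "suggest a group" step.
best = max(candidates, key=lambda k: cosine(my_hull, candidates[k]))
```

In a real mesh the vectors would come from an embedding model and the candidate set from agents a few hops away; the ranking step stays the same.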

The problem is centralized semantic exploration: you need to store somewhere which kind of agent everyone has.

I want that too, but anonymously: "I got something here - vague". You can compare ideas without sharing exactly what the core of your idea is…

I mean, even if AI fails - and I really hope it does at large scale - we still have a use for it: finding people to connect with based on shared interests. (Which, by the way, completely disrupts HR and recruiting: give it a job description and it will find the perfect skill "matrix" - or let's say hull - of a person you can see has interest in it.) How cool would that be?

Of course you could just spam your own hull - put all of Wikipedia's data into your chat, etc…

But I have a solution for that as well: eviction from the network. Spammers would be blocked for a month, a year, or until someone vouches for them - and the voucher gets evicted too if the person they vouched for is evicted again.
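A minimal sketch of how that vouching rule could be wired up, assuming a simple one-voucher-per-node model (all names and data structures here are hypothetical):

```python
# Hypothetical sketch of the moderation rule above: spam leads to eviction,
# a member can vouch an evicted node back in, and a second eviction takes
# the voucher down too.

evicted = set()
vouched_by = {}                     # node -> member who vouched for it

def evict(node):
    evicted.add(node)

def vouch(voucher, node):
    if voucher in evicted:
        raise ValueError("evicted members cannot vouch")
    evicted.discard(node)           # readmit the node...
    vouched_by[node] = voucher      # ...but remember who took the risk

def evict_again(node):
    evict(node)
    voucher = vouched_by.pop(node, None)
    if voucher is not None:         # cascade: the voucher is evicted too
        evict(voucher)
```

A repeat offender costs their voucher's membership as well, which is what makes vouching a meaningful signal rather than a free pass.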


This is an interesting thread… There seem to be several different levels of Agent/Application/Service…

Thinking maybe ‘Agent GIF’ is more of an Agent Container or ‘SoG’ (System on a GIF) rather than a simple Agent.



Jochenschultz, the more research papers I read, the more apparent it is that - like with anything AI - there is no consistency in terminology :slight_smile: So instead of trying to settle on one correct definition, I personally started thinking about it in terms of the changes that introduce agent-like persistence:

1. The system writes to a persistent store after each interaction (e.g., inferred preferences, inferred goals).
2. A background scheduler can trigger actions without a user event.
3. Tool use + environment access turns language into action channels.
4. A stable objective function across time (from "be helpful per-turn" to a cross-session objective).
5. An internal planning loop (from "respond once and stop" to "self-critique, break into tasks, plan execution, write memory").

Essentially moving from a bounded reply to a process.
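A deliberately tiny sketch of points 1, 2, and 5 above - a turn handler that writes inferred goals to a persistent store, plus a hook a scheduler could run with no user event pending (the file name, state layout, and "goal:" convention are my assumptions, not a real API):

```python
import json
import pathlib

STORE = pathlib.Path("agent_state.json")  # hypothetical persistent store

def load_state():
    return json.loads(STORE.read_text()) if STORE.exists() else {"goals": [], "log": []}

def save_state(state):
    STORE.write_text(json.dumps(state))

def handle_turn(state, user_msg):
    # (1) write to a persistent store after each interaction
    state["log"].append(user_msg)
    if user_msg.startswith("goal:"):           # crude stand-in for "inferred goal"
        state["goals"].append(user_msg[5:].strip())
    save_state(state)

def background_tick(state):
    # (2) a scheduler can call this without a user event;
    # (5) a trivial stand-in for a planning step: re-prioritize open goals
    state["goals"].sort()
    save_state(state)
```

Everything beyond a bounded reply lives in those extra functions; adding tool calls (3) and a cross-session objective (4) extends the same skeleton.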


I moved on to an integrated dynamic graph structure that functions as a KV cache / tool-calling system and basically removes the need for agents. I'm not even using tokens or words anymore.
The LLM is just a translator in that setup.
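I can only guess at the internals, but here is one toy reading of "dynamic graph as KV cache / tool-calling system, LLM as translator": nodes cache results and carry callables, and the only job left for the model is mapping an utterance to a path and a key (all names here are mine, not the poster's actual system):

```python
class Node:
    """Graph node that caches results and may carry a callable tool."""
    def __init__(self, tool=None):
        self.tool = tool     # optional callable attached to this node
        self.cache = {}      # KV cache of previous results
        self.edges = {}      # label -> neighbor Node

    def resolve(self, key):
        if key in self.cache:            # cache hit: no computation at all
            return self.cache[key]
        if self.tool is not None:        # cache miss: call the tool once
            self.cache[key] = self.tool(key)
            return self.cache[key]
        raise KeyError(key)

root = Node()
# Demo tool: adds up "+"-separated integers.
root.edges["math"] = Node(tool=lambda expr: sum(int(t) for t in expr.split("+")))

# The "LLM as translator" step would map an utterance to (path, key);
# here that translation is hard-coded.
path, key = "math", "2 + 3"
result = root.edges[path].resolve(key)   # computed once, then cached
```

After the first resolution the answer lives in the graph, so repeated questions never touch the tool - which is the sense in which agents dissolve into lookups.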


I treat “agents” as an emergent property of executable structure and memory, not as a primitive. Sometimes they appear, sometimes the graph itself is sufficient.

Agent GIFs are just fractal offshoots of the same structure, packaged small enough to share.

Like Jochen, I push the intelligence into the structure itself.

My modules now carry system-level memory and functionality (or hooks), effectively making each one (potentially) a small “System on a GIF”.

Ultimately this is just refactoring intelligence.

For clarity, this work is exploratory and architectural rather than production-hardened — it’s about understanding where intelligence actually wants to live.

It’s managed spaghetti inside — a dynamic KV graph where agents disappear and the LLM collapses into a translator — but a system on a GIF at the boundary.


Consider a function. It may have six or seven active statements.

At that scale, we don’t describe it as an agent. It’s just code executing with local state.

Now let that function persist memory between invocations.
Allow it to schedule itself, branch on prior outcomes, and invoke tools conditionally.

At some point—without adding anything fundamentally new—we start calling it an agent.

Nothing magical happened.
We crossed a structural threshold, not a conceptual one.

What changed wasn’t intent or intelligence, but where state, control, and continuity live.

From there, “agenthood” is just a name we give to a pattern that emerges once executable structure accumulates enough memory and affordance.

That’s the point:
agents aren’t primitives — they’re phase transitions in structure.

And Jochen asks me why I don’t get a job :grinning_face_with_smiling_eyes: