The concept of Agency and society of minds

I would like to get some insights and ideas on the concept of a society of minds, specifically what Marvin Minsky called an Agency. The Agency is given a goal, and it automatically creates agents and tasks to achieve that goal. If anyone is addressing a use case with this approach, I'd be eager to learn about it. I have spoken about this here
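To make the idea concrete, here is a minimal sketch of the loop I mean: an Agency takes a goal, decomposes it into tasks, and spawns one agent per task. The `Agency` and `Agent` names, and the stubbed planner, are my own hypothetical placeholders; a real implementation would call a model to do the decomposition and the work.

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    """A worker spawned by the Agency for a single task."""
    task: str

    def run(self) -> str:
        # Placeholder: a real agent would call an LLM or a tool here.
        return f"result of {self.task!r}"

@dataclass
class Agency:
    """Given a goal, decomposes it into tasks and spawns agents."""
    goal: str
    results: list = field(default_factory=list)

    def decompose(self) -> list[str]:
        # Placeholder planner: a real Agency would ask a model for a plan.
        return [f"subtask {i} of {self.goal!r}" for i in range(1, 4)]

    def execute(self) -> list[str]:
        # Spawn one agent per task and collect their outputs.
        for task in self.decompose():
            self.results.append(Agent(task).run())
        return self.results
```

Usage would be something like `Agency("write a report").execute()`, which returns one result per generated subtask. Tool assignment could slot into `decompose`, tagging each task with a tool before the agent runs.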

I dunno :thinking:

Is an anthill more intelligent than an ant just because it’s got a bunch of ants, or because the behavior of the anthill evolved, manifested by the combination of the behaviors of the individuals?

It feels like a lot of folks are trying to coerce an ant into architecting an anthill.

How well is your system working?


This is still an experimental concept, but I think it becomes feasible as the models improve. I am now trying to see if I can make it assign a tool as well.

Agentic AI is a really hot topic right now that a lot of researchers are working on. I think Andrew Ng recently made the claim that multiple GPT-3.5 agents working collaboratively, assigning tasks, checking each other's work, and iterating on their creations, were able to produce the same level of output as GPT-4.

Kind of like how two brains working together can be smarter than either brain on its own.

An anthill is more intelligent than an ant: the anthill is a complex, cooperative system of agents.