I see two schools of thought. One group is saying you should think of an agent as a tool and expose its capability via MCP; in that case you do not need to understand or implement the complexity of A2A and other agent protocols.
Another group is saying do not use MCP for interworking between agents; keep it only for tool/data/context access.
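To make the first approach concrete, here is a minimal sketch of an agent exposed to callers as a single MCP tool, assuming the official Python MCP SDK (`mcp` package); `run_research_agent()` is a made-up placeholder for whatever agent loop you actually run:

```python
# "Agent as a tool": the whole agent sits behind one MCP tool, so callers
# never see A2A or any other agent protocol.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("research-agent")


async def run_research_agent(question: str) -> str:
    # Placeholder for your real agent (planning, tool calls, retries, etc.).
    return f"[agent answer for: {question}]"


@mcp.tool()
async def research(question: str) -> str:
    """Run the agent end to end and return only its final answer."""
    return await run_research_agent(question)


if __name__ == "__main__":
    mcp.run()  # stdio transport by default; any MCP client can now call "research"
```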
I wanted to get the community's opinion and understand what people are seeing in this space.
If you look at classic systems (i.e., CPU-to-CPU without LLMs/GPUs), the “standards stack” is deep, with many different protocols supporting many different kinds of applications… I believe that’s where “CPU to GPU” calls like MCP are going. The need for different data types, latencies, security levels, provisioning, model tuning, cost, etc. will drive a diversity of standards, and the best of those will merge over time and become hardened industry foundation elements.
If tool calling is the fundamental concept, then MCP is the first widely accepted standard (well, Pydantic-based OpenAI function calling was really first), but it is sure to evolve. Approaches like A2A (and many others) may appear complex to you, but the value they deliver in specific situations is obvious to their creators. I hope they learn from each other and evolve as the community pulls on each of them, such that I can always choose the standard that supports my work at the time. Think REST vs. WebSockets vs. GraphQL, etc. All useful.
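One way to see why these standards feel related: underneath MCP, A2A, and raw function calling, a tool call boils down to a name, a description, and a JSON-schema contract. A rough sketch of that contract in the OpenAI-style format (the `search` tool and its fields are invented for illustration; assumes Pydantic v2):

```python
from pydantic import BaseModel, Field


class SearchArgs(BaseModel):
    query: str = Field(description="What to search for")
    max_results: int = Field(default=5, description="How many results to return")


# OpenAI-style "tools" entry; MCP, A2A, etc. carry this same kind of contract
# over different transports and lifecycles.
search_tool = {
    "type": "function",
    "function": {
        "name": "search",
        "description": "Search the web and return the top results",
        "parameters": SearchArgs.model_json_schema(),
    },
}

print(search_tool)
```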
One last thing to consider is how models will soon natively communicate with each other, i.e., no need to force output vectors down to token lists, push them across a “tool call” protocol, and re-tokenize them back into vectors for a different model. We need to do that now because it’s all we have, but when tokenizers and vector sizes align across models, the connection will be direct and we won’t even understand what they are doing.