AI agents can finally talk to each other

Per usual, there was a lot to write about this week, and frankly I was torn between the memo from Shopify CEO Tobi Lutke requiring employees to use AI and Google’s announcement of the Agent2Agent (A2A) protocol at Google Cloud Next. But in spite of the attention the Shopify AI call to action received, I think A2A is actually more significant.
Over the last several months, we have seen the first fledgling steps toward an agent communications infrastructure. First there was Anthropic’s MCP protocol, designed to let agents communicate with external services, and now we have the A2A protocol to let agents communicate with other agents.
Agentic AI is supposed to bring us to a new AI-fueled workflow promised land: one that doesn’t just rigidly move through steps defined by a human expert, but can actually (eventually) reason its way through exceptions without breaking. That’s pretty powerful if we can pull it off.

But first, it requires what I call plumbing, the very unsexy part of any new technology. Plumbing is the underlying infrastructure that lets people use a technology more effectively, such as the communication protocols that have been sorely missing from agentic AI. It starts with the premise that agents won’t live in a vacuum. They will need to communicate with humans, with other agents and with external services, which requires mechanisms to be in place.
For such a hyped technology, it is more than a little shocking that we started building agents before we had a way for them to communicate with one another and with external services, leaving them kind of neutered, able to operate only on their own. If the end goal is to have agents moving through an enterprise, across applications, and even across organizations to get stuff done, that definitely requires some communication.
Agents chatting with services
The Model Context Protocol, or MCP for short, was the first piece to fall into place. It was developed by Anthropic at the end of last year as a way for the Claude LLM to interact with external cloud services. As analyst Sanjeev Mohan defines it, “MCP connects AI applications to external data sources, such as content repositories, business tools, and development environments, to help AI produce more relevant, context-aware responses.”
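To make that a bit more concrete, here is a minimal sketch of what the service side of MCP can look like, using the FastMCP helper from the Python MCP SDK. The server name, the “search_docs” tool and its in-memory documents are hypothetical stand-ins for a real content repository, not anything from a specific product.

```python
# Minimal sketch of an MCP server exposing one tool, using the FastMCP helper
# from the Python MCP SDK (pip install mcp). The "search_docs" tool and its
# in-memory data are hypothetical stand-ins for a real content repository.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("docs-server")

# A stand-in for the external data source the agent wants to reach.
DOCS = {
    "refund-policy": "Refunds are issued within 30 days of purchase.",
    "shipping": "Orders ship within 2 business days.",
}

@mcp.tool()
def search_docs(query: str) -> str:
    """Return any document whose key matches the query."""
    hits = [text for key, text in DOCS.items() if query.lower() in key]
    return "\n".join(hits) or "No matching documents."

if __name__ == "__main__":
    # Runs over stdio by default, so an MCP-capable client can launch it
    # and call search_docs as part of answering a user's question.
    mcp.run()
```

The point is that the model never touches the data source directly; it calls a tool the server advertises, and the server decides what comes back.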
And it has gained popularity quickly, partly because it’s been the only game in town and developers have been hungry for a way to connect agents to external services. But just because it exists, and some big companies including Microsoft, Amazon, Google and OpenAI have gotten behind it, doesn’t mean it’s a perfect solution, or even the only answer.

Jon Turow, a partner at Madrona Ventures, who spent a decade at AWS, sees the potential, but also the pitfalls through the lens of being a self-described product guy. “What is the product insight here? Is the product insight that MCP is perfect? No, definitely not. Is it that MCP will win? Not even necessarily,” Turow told FastForward.
While he certainly sees the utility of having a solution that works right now, he still worries about key missing pieces like identity, authorization and security as agents conduct business, often in a non-linear way. How do you avoid spoofing along the journey, or even man-in-the-middle attacks? It’s worth noting that Okta announced some new agentic security products this week that begin to address that problem, which we discuss in the News of the Week section below, but it remains a significant loose end for now.
Agentic communication ahead
Then there’s the matter that MCP only handles agent-to-service communication. What about agent-to-agent? That’s where the A2A protocol comes into play, says David Linthicum, a longtime cloud consultant. “From what I’ve seen, A2A is about enabling AI agents to work together effectively across different platforms and vendors,” Linthicum said.
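Here is a rough sketch of what that agent-to-agent exchange looks like, based on how Google described A2A at launch: an agent publishes a public “Agent Card” for discovery, and other agents send it tasks as JSON-RPC requests. The URL, the “tasks/send” method name and the payload shape below are simplified assumptions for illustration, not a definitive client.

```python
# Illustrative A2A client flow: discover another agent via its Agent Card,
# then delegate a task to it. The endpoint, method name ("tasks/send") and
# payload shape are assumptions based on the launch-era spec.
import uuid
import requests

REMOTE_AGENT = "https://agent.example.com"

# 1. Discovery: fetch the remote agent's card to learn what it can do.
card = requests.get(f"{REMOTE_AGENT}/.well-known/agent.json", timeout=10).json()
print("Remote agent skills:", [s["name"] for s in card.get("skills", [])])

# 2. Delegation: send it a task as a JSON-RPC request.
task_request = {
    "jsonrpc": "2.0",
    "id": str(uuid.uuid4()),
    "method": "tasks/send",
    "params": {
        "id": str(uuid.uuid4()),  # task id
        "message": {
            "role": "user",
            "parts": [{"type": "text", "text": "Summarize Q3 expense reports."}],
        },
    },
}
response = requests.post(card.get("url", REMOTE_AGENT), json=task_request, timeout=30)
print(response.json())
```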
Because the two protocols complement each other, combining them should enable more sophisticated use cases. “In practice, this means enterprises can build complex workflows. This is where multiple AI agents collaborate seamlessly, each handling specialized tasks, while maintaining consistent context through MCP,” he said.
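In other words, the pattern Linthicum describes layers the two protocols: an orchestrating agent hands subtasks to specialist agents over A2A, while each agent reaches its own data through MCP tools. The sketch below is purely conceptual; the helper functions and agent URLs are hypothetical placeholders, not real library APIs.

```python
# Conceptual sketch of combining the two protocols. send_a2a_task and
# call_mcp_tool are hypothetical placeholders standing in for real A2A and
# MCP client calls.
def send_a2a_task(agent_url: str, text: str) -> str:
    """Placeholder: POST a JSON-RPC task to another agent and return its reply."""
    ...

def call_mcp_tool(tool: str, **kwargs) -> str:
    """Placeholder: invoke a tool exposed by an MCP server."""
    ...

def handle_expense_report(report_id: str) -> str:
    # Agent-to-service: pull the raw data through an MCP tool.
    raw = call_mcp_tool("fetch_report", report_id=report_id)
    # Agent-to-agent: delegate specialized steps to other agents via A2A.
    audit = send_a2a_task("https://audit-agent.example.com", raw)
    summary = send_a2a_task("https://summary-agent.example.com", audit)
    return summary
```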
What’s more, Jason Andersen, an analyst at Moor Insights & Strategy, says the fact that both protocols are open source should also help tremendously. “Until now, you’d need to use a platform, and its communications protocol would be proprietary to that platform,” he said. “By supporting both protocols, a person can now integrate agents to agents and agents to remote services and data with everything being open source and requiring no specific platform.” That should make things a lot easier for developers, who like working with open source tools.
It’s highly unusual to start building technology like AI agents without key pieces like communications protocols in place, yet that is what we’ve seen. Developers have been forced to find solutions on their own or latch onto these protocols to fill the void.
But these two protocols, as promising as they are, probably aren't the end of the story. As we’ve learned with all things AI in the past couple of years, this is a constantly moving target. As companies try to incorporate agents across their organizations, new and different solutions are likely to develop to address fresh problems as they crop up, and I would be willing to bet that MCP and A2A are just the start of a long conversation.
~Ron