
What is A2A? How the agent-to-agent protocol enables autonomous collaboration

Tuesday, November 18, 2025, 10:00, by InfoWorld
The gist

A2A is an open, vendor-neutral protocol that allows diverse AI agents to seamlessly communicate, coordinate, and delegate work.
The protocol shifts AI focus from a single monolithic agent to collaborative multi-agent teams solving complex business workflows.
A2A governs agent-to-agent communication, using structured task objects and agent cards to ensure secure, opaque interoperability.

What is A2A?

A2A, short for agent-to-agent, is an open, vendor-neutral protocol that allows autonomous AI agents to communicate, coordinate, and delegate work to one another. Most agents today are built as isolated systems, each with its own framework, API conventions, and assumptions.

A2A defines a shared language and handshake process so they can interoperate. The goal isn’t to reinvent what existing standards do, but to fill the gap between how agents perform tasks and how they collaborate on them.

Matt Hasan, CEO of aiRESULTS, calls A2A a game-changer. “It moves the AI conversation from ‘Can a single agent do this task?’ to ‘How can a team of specialized agents collaborate on a complex business workflow?’,” he says.

The initiative originated at Google, which introduced the open-source specification in April 2025 to promote interoperability across the growing ecosystem of autonomous agents and connect agents built on diverse platforms. The goal was to help developers avoid the brittle integrations that made it difficult for early versions of AI agents to interact.

“In theory, A2A is a universal standard of communication and etiquette for AI agents,” Hasan says. “In practice, it’s the architectural blueprint that finally makes complex, vendor-agnostic multi-agent systems feasible for the enterprise.”

By creating a shared foundation for discovery, messaging, and task delegation, A2A enables agents from different vendors and developers to work together on multi-step workflows without custom middleware. That interoperability is what gives the protocol its significance: rather than building one monolithic super-agent, organizations can assemble a network of specialized ones that understand each other out of the box—and can communicate with external agents as well.

How does A2A work?

A2A defines how agents talk to each other and share standardized information about their capabilities. “The technology’s most critical aspect is the task object, which formalizes the delegation of work,” says aiRESULTS’s Hasan. “When we adapt an existing AI agent to use A2A, we essentially wrap its specialized capability in an agent card — a public metadata file that describes what it can do. This allows a client agent to discover, send a structured task request, and securely monitor the status of that work from a remote agent, regardless of whether the remote agent was built with LangChain, a custom Python framework, or a vendor-specific SDK.”
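The agent card Hasan describes is just published metadata. A minimal, illustrative card for a hypothetical translation agent might look like the following; the field names approximate the shape of the published spec, but treat the exact schema here as an assumption, not a normative reference:

```python
import json

# An illustrative agent card (agent.json). Field names are representative
# of the A2A spec's general shape, not a normative schema.
agent_card = {
    "name": "translation-agent",
    "description": "Translates documents between English and French",
    "url": "https://agents.example.com/translate",  # hypothetical endpoint
    "capabilities": {"streaming": True},
    "skills": [
        {"id": "translate-doc", "name": "Document translation"}
    ],
    "authentication": {"schemes": ["bearer"]},
}

# Serialize the card as it would be published for discovery.
print(json.dumps(agent_card, indent=2))
```

A client agent fetching this file learns what the remote agent can do and how to authenticate, without knowing anything about how the agent is implemented.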

This is deliberate. A2A is designed as a protocol layer for interoperability, not a runtime, model, or orchestration framework. It defines a clear handshake and message structure that any compliant system can use. But each communicating agent remains a black box (Google uses the term “opaque”) to the other: neither needs to know the internal logic or model architecture that produces results.

In fact, there’s nothing in the spec requiring that an “agent” even be an AI system — any process or human participant can function as one, so long as it can send and receive properly formatted messages. A human operator completing a workflow like the original Mechanical Turk could, in theory, be an A2A agent if they follow the protocol’s structure.

This adaptability is what allows A2A to scale beyond simple demo agents into robust enterprise environments. It provides a predictable surface for discovery and execution without prescribing how the work is done. In that sense, A2A resembles the web itself: a shared protocol layer that lets heterogeneous systems cooperate through standardized requests and responses.

“This structured delegation drastically simplifies building long-running, multi-step workflows — like a complex financial compliance check or a multi-modal recruiting pipeline — by solving the problem of interoperability,” Hasan says.

A2A vs. MCP

It’s easy to confuse A2A with the Model Context Protocol (MCP), since both aim to standardize how AI systems exchange information. But the two operate at different layers of the emerging agent stack. MCP defines how an individual agent connects to tools, data sources, and execution contexts. It standardizes how an agent discovers available functions — such as APIs, files, or databases — and invokes them safely. MCP is essentially about an agent extending its own reach.

A2A, by contrast, governs how agents communicate with one another. It lets multiple autonomous agents — each potentially using different frameworks or models — request structured tasks, monitor progress, and deliver results without needing to understand each other’s internal logic. In short: MCP connects agents to their environments, while A2A connects agents to each other.

Together, they form complementary pieces of the same architecture: MCP makes individual agents more capable, and A2A makes entire networks of agents possible.

The four core A2A data shapes

A2A organizes communication around four primary object types, each representing a distinct phase in the workflow:

Agent card. A small JSON file (agent.json) that publicly describes an agent’s identity, capabilities, authentication methods, and optional digital signature. Other agents read these cards to discover potential collaborators and verify trust.

Task. A structured work request that includes a unique task_id, an input payload, and metadata about priority, expiration, or required modalities. The task defines what needs to be done and serves as the anchor for all subsequent communication.

Message. A stream of status updates and intermediate information about the task. Messages carry progress indicators, requests for additional input, or contextual notes, and are typically transmitted over SSE or gRPC streams.

Artifact. The result of the completed task. Artifacts can contain text, files, or structured data. They finalize the transaction and can be stored, validated, or chained into another Task for further processing.
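The four shapes above can be modeled as plain data structures. The sketch below is illustrative Python, not the normative schema; field names approximate those described in the spec:

```python
from dataclasses import dataclass, field
from typing import Any
import uuid

@dataclass
class AgentCard:
    """Public metadata describing an agent (agent.json)."""
    name: str
    description: str
    url: str
    auth_schemes: list[str] = field(default_factory=list)

@dataclass
class Task:
    """A structured work request; the anchor for the whole exchange."""
    task_id: str
    input_payload: dict[str, Any]
    priority: str = "normal"

@dataclass
class Message:
    """A status update or clarification request tied to a task."""
    task_id: str
    role: str      # "client" or "server"
    content: str

@dataclass
class Artifact:
    """The final result emitted when a task completes."""
    task_id: str
    data: Any

# A task and its eventual artifact share the same task_id, which is
# what lets either side correlate every step of the exchange.
task = Task(task_id=str(uuid.uuid4()), input_payload={"query": "Q3 revenue"})
result = Artifact(task_id=task.task_id, data={"status": "done"})
```

The important design point is that the task_id threads through every Message and Artifact, so a long-running exchange can be resumed, audited, or correlated after the fact.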

In a typical exchange, the requesting agent first retrieves an agent card to confirm another agent’s capabilities and supported protocols. The requesting agent is referred to as a client, and the agent that it’s interrogating about services is called a server, but these aren’t permanent states: an agent may act as a client in one context and a server in another, and in fact the two might switch roles in the course of a single interaction.

From there, the client agent issues a createTask request, which the server agent acknowledges by returning a task_id. As the task runs, the server agent streams Message objects that indicate status or request clarification. When the work is finished — or if it fails — the server agent emits one or more Artifact objects, along with a completion code.

Because every step adheres to the same schema, any compliant agent can participate in this conversation without custom adapters. An LLM-based planning agent might delegate a data-collection task to a Python microservice; that service could, in turn, trigger a human-in-the-loop agent to review the output — all communicating through A2A messages.
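That lifecycle can be sketched as follows. The ServerAgent class here is a toy stand-in (no real network calls), and method names like create_task mirror the protocol's verbs but are hypothetical:

```python
import itertools

class ServerAgent:
    """Toy stand-in for a remote A2A server agent."""
    _ids = itertools.count(1)  # generates task-1, task-2, ...

    def create_task(self, payload):
        # Acknowledge the request by returning a task_id.
        self.task_id = f"task-{next(self._ids)}"
        self.payload = payload
        return self.task_id

    def run(self):
        # Stream Message-like status updates, then a final Artifact.
        yield {"type": "message", "task_id": self.task_id,
               "status": "working"}
        yield {"type": "artifact", "task_id": self.task_id,
               "data": self.payload["text"].upper(), "code": "completed"}

# Client side: discover, delegate, then consume the event stream.
server = ServerAgent()
task_id = server.create_task({"text": "hello a2a"})

events = list(server.run())
final = events[-1]
print(final["data"])  # prints "HELLO A2A"
```

In a real deployment the events would arrive over SSE or gRPC rather than a local generator, but the shape of the conversation — acknowledge, stream status, emit artifact — is the same.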

A2A design principles

The A2A protocol borrows lessons from decades of internet engineering, applying principles familiar from standards like HTTP, REST, and OAuth to the emerging world of autonomous systems.

Interoperability. A2A is explicitly framework-agnostic. Agents written in any language or framework — whether built with LangChain, OpenDevin, or a custom SDK — can communicate through the same standardized message schema. This decoupling of implementation from interaction is what makes multi-agent ecosystems possible.

Transparency. Each agent exposes its functions and limits through a structured Agent card that acts like a capabilities statement. This transparency allows other agents to discover and evaluate potential collaborators without requiring source code access or vendor integration.

Security. Authentication, authorization, and message integrity are core parts of the specification. A2A supports multiple authentication methods and signed Agent cards so that trust decisions can be made automatically. Every task and message is traceable, creating an auditable record of agent activity.

Modularity and composability. The protocol treats each agent as an opaque service with well-defined inputs and outputs. Agents can reuse each other’s functions, chaining them together into longer workflows without manual wiring. This modularity mirrors the composable design of APIs and microservices.

Scalability and resilience. A2A was built for asynchronous, distributed environments. Agents can spawn long-running tasks, stream partial results, and recover from transient network failures. Because the protocol doesn’t dictate runtime behavior, it scales naturally from a pair of local agents to hundreds of cloud-based ones coordinating across domains.

Together, these principles make A2A a common language for distributed intelligence that can evolve as agents, frameworks, and communication technologies change.

Is A2A secure?

Security in A2A operates on two levels: protocol integrity and trust between agents. The specification builds in the same safeguards that protect web-scale APIs — authentication, authorization, and encryption — while adding features to manage how autonomous systems evaluate and respond to one another’s reliability.

Remember, an Agent card describes an agent’s identity, capabilities, and supported authentication methods. Those cards can include digital signatures, allowing client agents to verify the organization that created the server agent before establishing a connection. Once communication begins, tasks and messages are exchanged over secure channels — typically HTTPS, gRPC, or WebSockets — so all payloads are encrypted in transit. The protocol also defines clear response codes and event logs, creating an auditable trail that can be integrated with enterprise observability tools.
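Signature checking can be approximated with standard-library primitives. The sketch below uses a shared-key HMAC purely for illustration; real deployments would use asymmetric signatures (for example, JWS), which the Python standard library alone does not provide:

```python
import hashlib
import hmac
import json

def sign_card(card: dict, key: bytes) -> str:
    """Sign the canonical JSON form of an agent card."""
    payload = json.dumps(card, sort_keys=True).encode()
    return hmac.new(key, payload, hashlib.sha256).hexdigest()

def verify_card(card: dict, signature: str, key: bytes) -> bool:
    """Check the card hasn't been tampered with since signing."""
    return hmac.compare_digest(sign_card(card, key), signature)

key = b"shared-secret"  # illustrative only; never hardcode real keys
card = {"name": "report-agent", "url": "https://agents.example.com/report"}
sig = sign_card(card, key)

assert verify_card(card, sig, key)
tampered = {**card, "url": "https://evil.example.com"}
assert not verify_card(tampered, sig, key)
```

The point of the exercise: because the signature covers the whole card, a client can detect any altered capability claim or endpoint before it ever delegates a task.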

The A2A protocol has mechanisms for error handling. “The failing agent sends an error code or failure notification back to the requester,” Hasan says. “This allows the requester to immediately switch to a different, more reliable agent to get the job done.”

This plumbing can lay the groundwork for more advanced behavior, he notes. “Individual agents (or networks of them) can implement their own tracking, keeping score of who’s reliable, sharing that info, or even publishing trust ratings so others can make smarter choices. Repeated failures cause that score to drop. Eventually, other agents will simply stop asking the unreliable agent for help.”
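A minimal version of the reputation tracking Hasan describes might look like this. It is an assumption about how one could implement such scoring, not part of the A2A spec itself:

```python
class ReputationTracker:
    """Tracks per-agent reliability; below a threshold, stop delegating."""

    def __init__(self, threshold=0.5):
        self.scores = {}          # agent name -> (successes, attempts)
        self.threshold = threshold

    def record(self, agent, success):
        s, n = self.scores.get(agent, (0, 0))
        self.scores[agent] = (s + int(success), n + 1)

    def is_trusted(self, agent):
        s, n = self.scores.get(agent, (0, 0))
        # Unknown agents get the benefit of the doubt.
        return n == 0 or s / n >= self.threshold

tracker = ReputationTracker()
for ok in (True, False, False, False):
    tracker.record("flaky-agent", ok)

# One success in four attempts: 0.25 < 0.5, so stop delegating to it.
print(tracker.is_trusted("flaky-agent"))  # prints False
```

Networks of agents could share or publish these scores, which is how the "trust ratings" Hasan mentions would emerge from purely local bookkeeping.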

That self-regulating behavior is essential for large-scale multi-agent systems, where no central controller can vet every participant. A2A’s trust model allows poor-performing or malicious agents to be automatically isolated, while reliable ones gain credibility through successful interactions.

Still, A2A’s open design raises questions familiar to anyone who has grappled with the dilemmas of cybersecurity. How should organizations authenticate third-party agents that claim certain capabilities? What happens if two agents interpret a schema differently, or if one leaks sensitive data through a malformed message? Identity spoofing, model hallucinations, and version mismatches all pose potential risks that enterprises will need governance frameworks to manage.

For most deployments, A2A security will depend on layering protocol-level controls with operational ones: requiring signed Agent cards, managing API keys or OAuth tokens through a centralized broker, and maintaining reputation databases to record agent reliability across the environment. Over time, these practices could evolve into standardized trust registries, much like certificate authorities on the web, forming the basis for secure, auditable agent ecosystems.

Real-world A2A examples

When Google launched the A2A protocol, the company announced support from more than 50 partners, including Atlassian, Box, PayPal, SAP, and many major consultancies. Likewise, Microsoft publicly announced support for A2A in its Azure AI Foundry and Copilot Studio platforms.

Google has also touted real-world rollouts. For instance, Box AI Agents now coordinate with other agents across dozens of platforms via A2A-compliant endpoints. Another example: Twilio uses A2A extensions to broadcast latency information, enabling intelligent routing among agents and graceful degradation when only slower agents are available. While these examples don’t yet reflect fully documented, large-scale production deployments, they demonstrate meaningful adoption beyond the pilot phase into enterprise ecosystems.

The hope is that A2A isn’t just another AI hype protocol, but that it will become a foundational communication layer for multi-agent ecosystems. Organizations that experiment now with publishing agent cards, defining tasks, and streaming results could gain a head start if vendor-agnostic agents become a core part of automation pipelines. Adoption is still early, but the roadmap is there for a world where agents that were once isolated can now speak a common language.
https://www.infoworld.com/article/4088217/what-is-a2a-how-the-agent-to-agent-protocol-enables-autono...
