Google's Agent-to-Agent (A2A) protocol defines a clean standard for agent interoperability: Agent Cards describe capabilities, JSON-RPC handles task execution, and SSE streams deliver real-time updates. It is a good semantic layer. But it has one assumption baked into every interaction: the agent has a reachable HTTP endpoint.
What happens when the agent is behind a NAT? Behind a corporate firewall? Running on a developer's laptop? The Agent Card at /.well-known/agent.json is unreachable. The task endpoint returns connection refused. A2A's semantics are excellent. Its transport is the problem.
This article shows how to run A2A over Pilot Protocol's encrypted tunnels. The result: standard A2A clients work unchanged, agents gain automatic NAT traversal, communication is encrypted end-to-end with X25519 + AES-256-GCM, and Agent Cards become trust-gated. The pattern is simple: A2A for semantics, Pilot for transport.
The A2A specification defines an Agent Card, a JSON document served at https://agent-host/.well-known/agent.json. It describes the agent's name, description, capabilities, and task endpoint URL. A requesting agent fetches this card, reads the capabilities, and sends tasks to the specified endpoint.
This works when both agents have public, routable HTTP endpoints. It breaks when:

- the agent sits behind a NAT with no inbound route,
- a corporate firewall blocks unsolicited inbound connections, or
- the agent runs on a developer's laptop with a dynamic, private IP.
The standard workaround is a cloud intermediary: deploy a reverse proxy with a stable public endpoint, route A2A traffic through it, and manage TLS certificates. This works, but adds latency, cost, and a single point of failure. Every agent needs its own public endpoint or a shared gateway.
Pilot Protocol's gateway maps Pilot virtual addresses to local IP addresses. An HTTP server running on Pilot port 80 appears as a standard HTTP server on a local IP. The A2A client connects to the local IP, the gateway routes the traffic through the encrypted Pilot tunnel, and the remote agent's HTTP server handles the request.
Architecture:
```
A2A Client                Gateway                Pilot Tunnel             Remote Agent
    │                        │                        │                        │
    │ GET /.well-known/      │                        │                        │
    │   agent.json           │                        │                        │
    │───────────────────────→│                        │                        │
    │ (HTTP to 10.4.0.1)     │ Pilot frame on         │                        │
    │                        │ port 80                │                        │
    │                        │───────────────────────→│                        │
    │                        │ (encrypted UDP)        │ HTTP request on        │
    │                        │                        │ Pilot port 80          │
    │                        │                        │───────────────────────→│
    │                        │                        │                        │
    │                        │                        │ HTTP response          │
    │                        │                        │←───────────────────────│
    │                        │ Pilot frame            │                        │
    │                        │←───────────────────────│                        │
    │ Agent Card JSON        │                        │                        │
    │←───────────────────────│                        │                        │
```
The A2A client sees a normal HTTP server. The remote agent runs a normal HTTP server. Everything in between is Pilot Protocol: NAT traversal, encryption, and trust verification happen transparently.
The remote agent runs a standard Go HTTP server that serves the Agent Card and handles A2A task requests. The key difference: it listens on a Pilot port, not a TCP port.
```go
package main

import (
    "encoding/json"
    "fmt"
    "log"
    "net/http"

    "github.com/TeoSlayer/pilotprotocol/daemon"
)

// AgentCard is the A2A Agent Card structure
type AgentCard struct {
    Name         string       `json:"name"`
    Description  string       `json:"description"`
    URL          string       `json:"url"`
    Version      string       `json:"version"`
    Capabilities Capabilities `json:"capabilities"`
}

type Capabilities struct {
    Streaming    bool    `json:"streaming"`
    PushNotif    bool    `json:"pushNotifications"`
    StateTransit bool    `json:"stateTransitionHistory"`
    Skills       []Skill `json:"skills"`
}

type Skill struct {
    ID          string   `json:"id"`
    Name        string   `json:"name"`
    Description string   `json:"description"`
    Tags        []string `json:"tags"`
}

// A2ARequest is an A2A JSON-RPC request
type A2ARequest struct {
    JSONRPC string      `json:"jsonrpc"`
    ID      interface{} `json:"id"`
    Method  string      `json:"method"`
    Params  interface{} `json:"params"`
}

// A2AResponse is an A2A JSON-RPC response
type A2AResponse struct {
    JSONRPC string      `json:"jsonrpc"`
    ID      interface{} `json:"id"`
    Result  interface{} `json:"result,omitempty"`
    Error   *A2AError   `json:"error,omitempty"`
}

type A2AError struct {
    Code    int    `json:"code"`
    Message string `json:"message"`
}

func main() {
    // Define the Agent Card
    card := AgentCard{
        Name:        "data-analyst",
        Description: "Analyzes datasets and produces structured reports",
        URL:         "pilot://data-analyst/",
        Version:     "1.0.0",
        Capabilities: Capabilities{
            Streaming:    false,
            PushNotif:    false,
            StateTransit: true,
            Skills: []Skill{
                {
                    ID:          "sentiment-analysis",
                    Name:        "Sentiment Analysis",
                    Description: "Analyze sentiment of text data and produce reports",
                    Tags:        []string{"nlp", "analysis", "sentiment"},
                },
                {
                    ID:          "data-summary",
                    Name:        "Data Summarization",
                    Description: "Produce statistical summaries of tabular data",
                    Tags:        []string{"statistics", "summary", "csv"},
                },
            },
        },
    }

    // HTTP handlers
    mux := http.NewServeMux()

    // Serve the Agent Card at the standard A2A path
    mux.HandleFunc("/.well-known/agent.json", func(w http.ResponseWriter, r *http.Request) {
        w.Header().Set("Content-Type", "application/json")
        json.NewEncoder(w).Encode(card)
    })

    // Handle A2A task requests
    mux.HandleFunc("/a2a", func(w http.ResponseWriter, r *http.Request) {
        if r.Method != http.MethodPost {
            http.Error(w, "Method not allowed", http.StatusMethodNotAllowed)
            return
        }
        var req A2ARequest
        if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
            writeA2AError(w, nil, -32700, "Parse error")
            return
        }
        switch req.Method {
        case "tasks/send":
            handleTaskSend(w, req)
        case "tasks/get":
            handleTaskGet(w, req)
        default:
            writeA2AError(w, req.ID, -32601, "Method not found")
        }
    })

    // Listen on Pilot port 80 instead of TCP
    listener, err := daemon.Listen(80)
    if err != nil {
        log.Fatalf("Failed to listen on Pilot port 80: %v", err)
    }
    defer listener.Close()

    fmt.Println("A2A agent listening on Pilot port 80")
    log.Fatal(http.Serve(listener, mux))
}

func handleTaskSend(w http.ResponseWriter, req A2ARequest) {
    // Extract task parameters and execute.
    // This is where your agent logic goes.
    result := map[string]interface{}{
        "id": "task-001",
        "status": map[string]interface{}{
            "state":   "completed",
            "message": "Analysis complete",
        },
        "artifacts": []map[string]interface{}{
            {
                "name": "report",
                "parts": []map[string]string{
                    {"type": "text", "text": "Sentiment: 73% positive, 18% neutral, 9% negative"},
                },
            },
        },
    }
    writeA2AResult(w, req.ID, result)
}

func handleTaskGet(w http.ResponseWriter, req A2ARequest) {
    // Return task status
    result := map[string]interface{}{
        "id": "task-001",
        "status": map[string]interface{}{
            "state": "completed",
        },
    }
    writeA2AResult(w, req.ID, result)
}

func writeA2AResult(w http.ResponseWriter, id interface{}, result interface{}) {
    w.Header().Set("Content-Type", "application/json")
    json.NewEncoder(w).Encode(A2AResponse{
        JSONRPC: "2.0",
        ID:      id,
        Result:  result,
    })
}

func writeA2AError(w http.ResponseWriter, id interface{}, code int, msg string) {
    w.Header().Set("Content-Type", "application/json")
    json.NewEncoder(w).Encode(A2AResponse{
        JSONRPC: "2.0",
        ID:      id,
        Error:   &A2AError{Code: code, Message: msg},
    })
}
```
On the requesting agent's machine, start the gateway to map the remote agent's Pilot address to a local IP:
```bash
# Map the remote agent to a local IP on port 80
sudo pilotctl gateway start --ports 80 data-analyst

# Output:
{
  "status": "ok",
  "data": {
    "pid": 12345,
    "subnet": "10.4.0.0/16",
    "mappings": [
      {
        "local_ip": "10.4.0.1",
        "pilot_addr": "0:0001.0000.0007"
      }
    ]
  }
}
```
Now 10.4.0.1:80 on the local machine routes through the Pilot tunnel to the remote agent's HTTP server on port 80.
Any A2A client can now interact with the agent as if it were a local HTTP server:
```bash
# Fetch the Agent Card — standard A2A discovery
curl http://10.4.0.1/.well-known/agent.json

# Output:
{
  "name": "data-analyst",
  "description": "Analyzes datasets and produces structured reports",
  "url": "pilot://data-analyst/",
  "version": "1.0.0",
  "capabilities": {
    "streaming": false,
    "pushNotifications": false,
    "stateTransitionHistory": true,
    "skills": [
      {
        "id": "sentiment-analysis",
        "name": "Sentiment Analysis",
        "description": "Analyze sentiment of text data and produce reports",
        "tags": ["nlp", "analysis", "sentiment"]
      }
    ]
  }
}
```
```bash
# Send an A2A task — standard JSON-RPC
curl -X POST http://10.4.0.1/a2a \
  -H "Content-Type: application/json" \
  -d '{
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tasks/send",
    "params": {
      "message": {
        "role": "user",
        "parts": [{"type": "text", "text": "Analyze the sentiment of these reviews"}]
      }
    }
  }'

# Output:
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "id": "task-001",
    "status": {"state": "completed", "message": "Analysis complete"},
    "artifacts": [
      {
        "name": "report",
        "parts": [
          {"type": "text", "text": "Sentiment: 73% positive, 18% neutral, 9% negative"}
        ]
      }
    ]
  }
}
```
The curl command has no idea it is talking through an encrypted UDP tunnel that traversed a NAT. It sees a plain HTTP server at 10.4.0.1:80.
```python
import requests

# The gateway-mapped local IP for the remote agent
AGENT_URL = "http://10.4.0.1"

def discover_agent():
    """Fetch the A2A Agent Card."""
    resp = requests.get(f"{AGENT_URL}/.well-known/agent.json")
    return resp.json()

def send_task(message: str):
    """Send an A2A task via JSON-RPC."""
    payload = {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "tasks/send",
        "params": {
            "message": {
                "role": "user",
                "parts": [{"type": "text", "text": message}]
            }
        }
    }
    resp = requests.post(
        f"{AGENT_URL}/a2a",
        json=payload,
        headers={"Content-Type": "application/json"}
    )
    return resp.json()

# Discover capabilities
card = discover_agent()
print(f"Agent: {card['name']}")
print(f"Skills: {[s['name'] for s in card['capabilities']['skills']]}")

# Send a task
result = send_task("Analyze the sentiment of these customer reviews")
print(f"Status: {result['result']['status']['state']}")
print(f"Output: {result['result']['artifacts'][0]['parts'][0]['text']}")
```
Key point: The Python code uses standard requests. No Pilot SDK. No custom transport. The gateway makes the Pilot tunnel invisible to A2A clients.
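One thing the client above glosses over is error handling: a JSON-RPC response carries either a `result` or an `error` object, and a robust client should check which before indexing into the payload. A minimal sketch — the `A2AProtocolError` class and `unwrap_a2a_response` helper are illustrative names, not part of any SDK:

```python
class A2AProtocolError(Exception):
    """Raised when an A2A response carries a JSON-RPC error object."""
    def __init__(self, code: int, message: str):
        super().__init__(f"A2A error {code}: {message}")
        self.code = code

def unwrap_a2a_response(resp: dict) -> dict:
    """Return the JSON-RPC result, or raise if the response is an error."""
    if resp.get("jsonrpc") != "2.0":
        raise A2AProtocolError(-32600, "Not a JSON-RPC 2.0 response")
    if resp.get("error") is not None:
        err = resp["error"]
        raise A2AProtocolError(err.get("code", -32603), err.get("message", "Unknown error"))
    return resp["result"]
```

Wrapping every `resp.json()` call in this helper turns protocol-level failures (like the `-32601 Method not found` the server returns) into Python exceptions instead of silent `KeyError`s.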
Running A2A over Pilot tunnels adds five capabilities that A2A alone does not provide.
Pilot's three-tier NAT traversal (direct, hole-punch, relay) makes any agent reachable. An A2A agent running on a developer's laptop behind a home router becomes accessible to any trusted peer. No port forwarding. No ngrok. No cloud proxy.
Standard A2A uses HTTPS, which requires TLS certificates. Someone has to provision them, renew them, and manage the CA chain. Pilot provides X25519 + AES-256-GCM encryption automatically. Every connection is encrypted. No certificate management.
In standard A2A, Agent Cards are public. Anyone who knows the URL can discover your agent's capabilities. With Pilot, agents are invisible by default. Only agents that have completed a mutual handshake can discover and connect to your A2A endpoint. This makes Agent Cards private by default.
A2A has no built-in reputation system. Any agent can claim any capabilities in its Agent Card. With Pilot, every agent has a polo score derived from actual on-network behavior. Requesters can check a worker's polo score before sending tasks, adding a reputation layer that A2A lacks.
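As a sketch of what that check could look like, the function below filters a `pilotctl peers` result by a minimum polo score before any tasks are dispatched. The JSON shape (a `peers` list with `polo_score` fields) follows the output used elsewhere in this article; the threshold is an arbitrary policy choice:

```python
def trusted_workers(peers_data: dict, min_polo: float = 50.0) -> list:
    """Return peers whose polo score meets the minimum, highest first."""
    eligible = [
        p for p in peers_data.get("peers", [])
        if p.get("polo_score", 0.0) >= min_polo
    ]
    return sorted(eligible, key=lambda p: p["polo_score"], reverse=True)
```

An orchestrator can then dispatch to the head of the list and fall back to the next peer if a task fails.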
A2A Agent Cards include a URL field that must be stable and reachable. With Pilot, the agent's identity is its Pilot address (e.g., 0:0001.0000.0007), which is independent of its IP address. The agent can move between networks, change IPs, restart on different machines — the Pilot address stays the same.
| Capability | A2A Alone | A2A + Pilot |
|---|---|---|
| Agent discovery | Public URL required | Trust-gated, private by default |
| NAT traversal | Not supported | Automatic (3-tier) |
| Encryption | TLS (cert management) | X25519+AES-GCM (zero-config) |
| Authentication | HTTP-level (API keys) | Ed25519 mutual handshake |
| Reputation | None | Polo score |
| Stable endpoint | Required | Virtual address (IP-independent) |
| Agent Card visibility | Public | Trust-gated |
| Transport | HTTP/HTTPS | HTTP over encrypted UDP tunnel |
| Firewall traversal | Requires open ports | UDP hole-punch / relay |
| Client changes | N/A | None (gateway handles it) |
The two protocols are complementary, not competing. They operate at different layers:

- A2A defines the semantics: Agent Cards, the JSON-RPC task lifecycle, SSE streaming.
- Pilot defines the transport: virtual addresses, trust handshakes, encryption, NAT traversal.

Combining them gives you the best of both worlds:

- standard A2A clients work unchanged,
- agents become reachable without public endpoints,
- every connection is encrypted without certificate management, and
- discovery is gated behind mutual trust.
Layer stack:
```
┌─────────────────────────────┐
│ A2A (semantics)             │  Agent Cards, JSON-RPC tasks, SSE streams
├─────────────────────────────┤
│ HTTP                        │  Standard Go net/http on Pilot listener
├─────────────────────────────┤
│ Pilot Protocol (transport)  │  Virtual addresses, trust, encryption
├─────────────────────────────┤
│ UDP tunnel                  │  NAT traversal, congestion control
├─────────────────────────────┤
│ Physical network            │  Whatever you have: WiFi, LTE, Ethernet
└─────────────────────────────┘
```
This layering is clean because A2A does not prescribe a transport. It assumes HTTP, but HTTP can run on any reliable stream. Pilot provides that reliable stream with additional properties (encryption, NAT traversal, trust) that HTTP alone does not offer.
The pattern scales to multiple agents. Each A2A agent runs its own HTTP server on Pilot port 80. The requesting agent uses the gateway to map multiple remote agents to different local IPs:
```bash
# Map three A2A agents to local IPs
sudo pilotctl gateway start --ports 80 data-analyst code-reviewer report-writer

# Result:
#   10.4.0.1 → data-analyst   (Pilot address: 0:0001.0000.0007)
#   10.4.0.2 → code-reviewer  (Pilot address: 0:0001.0000.000C)
#   10.4.0.3 → report-writer  (Pilot address: 0:0001.0000.0011)
```
Now a Python orchestrator can interact with all three agents using standard A2A:
```python
import requests

agents = {
    "data-analyst": "http://10.4.0.1",
    "code-reviewer": "http://10.4.0.2",
    "report-writer": "http://10.4.0.3",
}

# Discover all agents
for name, url in agents.items():
    card = requests.get(f"{url}/.well-known/agent.json").json()
    print(f"{name}: {[s['name'] for s in card['capabilities']['skills']]}")

# Send tasks to each based on their capabilities
analysis = send_a2a_task(agents["data-analyst"], "Analyze Q1 sales data")
review = send_a2a_task(agents["code-reviewer"], "Review the analysis code")
report = send_a2a_task(agents["report-writer"],
                       f"Write a report based on: {analysis}")
```
Each of these agents can be on a different machine, in a different network, behind a different NAT. The gateway and Pilot tunnels make them all appear as local HTTP servers on the same subnet.
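The `send_a2a_task` helper called in the orchestrator snippet is not defined there; one possible implementation, split so the payload construction is separate from the HTTP call, using the same `tasks/send` shape as the rest of this article:

```python
import requests

def build_task_payload(message: str, request_id: int = 1) -> dict:
    """Construct a tasks/send JSON-RPC payload from a plain-text message."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tasks/send",
        "params": {
            "message": {
                "role": "user",
                "parts": [{"type": "text", "text": message}],
            }
        },
    }

def send_a2a_task(agent_url: str, message: str) -> dict:
    """POST a task to an agent's /a2a endpoint and return the JSON response."""
    resp = requests.post(f"{agent_url}/a2a",
                         json=build_task_payload(message),
                         timeout=30)
    resp.raise_for_status()
    return resp.json()
```

Keeping the payload builder pure makes it easy to unit-test without a live agent.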
Rather than hardcoding agent addresses, combine Pilot's tag-based discovery with A2A Agent Cards for dynamic discovery:
```python
import subprocess, json, requests

def pilotctl(*args):
    result = subprocess.run(
        ["pilotctl", "--json"] + list(args),
        capture_output=True, text=True
    )
    return json.loads(result.stdout).get("data", {})

def discover_a2a_agents(tag: str) -> list:
    """Discover A2A-capable agents by tag and fetch their Agent Cards."""
    # Step 1: Find agents with the tag
    peers = pilotctl("peers", "--search", tag)

    # Step 2: Map each through the gateway
    agents = []
    for peer in peers.get("peers", []):
        hostname = peer.get("hostname", str(peer["node_id"]))
        # Map to gateway (assumes gateway is running)
        mapping = pilotctl("gateway", "map", hostname)
        local_ip = mapping.get("local_ip")
        if local_ip:
            # Step 3: Fetch the A2A Agent Card
            try:
                card_url = f"http://{local_ip}/.well-known/agent.json"
                card = requests.get(card_url, timeout=5).json()
                agents.append({
                    "hostname": hostname,
                    "local_ip": local_ip,
                    "card": card,
                    "polo_score": peer.get("polo_score", 0)
                })
            except Exception:
                pass  # Agent may not serve A2A
    return agents

# Discover all A2A agents tagged "nlp"
nlp_agents = discover_a2a_agents("nlp")
for agent in nlp_agents:
    print(f"{agent['hostname']} (polo: {agent['polo_score']:.1f})")
    for skill in agent["card"]["capabilities"]["skills"]:
        print(f"  - {skill['name']}: {skill['description']}")
```
This combines the best of both discovery mechanisms: Pilot tags for network-level discovery and A2A Agent Cards for capability-level discovery.
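To route work using both signals at once, an orchestrator can match a required skill tag against each discovered Agent Card and break ties with the polo score. A sketch over the structure returned by `discover_a2a_agents` above (`pick_agent` is an illustrative helper, not a library function):

```python
def pick_agent(agents: list, skill_tag: str):
    """Pick the highest-polo agent whose card advertises a skill with the tag."""
    candidates = []
    for agent in agents:
        skills = agent["card"]["capabilities"]["skills"]
        if any(skill_tag in s.get("tags", []) for s in skills):
            candidates.append(agent)
    if not candidates:
        return None
    return max(candidates, key=lambda a: a.get("polo_score", 0.0))
```

Returning `None` when no trusted peer advertises the skill lets the caller fall back to a different strategy rather than dispatching blindly.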
The security implications of running A2A over Pilot are significant:
Revocation is instant: a single `untrust` command immediately revokes an agent's ability to connect. No token rotation, no key invalidation. The trust is gone, and subsequent connection attempts are rejected.

```bash
# Revoke an agent's access to your A2A endpoint
pilotctl untrust 42

# Node 42 can no longer:
# - Fetch your Agent Card
# - Send tasks to your A2A endpoint
# - Connect to any of your ports
# Effective immediately. No propagation delay.
```
This integration pattern has some constraints to be aware of:

- Root privileges: the gateway requires root to start (the examples use `sudo`); it maps local IPs via `ifconfig lo0 alias`, which also requires root.
- Idle tunnels: NAT mappings can expire when a tunnel sits idle; enable keepalives to prevent this.
- URL scheme: the `url` field in the Agent Card uses a `pilot://` scheme (e.g., `pilot://data-analyst/`). Standard A2A clients that validate the URL scheme may need adjustment. The gateway-mapped `http://10.4.0.x` address works as the actual endpoint.

Use A2A + Pilot when:

- agents run behind NATs or firewalls without public endpoints,
- you want end-to-end encryption without provisioning TLS certificates,
- Agent Cards should be discoverable only by trusted peers, or
- you want a reputation signal before dispatching tasks.

Use standard A2A when:

- agents already have stable, public HTTPS endpoints,
- clients cannot run the Pilot daemon or gateway, or
- you need to serve anonymous, untrusted clients.
The two approaches are not mutually exclusive. An agent can serve its A2A endpoint both on a public HTTPS URL and on Pilot port 80. Public clients use the HTTPS endpoint; trusted partners use the Pilot endpoint and get the additional security properties.
For more on HTTP services over Pilot tunnels, see Run HTTP Services Over an Encrypted Agent Overlay. For the gateway documentation, see the Gateway docs. For building MCP-connected agents that also use Pilot for transport, see MCP + Pilot: Give Your Agent Tools AND a Network.
Install Pilot Protocol, start the gateway, and make your A2A agents reachable from anywhere.
Get Started