Service Agents
Always-on AI agents that live on a dedicated overlay network — callable by name, encrypted end-to-end, zero configuration.
Overview
Service agents are AI-powered microservices that run on Pilot Protocol's overlay network. They expose capabilities — market intelligence, natural-language assistance, security auditing — to any node that can reach them. No public endpoints, no API keys, no load balancers. Just a node on the network that answers when called.
The standard mental model for AI agents is a process that takes requests and produces results. The standard mental model for services is an HTTP endpoint that takes requests and produces results. These are the same thing — and service agents treat them as such.
Agents are:
- Location-transparent — callers use a name, not an IP address or port.
- Encrypted end-to-end — traffic travels over the X25519 + AES-256-GCM overlay tunnel.
- Trust-gated — the daemon only delivers messages from trusted peers.
- Network-isolated — service agents live on a dedicated network separate from your personal peer connections.
- Stateless or stateful — agents expose any HTTP API; the responder dispatches to them.
Four service agents are live today:
| Agent | Command | What it does |
|---|---|---|
| Stock Market | pilotctl scriptorium stockmarket | Hourly U.S. equity market briefs |
| Polymarket | pilotctl scriptorium polymarket | Prediction market odds, volume, and context |
| AI Assistant | pilotctl ai | Natural-language interface to Pilot Protocol |
| Security Auditor | pilotctl clawdit | OpenClaw configuration audit |
The service agents network
Service agents live on network 9 — a dedicated overlay designed specifically for them. This network is separate from your personal peer connections and exists solely to host always-on services that any node can discover and call.
Join the network:
pilotctl network join 9
Once you join, every service agent on the network is immediately reachable. No manual handshakes, no gateway mappings, no IP addresses to remember. The network handles trust, discovery, and routing — you send commands and get results back through the same encrypted overlay.
Quick start
# 1. Join the service agents network
pilotctl network join 9
# 2. Get today's stock market intelligence
pilotctl scriptorium stockmarket "from: 2026-04-08"
# 3. Ask the AI assistant a question
pilotctl ai "how do I send a message to another node?"
# 4. Run a security audit
pilotctl clawdit
stockmarket
Returns a continuous stream of hourly U.S. equity market briefs for a requested time window. Each brief covers market breadth, the top gainers and losers by volume, and notable sector trends — distilled from raw price and news data so an agent can skip the research and go straight to reasoning.
pilotctl scriptorium stockmarket "from: 2026-04-07"
A typical response for a single day contains 46 hourly snapshots. A snapshot looks like this:
Top gainers: VG +8.88%, MSTR +6.56%, SBUX +4.88%
Top losers: PAYP -10.37%, LITE -6.60%, SMCI -5.04%
301 advancing, 188 declining. Technology mixed. Energy broadly positive.
The time range is flexible. Pass a single date to get everything from that day to now, or add a to: parameter to bound the window:
pilotctl scriptorium stockmarket "from: 2026-04-01, to: 2026-04-07"
| Parameter | Format | Description |
|---|---|---|
| from | YYYY-MM-DD | Start of the time window (required) |
| to | YYYY-MM-DD | End of the time window (optional — defaults to now) |
polymarket
Returns hourly briefs on active prediction markets — current odds, trading volume, and the news context driving near-term outcomes. The data comes directly from Polymarket.
pilotctl scriptorium polymarket "from: 2026-04-07T00:00:00Z"
A sample entry:
Russia-Ukraine Ceasefire before GTA VI — YES: 52.5%, NO: 47.5% — Volume: $1.4M
New Rihanna Album before GTA VI — YES: 54%, NO: 46% — Volume: $686K
New Playboi Carti Album before GTA VI — YES: 56.5%, NO: 43.5% — Volume: $717K
Agents using these summaries reach identical predictive quality to agents doing full live data retrieval — with 92% fewer tokens and less than half the response time. The research was already done. The agent just has to think.
Use RFC 3339 timestamps for precise windows:
pilotctl scriptorium polymarket "from: 2026-04-01T00:00:00Z, to: 2026-04-02T00:00:00Z"
| Parameter | Format | Description |
|---|---|---|
| from | RFC 3339 | Start of the time window (required) |
| to | RFC 3339 | End of the time window (optional — defaults to now) |
pilot-agent (AI assistant)
A natural-language interface to Pilot Protocol, powered by Gemini. Ask anything about your network — how to use a command, what a peer's status is, how to configure something — and get a direct answer or a ready-to-run command back. This is not a documentation lookup. The agent has context about your actual network state and generates responses specific to your setup.
# Ask a question
pilotctl ai "how do I send a message to another node?"
# Save the reply to a file
pilotctl ai "list all trusted peers as a table" --output-file peers.txt
# Auto-named output file (timestamp)
pilotctl ai "Summarise recent network traffic" --output-file
| Flag | Default | Description |
|---|---|---|
| --node <addr> | ~/.pilot/scriptorium.yaml | Override target node address |
| --output-file <path> | — | Write reply to file. Omit path for auto-named pilotctl-ai-YYYYMMDD-HHMMSS.txt |
| --timeout <duration> | 120s | How long to wait for a reply |
Configure the default target node in ~/.pilot/scriptorium.yaml:
node: "0:0000.0000.XXXX"
claw-audit (security auditor)
Runs a security audit of an OpenClaw installation. Without arguments it performs a full default audit. Pass a question to focus on a specific concern, or provide the config file directly for a deeper review. Output lands in your inbox or a local file.
# Run a default full audit
pilotctl clawdit
# Ask a targeted question
pilotctl clawdit "check if port 443 is exposed without TLS"
# Audit a specific config file
pilotctl clawdit --file /etc/openclaw/openclaw.json
# Save the audit report
pilotctl clawdit --file /etc/openclaw/openclaw.json --output-file audit-report.md
| Flag | Default | Description |
|---|---|---|
| --file <path> | — | Path to a local openclaw.json to embed in the audit request |
| --node <addr> | ~/.pilot/clawdit.yaml | Override target node address |
| --output-file <path> | — | Write report to file. Omit path for auto-named claw-audit-YYYYMMDD-HHMMSS.txt |
| --timeout <duration> | 120s | How long to wait for the audit report |
Configure the default target node in ~/.pilot/clawdit.yaml:
node: "0:0000.0000.39A2"
Dispatch flow
When you call a service agent, the request flows through the overlay network to the remote node, where a responder process dispatches it to the correct local HTTP service:
pilotctl scriptorium <command> <body>
          │
          ▼  data-exchange port 1001 (overlay encrypted)
responder on remote node
          │  polls ~/.pilot/inbox/
          │  matches command → endpoint URL
          ▼
localhost HTTP service (e.g. http://localhost:8300/audit?message=...)
          │
          ▼
AI agent generates reply
          │
          ▼  overlay back to caller's node
~/.pilot/inbox/ on calling node
          │
          ▼
pilotctl (or shell function) prints reply
The responder binary reads ~/.pilot/endpoints.yaml to know which local service handles each command name:
# ~/.pilot/endpoints.yaml
endpoints:
  - name: stockmarket
    url: http://localhost:8100/summaries/stockmarket
  - name: polymarket
    url: http://localhost:8100/summaries/polymarket
  - name: claw-audit
    url: http://localhost:8300/audit
  - name: ai
    url: http://localhost:9100/chat
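The command-to-URL step of that dispatch can be sketched in a few lines. This is an illustrative reconstruction, not the responder's actual code: the endpoint table is inlined as a dict rather than loaded from YAML, and `dispatch_url` is a hypothetical name; only the endpoint URLs and the message query parameter come from the documentation above.

```python
from urllib.parse import urlencode

# Mirrors ~/.pilot/endpoints.yaml (the real responder loads this from YAML;
# inlined here as a plain dict for illustration).
ENDPOINTS = {
    "stockmarket": "http://localhost:8100/summaries/stockmarket",
    "polymarket": "http://localhost:8100/summaries/polymarket",
    "claw-audit": "http://localhost:8300/audit",
    "ai": "http://localhost:9100/chat",
}

def dispatch_url(command: str, body: str) -> str:
    # Resolve a scriptorium command name to the local HTTP request URL,
    # forwarding the message body as the `message` query parameter.
    base = ENDPOINTS.get(command)
    if base is None:
        raise KeyError(f"no endpoint registered for command {command!r}")
    return f"{base}?{urlencode({'message': body})}"
```

For example, dispatching a claw-audit request produces a URL of the same shape as the one in the diagram: http://localhost:8300/audit?message=...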
scriptorium (low-level dispatcher)
The scriptorium command is the low-level dispatcher that sends a named command to any node and prints the ACK. The higher-level commands (pilotctl ai, pilotctl clawdit) use it internally but also poll the inbox and print the reply automatically.
pilotctl scriptorium <command> <body> [--node <address>]
| Argument / Flag | Description |
|---|---|
| <command> | Endpoint name to invoke on the remote node (matches endpoints.yaml) |
| <body> | Message body forwarded to the HTTP service as the message query parameter |
| --node <addr> | Target Pilot node address. Optional if on the service agents network — routing is automatic. |
scriptorium only waits for the transport ACK. To receive the reply you must poll the inbox separately — or use the higher-level commands which do this automatically.
# Low-level: send and wait manually
pilotctl scriptorium claw-audit "check port 22 exposure" --node 0:0000.0000.39A2
pilotctl inbox # check for reply
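The inbox polling that the higher-level commands perform automatically might look like the sketch below. It assumes the daemon writes one file per message under ~/.pilot/inbox/ — a simplification for illustration; `wait_for_reply` is a hypothetical name, and the 120s default mirrors the --timeout default documented above.

```python
import time
from pathlib import Path

def wait_for_reply(inbox: str, since: float,
                   timeout: float = 120.0, poll_interval: float = 2.0) -> str:
    # Poll an inbox directory until a message newer than `since`
    # (a Unix timestamp) arrives, or the timeout elapses.
    deadline = time.monotonic() + timeout
    while True:
        for entry in sorted(Path(inbox).iterdir()):
            if entry.is_file() and entry.stat().st_mtime > since:
                return entry.read_text()
        if time.monotonic() > deadline:
            raise TimeoutError(f"no reply in {inbox} within {timeout}s")
        time.sleep(poll_interval)
```

In practice you would record `since = time.time()` just before sending the scriptorium command, so only messages that arrive afterwards are treated as the reply.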
Building your own agent
The service-agents/ directory in the Pilot Protocol repository contains a scaffold and examples you can copy.
1. Scaffold a new agent
cp -r service-agents/template my-agent
cd my-agent
The template includes:
- start.sh — creates a virtualenv, installs deps, starts the FastAPI server
- requirements.txt — Python dependencies
- config.yaml — agent name, port, and endpoint path
- api/server.py — FastAPI app (chat or stateless audit endpoint)
- agent/gemini_agent.py — Gemini AI agent base class
- agent/prompts.py — system prompt
- agent/tools.py — tool definitions
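The template's api/server.py is a FastAPI app; as a rough sketch of the shape such a server takes, here is a stand-in using only the Python standard library. The echo logic replaces the real Gemini-backed agent, and all names here are illustrative, not taken from the template:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

def answer(message: str) -> str:
    # Stand-in for the Gemini agent call in the real template.
    return f"echo: {message}"

class ChatHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        url = urlparse(self.path)
        if url.path != "/chat":
            self.send_error(404)
            return
        # The responder forwards the body as the `message` query parameter.
        message = parse_qs(url.query).get("message", [""])[0]
        body = json.dumps({"reply": answer(message)}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, fmt, *args):
        pass  # silence per-request logging

# To serve on the port configured in config.yaml, e.g.:
# HTTPServer(("127.0.0.1", 8400), ChatHandler).serve_forever()
```

The essential contract is small: accept the message query parameter on the configured path and return the reply in the response body.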
2. Edit the system prompt and tools
# agent/prompts.py
SYSTEM_PROMPT = """
You are MyAgent, a specialized assistant that...
"""
3. Register the endpoint
Add an entry to ~/.pilot/endpoints.yaml on the node where the agent runs:
endpoints:
  - name: my-agent
    url: http://localhost:8400/chat
4. Start the agent and responder
./start.sh &
responder &
5. Call it from any trusted node
pilotctl scriptorium my-agent "Hello from another node" --node 0:0000.0000.XXXX
For multi-turn conversation support, implement a /sessions API following the pattern in service-agents/examples/claw-audit/api/server.py.
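The core of such a sessions API can be sketched as an in-memory store that keys conversation history by a generated session id. This is an illustrative sketch only — the actual pattern lives in the example file above, and `SessionStore` and its methods are hypothetical names:

```python
import uuid

class SessionStore:
    # Minimal in-memory multi-turn session store (illustration only).
    def __init__(self):
        self._history: dict[str, list[tuple[str, str]]] = {}

    def create(self) -> str:
        # POST /sessions would call this and return the new id.
        session_id = uuid.uuid4().hex
        self._history[session_id] = []
        return session_id

    def ask(self, session_id: str, message: str) -> str:
        history = self._history[session_id]  # KeyError for unknown ids
        # Stand-in for the agent call; a real agent would receive
        # the prior turns as conversation context.
        reply = f"echo ({len(history)} prior turns): {message}"
        history.append((message, reply))
        return reply
```

A production store would also expire idle sessions and cap history length, since every turn grows the context passed to the agent.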