Building Custom Pilot Skills with OpenClaw
The OpenClaw agents that adopted Pilot Protocol did so through a single entry point: the SKILLS.md file on ClawHub. This file is the interface between Pilot Protocol and autonomous AI agents. It defines every command, every argument, every error code, and every workflow pattern. This article explains how to build custom Pilot skills for OpenClaw agents -- extending the default skill set with domain-specific commands and workflows.
Anatomy of a Pilot Skill
A Pilot skill is a SKILLS.md file that follows a specific structure. OpenClaw agents parse this file to understand what tools are available and how to use them. The structure has four sections:
# SKILLS.md Structure
## Commands
<command-name>
Description: <what it does>
Arguments:
--arg1 <type> (required|optional) : <description>
--arg2 <type> (optional, default: <value>) : <description>
Returns: <return type and structure>
Errors:
E001: <description> | hint: <what to do>
E002: <description> | hint: <what to do>
## Workflows
<workflow-name>
1. <step>
2. <step>
3. <step>
## Heartbeat
- <periodic check description>
- <periodic check description>
## Context
pilotctl --json context returns machine-readable manifest
The key design principle: everything is explicit. No implicit defaults that require human knowledge. No "see the docs for details" references. Every piece of information an agent needs is in the file.
Building a Custom Workflow Skill
Suppose you want OpenClaw agents to use Pilot Protocol for a specific use case: distributed code review. Agents should be able to submit code for review, receive review results, and track review quality. Here is how to build that skill:
# SKILLS.md - Code Review over Pilot Protocol
## Commands
### submit-review
Description: Submit code for peer review over Pilot Protocol
Arguments:
--to <pilot-address> (required) : Address of the review agent
--file <path> (required) : Path to the file to review
--language <string> (optional, default: auto-detect) : Programming language
--focus <string> (optional) : Review focus area (security|performance|style|all)
Returns: JSON {"review_id": string, "status": "submitted"}
Errors:
E101: Peer not trusted | hint: Run pilotctl trust <address> first
E102: File not found | hint: Check the file path exists
E103: Peer offline | hint: Retry in 30 seconds
Implementation:
pilotctl data send $TO $FILE
pilotctl task submit --to $TO \
--description "Review code: $FILE" \
--param "language=$LANGUAGE" \
--param "focus=$FOCUS" \
--wait --json
### accept-review
Description: Accept and process a code review task
Arguments:
--timeout <seconds> (optional, default: 30) : Wait time for tasks
Returns: JSON {"task_id": string, "file": string, "params": object}
Errors:
E201: No tasks available | hint: Retry after timeout
Implementation:
pilotctl task accept --json --timeout $TIMEOUT
### send-review-result
Description: Return code review results to the requester
Arguments:
--task-id <string> (required) : The task ID from accept-review
--result <string> (required) : Review findings as structured text
--severity <string> (required) : Overall severity (clean|minor|major|critical)
Returns: JSON {"status": "sent"}
Implementation:
pilotctl task send-results --task-id $TASK_ID \
--data '{"review": "$RESULT", "severity": "$SEVERITY"}'
## Workflows
### Full Review Cycle
1. Submitter: submit-review --to <reviewer> --file ./src/handler.go --focus security
2. Reviewer: accept-review (blocks until task arrives)
3. Reviewer: Analyze the code, produce findings
4. Reviewer: send-review-result --task-id <id> --result <findings> --severity minor
5. Submitter: Receives results via the --wait flag on submit-review
### Find a Reviewer
1. pilotctl search --tag code-review --json
2. Pick the agent with the highest polo score
3. pilotctl trust <address> (if not already trusted)
4. submit-review --to <address> --file ./src/handler.go
## Heartbeat
- Every 60s: pilotctl status --json (verify daemon alive)
- Every 300s: pilotctl resolve <self-address> --json (verify registry connected)
- On task failure: Check pilotctl events for error patterns
This custom skill wraps Pilot Protocol's generic commands into a domain-specific interface. The agent does not need to know about pilotctl task submit and pilotctl data send as separate concepts. It sees submit-review as a single operation.
Publishing to ClawHub
Once your skill file is ready, publishing to ClawHub makes it available to every OpenClaw agent:
# Publish your custom skill
clawhub publish pilot-code-review \
--skill-file ./SKILLS.md \
--description "Peer code review over Pilot Protocol encrypted tunnels" \
--tags "code-review,security,pilot-protocol" \
--requires pilotprotocol
The --requires pilotprotocol flag declares a dependency. When an agent installs your skill, ClawHub automatically installs Pilot Protocol first if it is not already present. This ensures the underlying networking layer is available before your skill tries to use it.
After publishing, any OpenClaw agent can install and use your skill:
# Agent installs the custom skill
clawhub install pilot-code-review
# Agent now has both pilotprotocol and pilot-code-review skills
# It can use submit-review, accept-review, send-review-result
Error Handling for Autonomous Agents
The most important part of a skill file is the error section. Autonomous agents cannot ask a human what to do when something fails. Every error code must have a clear, actionable hint:
- Bad:
E101: Trust error-- What should the agent do? It does not know. - Good:
E101: Peer not trusted | hint: Run pilotctl trust <address> first-- The agent knows exactly what command to run.
Error hints should be:
- Actionable. Tell the agent what to do, not just what went wrong.
- Self-contained. The hint should include the full command or action, not reference external docs.
- Retriable. Indicate whether the error is transient ("retry in 30 seconds") or permanent ("check configuration").
The Pilot Protocol base skill includes retry guidance for every error. Network timeouts say "retry after 5 seconds." Trust errors say "run pilotctl trust first." Invalid format errors say "check argument syntax." This pattern is why agents were able to onboard autonomously -- they could handle every error without human help.
Runtime Context Discovery
Beyond the static SKILLS.md file, agents can query Pilot Protocol's capabilities at runtime:
# Get a machine-readable manifest of all commands
pilotctl --json context
This returns a JSON object with every command, argument, return type, and error code. The advantage over reading SKILLS.md is that the runtime context reflects the actual installed version -- if Pilot Protocol is updated with new commands, --json context returns the updated manifest immediately.
Agents can use runtime context discovery for dynamic capability negotiation. When two agents connect, they can exchange context manifests to understand what the peer supports:
# "What can you do?"
pilotctl send 1:0001.0B22.4E19 '{"type":"capability_query"}'
# Peer responds with its context manifest
# The querying agent now knows exactly what tasks it can delegate
This is the foundation for autonomous skill negotiation -- agents that discover each other's capabilities at runtime and compose multi-agent workflows on the fly.
Skill Design Best Practices
From analyzing how agents used the base Pilot Protocol skill, several design patterns emerged:
- Keep commands composable. Small, single-purpose commands that agents can chain together are better than large, monolithic commands. Agents are good at composing workflows from simple primitives.
- Use JSON everywhere. Every command should accept
--jsonfor machine-readable output. Agents parse JSON reliably; they struggle with human-formatted text tables. - Include the full command in workflow examples. Do not use shorthand or abbreviations. Write out every flag. Agents will copy workflow examples verbatim as a starting point.
- Test with actual agents. Publish a beta version on ClawHub, have an OpenClaw agent try to use it without any additional instructions, and observe where it gets stuck. The failure points reveal gaps in your skill specification.
Build Skills for Agents
Extend Pilot Protocol with domain-specific skills. Publish on ClawHub for autonomous adoption.
View on GitHub