Agent Swarms & Orchestration

One agent can manage a swarm of others — creating workers, setting spending rules, and approving delegated requests — all programmatically via MCP or CLI. This page explains the model precisely, including what it does not yet do.

The hard questions, answered directly

Does a child agent's spend count against the parent's cap?

No. All agents — orchestrator and workers — draw from the same owner-level dollar balance. Each agent has its own monthly_cap that limits how much it can spend, but those caps are independent sibling limits, not a nested budget hierarchy. The orchestrator's cap has no bearing on what a worker can spend, and vice versa.

Concretely: if you create an orchestrator with monthly_cap: $20 and three workers each capped at $10, the theoretical ceiling is $50 — all drawn from the same owner pool. The orchestrator cannot ring-fence a sub-budget that workers spend from.
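
Using the CLI flags from the reference below, that setup looks like:

# Four independent caps, one shared owner pool: ceiling = $20 + 3 × $10 = $50
handler agents create "Orchestrator" --monthly 20
handler agents create "Worker-1"     --monthly 10
handler agents create "Worker-2"     --monthly 10
handler agents create "Worker-3"     --monthly 10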

Can I scope permissions to a sub-task and automatically revoke them?

Partially. You can create a purpose-built agent for a sub-task, give it tight caps, and then deactivate it programmatically with handler_deactivate_agent when the task completes. That's the supported revocation pattern (see the Sub-task scoping pattern section below).

What Handler does not currently have: time-to-live (TTL) on agent keys, event-triggered revocation, or per-call scope restrictions (e.g., "this agent can only call handler_research on this specific task"). Fine-grained, ephemeral scoping is on the roadmap.

Can a sub-orchestrator create grandchild agents?

Yes, if the owner grants it agents:create. The grandchild draws from the same owner pool and is subject to the same rules. There is no depth limit, but all agents are peers in the owner account — there is no true hierarchy with inherited caps.

Can the orchestrator approve any spend decision?

Only delegated-tier holds. The owner can grant approvals:decide:all to remove this restriction, but that is an explicit full-trust grant that requires an owner key to set up. Holds on services pinned with require_owner: true are never delegatable, regardless of scope.
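
Granting full trust uses the same delegation command as any other scope (owner key required):

handler agents delegate <orchestrator-id> approvals:decide:all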

The Model

Handler supports two types of keys:

- Owner keys (sk-owner-*): created via handler login --owner. Only owner keys can grant scopes, pin services to owner_required, or set delegation thresholds.
- Agent keys: per-agent keys, shown once at creation. Orchestrators and workers use these; what each can do is limited to the scopes and spending rules the owner has granted.

The owner always retains ultimate control: scope grants, owner-required approval pins, and delegation thresholds can only be set by an owner key.

Three-Tier Approval Routing

When an agent makes a call that needs approval, it is routed to one of three tiers:

| Tier | Condition | Who decides |
| --- | --- | --- |
| auto | Cost < auto_approve_below | Instant — no approval needed |
| delegated | Cost < delegate_approve_below and no owner pin | Orchestrator agent via MCP or CLI |
| owner_required | All other holds, or pinned service | Owner via Telegram / WhatsApp / dashboard |

The Telegram/WhatsApp flow is unchanged for owner_required holds. The orchestrator gains visibility into all tiers (for monitoring) but can only act on delegated ones.

Autonomous Bootstrap

If you're giving an AI orchestrator access to your repo and asking it to set up a whole company's agent infrastructure, here's what it can do autonomously — and what still requires your action.

What the orchestrator can do

- Create worker agents and receive their keys (agents:create)
- Update spending rules and deactivate workers when tasks complete (agents:update, agents:deactivate)
- Approve or reject delegated-tier holds (approvals:decide)
- Monitor balance, spend, activity, and pending approvals across the swarm (status:read, spend:read, activity:read, approvals:read)

What requires owner action

- Granting scopes to any agent (handler agents delegate)
- Connecting OAuth services (handler connect)
- Pinning services to owner_required (handler agents pin ... --owner-only)
- Setting delegation thresholds (handler agents set-delegate)

Setup proposal flow

When an orchestrator creates an agent, it can declare what that agent needs:

handler_create_agent({
  name: "Worker-Research",
  monthly_cap: 50,
  delegate_approve_below: 5,
  requested_scopes: ["agents:read"],
  required_services: ["github", "google_search"]
})

The response includes a setup_required block with exact CLI commands for you to run. The agent card in your dashboard will show a "Setup needed" badge until you complete each step.
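
The field names below are an assumption for illustration; the documented contract is simply that the response lists the exact CLI commands to run:

// Illustrative sketch: the setup_required field names are assumed, not a documented schema
{
  "agent_id": "<agent-id>",
  "key": "<agent-key>",
  "setup_required": {
    "grant_scopes": [
      "handler agents delegate <agent-id> agents:read"
    ],
    "connect_services": [
      "handler connect --agent <agent-id> github",
      "handler connect --agent <agent-id> google_search"
    ]
  }
}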

Owner handoff checklist

After the orchestrator runs, you'll see a checklist like this in the response:

# Grant scopes:
handler agents delegate <agent-id> agents:read

# Connect services:
handler connect --agent <agent-id> github
handler connect --agent <agent-id> google_search

Or use handler agents setup <agent-id> to walk through all pending actions interactively.

Quick Setup

1. Get an owner key

npm install -g handlerdev
handler login --owner

This opens a browser to your dashboard, creates an sk-owner-* key, and stores it in ~/.handler/config.json.

2. Create an orchestrator agent

handler agents create "Orchestrator"

Save the printed key — it will not be shown again.
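
A common handoff is to inject the key into the orchestrator's process environment; the variable name is illustrative, not a Handler convention:

export ORCHESTRATOR_KEY="<orchestrator-key>"   # never commit keys; inject at runtime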

3. Grant management scopes

handler agents delegate <orchestrator-id> \
  agents:read agents:create agents:update \
  approvals:read approvals:decide \
  activity:read spend:read status:read

4. Set delegation threshold on worker agents

# Holds below $5 route to orchestrator instead of owner
handler agents set-delegate <worker-id> --threshold 5

5. Connect via MCP

Orchestrators add two MCP entries — one for doing work, one for managing the swarm:

{
  "mcpServers": {
    "handler": {
      "url": "https://mcp.usehandler.dev/mcp",
      "headers": { "Authorization": "Bearer <orchestrator-key>" }
    },
    "handler-manage": {
      "url": "https://mcp.usehandler.dev/manage",
      "headers": { "Authorization": "Bearer <orchestrator-key>" }
    }
  }
}

Regular worker agents only connect to /mcp. They never see management tools in their tool list.
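
A worker's MCP config is the same minus the management entry:

{
  "mcpServers": {
    "handler": {
      "url": "https://mcp.usehandler.dev/mcp",
      "headers": { "Authorization": "Bearer <worker-key>" }
    }
  }
}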

Management Scopes

| Scope | Capability |
| --- | --- |
| status:read | Owner balance, monthly spend, agent count, pending approvals |
| agents:read | List agents with rules, scopes, and monthly spend |
| agents:create | Create new agents (key returned once) |
| agents:update | Update spending rules |
| agents:deactivate | Deactivate / reactivate agents |
| approvals:read | View all pending approvals across all tiers |
| approvals:decide | Approve / reject delegated tier only |
| approvals:decide:all | Approve / reject any tier (explicit full-trust grant) |
| services:read | View connected services and per-agent toggles |
| services:toggle | Enable / disable services per agent |
| activity:read | Recent audit log entries |
| spend:read | Spend breakdown by agent / service / period |

Admin / owner only (never delegatable): granting member capabilities, pinning services to owner_required, billing, suspending the org, and OAuth connections.

MCP Management Tools

All 30 management tools are always visible on the /manage endpoint. Capabilities are checked at call time (fresh DB lookup) — a missing capability returns a structured error with instructions on how to request it. Capability changes granted by the admin/owner take effect immediately on the orchestrator’s next tool call — no reconnect needed.
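
The payload shape below is an assumption for illustration; the documented behavior is only that the error is structured and tells the orchestrator what to request:

// Illustrative sketch: field names are assumed, not a documented schema
{
  "status": "error",
  "reason": "missing_capability",
  "capability": "agents:create",
  "instructions": "Ask the owner to run: handler agents delegate <agent-id> agents:create"
}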

Observability

handler_manage_status()
// → { balance, monthly_spend, agent_count, pending_approvals: { total, delegated, owner_required } }

handler_agents({ include_inactive: false })
// → agents[], each with rules, scopes, delegate_threshold, monthly_spend

handler_activity({ agent_id?, limit: 50, outcome?, since? })
// → audit log entries with agent_name, tool, cost, outcome, timestamp

handler_spend_summary({ period: "30d", agent_id? })
// → { total_spent, by_agent[], by_service[] }
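
These compose naturally into a polling loop. A minimal sketch; notify() and reviewDelegatedHolds() are illustrative helpers, not Handler tools:

async function monitorSwarm() {
  // Owner-level snapshot: balance, monthly spend, pending approvals per tier
  const status = await handler_manage_status();

  // owner_required holds are visible to the orchestrator but not actionable;
  // surface them to a human instead of trying to decide them.
  if (status.pending_approvals.owner_required > 0) {
    notify(`${status.pending_approvals.owner_required} holds awaiting the owner`);
  }

  // Delegated holds are the orchestrator's to decide (see next section)
  if (status.pending_approvals.delegated > 0) {
    await reviewDelegatedHolds();
  }
}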

Agent Management

handler_create_agent({ name: "Worker-2", auto_approve_below: 2, monthly_cap: 50 })
// → { agent_id, key }  ← key shown once

handler_update_rules({ agent_id, auto_approve_below: 3, monthly_cap: 100 })

handler_deactivate_agent({ agent_id, active: false })

Approval Decisions

handler_approvals({ tier: "delegated", limit: 20 })
// → approvals[], all tiers visible (act only on delegated)

handler_decide({ approval_id, decision: "approved" })
handler_decide({ approval_id, decision: "rejected", reason: "Budget exceeded" })

Admin / owner

handler_set_member_capability({ member_id, capability: "can_approve", value: true })
// Grant or revoke a management capability for a member

handler_pin_service({ agent_id, service: "gmail", require_owner: true })
// Gmail approvals always require admin/owner approval

handler_update_approval_routing({ agent_id, channels: { whatsapp: true, telegram: true } })
// Where approval holds for this agent get sent

Platform examples

How swarm governance plays out in practice across the major "agents as a workforce" platforms.

Paperclip — org-chart structure

Paperclip assigns roles like an org chart (CEO, Sales, Dev, Marketing). Each role gets its own Handler key with governance tuned to that role. The Paperclip CEO agent can hold a management-scoped key to approve delegated holds from its reports.

# Create one Handler agent per Paperclip role
handler agents create "CEO-Agent"       --auto-approve 10 --monthly 200
handler agents create "Sales-Agent"     --auto-approve 2  --monthly 50
handler agents create "Dev-Agent"       --auto-approve 5  --monthly 75
handler agents create "Research-Agent"  --auto-approve 5  --monthly 100

# Grant the CEO agent delegation rights over Sales and Dev
handler agents delegate <ceo-id> agents:read approvals:read approvals:decide
handler agents set-delegate <sales-id> --threshold 5   # holds below $5 → CEO
handler agents set-delegate <dev-id>   --threshold 5   # holds below $5 → CEO

# Pin outbound email to always require human approval (not CEO-delegatable)
handler agents pin <sales-id> gmail --owner-only

In each Paperclip agent's config, set the MCP connection to the matching key. The CEO agent connects to both /mcp (for doing work) and /manage (for approvals).

OpenClaw — marketplace-deployed agents

OpenClaw agents are published by builders and activated by end users. The governance model flips: the user owns the Handler account and sets the rules; the agent builder declares Handler as a dependency. The orchestrator pattern applies when an OpenClaw "workflow" agent spawns sub-agents for a task.

# As an OpenClaw agent builder — declare Handler as a dependency in SKILL.md
# (Handler's SKILL.md manifest covers this — no code needed)

# At runtime, an OpenClaw workflow agent that needs sub-agents:
handler_create_agent({
  name: "OClaw-Enrichment-Worker",
  monthly_cap: 10,
  required_services: ["clearbit", "apollo"]
})
// → returns key; pass to the sub-agent container via env var

// When sub-task completes:
handler_deactivate_agent({ agent_id, active: false })

The end user's governance rules apply to all agents — orchestrator and workers — since they share the same owner account. The user sets the ceiling; the orchestrator allocates within it.

Hermes — background workforce

Hermes runs agents as persistent background workers with queued tasks. Each worker is a long-running process that pulls work from a queue, executes it, and reports back. The orchestrator manages the pool: spawning workers when queue depth increases, deactivating them when idle.

// Orchestrator bootstraps the worker pool
async function ensureWorkerPool(targetSize: number) {
  const { agents } = await handler_agents({ include_inactive: false });
  const workers = agents.filter(a => a.name.startsWith("hermes-worker-"));

  for (let i = workers.length; i < targetSize; i++) {
    const { agent_id, key } = await handler_create_agent({
      name: `hermes-worker-${Date.now()}`,
      monthly_cap: 20,
      auto_approve_below: 0.05,  // research/reads auto-execute
      hard_cap_per_call: 0.50,   // no single call over $0.50
    });
    await spawnWorkerProcess({ key, agent_id });
  }
}

// When winding down a worker cleanly
async function retireWorker(agent_id: string) {
  await drainWorkerQueue(agent_id);
  await handler_deactivate_agent({ agent_id, active: false });
}

Because all workers draw from the same owner pool, set the per-worker monthly_cap conservatively: total_budget / max_workers. This ensures one runaway worker can't exhaust the pool.
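
In code, that sizing rule is a one-liner (the budget figures are illustrative):

const TOTAL_BUDGET = 200;  // owner-level budget you are willing to put at risk
const MAX_WORKERS = 10;
const perWorkerCap = TOTAL_BUDGET / MAX_WORKERS;  // $20, as in monthly_cap above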

CLI Reference

# Auth
handler login --owner              # Owner key (browser flow)
handler login                      # Agent key (OAuth)

# Agents
handler agents list [--all]
handler agents create "Name" [--auto-approve N] [--monthly N] [--delegate N]
handler agents rules <id> [--auto-approve N] [--monthly N] [--cap N]
handler agents delegate <id> <scope...>          # owner key required
handler agents set-delegate <id> --threshold N    # owner key required
handler agents pin <id> <service> --owner-only    # owner key required

# Approvals
handler approvals list [--tier delegated|owner_required] [--agent <id>]
handler approvals approve <id>
handler approvals reject <id> [-r "reason"]

# Observability
handler activity [--agent <id>] [--limit 20] [--outcome blocked|approved|held]
handler spend [--period 7d|30d|all] [--agent <id>]

Sub-task scoping pattern

Until TTL-based revocation ships, the recommended pattern for scoped sub-tasks is: create → run → deactivate.

// 1. Orchestrator creates a short-lived sub-agent for a specific task
const { agent_id, key } = await handler_create_agent({
  name: "Subtask-Enrich-Q2-Leads",
  monthly_cap: 5,           // $5 hard ceiling for this task
  auto_approve_below: 0.10,
  required_services: ["clearbit"]
});

// 2. Pass key to the sub-task runner (via env var, secure channel, etc.)
// ... sub-agent does its work ...

// 3. Orchestrator revokes access when done
await handler_deactivate_agent({ agent_id, active: false });

The deactivated agent's key immediately returns {"status": "blocked", "reason": "agent_inactive"} on any call. Reactivation requires an owner key or an agent with agents:deactivate scope.

Spend already incurred is not reversed on deactivation. Deactivation stops future calls — it does not refund charges from calls that already executed.

Orchestrator crash leaves sub-agent keys active. If the orchestrator crashes between creating a sub-agent and deactivating it, the key stays live indefinitely. There is no TTL-based auto-revocation yet. Mitigation: set a conservative monthly_cap on every sub-agent (e.g., total_budget / max_concurrent_workers) so an orphaned key cannot accumulate significant charges before you notice.
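
Until that ships, an orchestrator can also sweep for orphans on startup. A sketch, assuming you keep your own registry of agents still in use (liveAgentIds) and the Subtask- naming convention from the example above:

// Startup sweep: deactivate sub-agents left over from a previous crashed run.
// liveAgentIds is an assumed application-level registry, not a Handler feature.
async function sweepOrphans(liveAgentIds: Set<string>) {
  const { agents } = await handler_agents({ include_inactive: false });
  for (const agent of agents) {
    // agent_id and name follow the shapes used elsewhere on this page
    if (agent.name.startsWith("Subtask-") && !liveAgentIds.has(agent.agent_id)) {
      await handler_deactivate_agent({ agent_id: agent.agent_id, active: false });
    }
  }
}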

Security Model