
Multi-Agent Architectures: Subagents & Swarms

Orchestrate multiple AI agents for parallel execution, specialized reviews, and complex multi-step workflows in Claude Code

45-60 min
10 min read
Updated February 11, 2026

Multi-Agent Architectures

Single agents hit a ceiling. When tasks require parallel execution, specialized expertise, or cross-domain coordination, you need multiple agents working together.

Claude Code supports two distinct approaches: Custom Subagents for isolated, specialized tasks and Agent Teams for fully parallel swarm orchestration. With the release of Claude Opus 4.6 — which plans more carefully, sustains agentic tasks for longer, and operates more reliably in larger codebases — multi-agent workflows are more powerful than ever.

Single Agent vs. Multi-Agent
Multi-agent systems divide work across specialized workers

Custom Subagents (The Specialist)

Subagents are specialized Claude instances that run within your current session but with their own isolated context window. They execute a focused task and return a summary to the parent — keeping your main conversation clean.

Why Subagents?

Benefit | Why It Matters
Context Isolation | Each subagent works with only relevant information — no context overload
Tool Restrictions | Limit what each subagent can access for safety and focus
Specialization | Tailored system prompts produce better results for specific domains
Parallel Execution | Launch multiple subagents simultaneously for independent tasks

Built-in Subagents

Claude Code ships with subagents you can use immediately:

Subagent | Purpose | Available Tools
Plan | Designs implementation strategies without editing code | Read, Grep, Glob
Explore | Fast, read-only codebase search and analysis | Read, Grep, Glob, LS
General-purpose | Research, multi-step tasks, complex searches | All tools
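
Built-in subagents can generally be invoked by name, the same way as custom ones. Illustrative prompts (exact phrasing is flexible):

Bash
> Use the Explore agent to find where rate limiting is implemented
> Use the Plan agent to outline an approach for adding OAuth support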

Creating Custom Subagents

Define subagents as markdown files. Claude Code looks in two locations:

  • Global: ~/.claude/agents/ — available in every project
  • Project-local: .claude/agents/ — scoped to one codebase
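
A quick way to set up both locations (a minimal sketch):

Bash
# Global agents, available in every project
mkdir -p ~/.claude/agents
# Project-local agents, scoped to this repository
mkdir -p .claude/agents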

Agent File Structure

Each .md file uses YAML frontmatter to define the agent's identity:

Markdown
---
name: security-audit
description: Reviews code for vulnerabilities before committing.
model: sonnet
tools:
- Read
- Grep
- Glob
---
You are a Senior Application Security Engineer. When reviewing code:
1. Check for SQL injection, XSS, and command injection risks
2. Flag hardcoded credentials or API keys
3. Verify proper input validation and sanitization
4. Identify improper error handling that leaks information
Report issues with severity levels: **High**, **Medium**, **Low**.
Include the file path and line number for each finding.

Frontmatter Reference

Field | Required | Description
name | Yes | Identifier used to invoke the agent
description | Yes | When and why to use this agent (shown in selection UI)
model | No | Model override: sonnet, opus, or haiku (inherits parent by default)
tools | No | List of allowed tools (restricts what the agent can do)
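
Since only name and description are required, a minimal agent file can omit the rest and inherit the parent's model and tool access. A sketch (changelog-writer is just a hypothetical example):

Markdown
---
name: changelog-writer
description: Summarizes recent changes into a user-facing changelog entry.
---
You are a release manager. Read the relevant changes and write a short,
user-facing changelog entry grouped by features, fixes, and breaking changes.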

Practical Examples

Documentation Writer — generates docs without touching code:

Markdown
---
name: doc-writer
description: Generates documentation by reading code. Read-only.
model: haiku
tools:
- Read
- Grep
- Glob
---
You are a technical writer. Read the specified code and generate:
1. A concise summary of what the module does
2. Public API documentation with parameters and return types
3. Usage examples
Write in clear, jargon-free language. Target an intermediate developer audience.

Test Strategist — plans tests without running them:

Markdown
---
name: test-strategist
description: Analyzes code and designs comprehensive test plans.
model: sonnet
tools:
- Read
- Grep
- Glob
---
You are a QA engineer. Analyze the specified code and produce:
1. A list of test cases covering happy paths, edge cases, and error scenarios
2. Suggested test data and fixtures
3. Integration test boundaries
Do NOT write test code. Focus on what to test and why.

Invoking Subagents

Once defined, use subagents naturally in conversation:

Bash
> Use the security-audit agent to review the changes in src/auth/
> Have the doc-writer agent document the API in src/lib/api.ts
> Ask the test-strategist to plan tests for the checkout flow

Claude Code will spawn the subagent, run it with the specified tools and context, and return a summary to your main session.
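
Because each subagent gets its own context window, you can also launch several at once for independent tasks. An illustrative prompt using the agents defined above:

Bash
> In parallel: run the security-audit agent on src/auth/, the doc-writer
> agent on src/lib/api.ts, and the test-strategist on the checkout flow.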


Agent Teams (The Swarm)

Agent Teams, shipped as a research preview alongside Claude Opus 4.6, go far beyond subagents. Instead of a parent-child relationship, teams are fully independent Claude Code instances that work in parallel, message each other directly, and coordinate through a shared task list with dependency tracking.

Enabling Agent Teams

To enable Agent Teams persistently, set the environment variable in your settings.json:

JSON
{
  "env": {
    "CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS": "1"
  }
}

Or set it in your shell for a single session:

Bash
export CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS=1

Architecture

An agent team has four components:

Agent Team Architecture
Lead coordinates teammates via shared task list and mailbox
Component | Role
Team Lead | The main Claude Code session. Creates the team, spawns teammates, coordinates work, and synthesizes results.
Teammates | Separate Claude Code instances, each with their own context window. Work on assigned tasks independently.
Task List | Shared queue of work items with dependency tracking. Teammates claim and complete tasks. File locking prevents race conditions.
Mailbox | Messaging system for direct communication between any agents — not just back to the lead.

How Teams Differ from Subagents

Feature | Subagents | Agent Teams
Context | Own context window; results return to caller | Own context window; fully independent
Communication | Report results back to parent only | Teammates message each other directly
Coordination | Parent manages all work | Shared task list with self-coordination
Token cost | Lower — results summarized back | Higher — each teammate is a full Claude instance
Best for | Focused tasks where only the result matters | Complex work requiring discussion and collaboration

Starting a Team

Tell Claude to create a team and describe the structure you want in natural language. Claude spawns teammates and coordinates work based on your prompt.

Bash
I'm designing a CLI tool that helps developers track TODO comments
across their codebase. Create an agent team to explore this from
different angles: one teammate on UX, one on technical architecture,
one playing devil's advocate.

You can also specify models for teammates:

Bash
Create a team with 4 teammates to refactor these modules in parallel.
Use Sonnet for each teammate.

The Team Workflow

  1. Initialize: The Lead receives your request, spawns teammates with specific roles, and creates a shared task list.

  2. Decompose: The Lead breaks the request into a dependency graph of tasks. Blocked tasks won't become available until their dependencies complete.

  3. Execute: Teammates self-claim tasks from the queue using file-locked claiming (prevents race conditions). When a teammate finishes a task, dependent tasks automatically unblock.

  4. Communicate: Teammates send messages directly to each other — the Backend agent tells the Frontend agent the API contract is ready, without routing through the Lead.

  5. Converge: The Lead collects results, synthesizes findings, and presents a unified summary.
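
Conceptually, the shared task list is a dependency graph. The sketch below is purely illustrative (not the actual format Claude Code stores on disk), but it shows how blocked tasks relate to their dependencies:

JSON
{
  "tasks": [
    { "id": 1, "title": "Define API contract", "status": "completed", "dependsOn": [] },
    { "id": 2, "title": "Implement API endpoints", "status": "in_progress", "dependsOn": [1] },
    { "id": 3, "title": "Build notification UI", "status": "in_progress", "dependsOn": [1] },
    { "id": 4, "title": "Write E2E tests", "status": "blocked", "dependsOn": [2, 3] }
  ]
}

Task 4 stays blocked until tasks 2 and 3 are both complete; once they are, it unblocks and an idle teammate can claim it.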

Display Modes

Mode | What You See | How You Interact | Requirements
In-Process | All teammates run inside your main terminal | Shift+Up/Down to select teammates, type to message | Any terminal (default)
Split Panes | Each teammate gets its own visible pane | Click into a pane to interact directly | tmux or iTerm2

The default is "auto" — split panes if you're inside tmux, in-process otherwise. Override in settings.json:

JSON
{
  "teammateMode": "in-process"
}

Or per-session:

Bash
claude --teammate-mode in-process

Delegate Mode

By default, the Lead sometimes starts implementing tasks itself instead of waiting for teammates. Delegate mode restricts the Lead to coordination-only: spawning, messaging, shutting down teammates, and managing tasks.

Press Shift+Tab to toggle delegate mode after starting a team.

This is useful when you want the Lead to focus entirely on orchestration — breaking down work, assigning tasks, and synthesizing results — without touching code directly.
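
You can also state the constraint in the initial prompt so the Lead starts out in a coordination-only posture. One possible phrasing:

Bash
Create a team of three teammates to build the notifications feature.
Act as coordinator only: break down the work, assign tasks, and review
results, but do not edit any files yourself.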

Requiring Plan Approval

For risky tasks, you can require teammates to plan before implementing. The teammate works in read-only plan mode until the Lead approves:

Bash
Spawn an architect teammate to refactor the authentication module.
Require plan approval before they make any changes.

When a teammate finishes planning, it sends a plan approval request to the Lead. The Lead reviews and either approves (teammate begins implementation) or rejects with feedback (teammate revises and resubmits).

Influence the Lead's judgment with criteria:

Bash
Only approve plans that include test coverage.
Reject plans that modify the database schema.

Talking to Teammates Directly

Each teammate is a full, independent Claude Code session. You can message any teammate without going through the Lead:

  • In-process mode: Shift+Up/Down to select, type to message. Press Enter to view a session, Escape to interrupt. Press Ctrl+T to toggle the task list.
  • Split-pane mode: Click into a teammate's pane to interact directly.

Quality Gates with Hooks

Use hooks to enforce rules when teammates finish work:

  • TeammateIdle: Runs when a teammate is about to go idle. Exit with code 2 to send feedback and keep the teammate working.
  • TaskCompleted: Runs when a task is being marked complete. Exit with code 2 to prevent completion and send feedback.
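
For example, a TaskCompleted hook can refuse to let a task be marked complete while the test suite fails. A minimal sketch, assuming a Node project where `npm test` is the gate and the script is registered under the TaskCompleted event in your hooks settings (the stderr-as-feedback convention follows how other Claude Code hooks report blocking errors):

Bash
#!/usr/bin/env bash
# .claude/hooks/require-passing-tests.sh  (hypothetical path)
# Exit code 2 prevents the task from being marked complete;
# the message on stderr is sent back to the teammate as feedback.
if ! npm test --silent; then
  echo "Tests are failing. Fix them before marking this task complete." >&2
  exit 2
fi
exit 0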

Cleaning Up

When you're done, ask the Lead to clean up:

Bash
Clean up the team

This removes shared team resources. The Lead checks for active teammates and fails if any are still running — shut them down first:

Bash
Ask the researcher teammate to shut down

Known Limitations

Be aware of these current constraints:

  • No session resumption: /resume and /rewind don't restore in-process teammates. After resuming, tell the Lead to spawn new teammates.
  • Task status can lag: Teammates sometimes fail to mark tasks as completed, blocking dependents. Check manually if a task appears stuck.
  • One team per session: Clean up the current team before starting a new one.
  • No nested teams: Teammates cannot spawn their own teams. Only the Lead manages the team.
  • Lead is fixed: The session that creates the team is the Lead for its lifetime.
  • Permissions set at spawn: All teammates start with the Lead's permission mode. You can change individual modes after spawning.
  • Split panes require tmux or iTerm2: Not supported in VS Code's integrated terminal, Windows Terminal, or Ghostty.

When to Use Which

Agent teams add coordination overhead and use significantly more tokens than a single session. They work best when teammates can operate independently. For sequential tasks, same-file edits, or work with many dependencies, a single session or subagents are more effective.

Which Architecture?
Pick the simplest approach that works

Quick Reference

Scenario | Best Approach | Why
Security review of a PR | Custom Subagent | Single focused task, read-only tools, isolated context
Summarize 50 log files | Parallel Subagents | Independent tasks, no communication needed
Full-stack feature build | Agent Team | Frontend/Backend need to coordinate on contracts
Refactor + update tests | Sequential Subagents | Tests depend on refactor completing first
Bug with unclear root cause | Agent Team | Competing hypotheses investigated in parallel, agents challenge each other
Generate docs for a module | Custom Subagent | Single task, read-only, specific output format

Use Case Examples

Parallel Code Review

A single reviewer gravitates toward one type of issue. Split review criteria into independent domains so security, performance, and test coverage all get thorough attention simultaneously:

Bash
Create an agent team to review PR #142. Spawn three reviewers:
- One focused on security implications
- One checking performance impact
- One validating test coverage
Have them each review and report findings.

Each reviewer applies a different lens. The Lead synthesizes findings across all three.

Adversarial Debugging

When the root cause is unclear, a single agent finds one plausible explanation and stops looking. Making teammates explicitly adversarial fights this — each one investigates its own theory while trying to disprove the others':

Bash
Users report the app exits after one message instead of staying
connected. Spawn 5 agent teammates to investigate different hypotheses.
Have them talk to each other to try to disprove each other's theories,
like a scientific debate. Update the findings doc with whatever
consensus emerges.

The theory that survives debate is much more likely to be the actual root cause.

Cross-Layer Feature

Teammates each own a separate layer without stepping on each other:

Bash
Build a user notifications feature. Create a team:
- Backend teammate: API endpoints and database schema
- Frontend teammate: notification bell component and dropdown UI
- Testing teammate: E2E tests for the full flow
Backend should message Frontend when the API contract is ready.
Testing should wait until both are done.

Building Your First Subagent

Let's create a practical subagent step by step.

Example: PR Review Agent

  1. Create the agent file

    Bash
    mkdir -p .claude/agents
  2. Write the agent definition

    Create .claude/agents/pr-reviewer.md:

    Markdown
    ---
    name: pr-reviewer
    description: Reviews staged changes for bugs, security issues, and code quality.
    model: sonnet
    tools:
    - Read
    - Grep
    - Glob
    - Bash
    ---
    You are a senior code reviewer. Review the current staged changes (use `git diff --staged`).
    For each file changed, check:
    ## Security
    - Input validation on user-facing endpoints
    - No hardcoded secrets or credentials
    - Proper authentication/authorization checks
    ## Quality
    - Functions under 50 lines
    - Clear naming conventions
    - No duplicated logic
    - Error handling for edge cases
    ## Output Format
    For each issue found:
    - **File**: path/to/file.ts:42
    - **Severity**: High / Medium / Low
    - **Issue**: Description
    - **Fix**: Suggested resolution
    End with a summary: total issues by severity and an overall assessment.
  3. Use the agent

    Stage your changes and invoke:

    Bash
    > Use the pr-reviewer agent to review my staged changes

    The agent runs in isolation, reads your diff, and returns a structured review.


Best Practices

Designing Subagents

  1. Restrict tools to the minimum needed. A review agent doesn't need Write or Edit. Fewer tools = fewer mistakes.

  2. Use haiku for simple tasks. Documentation generation and summarization don't need the most powerful model. Save cost and latency.

  3. Be specific in the system prompt. Vague instructions produce vague results. Include output format, evaluation criteria, and examples.

  4. Test with real tasks. Run your subagent on actual code before relying on it in workflows.

Designing Agent Teams

  1. Start with research and review. If you're new to agent teams, begin with tasks that don't require writing code — reviewing a PR, researching a library, or investigating a bug. These show the value of parallel exploration without coordination challenges.

  2. Give teammates enough context. Teammates load project context (CLAUDE.md, MCP servers, skills) but don't inherit the Lead's conversation history. Include task-specific details in the spawn prompt.

  3. Size tasks appropriately. Too small and coordination overhead exceeds the benefit. Too large and teammates work too long without check-ins. Aim for 5-6 self-contained tasks per teammate, each producing a clear deliverable.

  4. Avoid file conflicts. Two teammates editing the same file leads to overwrites. Break work so each teammate owns a different set of files.

  5. Monitor and steer. Check in on progress, redirect approaches that aren't working, and synthesize findings as they come in. Letting a team run unattended too long increases wasted effort.

  6. Pre-approve common operations. Teammate permission requests bubble up to the Lead, creating friction. Pre-approve common operations in your permission settings before spawning teammates.
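
For that last point, pre-approval lives in your permission settings. A minimal sketch, assuming a Node project and Claude Code's permission-rule syntax; adjust the rules to your own toolchain:

JSON
{
  "permissions": {
    "allow": [
      "Bash(npm test:*)",
      "Bash(npm run lint:*)",
      "Read(./docs/**)"
    ]
  }
}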

Common Pitfalls

Pitfall | Symptom | Fix
Over-engineering | Team of 5 agents for a task one agent could handle | Start simple — use a single session or subagent first
Tool bloat | Subagent has access to everything | Restrict to minimum required tools
Vague prompts | Agent produces generic, unhelpful output | Add output format, criteria, and examples to system prompt
File conflicts | Teammates overwrite each other's changes | Assign each teammate a different set of files
Lead doing work | Lead implements tasks instead of delegating | Enable delegate mode (Shift+Tab) or tell Lead to wait
Cost explosion | Large bills from team workflows | Use Sonnet/Haiku for teammates, set token budgets, limit team size

What's Next?

Building Agents

Learn the fundamentals of agent architecture and the agent loop

Using Agents Effectively

Master prompting techniques and debugging strategies for reliable agents

Agent Products

Ship production-ready multi-agent systems that users love

MCP Integration

Connect your agents to external tools and services via the Model Context Protocol
