Subagents vs Skills vs Agent Teams: A Field Guide to Claude Code's Multi-Agent Arsenal



Claude Code now ships with three distinct ways to split work across multiple AI brains: Subagents, Skills, and the brand-new Agent Teams (launched with Opus 4.6 on Feb 5, 2026). Each solves a different problem, burns a different number of tokens, and will confuse you in a different way if you mix them up.

Let's untangle them.


The One-Liner Version

| | Subagents | Skills | Agent Teams |
|---|---|---|---|
| What | Isolated worker threads | Reusable instruction packs | Independent parallel sessions |
| Context | Own window | Inline (shared) | Fully separate |
| Communication | Report back to caller | N/A | Peer-to-peer messaging |
| Token cost | Medium | Low | High |
| Status | Stable | Stable | Experimental |

If subagents are interns who go do a task and come back with a summary, skills are cheat sheets you keep on your desk, and agent teams are a full war room of people who can talk to each other.


Subagents: The Delegators

When Claude hits a complex task, it can spawn a subagent via the Task tool. The subagent gets its own context window, its own system prompt, and a restricted set of tools. It does its thing, then returns a compact summary to the main conversation.

Think of it as fork() for LLMs -- minus the existential dread of process management.
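As a toy model of that fork/join shape (purely illustrative — this is not Claude Code's actual Task tool API), the pattern is: bulky intermediate work stays local to the call, and only a compact summary escapes to the caller:

```python
def run_subagent(task: str) -> str:
    """Toy fork/join: intermediate work never leaves this scope."""
    # Stand-in for searches, file reads, and retries -- the bulky part
    scratch = [f"step {i}: explored the repo for '{task}'" for i in range(100)]
    # Only a compact summary returns to the caller's context
    return f"done: {task} ({len(scratch)} intermediate steps discarded)"
```

The main conversation only ever pays for the returned string; the scratch work is garbage-collected with the call.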

Built-in Subagent Types

| Type | Model | Superpower |
|---|---|---|
| Explore | Haiku | Fast, read-only codebase search |
| Plan | Inherited | Architect mode, read-only |
| General-purpose | Inherited | Full toolkit, multi-step tasks |
| Bash | Inherited | Terminal command specialist |
| Claude Code Guide | Haiku | Answers "how do I..." questions |

Why This Matters for Context

Here's the key insight. Without subagents, every search result, every error log, every intermediate step piles into your main context window. It's like keeping every draft of every email you've ever written in your inbox.

Subagents fix this:

Without Subagents

Everything in one window: prompts, search results, error logs, fix attempts, build output...

~50K tokens — 30% signal, 70% noise
Status: context window sweating nervously

With Subagents

Main window keeps only prompts + compact summaries. Heavy lifting happens in isolated subagent windows.

~5K tokens in main window — 90% signal
Subagents handle the rest in their own contexts

The math is simple. If a task generates X tokens of work and Z tokens of useful output, subagents keep the X in isolation and only return the Z. Run five such tasks and you save 5X - 5Z tokens in your main window. That's often a 10x reduction.
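Plugging illustrative numbers into that formula (chosen to match the ~50K and ~5K figures above):

```python
WORK_PER_TASK = 10_000     # X: tokens of intermediate work per task (illustrative)
SUMMARY_PER_TASK = 1_000   # Z: tokens of useful summary per task (illustrative)
TASKS = 5

tokens_without = TASKS * WORK_PER_TASK     # everything lands in the main window
tokens_with = TASKS * SUMMARY_PER_TASK     # only summaries land in the main window
saved = tokens_without - tokens_with       # 5X - 5Z
reduction = tokens_without / tokens_with   # the "10x" from above
```

With these numbers: 50K tokens without delegation, 5K with — a 10x reduction in the main window.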

Custom Subagents

You can create your own in .claude/agents/my-agent.md:

```markdown
---
name: security-reviewer
model: sonnet
tools:
  - Read
  - Grep
  - Glob
---

You are a security review specialist. Analyze the provided code
for OWASP Top 10 vulnerabilities and return a concise report.
```

Up to 7 subagents can run simultaneously. They can run in the background while you keep working. They cannot, however, spawn sub-subagents. No turtles all the way down.


Skills: The Playbooks

Skills are reusable instruction sets defined in SKILL.md files. They don't get their own context window (by default) -- they inject knowledge directly into the main conversation, like loading a reference manual into your brain.

Subagents vs Skills at a Glance

| | Skill (inline) | Subagent (isolated) |
|---|---|---|
| Context | Shared with main window | Own separate window |
| How it works | Instructions loaded directly into conversation | Task delegated; compact summary returns |
| Pro | Zero overhead, instant knowledge | Context stays clean |
| Con | Eats into your context budget | Startup overhead |

How to Use Skills

Skills live in .claude/skills/<name>/SKILL.md and are invoked two ways:

  1. User invocation: Type /skill-name in Claude Code (e.g., /deploy, /fix-issue 42)
  2. Auto-invocation: Claude reads the skill description and triggers it when relevant
For example, `.claude/skills/deploy/SKILL.md`:

```markdown
---
description: Deploy the application to production
disable-model-invocation: true  # User-only, no auto-trigger
---

Run the deployment pipeline:

1. Run tests: npm test
2. Build: npm run build
3. Deploy: npm run deploy
4. Verify: curl the health endpoint
```

Skills can also run in a forked context (like a subagent) by adding context: fork to the frontmatter. Best of both worlds -- reusable instructions with context isolation.
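A minimal sketch of such a skill, assuming the same frontmatter conventions as the deploy example above (the skill name and checklist wording here are made up for illustration):

```markdown
---
description: Apply the standard code-review checklist
context: fork   # run in an isolated window, subagent-style
---

Review the changed files against the team checklist and
return a short summary of findings only.
```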

When to Use Which

| Need to... | Use... |
|---|---|
| Look up code across the repo | Subagent (Explore) |
| Apply a standard review checklist | Skill |
| Run a complex multi-file refactor | Subagent (General) |
| Execute a deployment playbook | Skill (user-invoked) |
| Answer "how does X work?" | Subagent (Guide) |
| Add project conventions to Claude | Skill (auto-invoked) |

Agent Teams: The War Room

This is the new hotness. Shipped Feb 5, 2026 with Opus 4.6. Still experimental. Still thrilling.

Agent Teams let you spawn multiple fully independent Claude Code sessions that can talk to each other. Not just report back to a boss -- actually message each other peer-to-peer. It's the difference between a manager delegating tasks and a team collaborating on a whiteboard.

Enabling Agent Teams

In `settings.json` or `.claude/settings.json`:

```json
{
  "env": {
    "CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS": "1"
  }
}
```

Architecture

```mermaid
graph TD
    Lead["TEAM LEAD\n(Your session)"]
    Lead -->|spawn + task| Auth["Teammate: auth"]
    Lead -->|spawn + task| API["Teammate: api"]
    Lead -->|spawn + task| Tests["Teammate: tests"]
    Lead -->|spawn + task| Docs["Teammate: docs"]
    Auth <-->|peer msg| API
    API <-->|peer msg| Tests
    Tests <-->|peer msg| Docs
    Auth <-->|peer msg| Tests
    Tasks[("SHARED TASK LIST\n~/.claude/tasks/team/")]
    Auth -.->|claim| Tasks
    API -.->|claim| Tasks
    Tests -.->|claim| Tasks
    Docs -.->|claim| Tasks
```
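The shared task list is essentially a claim-based work queue. The internal implementation isn't documented here, but a conceptual sketch of atomic claiming (a rename-to-claim pattern, purely illustrative) looks like this:

```python
import os
import pathlib

def claim_task(task_dir: pathlib.Path, agent: str):
    """Claim the first unclaimed task by atomically renaming its file.
    If two teammates race, os.rename succeeds for exactly one of them."""
    for task_file in sorted(task_dir.glob("*.task")):
        claimed = task_file.with_suffix(f".{agent}.claimed")
        try:
            os.rename(task_file, claimed)  # atomic on POSIX
            return claimed
        except FileNotFoundError:
            continue  # another teammate claimed it first
    return None
```

The rename either succeeds or raises, so no lock is needed — the filesystem arbitrates the race.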

The Killer Feature: Peer-to-Peer Messaging

With subagents, communication is strictly hierarchical -- worker reports to boss. With agent teams, any teammate can message any other teammate directly:

Subagent Communication

```mermaid
graph TD
    Main["Main"] --> A
    Main --> B
```

A and B cannot talk. They only report to Main.

Agent Team Communication

```mermaid
graph TD
    Lead --> A
    Lead --> B
    Lead --> C
    A <--> B
    B <--> C
    A <--> C
```

A, B, and C can message each other directly.

This means the "tests" teammate can ask the "api" teammate about endpoint signatures without routing through the team lead. Less bottleneck, more collaboration.
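A toy model of the difference (illustrative only — not the real message protocol): every teammate gets a mailbox that any peer can write to directly:

```python
from collections import defaultdict

class MessageBus:
    """Toy peer-to-peer mailboxes: any teammate can message any other
    directly, without routing through the lead."""

    def __init__(self):
        self._inbox = defaultdict(list)

    def send(self, sender: str, recipient: str, body: str) -> None:
        self._inbox[recipient].append((sender, body))

    def read(self, agent: str) -> list:
        messages, self._inbox[agent] = self._inbox[agent], []
        return messages
```

For example, `bus.send("tests", "api", "what does POST /login return?")` lets the tests teammate query the api teammate without the lead ever seeing the exchange.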

Context Window: The Full Picture

Here's how all three approaches compare for a task requiring four parallel workstreams:

Approach 1: Single Context (no delegation)

~200K tokens in one window — 30% signal, 70% noise. Risk: context overflow, degraded quality.

Approach 2: Subagents (isolated workers, summaries return)

~20K tokens in main window — 90% signal. 4 workers handle ~30K each in isolation. Total: ~140K, but main stays clean.

Approach 3: Agent Teams (independent sessions, peer messaging)

~10K tokens in lead window — coordination only. 4 teammates run full ~50K sessions independently. Total: ~210K across 5 windows. Maximum autonomy.

The tradeoff is clear:

  • Skills: Cheapest, but shares context space
  • Subagents: Great balance, isolates work, returns summaries
  • Agent Teams: Maximum parallelism, maximum autonomy, maximum token spend
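Running the comparison's (illustrative) numbers shows where the tokens actually go in each approach:

```python
# Illustrative token budgets from the three approaches above
approaches = {
    "single_context": {"main": 200_000, "workers": []},
    "subagents":      {"main": 20_000,  "workers": [30_000] * 4},
    "agent_teams":    {"main": 10_000,  "workers": [50_000] * 4},
}

def total_tokens(name: str) -> int:
    budget = approaches[name]
    return budget["main"] + sum(budget["workers"])
```

Agent teams spend the most in total, but keep the lead's window the smallest — you're trading tokens for parallelism and autonomy.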

Real-World Scale

Anthropic's own team used agent teams to build a C compiler with Opus 4.6 -- nearly 2,000 Claude Code sessions over two weeks, 2 billion input tokens, 140 million output tokens, ~$20,000 in compute. Not your weekend side project budget, but proof the architecture scales.

Quick Start: Your First Agent Team

Press Shift+Tab in Claude Code to enable delegate mode (keeps the lead focused on coordination), then:

```
You: "Build a REST API with auth, tests, and docs"

Claude (Lead):
  -> Spawns teammate "auth" with auth middleware task
  -> Spawns teammate "api" with endpoint implementation task
  -> Spawns teammate "tests" with test writing task
  -> Spawns teammate "docs" with documentation task

Teammates work independently, message each other for
interface contracts, and update the shared task list.

Lead monitors progress and approves plans.
```

Best practices:

  • 2-5 teammates with 5-6 tasks each is the sweet spot
  • Ensure teammates own different files to avoid merge conflicts
  • Give rich spawn prompts -- teammates don't inherit the lead's conversation history
  • Start with research tasks before parallel implementation
  • Monitor your team. Unattended teams risk wasted effort (and wasted dollars)

TL;DR Decision Tree

```mermaid
graph TD
    Q1{"Do you need to\ndelegate work?"} -->|No| A1["Just use Claude directly"]
    Q1 -->|Yes| Q2{"Does it need its\nown context?"}
    Q2 -->|No| A2["Use a SKILL\n(inline instructions)"]
    Q2 -->|Yes| Q3{"Do workers need to\ntalk to each other?"}
    Q3 -->|No| A3["Use SUBAGENTS\n(isolated, report back)"]
    Q3 -->|Yes| A4["Use AGENT TEAMS\n(peer-to-peer, experimental)"]
```
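The same decision tree, encoded as a (hypothetical) helper function:

```python
def pick_tool(delegate: bool, own_context: bool, peer_messaging: bool) -> str:
    """Walk the decision tree above; names are illustrative."""
    if not delegate:
        return "claude directly"
    if not own_context:
        return "skill"
    if peer_messaging:
        return "agent teams"
    return "subagents"
```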

The progression from skills to subagents to agent teams mirrors how human organizations scale: from reference docs, to delegated tasks, to autonomous teams. Claude Code now supports all three levels, and knowing which to reach for is half the battle.

Now go spawn some agents. Responsibly.