Designing effective subagents
You create a code review subagent, wire it up, and ask Claude to review your branch. It runs — for a long time. Eventually it returns a loose paragraph that mentions "some concerns about error handling" and a vague suggestion to "consider the performance implications." You skim it, shrug, and re-review the code yourself.
The subagent ran. The subagent didn't help.
A subagent that's poorly configured will wander, run too long, or produce output the main thread can't actually use. The fixes come down to four patterns — simple on their own, powerful together.
How config data gets used
Before the patterns, one piece of mechanics that most people miss. When you send a message to the main context agent, the name and description of every available subagent are included in the main agent's system prompt. That's how the main agent decides which subagent to launch and when.
The description plays a second role too. When the main agent launches a subagent, it writes an input prompt to kick off the task — and it uses the description as guidance for writing that prompt. So the description doesn't just control when a subagent runs. It shapes what the subagent is told to do.
Keep that dual role in mind. It's why the first pattern is about writing descriptions carefully.
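As a concrete sketch of where these pieces live (using Claude Code's convention, where a subagent is a markdown file in `.claude/agents/` with YAML frontmatter — adapt the fields to your own setup), the name and description sit in the frontmatter and the body becomes the subagent's system prompt:

```markdown
---
name: code-reviewer                                  # injected into the main agent's system prompt
description: Use this agent to review code changes.  # drives launch decisions AND the input prompt
---

You are a code reviewer. Review the changes you are pointed at
for security and correctness.
```

The body below the frontmatter is what the subagent itself sees; only the name and description reach the main agent ahead of time.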
Pattern 1 — Specific descriptions
Consider a code review subagent with a generic description like "Use this agent to review code changes." When the main agent launches it, it writes a vague input prompt — something like "use git diff to find the current changes." The subagent then has to guess which files matter.
Now update the description to add one specific instruction:
You must tell the agent precisely which files you want it to review.
Ask Claude to run the reviewer now, and the input prompt changes. Instead of "use git diff," the main agent writes something like "Review the changes to auth/session.ts, auth/tokens.ts, and tests/auth.test.ts for security and correctness." Same subagent. Completely different input.
Before: "Use this agent to review code changes." → vague input, unfocused review.
After: "Use this agent to review code changes. You must tell the agent precisely which files you want it to review." → specific input with a concrete file list, focused review.
The same technique works across every subagent type. Adding "return sources that can be cited" to a web search subagent's description causes the main agent to include that instruction when it delegates. You're not just describing when the subagent runs — you're programming what its input prompts will look like.
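In file form, the fix is one sentence appended to the description — a sketch using the same hypothetical config convention as above:

```markdown
---
name: code-reviewer
description: >
  Use this agent to review code changes. You must tell the agent
  precisely which files you want it to review.
---
```

Nothing else about the subagent changes; the main agent simply reads that extra sentence when composing the input prompt.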
Pattern 2 — Structured output in the system prompt
This is the single most important improvement you can make to any subagent. Define an output format in its system prompt and two things happen:
- The subagent gets natural stopping points. It knows it's done when every section of the format is filled in.
- It stops over-running. Without a defined output, subagents struggle to decide when enough research has been done and tend to run much, much longer than necessary.
Here's the seven-section output format for a code review subagent. Paste it into the system prompt body as-is or adapt the sections to your own criteria:
Provide your review in a structured format:
1. Summary: Brief overview of what you reviewed and overall assessment
2. Critical Issues: Any security vulnerabilities, data integrity risks,
or logic errors that must be fixed immediately
3. Major Issues: Quality problems, architecture misalignment, or
significant performance concerns
4. Minor Issues: Style inconsistencies, documentation gaps, or
minor optimizations
5. Recommendations: Suggestions for improvement, refactoring
opportunities, or best practices to apply
6. Approval Status: Clear statement of whether the code is ready
to merge/deploy or requires changes
7. Obstacles Encountered: Report any obstacles encountered during the
review process. This can be: setup issues, workarounds discovered, or
environment quirks. Report commands that needed a special flag or
configuration. Report dependencies or imports that caused problems.
The reviewer now has a checklist. It works through the diff until every section has content, then stops. No wandering, no guessing when it's done.
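Because the format is a contract, the main thread (or a wrapper script) can check a reply against it. A minimal sketch — the section names match the template above, but the helper itself is hypothetical, not part of any subagent API:

```python
# Hypothetical helper: check that a subagent's review text contains
# every section of the seven-section contract before trusting it.
REQUIRED_SECTIONS = [
    "Summary",
    "Critical Issues",
    "Major Issues",
    "Minor Issues",
    "Recommendations",
    "Approval Status",
    "Obstacles Encountered",
]

def missing_sections(review: str) -> list[str]:
    """Return the section names absent from a review, in template order."""
    return [name for name in REQUIRED_SECTIONS if name not in review]
```

A reply that skips "Obstacles Encountered" shows up immediately, rather than being discovered later when the main thread hits the same snag.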
Pattern 3 — Obstacle reporting
Section 7 in that template deserves its own pattern, because it's easy to miss and expensive to skip.
When a subagent discovers a workaround during its work — a dependency that needs a specific install flag, a test that requires an env var, a command that fails without --legacy-peer-deps — those details need to appear in the summary. If they don't, the main thread has to rediscover the same solutions on its own. That's wasted tokens and wasted time.
Explicitly ask for this information in the output format. Things worth surfacing:
- Setup issues or environment quirks discovered during the task
- Workarounds applied for errors or missing dependencies
- Commands that needed special flags or configuration
- Imports or dependencies that caused problems
"Obstacles Encountered" as a required section in the output format is how you make this reliable. Don't rely on the subagent to volunteer the information — make it part of the contract.
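A filled-in section might look like this (the flag example comes from the scenario above; the env var is illustrative):

```markdown
7. Obstacles Encountered:
   - `npm install` failed until rerun with `--legacy-peer-deps`
   - tests/auth.test.ts requires the AUTH_SECRET env var to be set
```

Two lines like these can save the main thread a full round of rediscovery.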
Pattern 4 — Limiting tool access
Not every subagent needs access to every tool. Think about what your subagent actually needs to do, and give it only what that job requires. Two things improve: unintended side effects drop to zero, and each subagent's role becomes clearer when you have several of them.
Here's how to think about tool access by role:
| Role | Tools | Rationale |
|---|---|---|
| Research / read-only | Glob, Grep, Read | Can explore the codebase but physically cannot modify files. Safe to run broadly. |
| Code reviewer | Glob, Grep, Read, Bash | Needs Bash to run git diff and see what changed, but should not edit anything. |
| Code modification | Glob, Grep, Read, Bash, Edit, Write | The subagent's job is to actually change code, so it needs write access. Use sparingly. |
A read-only research subagent using just Glob, Grep, and Read cannot accidentally modify files. That's not a guideline — it's a hard constraint enforced by the tool list. The constraint clarifies the subagent's role and prevents a class of failure modes entirely. Only give Edit and Write to subagents whose job is actually to change your code.
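If your subagent runner supports a tool allowlist (Claude Code's subagent files do, via a `tools` field in the frontmatter — treat the exact field name as a detail of that convention), a read-only researcher might be declared like this:

```markdown
---
name: codebase-researcher
description: >
  Use this agent for read-only exploration of the codebase. Tell it
  exactly what question it should answer.
tools: Glob, Grep, Read
---

Explore the codebase to answer the question you are given. You cannot
modify files; report findings only.
```

Omitting Bash, Edit, and Write from the list is what makes the read-only guarantee a hard constraint rather than an instruction the subagent could ignore.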
Putting it together
Effective subagents share four characteristics:
- Specific descriptions — written to control both when the subagent triggers and what instructions it receives.
- Structured output — a defined format in the system prompt so the subagent knows when it's done and returns information the main thread can use.
- Obstacle reporting — a section in the output format for workarounds, quirks, and problems so the main thread never rediscovers them.
- Limited tool access — only the tools the subagent actually needs. Read-only for research, Bash for reviewers, Edit and Write only for agents that should change code.
Each pattern is small. Applied together, they turn a subagent from something that vaguely tries to help into a focused, predictable worker that finishes on time and reports back clearly.
Key Takeaways
- A subagent's description controls two things: when the main agent launches it, and what input prompt the main agent writes — so specific descriptions shape both behaviors at once.
- Defining a structured output format in the system prompt is the single most valuable improvement — it creates natural stopping points and prevents subagents from running too long.
- Obstacle reporting is a dedicated output section that surfaces workarounds, environment quirks, and flag requirements so the main thread never has to rediscover them.
- Tool access should be scoped by role: read-only tools for research, add Bash for reviewers, and only give Edit or Write to subagents whose job is to change code.
- Effective subagents combine specific descriptions, structured output, obstacle reporting, and limited tool access — each pattern is simple, but together they transform wandering subagents into predictable workers.