Strands Agents SDK Multi-Agent — Choosing the Right Multi-Agent Pattern
Introduction
We learned Agents as Tools in Part 5 of the intro series, and Swarm and Graph earlier in this series. We now know all three patterns, but we still lack criteria for which to use when.
To find out, we run the same task with all three patterns and compare the metrics to build selection criteria.
In this article, we'll try:
- Run the same task with 3 patterns — "Summarize and translate" as the common task
- Metrics comparison and selection criteria — Structural differences and guidelines for choosing by use case
See the official documentation at Multi-agent Patterns.
Setup
Use the same environment from Part 1. All examples use the same model configuration. Write the common setup at the top, then add each example's code below it.
from strands import Agent, tool
from strands.models import BedrockModel
from strands.multiagent import Swarm, GraphBuilder
bedrock_model = BedrockModel(
model_id="us.anthropic.claude-sonnet-4-20250514-v1:0",
region_name="us-east-1",
)
TASK = "Summarize what Amazon Bedrock is in 2 sentences, then translate the summary into Japanese."
Running the Same Task with 3 Patterns
Run "Summarize Amazon Bedrock in 2 sentences and translate to Japanese" with Agents as Tools, Swarm, and Graph. The task is the same, but the agent collaboration method differs.
Agents as Tools
The pattern from Part 5 of the intro series. An orchestrator calls specialized agents wrapped in @tool.
orchestrator = Agent(
model=bedrock_model, tools=[summarizer, translator],
system_prompt="First summarize, then translate to Japanese.",
callback_handler=None,
)
result_a = orchestrator(TASK)
In this pattern, the orchestrator reasons "first call summarizer, then call translator." The orchestrator's own reasoning cycles are added, so it completes in 3 cycles (reason → summarizer → reason → translator → reason → respond).
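The cycle accounting can be sketched with a toy loop (plain Python, not the Strands SDK; `run_orchestrator` and its `plan` argument are hypothetical names for illustration). Each "cycle" is one reasoning step that either calls a tool or produces the final response:

```python
# Toy model of the orchestrator's agent loop (not the Strands SDK):
# one cycle per reasoning step, whether it picks a tool or responds.

def run_orchestrator(plan):
    """plan: the ordered tool names the orchestrator decides to call."""
    cycles = 0
    steps = []
    for tool_name in plan:
        cycles += 1                 # reason -> pick and call a tool
        steps.append(f"call {tool_name}")
    cycles += 1                     # final reason -> respond
    steps.append("respond")
    return cycles, steps

cycles, steps = run_orchestrator(["summarizer", "translator"])
print(cycles, steps)  # cycles == 3
```

Two tool calls plus the final response give the 3 cycles observed in the metrics; every extra tool the orchestrator calls adds another cycle.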
@tool definitions (summarizer, translator)
@tool
def summarizer(text: str) -> str:
"""Summarize the given text into 2 concise sentences.
Args:
text: The text to summarize
Returns:
A concise summary
"""
agent = Agent(model=bedrock_model, system_prompt="Summarize in exactly 2 sentences.", callback_handler=None)
return agent(f"Summarize this: {text}").message['content'][0]['text']
@tool
def translator(text: str, target_language: str) -> str:
"""Translate the given text into the specified language.
Args:
text: The text to translate
target_language: The target language
Returns:
The translated text
"""
agent = Agent(model=bedrock_model, system_prompt="Translate accurately. Return only the translation.", callback_handler=None)
return agent(f"Translate into {target_language}: {text}").message['content'][0]['text']
Swarm
The pattern from Part 1. Agents autonomously hand off to each other.
summarizer_agent = Agent(
name="summarizer", model=bedrock_model,
system_prompt="Summarize the topic in exactly 2 sentences, then hand off to the translator.",
callback_handler=None,
)
translator_agent = Agent(
name="translator", model=bedrock_model,
system_prompt="Translate the provided summary into Japanese. Do not hand off.",
callback_handler=None,
)
swarm = Swarm([summarizer_agent, translator_agent], entry_point=summarizer_agent, max_handoffs=5, max_iterations=5)
result_b = swarm(TASK)
In Swarm, there is no orchestrator. When the summarizer finishes, it autonomously hands off to the translator via handoff_to_agent. Without the orchestrator's reasoning cycles, it is lighter than Agents as Tools.
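The handoff mechanism can be sketched without the SDK. In this toy loop (all names hypothetical), each agent handles the task and returns the name of the next agent, or None to stop, so no orchestrator reasoning step sits between agents:

```python
# Toy model of Swarm-style handoffs (plain Python, not the Strands SDK):
# agents share a context dict and each one names its own successor.

def summarize(task, context):
    context["summary"] = f"summary of: {task}"
    return "translator"             # hand off to the translator

def translate(task, context):
    context["translation"] = f"ja: {context['summary']}"
    return None                     # terminal agent: no further handoff

AGENTS = {"summarizer": summarize, "translator": translate}

def run_swarm(task, entry_point, max_handoffs=5):
    context, current, hops = {}, entry_point, 0
    while current is not None and hops <= max_handoffs:
        current = AGENTS[current](task, context)  # agent picks the next agent
        hops += 1
    return context

result = run_swarm("Amazon Bedrock", "summarizer")
print(result["translation"])
```

The max_handoffs guard mirrors the real Swarm's limit: it stops runaway ping-pong between agents when no one declares itself terminal.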
Graph
The pattern from Part 2. Execution order explicitly defined by edges.
summarizer_agent2 = Agent(
name="summarizer2", model=bedrock_model,
system_prompt="Summarize the topic in exactly 2 sentences.",
callback_handler=None,
)
translator_agent2 = Agent(
name="translator2", model=bedrock_model,
system_prompt="Translate the provided text into Japanese.",
callback_handler=None,
)
builder = GraphBuilder()
builder.add_node(summarizer_agent2, "summarize")
builder.add_node(translator_agent2, "translate")
builder.add_edge("summarize", "translate")
builder.set_entry_point("summarize")
graph = builder.build()
result_c = graph(TASK)
In Graph, edges determine execution order. The summarize → translate order is explicitly defined in code and does not depend on LLM judgment. However, Graph incurs overhead from building node inputs (constructing the next node's input from the previous node's results).
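The input-building step can be sketched as a toy executor (plain Python, not the Strands SDK; it handles only linear chains): before each node runs, its input is assembled from its predecessors' outputs, which is the overhead mentioned above:

```python
# Toy model of Graph execution (not the Strands SDK):
# edges fix the order; each node's input is built from predecessor outputs.

def run_graph(nodes, edges, entry_point, task):
    order, outputs = [], {}
    current = entry_point
    while current is not None:
        preds = [src for src, dst in edges if dst == current]
        # input-building step: entry node gets the task, later nodes
        # get their predecessors' results joined into one prompt
        node_input = task if not preds else "\n".join(outputs[p] for p in preds)
        outputs[current] = nodes[current](node_input)
        order.append(current)
        nexts = [dst for src, dst in edges if src == current]
        current = nexts[0] if nexts else None   # linear chains only
    return order, outputs

nodes = {
    "summarize": lambda text: f"summary({text})",
    "translate": lambda text: f"ja({text})",
}
edges = [("summarize", "translate")]
order, outputs = run_graph(nodes, edges, "summarize", "Amazon Bedrock")
print(order, outputs["translate"])
```

Because the order comes from the edge list rather than from model output, the execution path is identical on every run.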
01_compare.py full code (copy-paste)
from strands import Agent, tool
from strands.models import BedrockModel
from strands.multiagent import Swarm, GraphBuilder
bedrock_model = BedrockModel(
model_id="us.anthropic.claude-sonnet-4-20250514-v1:0",
region_name="us-east-1",
)
TASK = "Summarize what Amazon Bedrock is in 2 sentences, then translate the summary into Japanese."
# Pattern A: Agents as Tools
@tool
def summarizer(text: str) -> str:
"""Summarize the given text into 2 concise sentences.
Args:
text: The text to summarize
Returns:
A concise summary
"""
agent = Agent(model=bedrock_model, system_prompt="Summarize in exactly 2 sentences.", callback_handler=None)
return agent(f"Summarize this: {text}").message['content'][0]['text']
@tool
def translator(text: str, target_language: str) -> str:
"""Translate the given text into the specified language.
Args:
text: The text to translate
target_language: The target language
Returns:
The translated text
"""
agent = Agent(model=bedrock_model, system_prompt="Translate accurately. Return only the translation.", callback_handler=None)
return agent(f"Translate into {target_language}: {text}").message['content'][0]['text']
orchestrator = Agent(
model=bedrock_model, tools=[summarizer, translator],
system_prompt="First summarize, then translate to Japanese.",
callback_handler=None,
)
result_a = orchestrator(TASK)
summary_a = result_a.metrics.get_summary()
print(f"Agents as Tools: {summary_a['total_cycles']} cycles, {summary_a['total_duration']:.1f}s, {summary_a['accumulated_usage']['totalTokens']} tokens")
# Pattern B: Swarm
summarizer_agent = Agent(name="summarizer", model=bedrock_model, system_prompt="Summarize the topic in exactly 2 sentences, then hand off to the translator.", callback_handler=None)
translator_agent = Agent(name="translator", model=bedrock_model, system_prompt="Translate the provided summary into Japanese. Do not hand off.", callback_handler=None)
swarm = Swarm([summarizer_agent, translator_agent], entry_point=summarizer_agent, max_handoffs=5, max_iterations=5)
result_b = swarm(TASK)
print(f"Swarm: {result_b.execution_count} nodes, {result_b.execution_time}ms")
# Pattern C: Graph
summarizer_agent2 = Agent(name="summarizer2", model=bedrock_model, system_prompt="Summarize the topic in exactly 2 sentences.", callback_handler=None)
translator_agent2 = Agent(name="translator2", model=bedrock_model, system_prompt="Translate the provided text into Japanese.", callback_handler=None)
builder = GraphBuilder()
builder.add_node(summarizer_agent2, "summarize")
builder.add_node(translator_agent2, "translate")
builder.add_edge("summarize", "translate")
builder.set_entry_point("summarize")
graph = builder.build()
result_c = graph(TASK)
total_time_c = sum(nr.execution_time for nr in result_c.results.values())
print(f"Graph: {len(result_c.execution_order)} nodes, {total_time_c}ms")
python -u 01_compare.py
Result
Agents as Tools: 3 cycles, 17.3s, 3418 tokens
Swarm: 2 nodes, 10693ms
Graph: 2 nodes, 15550ms
Metrics Comparison and Selection Criteria
Structural Differences
Graph execution time is calculated as the sum of per-node execution_time values (result.execution_time currently returns 0ms).
| Pattern | Control | Execution Time | Agent Count | Characteristics |
|---|---|---|---|---|
| Agents as Tools | Orchestrator controls | 17.3s | 3 (orchestrator + 2 specialists) | Orchestrator reasoning cost added |
| Swarm | Agents decide autonomously | 10.7s | 2 (summarizer + translator) | Lightweight handoff overhead |
| Graph | Explicitly defined by edges | 15.6s | 2 (summarize + translate) | Execution order guaranteed |
Agents as Tools is slowest because the orchestrator's "which tool to call" reasoning adds cycles. The agent loop from Part 1 of the intro series applies to the orchestrator itself — "reason → tool selection → tool execution → reason" cycles run at the orchestrator level, and the same cycles run inside each tool's agent.
Swarm is fastest with lightweight handoffs. The handoff_to_agent tool call uses the same mechanism as regular tool calls, without additional orchestrator reasoning.
Graph has overhead from building node inputs — constructing the next node's input from the previous node's results is heavier than Swarm's handoff. However, execution order is guaranteed, making results most predictable.
Pattern Selection Framework
Don't choose based on execution time alone. Answer these 3 questions to determine the right pattern.
Q1: Do you need explicit control over execution order?
- Yes → Graph. Define dependencies with edges. Supports parallel processing, conditional branching, and feedback loops as learned in Part 2.
- No → Q2.
Q2: Do you want to reuse existing tools (@tool functions)?
- Yes → Agents as Tools. Reuse custom tools from Part 2 of the intro series or MCP tools from Part 3. No new API to learn.
- No → Q3.
Q3: Should agents autonomously route tasks based on their expertise?
- Yes → Swarm. Agents reference shared context to autonomously decide handoff targets.
- Combine both → Graph + Swarm nested. As learned in Part 3, embed Swarm as a Graph node.
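The three questions can be encoded as a small decision function (illustrative only, not part of the SDK; the boolean parameter names are mine):

```python
# The selection framework above as code: questions are checked in order,
# so an explicit-order requirement wins over tool reuse and autonomy.

def choose_pattern(needs_order: bool, reuses_tools: bool, autonomous: bool) -> str:
    if needs_order:
        return "Graph"
    if reuses_tools:
        return "Agents as Tools"
    if autonomous:
        return "Swarm"
    return "Single agent may be enough"

print(choose_pattern(True, False, False))   # Graph
print(choose_pattern(False, True, False))   # Agents as Tools
print(choose_pattern(False, False, True))   # Swarm
```

The question order matters: an explicit execution-order requirement is checked first because it rules out the other two patterns outright, while tool reuse and autonomy are preferences rather than hard constraints.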
| Use Case | Recommended Pattern | Reason |
|---|---|---|
| Fixed-step workflows | Graph | Guaranteed execution order, parallel processing |
| Exploratory tasks | Swarm | Agents autonomously choose optimal handoff targets |
| Reuse existing tools | Agents as Tools | Just wrap existing functions with @tool |
| Review-revise iteration | Graph (cyclic) | Conditional edges for feedback loops |
| Autonomous + structured | Graph + Swarm nested | Partially autonomous, overall structured |
| Share auth/DB connections | invocation_state | Works with any pattern via invocation_state |
Series Recap
Across the intro, practical, and multi-agent series (14 articles total), we covered the major features of Strands Agents SDK.
| Series | Theme | What We Learned |
|---|---|---|
| Intro (5 parts) | Fundamentals | Agent loop, tools, MCP, conversation, Agents as Tools |
| Practical (5 parts) | Quality | Structured Output, sessions, Hooks, Guardrails, metrics |
| Multi-Agent (4 parts) | Collaboration | Swarm, Graph, nesting+Hooks+shared state, pattern comparison |
Knowledge from all 3 series compounds rather than standing alone:
- Intro agent loop → Practical Structured Output (internally works as a tool), Multi-agent Swarm (handoffs are part of the agent loop)
- Intro custom tools → Practical Hooks (tool call monitoring/limiting), Multi-agent Agents as Tools (tool reuse)
- Practical Hooks → Multi-agent BeforeNodeCallEvent (node-level monitoring)
- Practical metrics → Multi-agent pattern comparison (decide by cycle count and execution time)
From here, you can explore deployment to AWS Lambda or Bedrock AgentCore, observability with OpenTelemetry, and remote agent collaboration with A2A (Agent2Agent).
Summary
- Same task, different execution time and structure per pattern — Agents as Tools adds orchestrator reasoning cost (3 cycles, 17.3s), Swarm has lightweight handoffs and is fastest (2 nodes, 10.7s), Graph guarantees execution order (2 nodes, 15.6s).
- 3 questions determine the pattern — Need execution order control → Graph. Reuse existing tools → Agents as Tools. Autonomous routing → Swarm.
- Patterns can be combined — Nest Swarm inside Graph nodes for both autonomous collaboration and structured flow.
invocation_state passes shared data in any pattern.
- Knowledge from all 3 series compounds — Agent loops from intro, Hooks and metrics from practical, and pattern selection from multi-agent all contribute to designing practical agent systems.
