Strands Agents SDK Practical — Control the Agent Loop with Hooks
Introduction
In Part 1 of the intro series, we used metrics to inspect agent behavior "after the fact." But in production, you often need to intervene "during" execution — log tool calls, limit invocation counts, or modify results before they reach the LLM.
A single add_hook call lets you monitor and limit tool calls.
In this article, we'll try:
- Logging before/after tool calls — Output tool names and parameters with BeforeToolCallEvent
- Limiting tool call counts — Disable tools with cancel_tool
- Modifying tool results — Add formatting with AfterToolCallEvent
See the official documentation at Hooks.
Setup
Use the same environment from Part 1. All examples use the same model configuration and can be run as independent .py files. Write the common setup and shared tool at the top, then add each example's code below.
from strands import Agent, tool
from strands.models import BedrockModel
from strands.hooks import BeforeToolCallEvent, AfterToolCallEvent, BeforeInvocationEvent

bedrock_model = BedrockModel(
    model_id="us.anthropic.claude-sonnet-4-20250514-v1:0",
    region_name="us-east-1",
)

All examples use a shared weather tool:
@tool
def get_weather(city: str) -> str:
    """Get the current weather for a city.

    Args:
        city: The city name

    Returns:
        str: Weather information
    """
    weather_data = {
        "Tokyo": "Sunny, 22°C",
        "London": "Cloudy, 15°C",
        "New York": "Rainy, 18°C",
        "Paris": "Windy, 16°C",
        "Sydney": "Clear, 25°C",
    }
    return weather_data.get(city, f"No data for {city}")

Logging Before and After Tool Calls
Register callback functions for BeforeToolCallEvent and AfterToolCallEvent.
def log_before_tool(event: BeforeToolCallEvent) -> None:
    print(f"[HOOK] Before: {event.tool_use['name']}({event.tool_use['input']})")

def log_after_tool(event: AfterToolCallEvent) -> None:
    status = event.result.get("status", "unknown")
    print(f"[HOOK] After: {event.tool_use['name']} -> {status}")

agent = Agent(model=bedrock_model, tools=[get_weather], callback_handler=None)
agent.add_hook(log_before_tool)
agent.add_hook(log_after_tool)

result = agent("What's the weather in Tokyo and London?")
print(f"\nAnswer: {result.message['content'][0]['text']}")

01_log.py full code (copy-paste)
from strands import Agent, tool
from strands.models import BedrockModel
from strands.hooks import BeforeToolCallEvent, AfterToolCallEvent

bedrock_model = BedrockModel(
    model_id="us.anthropic.claude-sonnet-4-20250514-v1:0",
    region_name="us-east-1",
)

@tool
def get_weather(city: str) -> str:
    """Get the current weather for a city.

    Args:
        city: The city name

    Returns:
        str: Weather information
    """
    weather_data = {
        "Tokyo": "Sunny, 22°C",
        "London": "Cloudy, 15°C",
        "New York": "Rainy, 18°C",
    }
    return weather_data.get(city, f"No data for {city}")

def log_before_tool(event: BeforeToolCallEvent) -> None:
    print(f"[HOOK] Before: {event.tool_use['name']}({event.tool_use['input']})")

def log_after_tool(event: AfterToolCallEvent) -> None:
    status = event.result.get("status", "unknown")
    print(f"[HOOK] After: {event.tool_use['name']} -> {status}")

agent = Agent(model=bedrock_model, tools=[get_weather], callback_handler=None)
agent.add_hook(log_before_tool)
agent.add_hook(log_after_tool)

result = agent("What's the weather in Tokyo and London?")
print(f"\nAnswer: {result.message['content'][0]['text']}")

python -u 01_log.py

Result
[HOOK] Before: get_weather({'city': 'Tokyo'})
[HOOK] Before: get_weather({'city': 'London'})
[HOOK] After: get_weather -> success
[HOOK] After: get_weather -> success

Answer: Here's the current weather for both cities:
**Tokyo**: Sunny, 22°C (72°F)
**London**: Cloudy, 15°C (59°F)

Notice that both Before hooks fire first, then both After hooks. The LLM called both tools in parallel, resulting in Before → Before → After → After ordering.
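This interleaving can be reproduced without the SDK. A minimal, framework-independent asyncio sketch (the function and event names here are illustrative, not SDK APIs) of two concurrently scheduled tool calls, each wrapped in before/after hooks:

```python
import asyncio

order = []  # records hook firing order

async def call_tool(city: str) -> None:
    order.append(f"before:{city}")   # before-hook fires
    await asyncio.sleep(0)           # yield control while the "tool" runs
    order.append(f"after:{city}")    # after-hook fires

async def main() -> None:
    # Both calls are scheduled concurrently, like parallel tool use
    await asyncio.gather(call_tool("Tokyo"), call_tool("London"))

asyncio.run(main())
print(order)  # ['before:Tokyo', 'before:London', 'after:Tokyo', 'after:London']
```

Because both tasks start before either finishes, the before-hooks fire back to back, mirroring the ordering in the output above.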
The SDK auto-detects which event to register based on the function's type hint (BeforeToolCallEvent / AfterToolCallEvent). No need to explicitly specify the event type.
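The detection mechanism can be understood with a small sketch: read the type annotation on the callback's first parameter and use it as the registration key. This is an illustrative reimplementation, not the SDK's actual code:

```python
import inspect
from collections import defaultdict

# Stand-ins for the SDK's event classes
class BeforeToolCallEvent: pass
class AfterToolCallEvent: pass

registry = defaultdict(list)  # event class -> list of callbacks

def add_hook(callback) -> None:
    # Read the type hint of the callback's first parameter
    first_param = next(iter(inspect.signature(callback).parameters.values()))
    registry[first_param.annotation].append(callback)

def log_before_tool(event: BeforeToolCallEvent) -> None:
    pass

add_hook(log_before_tool)
# log_before_tool is now registered under BeforeToolCallEvent
```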
Limiting Tool Call Counts
Setting the cancel_tool property on BeforeToolCallEvent cancels tool execution.
tool_count = 0

def reset_count(event: BeforeInvocationEvent) -> None:
    global tool_count
    tool_count = 0

def limit_tool_calls(event: BeforeToolCallEvent) -> None:
    global tool_count
    tool_count += 1
    if tool_count > 2:
        event.cancel_tool = (
            f"Tool '{event.tool_use['name']}' call limit reached (max 2). "
            "DO NOT CALL THIS TOOL ANYMORE."
        )
        print(f"[HOOK] BLOCKED: {event.tool_use['name']}({event.tool_use['input']})")
    else:
        print(f"[HOOK] ALLOWED ({tool_count}/2): {event.tool_use['name']}({event.tool_use['input']})")

agent = Agent(model=bedrock_model, tools=[get_weather], callback_handler=None)
agent.add_hook(reset_count)
agent.add_hook(limit_tool_calls)

result = agent("What's the weather in Tokyo, London, New York, Paris, and Sydney?")

02_limit.py full code (copy-paste)
from strands import Agent, tool
from strands.models import BedrockModel
from strands.hooks import BeforeToolCallEvent, BeforeInvocationEvent

bedrock_model = BedrockModel(
    model_id="us.anthropic.claude-sonnet-4-20250514-v1:0",
    region_name="us-east-1",
)

@tool
def get_weather(city: str) -> str:
    """Get the current weather for a city.

    Args:
        city: The city name

    Returns:
        str: Weather information
    """
    weather_data = {
        "Tokyo": "Sunny, 22°C",
        "London": "Cloudy, 15°C",
        "New York": "Rainy, 18°C",
        "Paris": "Windy, 16°C",
        "Sydney": "Clear, 25°C",
    }
    return weather_data.get(city, f"No data for {city}")

tool_count = 0

def reset_count(event: BeforeInvocationEvent) -> None:
    global tool_count
    tool_count = 0

def limit_tool_calls(event: BeforeToolCallEvent) -> None:
    global tool_count
    tool_count += 1
    if tool_count > 2:
        event.cancel_tool = (
            f"Tool '{event.tool_use['name']}' call limit reached (max 2). "
            "DO NOT CALL THIS TOOL ANYMORE."
        )
        print(f"[HOOK] BLOCKED: {event.tool_use['name']}({event.tool_use['input']})")
    else:
        print(f"[HOOK] ALLOWED ({tool_count}/2): {event.tool_use['name']}({event.tool_use['input']})")

agent = Agent(model=bedrock_model, tools=[get_weather], callback_handler=None)
agent.add_hook(reset_count)
agent.add_hook(limit_tool_calls)

result = agent("What's the weather in Tokyo, London, New York, Paris, and Sydney?")

python -u 02_limit.py

Result
[HOOK] ALLOWED (1/2): get_weather({'city': 'Tokyo'})
[HOOK] ALLOWED (2/2): get_weather({'city': 'London'})
[HOOK] BLOCKED: get_weather({'city': 'New York'})
[HOOK] BLOCKED: get_weather({'city': 'Paris'})
[HOOK] BLOCKED: get_weather({'city': 'Sydney'})
get_weather: calls=5, success=2, errors=3

Of the 5 tool calls for 5 cities, only the first 2 executed. The remaining 3 were blocked. Blocked tools count as errors in metrics. The LLM recognized the "tool call limit reached" message and generated a response using only the 2 cities it could fetch.
The reset_count hook on BeforeInvocationEvent resets the counter when agent() is called multiple times.
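The same counting logic can be written without module-level globals. A framework-independent sketch using a closure (only the counting is shown; the `make_tool_limiter` helper and its wiring to hooks are this sketch's own invention):

```python
def make_tool_limiter(max_calls: int):
    """Return (reset, check) callables sharing a private counter."""
    count = 0

    def reset() -> None:
        # Call from a BeforeInvocationEvent-style hook to start fresh
        nonlocal count
        count = 0

    def check(tool_name: str):
        # Returns None while allowed, or a cancellation message once
        # the limit is exceeded (assign it to event.cancel_tool)
        nonlocal count
        count += 1
        if count > max_calls:
            return f"Tool '{tool_name}' call limit reached (max {max_calls})."
        return None

    return reset, check

reset, check = make_tool_limiter(2)
print(check("get_weather"))  # None (allowed, 1/2)
print(check("get_weather"))  # None (allowed, 2/2)
print(check("get_weather"))  # Tool 'get_weather' call limit reached (max 2).
```

This keeps each agent's counter independent, which matters if several agents share one module.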
Modifying Tool Results
Overwriting the result property on AfterToolCallEvent changes what the LLM receives.
@tool
def calculate(expression: str) -> str:
    """Evaluate a math expression.

    Args:
        expression: A math expression to evaluate (e.g. "2 + 3")

    Returns:
        str: The result of the calculation
    """
    # NOTE: eval() on LLM-supplied input is unsafe outside a demo;
    # use a real expression parser in production.
    result = eval(expression)
    return str(result)

def format_result(event: AfterToolCallEvent) -> None:
    if event.tool_use["name"] == "calculate":
        original = event.result["content"][0]["text"]
        expression = event.tool_use["input"]["expression"]
        event.result["content"][0]["text"] = f"[FORMATTED] {expression} = {original}"
        print(f"[HOOK] Modified result: {original} -> {event.result['content'][0]['text']}")

agent = Agent(model=bedrock_model, tools=[calculate], callback_handler=None)
agent.add_hook(format_result)

result = agent("What is 42 * 58?")
print(f"\nAnswer: {result.message['content'][0]['text']}")

03_modify.py full code (copy-paste)
from strands import Agent, tool
from strands.models import BedrockModel
from strands.hooks import AfterToolCallEvent

bedrock_model = BedrockModel(
    model_id="us.anthropic.claude-sonnet-4-20250514-v1:0",
    region_name="us-east-1",
)

@tool
def calculate(expression: str) -> str:
    """Evaluate a math expression.

    Args:
        expression: A math expression to evaluate (e.g. "2 + 3")

    Returns:
        str: The result of the calculation
    """
    # NOTE: eval() on LLM-supplied input is unsafe outside a demo;
    # use a real expression parser in production.
    result = eval(expression)
    return str(result)

def format_result(event: AfterToolCallEvent) -> None:
    if event.tool_use["name"] == "calculate":
        original = event.result["content"][0]["text"]
        expression = event.tool_use["input"]["expression"]
        event.result["content"][0]["text"] = f"[FORMATTED] {expression} = {original}"
        print(f"[HOOK] Modified result: {original} -> {event.result['content'][0]['text']}")

agent = Agent(model=bedrock_model, tools=[calculate], callback_handler=None)
agent.add_hook(format_result)

result = agent("What is 42 * 58?")
print(f"\nAnswer: {result.message['content'][0]['text']}")

python -u 03_modify.py

Result
[HOOK] Modified result: 2436 -> [FORMATTED] 42 * 58 = 2436

Answer: 42 * 58 = 2,436

The hook added a [FORMATTED] prefix and the expression to the tool result. The LLM received this modified result and generated its answer accordingly.
This technique is useful for:
- Standardizing result formats — Unify output formats across different tools
- Adding metadata — Append execution time or source information to results
- Filtering — Remove sensitive information from results before passing to the LLM
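For the filtering case, a hedged sketch of a redaction hook. The result layout matches the `event.result["content"][0]["text"]` shape used above; the regex is a hypothetical US-SSN-like pattern you would replace with your own rules:

```python
import re

# Hypothetical sensitive-data pattern; tune to your own domain
SENSITIVE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def redact(text: str) -> str:
    """Mask sensitive substrings before the LLM sees them."""
    return SENSITIVE.sub("[REDACTED]", text)

def filter_result(event) -> None:
    # Assumes the AfterToolCallEvent result shape shown in this article
    block = event.result["content"][0]
    block["text"] = redact(block["text"])

print(redact("Customer SSN: 123-45-6789"))  # Customer SSN: [REDACTED]
```

Registering `filter_result` the same way as `format_result` above would scrub every tool result before it reaches the model.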
Summary
- A single add_hook call monitors tool calls — The SDK auto-detects the event type from the BeforeToolCallEvent / AfterToolCallEvent type hint.
- cancel_tool cancels tool execution — The cancellation message is returned to the LLM, which interprets it and includes it in the response. Blocked tools count as errors in metrics.
- Overwriting event.result controls LLM input — Modifying tool results before they reach the LLM improves output quality and security.
- Multiple hooks can be bundled into a Plugin — Group related hooks into a Plugin class for reusable modules. See the official Plugins documentation for details.
