
Strands Agents SDK Practical — Type-Safe LLM Output with Structured Output

Introduction

In the introductory series, we learned everything from agent basics to multi-agent patterns. However, all agent outputs so far have been plain text. When the agent says "250 USD is 37,375 JPY," extracting the numeric amount requires parsing the text.

With Structured Output, you just pass a Pydantic model and the LLM's output becomes a typed object. No text parsing needed.

In this article, we'll try:

  1. Basics — Structure LLM output with a Pydantic model
  2. Combining with tools — Structure tool execution results
  3. Automatic validation retries — LLM auto-corrects on validation failure
  4. Extraction from conversation history — Structure information from multi-turn context

See the official Structured Output documentation for details.

Setup

Use the same environment from the introductory series. For a fresh setup:

Terminal
mkdir my_agent && cd my_agent
python -m venv .venv
source .venv/bin/activate
pip install strands-agents strands-agents-tools

All examples use the same model configuration and can be run as independent .py files. Write the common setup at the top, then add each example's code below it.

Python (common setup)
from strands import Agent
from strands.models import BedrockModel
 
bedrock_model = BedrockModel(
    model_id="us.anthropic.claude-sonnet-4-20250514-v1:0",
    region_name="us-east-1",
)

Basics — Structuring Output with a Pydantic Model

Start with the simplest example: extracting person information from text and receiving it as a typed object.

Python
from pydantic import BaseModel, Field
 
class PersonInfo(BaseModel):
    """Information about a person."""
    name: str = Field(description="Full name of the person")
    age: int = Field(description="Age in years")
    occupation: str = Field(description="Current occupation")
 
agent = Agent(model=bedrock_model, callback_handler=None)
result = agent(
    "John Smith is a 30-year-old software engineer",
    structured_output_model=PersonInfo,
)
 
person = result.structured_output
print(f"Name: {person.name}")
print(f"Age: {person.age}")
print(f"Occupation: {person.occupation}")
print(f"Type: {type(person).__name__}")

callback_handler=None disables streaming output to the console; results are retrieved from the returned result object instead. All subsequent examples use the same setting.

01_basic.py full code (copy-paste)
01_basic.py
from strands import Agent
from strands.models import BedrockModel
from pydantic import BaseModel, Field
 
bedrock_model = BedrockModel(
    model_id="us.anthropic.claude-sonnet-4-20250514-v1:0",
    region_name="us-east-1",
)
 
class PersonInfo(BaseModel):
    """Information about a person."""
    name: str = Field(description="Full name of the person")
    age: int = Field(description="Age in years")
    occupation: str = Field(description="Current occupation")
 
agent = Agent(model=bedrock_model, callback_handler=None)
result = agent(
    "John Smith is a 30-year-old software engineer",
    structured_output_model=PersonInfo,
)
 
person = result.structured_output
print(f"Name: {person.name}")
print(f"Age: {person.age}")
print(f"Occupation: {person.occupation}")
print(f"Type: {type(person).__name__}")
Terminal
python -u 01_basic.py

Result

Output
Name: John Smith
Age: 30
Occupation: software engineer
Type: PersonInfo

result.structured_output is a PersonInfo typed object. Access the name with person.name and age with person.age. No text parsing required.

Three key points:

  • Define a Pydantic model — Inherit from BaseModel and add Field(description=...) to each field. These descriptions serve as instructions to the LLM
  • Pass it to structured_output_model — Specify the model class when calling agent()
  • Retrieve via result.structured_output — Get the result as a typed object
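
Because the result is an ordinary Pydantic object, everything Pydantic provides works on it: serialization, field access with IDE completion, equality checks, and so on. A small sketch using a hand-constructed PersonInfo in place of an LLM result:

```python
from pydantic import BaseModel, Field

class PersonInfo(BaseModel):
    """Information about a person."""
    name: str = Field(description="Full name of the person")
    age: int = Field(description="Age in years")
    occupation: str = Field(description="Current occupation")

# Stand-in for result.structured_output
person = PersonInfo(name="John Smith", age=30, occupation="software engineer")

# Standard Pydantic serialization works directly on the result
print(person.model_dump())       # plain dict
print(person.model_dump_json())  # JSON string
```

This is handy when you want to persist or forward the extracted data: no string manipulation between the LLM and your storage layer.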

Combining with Tools — Structuring Tool Results

Structured Output works with tools. Let's take the exchange rate tool from Part 2 of the intro series and receive its results as a structured object.

Define the output model and pass it to the agent along with the tool.

Python (model definition and execution)
class ConversionResult(BaseModel):
    """Currency conversion result."""
    base_currency: str = Field(description="Source currency code")
    target_currency: str = Field(description="Target currency code")
    exchange_rate: float = Field(description="Exchange rate used")
    original_amount: float = Field(description="Original amount")
    converted_amount: float = Field(description="Converted amount")
 
agent = Agent(model=bedrock_model, tools=[get_exchange_rate], callback_handler=None)
result = agent(
    "Convert 250 USD to JPY",
    structured_output_model=ConversionResult,
)
 
conv = result.structured_output
print(f"Result: {conv.original_amount} {conv.base_currency} = {conv.converted_amount} {conv.target_currency}")
print(f"Rate: {conv.exchange_rate}")

The get_exchange_rate tool is the same one from Part 2 of the intro series.

02_tools.py full code (copy-paste)
02_tools.py
from strands import Agent, tool
from strands.models import BedrockModel
from pydantic import BaseModel, Field
import json
 
bedrock_model = BedrockModel(
    model_id="us.anthropic.claude-sonnet-4-20250514-v1:0",
    region_name="us-east-1",
)
 
@tool
def get_exchange_rate(base: str, target: str) -> dict:
    """Get the current exchange rate between two currencies.
 
    Args:
        base: The base currency code (e.g. USD, EUR, JPY)
        target: The target currency code (e.g. USD, EUR, JPY)
 
    Returns:
        dict: Exchange rate information
    """
    rates = {
        ("USD", "JPY"): 149.50,
        ("EUR", "USD"): 1.08,
        ("EUR", "JPY"): 161.46,
    }
    rate = rates.get((base.upper(), target.upper()))
    if rate is None:
        return {"error": f"Rate not found for {base}/{target}"}
    return {"base": base.upper(), "target": target.upper(), "rate": rate}
 
class ConversionResult(BaseModel):
    """Currency conversion result."""
    base_currency: str = Field(description="Source currency code")
    target_currency: str = Field(description="Target currency code")
    exchange_rate: float = Field(description="Exchange rate used")
    original_amount: float = Field(description="Original amount")
    converted_amount: float = Field(description="Converted amount")
 
agent = Agent(model=bedrock_model, tools=[get_exchange_rate], callback_handler=None)
result = agent("Convert 250 USD to JPY", structured_output_model=ConversionResult)
 
conv = result.structured_output
print(f"Result: {conv.original_amount} {conv.base_currency} = {conv.converted_amount} {conv.target_currency}")
print(f"Rate: {conv.exchange_rate}")
 
print("\n--- Metrics ---")
print(json.dumps(result.metrics.get_summary(), indent=2, default=str))
Terminal
python -u 02_tools.py

Result

Output
Result: 250.0 USD = 37375.0 JPY
Rate: 149.5

The agent fetched the rate using the get_exchange_rate tool and structured the result into a ConversionResult model. Instead of text, you can access the converted amount directly as conv.converted_amount.

How Structured Output Works Internally

The metrics reveal an interesting mechanism:

Output (metrics excerpt)
Cycles: 2
tool_usage:
  get_exchange_rate: calls=1, success=1
  ConversionResult: calls=1, success=1

ConversionResult appears as a tool. Structured Output internally registers the Pydantic model as a "tool" for the LLM. The LLM outputs structured data using the same tool-calling mechanism, and the SDK validates it against the Pydantic model.

The agent loop flow looks like this:

  1. Cycle 1: LLM calls get_exchange_rate to fetch the rate
  2. Cycle 2: LLM calculates the conversion and calls the ConversionResult tool (= Pydantic model) to output structured data
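
You can peek at what that auto-registered "tool" is built from: Pydantic's model_json_schema() produces the JSON schema the SDK derives the tool definition from. The exact wrapper the SDK sends to the model is an internal detail; this sketch shows the schema for two of the fields:

```python
import json

from pydantic import BaseModel, Field

class ConversionResult(BaseModel):
    """Currency conversion result."""
    base_currency: str = Field(description="Source currency code")
    exchange_rate: float = Field(description="Exchange rate used")

# Field descriptions become the parameter descriptions the LLM sees
# in the generated tool definition
print(json.dumps(ConversionResult.model_json_schema(), indent=2))
```

This is why Field(description=...) matters: those strings are effectively prompt text for the LLM.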

Automatic Validation Retries — LLM Self-Corrects

Adding a Pydantic field_validator with custom validation causes the LLM to automatically retry when validation fails.

Python
from pydantic import BaseModel, Field, field_validator
 
class UserName(BaseModel):
    """A user's name with a required suffix."""
    first_name: str = Field(description="First name of the person")
 
    @field_validator("first_name")
    @classmethod
    def validate_first_name(cls, value: str) -> str:
        if not value.endswith("_verified"):
            raise ValueError("first_name must end with '_verified' suffix")
        return value
 
agent = Agent(model=bedrock_model, callback_handler=None)
result = agent(
    "What is Aaron's first name?",
    structured_output_model=UserName,
)
 
print(f"Result: {result.structured_output}")
03_validation.py full code (copy-paste)
03_validation.py
from strands import Agent
from strands.models import BedrockModel
from pydantic import BaseModel, Field, field_validator
 
bedrock_model = BedrockModel(
    model_id="us.anthropic.claude-sonnet-4-20250514-v1:0",
    region_name="us-east-1",
)
 
class UserName(BaseModel):
    """A user's name with a required suffix."""
    first_name: str = Field(description="First name of the person")
 
    @field_validator("first_name")
    @classmethod
    def validate_first_name(cls, value: str) -> str:
        if not value.endswith("_verified"):
            raise ValueError("first_name must end with '_verified' suffix")
        return value
 
agent = Agent(model=bedrock_model, callback_handler=None)
result = agent("What is Aaron's first name?", structured_output_model=UserName)
print(f"Result: {result.structured_output}")
Terminal
python -u 03_validation.py

Result

Output
tool_name=<UserName> | structured output validation failed |
  error_message=<Validation failed for UserName. Please fix the following errors:
  - Field 'first_name': Value error, first_name must end with '_verified' suffix>
Result: first_name='Aaron_verified'
Output (metrics excerpt)
Cycles: 2
UserName: calls=2, success=1, errors=1, success_rate=0.5

On the first call, the LLM returned first_name="Aaron", but validation failed. The SDK sent the error message back to the LLM, which corrected it to first_name="Aaron_verified" on the second call.

This mechanism is useful for:

  • Format constraints — Email address formats, date formats, etc.
  • Value range constraints — Age between 0-150, positive prices, etc.
  • Business rules — Required prefixes or suffixes

Developers only need to write validation rules in the Pydantic model. The SDK handles retry logic automatically.
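
Each of those constraint styles maps to a standard Pydantic feature: value ranges via Field(ge=..., le=...), format checks and business rules via field_validator. A minimal sketch (the Customer model and its fields are hypothetical, purely for illustration):

```python
from pydantic import BaseModel, Field, ValidationError, field_validator

class Customer(BaseModel):
    """Hypothetical model combining the constraint styles above."""
    email: str = Field(description="Contact email address")
    age: int = Field(ge=0, le=150, description="Age in years")  # range constraint
    customer_id: str = Field(description="ID with a required prefix")

    @field_validator("email")
    @classmethod
    def validate_email(cls, value: str) -> str:
        # crude format check for illustration only
        if "@" not in value:
            raise ValueError("email must contain '@'")
        return value

    @field_validator("customer_id")
    @classmethod
    def validate_customer_id(cls, value: str) -> str:
        if not value.startswith("CUST-"):
            raise ValueError("customer_id must start with 'CUST-'")
        return value

# Invalid data raises ValidationError; in an agent, the SDK would feed
# these error messages back to the LLM for a corrected retry
try:
    Customer(email="not-an-email", age=200, customer_id="123")
except ValidationError as e:
    print(e.error_count(), "validation errors")
```

The error messages you raise in validators are what the LLM sees on retry, so make them specific and actionable.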

Extraction from Conversation History

Combining multi-turn conversations from Part 4 of the intro series with Structured Output lets you structure information from conversation context.

Python
from typing import Optional
 
agent = Agent(model=bedrock_model, callback_handler=None)
 
# Build up conversation context
agent("My name is Taro and I work as a data scientist at a startup in Tokyo.")
agent("I've been using Python for 5 years and recently started learning Rust.")
 
class ProfileSummary(BaseModel):
    """Summary of a person's profile from conversation."""
    name: str = Field(description="Person's name")
    occupation: str = Field(description="Current job title")
    location: str = Field(description="Where they work")
    primary_language: str = Field(description="Main programming language")
    learning: Optional[str] = Field(description="Language currently learning", default=None)
 
# Extract structured data from conversation history
result = agent(
    "Based on our conversation, extract my profile information.",
    structured_output_model=ProfileSummary,
)
 
profile = result.structured_output
print(f"Name: {profile.name}")
print(f"Occupation: {profile.occupation}")
print(f"Location: {profile.location}")
print(f"Primary: {profile.primary_language}")
print(f"Learning: {profile.learning}")
print(f"\nMessages in history: {len(agent.messages)}")
04_conversation.py full code (copy-paste)
04_conversation.py
from strands import Agent
from strands.models import BedrockModel
from pydantic import BaseModel, Field
from typing import Optional
 
bedrock_model = BedrockModel(
    model_id="us.anthropic.claude-sonnet-4-20250514-v1:0",
    region_name="us-east-1",
)
 
agent = Agent(model=bedrock_model, callback_handler=None)
agent("My name is Taro and I work as a data scientist at a startup in Tokyo.")
agent("I've been using Python for 5 years and recently started learning Rust.")
 
class ProfileSummary(BaseModel):
    """Summary of a person's profile from conversation."""
    name: str = Field(description="Person's name")
    occupation: str = Field(description="Current job title")
    location: str = Field(description="Where they work")
    primary_language: str = Field(description="Main programming language")
    learning: Optional[str] = Field(description="Language currently learning", default=None)
 
result = agent(
    "Based on our conversation, extract my profile information.",
    structured_output_model=ProfileSummary,
)
 
profile = result.structured_output
print(f"Name: {profile.name}")
print(f"Occupation: {profile.occupation}")
print(f"Location: {profile.location}")
print(f"Primary: {profile.primary_language}")
print(f"Learning: {profile.learning}")
print(f"\nMessages in history: {len(agent.messages)}")
Terminal
python -u 04_conversation.py

Result

Output
Name: Taro
Occupation: data scientist
Location: Tokyo
Primary: Python
Learning: Rust
Messages in history: 7

Information accumulated across the first two turns was structured into a ProfileSummary on the third call. The Optional field (learning) was also extracted correctly.

This technique is useful for:

  • Chatbot profile building — Gradually collect user information through conversation, then structure it
  • Meeting notes structuring — Extract decisions and action items from free-form conversation
  • Form input alternative — Gather information conversationally, then output as form data
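
For the meeting-notes case, nested Pydantic models work too, so the LLM can fill lists of sub-objects. MeetingNotes and ActionItem below are hypothetical models for illustration; in a real run you would pass structured_output_model=MeetingNotes to the agent instead of constructing the object by hand:

```python
from typing import List

from pydantic import BaseModel, Field

class ActionItem(BaseModel):
    """A single action item from the meeting."""
    owner: str = Field(description="Person responsible")
    task: str = Field(description="What needs to be done")

class MeetingNotes(BaseModel):
    """Structured summary of a meeting (hypothetical, for illustration)."""
    decisions: List[str] = Field(description="Decisions that were made")
    action_items: List[ActionItem] = Field(description="Tasks assigned during the meeting")

# In an agent this would be:
#   result = agent("Summarize our discussion.", structured_output_model=MeetingNotes)
# Here we construct an instance by hand to show the nested access pattern
notes = MeetingNotes(
    decisions=["Ship v2 next week"],
    action_items=[ActionItem(owner="Taro", task="Update the changelog")],
)
print(notes.action_items[0].owner)
```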

Summary

  • Just pass a Pydantic model for type-safe output — Specify the model class in structured_output_model. No text parsing needed, and IDE type completion works.
  • Internally works as a tool — Structured Output registers the Pydantic model as a tool for the LLM. It appears in metrics as a tool, so you can monitor performance the same way as regular tools.
  • LLM auto-retries on validation failure — Adding field_validator with custom validation causes the SDK to send error messages back to the LLM, which corrects and retries. No need to write retry logic.
  • Combine with conversation history to extract information — Extract structured data from accumulated multi-turn context at any point.


Shinya Tahara

Solutions Architect @ AWS

I'm a Solutions Architect at AWS, providing technical guidance primarily to financial industry customers. I share learnings about cloud architecture and AI/ML on this site. The views and opinions expressed on this site are my own and do not represent the official positions of my employer.
