Strands Agents SDK Deploy — Visualize Agent Traces with OpenTelemetry
Enable the Strands Agents SDK's built-in OpenTelemetry support to visualize agent reasoning and tool calls as traces in Jaeger. Only a two-line code change is required.
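The two-line change looks roughly like this. A minimal sketch, assuming the OTLP endpoint environment variable points at a local Jaeger collector; `StrandsTelemetry` and `setup_otlp_exporter()` are the SDK's telemetry bootstrap as documented, but verify the names against your installed version:

```python
# Point the OTLP exporter at Jaeger before starting, e.g.:
#   export OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4318
from strands import Agent
from strands.telemetry import StrandsTelemetry

StrandsTelemetry().setup_otlp_exporter()  # the "two lines": import + setup

agent = Agent()
agent("What is OpenTelemetry?")  # spans for reasoning and tool calls flow to Jaeger
```

Everything else (span creation around the agent loop and each tool call) happens inside the SDK; no per-call instrumentation is needed.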
Use the AgentCore CLI (Node.js) to deploy a Strands agent to Bedrock AgentCore with just agentcore create + deploy. Verified session isolation across sessions and conversation continuity across invocations within a session.
Deploy a Strands Agents SDK agent to AWS Lambda using the official Lambda Layer. Measured cold start Init Duration at ~1 second and warm start at under 1 second.
Wrap a Strands Agents SDK agent with FastAPI to create an HTTP API, then package it as a Docker container. Covers the async def hang pitfall and SSO credential issues encountered during verification.
Run the same task with Agents as Tools, Swarm, and Graph, then compare structural differences via metrics. Establish selection criteria for choosing the right pattern per use case.
Embed a Swarm as a node in a Graph to combine autonomous collaboration with structured workflows. Verify nested execution results and multi-agent Hooks for node monitoring.
Build deterministic workflows with Strands Agents SDK's Graph pattern. Define sequential, parallel, conditional, and feedback loop workflows with GraphBuilder, verified with real execution results.
Run Strands Agents SDK's Swarm pattern hands-on and verify how agents autonomously hand off tasks to each other. Compare with the Agents as Tools pattern from the intro series.
Deep dive into Strands Agents SDK's result.metrics to analyze cycle counts, token usage, and tool execution times. Compare how tool design choices impact performance and identify optimization targets.
Apply Bedrock Guardrails to Strands Agents SDK for automatic input/output filtering. Verify guardrail intervention behavior and implement shadow mode (monitor-only) using Hooks.
Use Strands Agents SDK Hooks to intercept the agent loop. Log tool calls, limit invocation counts, and modify results with real code examples showing how to control agent behavior in real time.
Use Strands Agents SDK's FileSessionManager to persist conversations to files and restore them after process restarts. Examine the stored data structure and the migration path to S3.
Use Strands Agents SDK's Structured Output to convert LLM responses into type-safe Pydantic objects. Verify tool integration, automatic validation retries, and conversation history extraction with code and metrics.
Export a CVV key wrapped with a KEK via TR-31, import it, and verify that the same CVV2 is generated. Covers key material transfer and KCV-based key identity verification with Python (boto3).
Hands-on verification of the AI-powered A/B testing engine from the AWS blog — implementing context-dependent variant selection with Bedrock Converse API tool use. Discovered that omitting context from the prompt reverses the variant choice, highlighting prompt design as the critical factor.
Coordinate multiple agents using the Agents as Tools pattern in Strands Agents SDK. Build a summarizer and translator as specialized agents, orchestrated by a coordinator — verified with real code and metrics.
Explore multi-turn conversations and SlidingWindowConversationManager in Strands Agents SDK. See how agents remember past exchanges and what happens when the context window fills up — verified with real code.
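The core idea behind the sliding window can be sketched without the SDK. A toy stand-in (class and method names hypothetical, not the SDK's API) that drops the oldest turns once the window is full:

```python
from collections import deque

class SlidingWindow:
    """Toy stand-in for SlidingWindowConversationManager: keeps only the
    most recent N messages so the context never grows unbounded."""

    def __init__(self, window_size: int):
        # deque with maxlen silently evicts the oldest entry on overflow.
        self.messages = deque(maxlen=window_size)

    def append(self, role: str, text: str) -> None:
        self.messages.append({"role": role, "content": text})

    def history(self) -> list:
        return list(self.messages)

w = SlidingWindow(window_size=4)
for i in range(6):
    w.append("user" if i % 2 == 0 else "assistant", f"turn {i}")
# Only the last 4 turns survive; turns 0 and 1 have been evicted.
```

The real manager is more careful than this sketch: it trims whole exchanges and keeps tool-use/tool-result blocks paired so the remaining history stays valid for the model.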
Connect MCP (Model Context Protocol) servers to Strands Agents SDK and extend your agent with external tools. Walk through the AWS documentation search MCP server hands-on, and combine MCP tools with custom @tool functions.
Deep dive into Strands Agents SDK custom tools. Build multi-step tool chains, observe how the LLM handles tool errors gracefully, and control agent behavior with system prompts — all verified with real code and metrics.
Walk through the Strands Agents SDK Python Quickstart hands-on, explaining the agent loop, custom tools, and metrics with working code. A few dozen lines of Python is all it takes.
Strands Agents SDK's agent() is a blocking call. Calling it inside an async def FastAPI endpoint blocks the event loop and hangs the request. Switching to a plain def endpoint (which FastAPI runs in a threadpool) fixes it.
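The pitfall can be reproduced with plain asyncio, no FastAPI or SDK required. Here a `time.sleep` stands in for the blocking `agent()` call, and a heartbeat task counts how often the event loop was free to run other work; `asyncio.to_thread` emulates what FastAPI does for `def` endpoints:

```python
import asyncio
import time

def agent_like_blocking_call():
    # Stand-in for a synchronous agent("...") invocation (timing is made up).
    time.sleep(0.2)
    return "done"

async def run_with_heartbeat(endpoint):
    """Count how many times the event loop got control while `endpoint` ran."""
    ticks = []

    async def heartbeat():
        while True:
            await asyncio.sleep(0.02)
            ticks.append(1)

    hb = asyncio.create_task(heartbeat())
    await endpoint()
    hb.cancel()
    try:
        await hb
    except asyncio.CancelledError:
        pass
    return len(ticks)

async def bad_endpoint():
    # Like `async def` + a blocking call: the loop stalls for the full 0.2 s.
    return agent_like_blocking_call()

async def good_endpoint():
    # Like a plain `def` endpoint: the work is offloaded to a worker thread.
    return await asyncio.to_thread(agent_like_blocking_call)

blocked_ticks = asyncio.run(run_with_heartbeat(bad_endpoint))
free_ticks = asyncio.run(run_with_heartbeat(good_endpoint))
```

`blocked_ticks` comes out at 0 (the loop never got control), while `free_ticks` is roughly 0.2 s / 0.02 s ≈ 9; under concurrent requests the blocked variant would hang every other client the same way.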
Strands Agents SDK Graph with cycles (feedback loops) fails build() auto-detection with ValueError. Use set_entry_point to specify the starting node explicitly.
Strands Agents SDK Graph's result.execution_time returns 0 ms. The per-node execution_time values are correct, so sum them yourself.
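The workaround is a one-line sum over the per-node results. A sketch using a simplified stand-in for the node result object (only the `execution_time` field the workaround needs; the real SDK result carries more):

```python
from dataclasses import dataclass

@dataclass
class NodeResult:
    # Minimal stand-in for a Graph node result (milliseconds).
    execution_time: int

def total_execution_time(node_results: dict) -> int:
    """Sum per-node times, since the top-level execution_time reports 0 ms."""
    return sum(r.execution_time for r in node_results.values())

node_results = {
    "draft": NodeResult(execution_time=1200),
    "review": NodeResult(execution_time=800),
    "revise": NodeResult(execution_time=950),
}
total_ms = total_execution_time(node_results)
```

Note that for graphs with parallel branches a plain sum overstates wall-clock time (it adds up concurrent nodes), so treat it as total compute time rather than elapsed time.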
In Strands Agents SDK Swarm, agents that hand off via handoff_to_agent have no text block in their result. The final text output comes from the last agent that didn't hand off.
When Bedrock Guardrails blocks a request in Strands Agents, the user input in conversation history is auto-replaced with [User input redacted.]. If user input disappears during debugging, this is why.
Pydantic models passed to structured_output_model appear as tools in metrics tool_usage. If you see an unfamiliar tool name during debugging, it's the Structured Output mechanism.
With 3 tool types and 4 total calls needed in Bedrock Converse API tool use, the model requested all 4 in a single response instead of calling them sequentially. The implementation must handle multiple toolUse blocks per turn.
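Handling that case means iterating over every toolUse block in the response and returning one user message containing a matching toolResult per toolUseId. The message shapes below follow the Bedrock Converse API; the `get_ctr` tool and its numbers are made up for illustration:

```python
def extract_tool_uses(response: dict) -> list:
    """Collect every toolUse block from a Converse API response message."""
    content = response["output"]["message"]["content"]
    return [block["toolUse"] for block in content if "toolUse" in block]

def build_tool_result_message(tool_uses: list, handlers: dict) -> dict:
    """Run each requested tool; return one user message with all toolResult blocks."""
    results = []
    for tu in tool_uses:
        value = handlers[tu["name"]](**tu["input"])
        results.append({
            "toolResult": {
                "toolUseId": tu["toolUseId"],  # must echo the model's id
                "content": [{"json": {"value": value}}],
            }
        })
    return {"role": "user", "content": results}

# Simulated model turn requesting two tool calls in a single response.
response = {"output": {"message": {"role": "assistant", "content": [
    {"toolUse": {"toolUseId": "t1", "name": "get_ctr", "input": {"variant": "A"}}},
    {"toolUse": {"toolUseId": "t2", "name": "get_ctr", "input": {"variant": "B"}}},
]}}}
handlers = {"get_ctr": lambda variant: {"A": 0.12, "B": 0.08}[variant]}
message = build_tool_result_message(extract_tool_uses(response), handlers)
```

An implementation that assumes exactly one toolUse per turn silently drops the remaining requests, and the follow-up call then fails because the conversation is missing toolResult blocks for the unanswered toolUseIds.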