AgentCore CLI in Practice — Deploy an AI Agent to AWS in Four Commands
Introduction
Amazon Bedrock AgentCore lets you deploy and operate AI agents securely at scale using any framework and model. It was announced as a preview in July 2025 and became generally available in October 2025. It provides components such as Runtime, Memory, Gateway, and Identity, and is currently available in nine AWS regions.
Two CLI tools exist for creating and deploying AgentCore projects: the Python-based Bedrock AgentCore Starter Toolkit and the Node.js-based AgentCore CLI. According to the maintainers, the AgentCore CLI is the successor to the Starter Toolkit, which is being deprecated. New projects are recommended to use the AgentCore CLI. Both provide an agentcore command, so the AgentCore CLI README instructs users to uninstall the Starter Toolkit first. The AgentCore CLI is still in preview (v0.3.0-preview), with GA in progress.
This article tests the AgentCore CLI.
AgentCore CLI is in Public Preview (v0.3.0-preview). Commands, options, and generated templates may change before GA. This article reflects behavior as of March 2026. Using it for production workloads is not recommended until GA.
This article walks through the full create → dev → deploy → invoke lifecycle using the AgentCore CLI. I wanted to find out whether you can really deploy an agent without touching CDK, how the local development experience feels, and what observability looks like out of the box. See the official docs for the full reference.
Prerequisites
- Node.js 20+
- uv (for Python agents)
- AWS CLI with configured credentials (Bedrock model access required)
- An AWS account in a supported region (us-east-1, us-west-2, ap-southeast-2, eu-central-1, ap-northeast-1)
If uv is not installed:
```shell
curl -LsSf https://astral.sh/uv/install.sh | sh
```

Installation and project creation
Installing the CLI
Install globally via npm.
```shell
npm install -g @aws/agentcore
```

```shell
agentcore --version
0.3.0-preview.6.1
```

If you have the Starter Toolkit installed, uninstall it first to avoid command name conflicts.
Creating a project
agentcore create scaffolds a new project. The --defaults flag selects Python + Strands Agents + Bedrock with no memory.
```shell
agentcore create --name AgentCoreTest --defaults
cd AgentCoreTest
```

Running agentcore create without flags launches an interactive wizard where you can choose the framework (Strands / LangChain / Google ADK / OpenAI Agents), model provider (Bedrock / Anthropic / OpenAI / Gemini), and memory options.
Generated project structure
Excluding node_modules and .venv:
```
AgentCoreTest/
├── agentcore/
│   ├── agentcore.json           # Project configuration (central config file)
│   ├── aws-targets.json         # Deploy targets (initially empty)
│   ├── .env.local               # API keys (gitignored)
│   ├── .cli/deployed-state.json # Deployment state (auto-managed)
│   └── cdk/                     # CDK infra code (auto-generated, no editing needed)
└── app/
    └── AgentCoreTest/
        ├── main.py              # Agent entry point
        ├── model/load.py        # Model configuration
        ├── mcp_client/client.py # MCP client (Exa AI by default)
        └── pyproject.toml       # Python dependencies
```

agentcore.json is the central configuration file where agents, memories, credentials, and evaluators are all declared. All files below are auto-generated by agentcore create — no manual creation needed.
agentcore.json (auto-generated project configuration)
```json
{
  "name": "AgentCoreTest",
  "version": 1,
  "agents": [
    {
      "type": "AgentCoreRuntime",
      "name": "AgentCoreTest",
      "build": "CodeZip",
      "entrypoint": "main.py",
      "codeLocation": "app/AgentCoreTest/",
      "runtimeVersion": "PYTHON_3_12",
      "networkMode": "PUBLIC",
      "modelProvider": "Bedrock",
      "protocol": "HTTP"
    }
  ],
  "memories": [],
  "credentials": [],
  "evaluators": [],
  "onlineEvalConfigs": []
}
```

Generated agent code
The generated main.py uses the Strands Agents SDK with the bedrock-agentcore runtime. The key points:
- The `@tool` decorator defines tools — a sample `add_numbers` is included
- `@app.entrypoint` defines the entry point — user input arrives via `payload.get("prompt")`
- `agent.stream_async()` enables streaming responses — chunks are returned via SSE
main.py (auto-generated agent code)
```python
from strands import Agent, tool
from bedrock_agentcore.runtime import BedrockAgentCoreApp
from model.load import load_model
from mcp_client.client import get_streamable_http_mcp_client

app = BedrockAgentCoreApp()
log = app.logger

# Define a Streamable HTTP MCP Client
mcp_clients = [get_streamable_http_mcp_client()]

# Define a collection of tools used by the model
tools = []

# Define a simple function tool
@tool
def add_numbers(a: int, b: int) -> int:
    """Return the sum of two numbers"""
    return a + b

tools.append(add_numbers)

# Add MCP client to tools if available
for mcp_client in mcp_clients:
    if mcp_client:
        tools.append(mcp_client)

_agent = None

def get_or_create_agent():
    global _agent
    if _agent is None:
        _agent = Agent(
            model=load_model(),
            system_prompt="""
            You are a helpful assistant. Use tools when appropriate.
            """,
            tools=tools,
        )
    return _agent

@app.entrypoint
async def invoke(payload, context):
    log.info("Invoking Agent.....")
    agent = get_or_create_agent()
    # Execute and format response
    stream = agent.stream_async(payload.get("prompt"))
    async for event in stream:
        # Handle Text parts of the response
        if "data" in event and isinstance(event["data"], str):
            yield event["data"]

if __name__ == "__main__":
    app.run()
```

The model configuration defaults to Claude Sonnet 4.5 via a global inference profile.
model/load.py (model configuration)

```python
from strands.models.bedrock import BedrockModel

def load_model() -> BedrockModel:
    """Get Bedrock model client using IAM credentials."""
    return BedrockModel(model_id="global.anthropic.claude-sonnet-4-5-20250929-v1:0")
```

pyproject.toml (Python dependencies)
```toml
[project]
name = "AgentCoreTest"
version = "0.1.0"
requires-python = ">=3.10"
dependencies = [
    "aws-opentelemetry-distro",
    "bedrock-agentcore >= 1.0.3",
    "botocore[crt] >= 1.35.0",
    "mcp >= 1.19.0",
    "strands-agents >= 1.13.0",
]
```

The agentcore/cdk/ directory contains auto-generated CDK infrastructure code. You never need to edit it — the CLI manages CDK based on agentcore.json.
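Since everything hangs off agentcore.json, a quick programmatic sanity check can catch a broken config before a deploy. The sketch below is an informal helper, not an official validator; the field names are taken from the agentcore.json shown earlier in this article.

```python
import json

# Field names observed in the generated agentcore.json shown above;
# this is an informal sanity check, not an official schema validator.
REQUIRED_AGENT_FIELDS = {"type", "name", "build", "entrypoint", "codeLocation"}

def check_project_config(raw: str) -> list[str]:
    """Return a list of problems found in an agentcore.json document."""
    problems = []
    config = json.loads(raw)
    if "name" not in config:
        problems.append("missing project name")
    for agent in config.get("agents", []):
        missing = REQUIRED_AGENT_FIELDS - agent.keys()
        if missing:
            problems.append(
                f"agent {agent.get('name', '?')}: missing {sorted(missing)}"
            )
    return problems
```

Running it against the generated config returns an empty list; a config with a missing entrypoint or code location is flagged per agent.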
Local development
Starting the dev server
agentcore dev starts a local development server. The --logs flag runs in non-interactive mode with logs printed to stdout.
```shell
agentcore dev --logs
```

```
Starting dev server...
  Agent: AgentCoreTest
  Provider: Bedrock
  Server: http://localhost:8080/invocations
→ INFO: Uvicorn running on http://127.0.0.1:8080 (Press CTRL+C to quit)
→ INFO: Started reloader process [18029] using StatReload
→ INFO: Application startup complete.
```

Under the hood, it creates a Python virtual environment with uv, installs dependencies from pyproject.toml, and starts uvicorn with StatReload for hot-reloading on file changes.
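Because the dev server is plain HTTP, you can also invoke it directly without the CLI. A minimal stdlib-only sketch, assuming the server is running: the endpoint path comes from the startup output above, and the "prompt" key matches what `payload.get("prompt")` reads in main.py.

```python
import json
import urllib.request

# Endpoint printed by `agentcore dev` at startup
DEV_SERVER_URL = "http://localhost:8080/invocations"

def build_invocation(prompt: str) -> urllib.request.Request:
    """Build the POST request the dev server expects: a JSON body
    whose "prompt" key feeds payload.get("prompt") in main.py."""
    return urllib.request.Request(
        DEV_SERVER_URL,
        data=json.dumps({"prompt": prompt}).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# With `agentcore dev` running, stream the response line by line:
# with urllib.request.urlopen(build_invocation("What is 3 + 5?")) as resp:
#     for line in resp:
#         print(line.decode(), end="")
```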
Invoking the agent
With the dev server running, open a second terminal, navigate to the project directory, and use agentcore dev --invoke to send a prompt.
```shell
cd AgentCoreTest
agentcore dev --invoke "What is 3 + 5?" --stream
```

```
Provider: Bedrock
The answer is **8**.
```

Explicitly requesting a tool call:

```shell
agentcore dev --invoke "Please use the add_numbers tool to calculate 42 + 58" --stream
```

```
Provider: Bedrock
The answer is **100**.
```

The dev server logs show which tools were called:

```
→ {... "message": "Invoking Agent.....", "sessionId": "local-dev-session"}
→ Tool #1: add_numbers
→ INFO: 127.0.0.1:53698 - "POST /invocations HTTP/1.1" 200 OK
```

The session ID is fixed to local-dev-session during local development.
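On the wire, each streamed chunk arrives as an SSE `data:` line. A small sketch of reassembling the full answer from a raw SSE body; it assumes each `data:` payload is a JSON-encoded string chunk, matching what the generated entrypoint yields. This is an illustrative parser, not part of the CLI.

```python
import json

def assemble_sse_text(raw: str) -> str:
    """Join the text chunks out of a raw SSE response body.

    Assumes each `data:` payload is a JSON-encoded string, as
    produced by the generated streaming entrypoint.
    """
    chunks = []
    for line in raw.splitlines():
        if line.startswith("data:"):
            payload = line[len("data:"):].strip()
            chunks.append(json.loads(payload))
    return "".join(chunks)
```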
Local development limitations
As documented in the local development guide:
| Aspect | Local dev | Deployed |
|---|---|---|
| API keys | .env.local | AgentCore Identity |
| Memory | Not available | AgentCore Memory |
| Gateways | Env vars from deployed state | CDK-injected |
| Networking | localhost | Public |
Memory requires deployment to test — keep this in mind if your agent relies on it. Gateway environment variables are auto-injected from deployed-state.json if you have previously deployed gateways.
Deploying to AWS
Configuring the deploy target
Edit agentcore/aws-targets.json with your account and region. It starts as an empty array [] after agentcore create.
```shell
ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
cat > agentcore/aws-targets.json << EOF
[
  {
    "name": "default",
    "description": "Tokyo (ap-northeast-1)",
    "account": "${ACCOUNT_ID}",
    "region": "ap-northeast-1"
  }
]
EOF
```

Adjust the region to match your environment.
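A deploy into an unsupported region is an easy mistake to make, so a pre-flight check over aws-targets.json can be handy. The sketch below uses the region list from the prerequisites; it is an informal convenience, not something the CLI requires.

```python
import json

# Supported regions listed in this article's prerequisites;
# check the official docs for the current list.
SUPPORTED_REGIONS = {
    "us-east-1", "us-west-2", "ap-southeast-2",
    "eu-central-1", "ap-northeast-1",
}

def unsupported_targets(raw: str) -> list[str]:
    """Return the names of deploy targets whose region is not supported."""
    return [
        target["name"]
        for target in json.loads(raw)
        if target.get("region") not in SUPPORTED_REGIONS
    ]
```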
Running the deployment
Preview with --plan first:
```shell
agentcore deploy --plan --json
```

```json
{
  "success": true,
  "targetName": "default",
  "stackName": "AgentCore-AgentCoreTest-default"
}
```

Then deploy. `-y` auto-confirms, `-v` shows resource-level events. If CDK bootstrap has not been run for the account/region, the CLI handles it automatically.

```shell
agentcore deploy -y -v
```

The following resources were created via CloudFormation:

- `AWS::IAM::Role` + `AWS::IAM::Policy` — Execution role for the agent (with Bedrock model invocation permissions)
- `AWS::BedrockAgentCore::Runtime` — The AgentCore Runtime resource

According to the deploy log, the total time was 1 minute 22 seconds. The CloudFormation deployment itself took about 1 minute, with the rest spent on CDK build/synthesize (~10s) and validation.

```
✓ Deployed to 'default' (stack: AgentCore-AgentCoreTest-default)
Outputs:
  RuntimeArn: arn:aws:bedrock-agentcore:ap-northeast-1:<ACCOUNT_ID>:runtime/AgentCoreTest_AgentCoreTest-yd8lp93Cqp
  RuntimeId: AgentCoreTest_AgentCoreTest-yd8lp93Cqp
```

Checking deployment status
```shell
agentcore status --json
```

```json
{
  "success": true,
  "projectName": "AgentCoreTest",
  "targetName": "default",
  "targetRegion": "ap-northeast-1",
  "resources": [
    {
      "resourceType": "agent",
      "name": "AgentCoreTest",
      "deploymentState": "deployed",
      "detail": "READY"
    }
  ]
}
```

detail: "READY" confirms the agent is ready to accept invocations.
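Because the status output is JSON, it is easy to turn into an automated gate, for example in CI after a deploy. A small sketch; the field names mirror the `agentcore status --json` output above, and the helper itself is an illustration, not part of the CLI.

```python
import json

def all_resources_ready(status_json: str) -> bool:
    """True if every resource in `agentcore status --json` output
    reports deploymentState "deployed" and detail "READY"."""
    status = json.loads(status_json)
    return bool(status.get("success")) and all(
        r.get("deploymentState") == "deployed" and r.get("detail") == "READY"
        for r in status.get("resources", [])
    )
```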
Testing the deployed agent
Invoking the agent
```shell
agentcore invoke "Hello! What is 10 + 20? Please use the add_numbers tool." --stream
```

```
The sum of 10 + 20 is **30**.
```

Tool calling with streaming worked correctly on the deployed agent. The invoke log shows a total response time of about 13 seconds, with SSE chunks arriving incrementally:

```
[12:33:54.704] SSE: data: "The sum"
[12:33:54.803] SSE: data: " of 10 + 20 "
[12:33:54.856] SSE: data: "is **"
[12:33:55.016] SSE: data: "30**."
[12:33:55.181] INVOKE RESPONSE (12751ms)
```

Viewing runtime logs
agentcore logs streams runtime logs from the deployed agent.
```shell
agentcore logs --agent AgentCoreTest --since 10m
```

Logs are in OpenTelemetry structured format. Key observations:

- Distributed tracing — each log entry includes `traceId` and `spanId`
- Model calls — tagged with `gen_ai.system: aws.bedrock`, recording system prompts, user inputs, and assistant responses
- Full tool call trace — the entire flow of `add_numbers({a: 10, b: 20})` → `30` is recorded as tool_calls → toolResult
- Session management — sessions are identified by `session.id` (auto-generated UUIDs after deployment)
This observability comes for free — the aws-opentelemetry-distro dependency in pyproject.toml enables automatic trace and log collection by the AgentCore Runtime.
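Having a traceId on every record means one invocation's model calls and tool results can be stitched back together offline. A minimal sketch of grouping exported JSON log lines by trace; it assumes each line is a JSON object carrying a `traceId` field, as in the structured logs described above, and is purely illustrative.

```python
import json
from collections import defaultdict

def group_by_trace(log_lines: list[str]) -> dict[str, list[dict]]:
    """Group JSON log records by their traceId.

    Records without a traceId (e.g. startup noise) land under
    the key "no-trace".
    """
    traces = defaultdict(list)
    for line in log_lines:
        record = json.loads(line)
        traces[record.get("traceId", "no-trace")].append(record)
    return dict(traces)
```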
One issue: `Failed to export span batch code: 400, reason: Bad Request` errors appeared in the logs. This did not affect agent behavior, but some trace data may be lost. Likely related to the CLI still being in preview.
Summary
- Four commands cover the full lifecycle — `create → dev → deploy → invoke` takes you from zero to a deployed agent. CDK code is auto-generated, so no infrastructure knowledge is needed. The initial deployment completed in about 90 seconds.
- Good local development experience — `agentcore dev` instantly starts a local server with automatic dependency resolution via uv and hot-reloading via uvicorn. The caveat is that Memory is not available locally, so agents using memory require deployment for testing.
- Structured logging out of the box — deployed agents automatically get OpenTelemetry-based structured logging with trace IDs, recording the full flow of model calls and tool executions. `agentcore logs` makes it easy to inspect.
- Declarative, config-driven design — everything is declared in `agentcore.json` (agents, memories, credentials, evaluators) and applied with `agentcore deploy`. Adding and removing resources follows a consistent `agentcore add` / `agentcore remove` → `agentcore deploy` pattern.
This test covered a simple configuration. The CLI also supports Memory (SEMANTIC / SUMMARIZATION / USER_PREFERENCE), Gateway (MCP server integration), and Evaluations (LLM-as-a-Judge quality scoring). The rest of this series explores these features one by one.
As a reminder, the AgentCore CLI is still in preview. The developer experience is solid, but CLI commands, options, and generated code templates may change. Check the GitHub repository for the latest.
Cleanup
```shell
# Remove all resource definitions from the project
agentcore remove all --force

# Delete AWS resources (deploys empty state to tear down the CloudFormation stack)
agentcore deploy -y

# Uninstall the CLI (if no longer needed)
npm uninstall -g @aws/agentcore
```