Install and Use Custom Libraries in Bedrock AgentCore Code Interpreter
Introduction
Amazon Bedrock AgentCore's Code Interpreter provides an isolated sandbox where agents can generate and execute code for data analysis, computation, and visualization. Over 200 Python packages come pre-installed, but real-world use cases inevitably require libraries beyond that set.
This post shares the results of testing three methods for installing and using non-pre-installed libraries within Code Interpreter sessions. The bottom line: both direct pip install from PyPI and offline S3 wheel installation work. However, installed packages do not persist across sessions.
What the Sandbox Actually Looks Like
I started by surveying the environment inside a session.
```
Python 3.12.12
pip 26.0.1 from /opt/amazon/genesis1p-tools/venv/lib64/python3.12/site-packages/pip
aws-cli/2.32.22
genesis1ptools
Linux localhost 6.1.158-15.288.amzn2023.aarch64
```

In the previous post, I verified that AgentCore Runtime's code configuration lacked pip and the AWS CLI. Code Interpreter's sandbox, in contrast, comes with pip, the AWS CLI, and over 200 Python packages pre-installed. This is a significant difference.
Pre-installed Packages (Selection)
Key packages from the pip list output:
| Category | Packages |
|---|---|
| Data analysis | pandas 2.3.1, numpy 1.26.4, polars 1.38.1, duckdb 1.3.2 |
| ML/DL | scikit-learn 1.5.0, torch 2.3.0, xgboost 2.0.3, onnxruntime 1.24.1 |
| Visualization | matplotlib 3.9.0, plotly 5.22.0, seaborn 0.13.2, bokeh 2.4.3 |
| NLP | spacy 3.7.4, nltk 3.9.1, textblob 0.18.0.post0 |
| HTTP | requests 2.32.4, httpx 0.28.1, openai 1.33.0 |
| Web frameworks | fastapi 0.116.1, Flask 3.0.3, Django 5.1.12 |
| PDF/Office | pypdf 6.2.0, pdfplumber 0.11.0, python-docx 1.1.2, openpyxl 3.1.3 |
| AWS | boto3 1.40.30, botocore 1.40.76, s3transfer 0.14.0 |
PyPI was reachable too — `curl https://pypi.org/simple/` returned HTTP 200.
Test Environment Setup
If you only want the results, skip to the next section.
- boto3 1.42.70
- Region: us-east-1
- Code Interpreter: custom-created with `executionRoleArn` (S3 access) and `networkMode: PUBLIC`
Resource Creation
Create an S3 bucket, IAM role, and Code Interpreter in sequence.
```shell
ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
BUCKET_NAME="agentcore-ci-lib-test-${ACCOUNT_ID}"
REGION="us-east-1"

# S3 bucket
aws s3 mb "s3://${BUCKET_NAME}" --region "$REGION"

# IAM role with AgentCore trust policy
aws iam create-role \
  --role-name AgentCoreCITestRole \
  --assume-role-policy-document '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Principal": {"Service": "bedrock-agentcore.amazonaws.com"},
      "Action": "sts:AssumeRole"
    }]
  }'

# S3 access policy
aws iam put-role-policy \
  --role-name AgentCoreCITestRole \
  --policy-name S3Access \
  --policy-document "{
    \"Version\": \"2012-10-17\",
    \"Statement\": [{
      \"Effect\": \"Allow\",
      \"Action\": [\"s3:GetObject\", \"s3:PutObject\", \"s3:ListBucket\"],
      \"Resource\": [
        \"arn:aws:s3:::${BUCKET_NAME}\",
        \"arn:aws:s3:::${BUCKET_NAME}/*\"
      ]
    }]
  }"
```

Creating the Code Interpreter
Code Interpreter creation uses the control plane (bedrock-agentcore-control), while session operations use the data plane (bedrock-agentcore). This separation is easy to miss.
```python
import boto3, time

REGION = "us-east-1"
ACCOUNT_ID = "123456789012"  # Replace with your account ID
ROLE_ARN = f"arn:aws:iam::{ACCOUNT_ID}:role/AgentCoreCITestRole"

control = boto3.client("bedrock-agentcore-control", region_name=REGION)

response = control.create_code_interpreter(
    name="ciLibInstallTest",
    executionRoleArn=ROLE_ARN,
    networkConfiguration={"networkMode": "PUBLIC"},
)
ci_id = response["codeInterpreterId"]

# Poll until READY
while True:
    status = control.get_code_interpreter(codeInterpreterId=ci_id)["status"]
    if status == "READY":
        break
    time.sleep(5)
```

Gotchas:
- Names must match `[a-zA-Z][a-zA-Z0-9_]{0,47}`. Hyphens are not allowed (`ci-test` fails, `ciTest` is OK)
- Readiness status is `READY`, not `ACTIVE` as in AgentCore Runtime
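The naming rule is an easy one to trip over, so it can be worth validating names locally before calling the API. A standalone sketch of such a check, using the pattern quoted above (the candidate names are just examples):

```python
import re

# Code Interpreter name constraint described above
NAME_RE = re.compile(r"[a-zA-Z][a-zA-Z0-9_]{0,47}")

for candidate in ["ci-test", "ciTest", "ciLibInstallTest", "9lives"]:
    ok = NAME_RE.fullmatch(candidate) is not None
    print(f"{candidate}: {'OK' if ok else 'invalid'}")
# → ci-test: invalid / ciTest: OK / ciLibInstallTest: OK / 9lives: invalid
```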
Helper Functions
Common functions used in all tests below. The `name` parameter switches between `executeCode` (run Python) and `executeCommand` (run shell). The `language` parameter must be lowercase (`"python"`, `"javascript"`, `"typescript"`).
```python
data_client = boto3.client("bedrock-agentcore", region_name=REGION)

def start_session():
    return data_client.start_code_interpreter_session(
        codeInterpreterIdentifier=ci_id)["sessionId"]

def run_code(session_id, code):
    resp = data_client.invoke_code_interpreter(
        codeInterpreterIdentifier=ci_id, sessionId=session_id,
        name="executeCode", arguments={"code": code, "language": "python"})
    for event in resp.get("stream", []):
        if "result" in event:
            sc = event["result"].get("structuredContent", {})
            if sc.get("stdout"): print(sc["stdout"])
            if sc.get("stderr"): print(sc["stderr"])

def run_cmd(session_id, command):
    resp = data_client.invoke_code_interpreter(
        codeInterpreterIdentifier=ci_id, sessionId=session_id,
        name="executeCommand", arguments={"command": command})
    for event in resp.get("stream", []):
        if "result" in event:
            sc = event["result"].get("structuredContent", {})
            if sc.get("stdout"): print(sc["stdout"])
            if sc.get("stderr"): print(sc["stderr"])

def stop_session(session_id):
    data_client.stop_code_interpreter_session(
        codeInterpreterIdentifier=ci_id, sessionId=session_id)
```

Verification: Three Installation Methods
I tested with cowsay, a package not included in the pre-installed set.
Method 1: Direct pip install from PyPI (PUBLIC Mode)
The simplest approach. With networkMode: PUBLIC, the sandbox has direct access to PyPI.
```python
session_id = start_session()

# Confirm cowsay is not pre-installed
run_code(session_id, """
try:
    import cowsay
    print(f"cowsay installed: {cowsay.__version__}")
except ImportError:
    print("cowsay: NOT installed")
""")
# → cowsay: NOT installed

# Install directly from PyPI
run_cmd(session_id, "pip install cowsay")
```

```
Collecting cowsay
  Downloading cowsay-6.1-py3-none-any.whl.metadata (5.6 kB)
Downloading cowsay-6.1-py3-none-any.whl (25 kB)
Installing collected packages: cowsay
Successfully installed cowsay-6.1
```

```python
run_code(session_id, 'import cowsay; cowsay.cow("Hello from AgentCore Code Interpreter!")')
stop_session(session_id)
```

```
  ______________________________________
| Hello from AgentCore Code Interpreter! |
  ======================================
        \
         \
           ^__^
           (oo)\_______
           (__)\       )\/\
               ||----w |
               ||     ||
```

Maximum convenience, but depends on PyPI availability and makes version pinning harder.
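If you do install from PyPI at session start, pinning versions in the install command at least keeps runs reproducible. A minimal sketch that builds a pinned install command from a mapping (the package/version pairs here are illustrative, not from the verification above):

```python
# Illustrative pins; substitute the libraries your agent actually needs
pins = {"cowsay": "6.1", "tabulate": "0.9.0"}

# pip accepts multiple `pkg==version` specifiers in one invocation
cmd = "pip install " + " ".join(
    f"{pkg}=={ver}" for pkg, ver in sorted(pins.items())
)
print(cmd)
# → pip install cowsay==6.1 tabulate==0.9.0
```

The resulting string can then be passed to `run_cmd` at the start of each session.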
Method 2: S3 Wheel Download + Offline Install
The robust approach for production. Pre-stage wheel files in S3, then download and install within the session.
```shell
# Preparation (local or CI)
pip download cowsay --dest ./wheels/ --only-binary=:all:
aws s3 sync ./wheels/ "s3://${BUCKET_NAME}/cowsay_wheels/"
```

```python
session_id = start_session()

# Download wheels from S3
run_cmd(session_id, f"mkdir -p /tmp/cowsay_s3 && aws s3 cp s3://{BUCKET_NAME}/cowsay_wheels/ /tmp/cowsay_s3/ --recursive")
```

```
download: s3://.../cowsay_wheels/cowsay-6.1-py3-none-any.whl to ../../../../tmp/cowsay_s3/cowsay-6.1-py3-none-any.whl
```

```python
# Offline install (no PyPI access needed)
run_cmd(session_id, "pip install --no-index --find-links /tmp/cowsay_s3/ cowsay")
```

```
Looking in links: /tmp/cowsay_s3/
Processing /tmp/cowsay_s3/cowsay-6.1-py3-none-any.whl
Installing collected packages: cowsay
Successfully installed cowsay-6.1
```

```python
run_code(session_id, 'import cowsay; cowsay.tux("Installed from S3 wheels!")')
stop_session(session_id)
```

```
  _________________________
| Installed from S3 wheels! |
  =========================
       \
        \
         \
            .--.
           |o_o |
           |:_/ |
          //   \ \
         (|     | )
        /'\_   _/`\
        \___)=(___/
```

Since `pip install --no-index` requires no network access, this approach should also work in SANDBOX mode when combined with S3 downloads via the execution role (this verification used PUBLIC mode only). Versions are fully pinned.
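Reproducibility can be pushed one step further: pip can verify wheel hashes at install time via `--require-hashes`. A sketch of generating a hash-pinned requirements line for a downloaded wheel (the wheel bytes here are a stand-in; in practice you would read the actual `.whl` file):

```python
import hashlib

# Stand-in for: open("cowsay-6.1-py3-none-any.whl", "rb").read()
wheel_bytes = b"stand-in wheel contents"
digest = hashlib.sha256(wheel_bytes).hexdigest()

# requirements.txt line usable with `pip install --require-hashes -r requirements.txt`
req_line = f"cowsay==6.1 --hash=sha256:{digest}"
print(req_line)
```

With hash pinning, a tampered or silently rebuilt wheel in the S3 bucket fails the install instead of running.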
Method 3: Direct sys.path Addition (No pip)
Extract a wheel as a ZIP and add it to sys.path without using pip. The same S3 download pattern as Method 2, but using only Python's standard library instead of pip install.
For this test, I staged httpx and its dependency wheels in S3 (separately from the cowsay test) and targeted the pure Python package h11.
```shell
# Preparation (local or CI)
pip download httpx --dest ./httpx_wheels/ --only-binary=:all:
aws s3 sync ./httpx_wheels/ "s3://${BUCKET_NAME}/httpx_wheels/"
```

```python
session_id = start_session()

# Download wheels from S3
run_cmd(session_id, f"mkdir -p /tmp/wheels && aws s3 cp s3://{BUCKET_NAME}/httpx_wheels/ /tmp/wheels/ --recursive")

# Extract wheel and import without pip
run_code(session_id, """
import sys, os, zipfile, importlib

whl_path = "/tmp/wheels/h11-0.16.0-py3-none-any.whl"
extract_dir = "/tmp/lib_direct"
os.makedirs(extract_dir, exist_ok=True)
with zipfile.ZipFile(whl_path, 'r') as z:
    z.extractall(extract_dir)
sys.path.insert(0, extract_dir)

# Ensure the extracted version loads instead of the pre-installed one
if 'h11' in sys.modules:
    del sys.modules['h11']
h11 = importlib.import_module('h11')
print(f"h11 version: {h11.__version__}")
""")
stop_session(session_id)
```

```
h11 version: 0.16.0
```

h11 is pre-installed, but this test confirmed that the extracted module takes priority via sys.path ordering. This approach is limited to pure Python packages (it does not work for packages with C extensions), but it is a useful fallback when pip is unavailable for any reason.
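A related trick worth knowing: for pure Python wheels you can skip the extraction step entirely, because a `.whl` is a plain zip archive and Python's zipimport machinery can load modules straight from an archive placed on `sys.path`. A self-contained sketch with a synthetic wheel (`demomod` is a made-up module for illustration, not a real package):

```python
import importlib, os, sys, tempfile, zipfile

# Build a tiny wheel-shaped zip containing one pure-Python module
tmp_dir = tempfile.mkdtemp()
whl_path = os.path.join(tmp_dir, "demomod-0.1-py3-none-any.whl")
with zipfile.ZipFile(whl_path, "w") as z:
    z.writestr("demomod.py", "VERSION = '0.1'\n")

# Putting the archive itself on sys.path lets zipimport serve the module
sys.path.insert(0, whl_path)
demomod = importlib.import_module("demomod")
print(demomod.VERSION)
# → 0.1
```

Like the extraction approach, this only works for pure Python code; compiled extension modules cannot be imported from inside a zip.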
Cross-Session Persistence
Do pip-installed packages survive across sessions?
```python
# Session 1: Install cowsay → verify
session1 = start_session()
run_cmd(session1, "pip install cowsay")
run_code(session1, "import cowsay; print('Session 1: OK')")
stop_session(session1)

# Session 2: Attempt import in fresh session
session2 = start_session()
run_code(session2, """
try:
    import cowsay
    print("cowsay available: YES")
except ImportError:
    print("cowsay available: NO (not persisted)")
""")
stop_session(session2)
```

```
cowsay available: NO (not persisted)
```

Pip-installed packages do not persist across sessions. In a separate test, files created in /tmp were also confirmed absent in new sessions. Each session runs in an isolated sandbox, so you need to run installation at the start of every session.
Within a single session, however, the filesystem is shared across API calls — files downloaded from S3 in one call were accessible in subsequent calls within the same session. This within-session sharing / cross-session isolation model matches the InvokeAgentRuntimeCommand verification.
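Given this no-persistence model, one practical pattern is to fold installation into session startup. A sketch of such a wrapper (`bootstrap_session` is my own name, not part of the API; `start_session` and `run_cmd` are the helpers defined earlier, injected here so the sketch can be exercised with stubs):

```python
def bootstrap_session(start_session, run_cmd, packages):
    """Start a Code Interpreter session and install packages before handing it out."""
    session_id = start_session()
    if packages:
        run_cmd(session_id, "pip install " + " ".join(packages))
    return session_id

# Exercise the flow with stub helpers (the real ones talk to the data plane)
calls = []
sid = bootstrap_session(lambda: "session-1",
                        lambda s, c: calls.append((s, c)),
                        ["cowsay"])
print(sid, calls)
# → session-1 [('session-1', 'pip install cowsay')]
```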
Takeaways
- Code Interpreter is a rich sandbox with pip + AWS CLI — Unlike AgentCore Runtime's code configuration, it ships with 200+ packages, pip, and AWS CLI. Adding custom libraries is straightforward.
- S3 wheel + offline install is the production choice — Since `pip install --no-index` needs no network, it should work with SANDBOX mode too. It pins versions and ensures reproducibility. Direct pip install from PyPI is best for development convenience.
- No cross-session library persistence — Each session is an isolated sandbox. Build an installation script into your session startup flow.
Cleanup
Delete resources in reverse dependency order.
```shell
# Delete Code Interpreter
aws bedrock-agentcore-control delete-code-interpreter \
  --code-interpreter-id "$CI_ID" --region us-east-1

# Delete S3 bucket
aws s3 rb "s3://${BUCKET_NAME}" --force --region us-east-1

# Delete IAM role (delete the inline policy first)
aws iam delete-role-policy --role-name AgentCoreCITestRole --policy-name S3Access
aws iam delete-role --role-name AgentCoreCITestRole
```