
Real-Time Memory Change Detection with Bedrock AgentCore Memory Streaming

Introduction

Delivering personalized AI agent experiences requires long-term memory — insights extracted from past conversations. Amazon Bedrock AgentCore Memory manages this, but detecting memory record changes previously required polling.

On March 12, 2026, AgentCore Memory added streaming notifications. Memory record creation, updates, and deletion are now pushed to Amazon Kinesis Data Streams in real time. No more polling — event-driven architectures become straightforward.

This post verifies the feature in the Tokyo region (ap-northeast-1), covering setup, event type behavior, and content level differences with actual test data.

Prerequisites:

  • AWS CLI v2 (with bedrock-agentcore subcommands available)
  • An AWS account with AgentCore Memory access
  • jq (for decoding Kinesis records)

Memory Record Streaming Overview

Streaming notifications deliver memory record lifecycle events to Kinesis Data Streams via a push-based model.

Event Type           Trigger
StreamingEnabled     Streaming configuration enabled or changed
MemoryRecordCreated  LTM extraction, BatchCreateMemoryRecords API
MemoryRecordUpdated  BatchUpdateMemoryRecords API
MemoryRecordDeleted  DeleteMemoryRecord / BatchDeleteMemoryRecords APIs, consolidation workflows

Two content levels are available:

  • FULL_CONTENT — Includes metadata plus memoryRecordText (the record body)
  • METADATA_ONLY — Metadata only. Retrieve the body via a separate API call if needed

Use cases include memory consolidation into data lakes, triggering downstream workflows, and audit logging of memory changes.
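As a sketch of what a consumer might do with these event types, here is a minimal Python dispatcher. The handler functions are illustrative placeholders, not part of any AgentCore API:

```python
import json

# Illustrative handlers -- replace with real downstream logic.
def on_created(e): return f"created {e['memoryRecordId']}"
def on_updated(e): return f"updated {e['memoryRecordId']}"
def on_deleted(e): return f"deleted {e['memoryRecordId']}"
def on_enabled(e): return "streaming (re)enabled"

HANDLERS = {
    "MemoryRecordCreated": on_created,
    "MemoryRecordUpdated": on_updated,
    "MemoryRecordDeleted": on_deleted,
    "StreamingEnabled": on_enabled,
}

def dispatch(raw_json):
    """Route one decoded stream event to its handler by eventType."""
    event = json.loads(raw_json)["memoryStreamEvent"]
    return HANDLERS[event["eventType"]](event)

print(dispatch('{"memoryStreamEvent":'
               ' {"eventType": "MemoryRecordDeleted", "memoryRecordId": "mem-123"}}'))
# prints: deleted mem-123
```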

Environment Setup

Three resources are needed: a Kinesis Data Stream, an IAM role, and a Memory.

Create a Kinesis Data Stream

Terminal
aws kinesis create-stream \
  --stream-name agentcore-memory-stream \
  --stream-mode-details '{"StreamMode": "ON_DEMAND"}' \
  --region ap-northeast-1

ON_DEMAND mode eliminates the need to pre-plan shard counts.

Create an IAM Role

Create an IAM role that allows AgentCore to write events to Kinesis. The trust policy permits bedrock-agentcore.amazonaws.com, and the permissions policy grants PutRecords and DescribeStream on the stream.

IAM policies and role creation commands
trust-policy.json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "bedrock-agentcore.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
kinesis-policy.json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "kinesis:PutRecords",
        "kinesis:DescribeStream"
      ],
      "Resource": "arn:aws:kinesis:ap-northeast-1:<ACCOUNT_ID>:stream/agentcore-memory-stream"
    }
  ]
}
Terminal
aws iam create-role \
  --role-name AgentCoreMemoryStreamRole \
  --assume-role-policy-document file://trust-policy.json
 
aws iam put-role-policy \
  --role-name AgentCoreMemoryStreamRole \
  --policy-name AgentCoreKinesisAccess \
  --policy-document file://kinesis-policy.json

If your Kinesis Data Stream uses server-side encryption (SSE), add kms:GenerateDataKey permission as well.

Create a Memory with Streaming Enabled

Enable streaming via --stream-delivery-resources in create-memory. The --memory-execution-role-arn parameter is required.

Terminal
aws bedrock-agentcore-control create-memory \
  --name "StreamingTestMemory" \
  --description "Memory with streaming enabled" \
  --event-expiry-duration 30 \
  --memory-execution-role-arn "arn:aws:iam::<ACCOUNT_ID>:role/AgentCoreMemoryStreamRole" \
  --stream-delivery-resources '{
    "resources": [
      {
        "kinesis": {
          "dataStreamArn": "arn:aws:kinesis:ap-northeast-1:<ACCOUNT_ID>:stream/agentcore-memory-stream",
          "contentConfigurations": [
            {
              "type": "MEMORY_RECORDS",
              "level": "FULL_CONTENT"
            }
          ]
        }
      }
    ]
  }' \
  --region ap-northeast-1

Use memory.id from the response as <MEMORY_ID> in subsequent commands. In my test, the status took about 3 minutes to transition from CREATING to ACTIVE.

Terminal (status check)
aws bedrock-agentcore-control get-memory \
  --memory-id "<MEMORY_ID>" \
  --region ap-northeast-1 \
  --query 'memory.status' --output text

That completes the setup. The next section generates events and verifies their delivery.

Event Delivery Verification

Reading Events from Kinesis

Instead of setting up a Lambda consumer, this post reads events directly from Kinesis via the AWS CLI. The following script checks all shards. Run it after each verification step to confirm event delivery.

read-stream.sh (Kinesis event reader)
read-stream.sh
#!/bin/bash
# Read every shard of the stream from the beginning (TRIM_HORIZON)
# and pretty-print each record after Base64-decoding it.
STREAM_NAME="agentcore-memory-stream"
REGION="ap-northeast-1"
 
SHARDS=$(aws kinesis list-shards \
  --stream-name "$STREAM_NAME" --region "$REGION" \
  --query 'Shards[].ShardId' --output text)
 
for SHARD_ID in $SHARDS; do
  # Get an iterator positioned at the oldest record in the shard
  ITERATOR=$(aws kinesis get-shard-iterator \
    --stream-name "$STREAM_NAME" --shard-id "$SHARD_ID" \
    --shard-iterator-type TRIM_HORIZON --region "$REGION" \
    --query 'ShardIterator' --output text)
 
  RESULT=$(aws kinesis get-records \
    --shard-iterator "$ITERATOR" --region "$REGION")
 
  COUNT=$(echo "$RESULT" | jq '.Records | length')
  if [ "$COUNT" -gt 0 ]; then
    echo "=== $SHARD_ID ($COUNT records) ==="
    # Each record's Data field is Base64-encoded JSON
    echo "$RESULT" | jq -r '.Records[].Data' | while read -r data; do
      echo "$data" | base64 -d | jq .
    done
  fi
done

Kinesis records are Base64-encoded, so decoding is required. TRIM_HORIZON reads from the beginning of the stream.
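In a programmatic consumer (a Lambda function, for example), the same decoding step looks like this minimal Python sketch, using only the standard library:

```python
import base64
import json

def decode_record(data_b64):
    """Decode one Kinesis record's Base64 Data field into the event payload."""
    return json.loads(base64.b64decode(data_b64))

# Round-trip a sample payload the way Kinesis would deliver it.
sample = base64.b64encode(
    json.dumps({"memoryStreamEvent": {"eventType": "StreamingEnabled"}}).encode()
).decode()
print(decode_record(sample)["memoryStreamEvent"]["eventType"])  # StreamingEnabled
```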

StreamingEnabled

Once the memory becomes ACTIVE, the first event delivered is StreamingEnabled.

StreamingEnabled event
{
  "memoryStreamEvent": {
    "eventType": "StreamingEnabled",
    "eventTime": "2026-03-20T13:43:44.520995805Z",
    "memoryId": "StreamingTestMemory-Vwl4GD9vJS",
    "message": "Streaming enabled for memory resource: StreamingTestMemory-Vwl4GD9vJS"
  }
}

If you see this event, the streaming configuration is working correctly.

MemoryRecordCreated (Direct Creation)

Create records directly via batch-create-memory-records.

Terminal
aws bedrock-agentcore batch-create-memory-records \
  --memory-id "<MEMORY_ID>" \
  --records '[
    {
      "requestIdentifier": "direct-test-001",
      "content": {"text": "User prefers dark mode in all applications"},
      "namespaces": ["preferences/test-user-001"],
      "timestamp": "'$(date +%s)'"
    }
  ]' \
  --region ap-northeast-1

A MemoryRecordCreated event arrived in Kinesis within seconds.

MemoryRecordCreated event (FULL_CONTENT)
{
  "memoryStreamEvent": {
    "eventType": "MemoryRecordCreated",
    "eventTime": "2026-03-20T13:48:01.822867426Z",
    "memoryId": "StreamingTestMemory-Vwl4GD9vJS",
    "memoryRecordId": "mem-234829a0-6aae-4844-9e05-ab5dd6823545",
    "memoryRecordText": "User prefers dark mode in all applications",
    "namespaces": ["preferences/test-user-001"],
    "createdAt": 1774014477000,
    "memoryStrategyType": "NONE"
  }
}

With FULL_CONTENT, the memoryRecordText field contains the record body. memoryStrategyType is NONE for directly created records. Note that the official documentation schema also defines memoryStrategyId and metadata fields.
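Because memoryStrategyId and metadata may or may not be present, a consumer should read optional fields defensively. A minimal Python sketch:

```python
import json

def summarize_created(raw):
    """Flatten a MemoryRecordCreated event, tolerating absent optional fields."""
    e = json.loads(raw)["memoryStreamEvent"]
    return {
        "id": e["memoryRecordId"],
        "text": e.get("memoryRecordText"),          # absent/null at METADATA_ONLY
        "namespaces": e.get("namespaces", []),
        "strategy": e.get("memoryStrategyType"),
        "strategy_id": e.get("memoryStrategyId"),   # optional per the schema
    }

raw = ('{"memoryStreamEvent": {"eventType": "MemoryRecordCreated",'
       ' "memoryRecordId": "mem-1", "memoryRecordText": "User prefers dark mode",'
       ' "namespaces": ["preferences/test-user-001"], "memoryStrategyType": "NONE"}}')
print(summarize_created(raw)["strategy"])  # NONE
```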

MemoryRecordUpdated and MemoryRecordDeleted

Next, update and delete operations.

Update via batch-update-memory-records. Note that the timestamp field is required.

Terminal (update)
aws bedrock-agentcore batch-update-memory-records \
  --memory-id "<MEMORY_ID>" \
  --records '[
    {
      "memoryRecordId": "<MEMORY_RECORD_ID>",
      "content": {"text": "User prefers dark mode and uses Catppuccin Mocha theme"},
      "namespaces": ["preferences/test-user-001"],
      "timestamp": "'$(date +%s)'"
    }
  ]' \
  --region ap-northeast-1

Delete via delete-memory-record.

Terminal (delete)
aws bedrock-agentcore delete-memory-record \
  --memory-id "<MEMORY_ID>" \
  --memory-record-id "<MEMORY_RECORD_ID>" \
  --region ap-northeast-1

Both operations delivered corresponding events to Kinesis.

MemoryRecordUpdated event
{
  "memoryStreamEvent": {
    "eventType": "MemoryRecordUpdated",
    "eventTime": "2026-03-20T13:49:26.975705431Z",
    "memoryId": "StreamingTestMemory-Vwl4GD9vJS",
    "memoryRecordId": "mem-234829a0-6aae-4844-9e05-ab5dd6823545",
    "memoryRecordText": "User prefers dark mode and uses Catppuccin Mocha theme",
    "namespaces": ["preferences/test-user-001"],
    "createdAt": 1774014477000,
    "memoryStrategyType": "NONE"
  }
}
MemoryRecordDeleted event
{
  "memoryStreamEvent": {
    "eventType": "MemoryRecordDeleted",
    "eventTime": "2026-03-20T13:49:35.451689599Z",
    "memoryId": "StreamingTestMemory-Vwl4GD9vJS",
    "memoryRecordId": "mem-271588a7-1a77-4580-b4ed-8734db426a26"
  }
}

The Updated event reflects the new text, and the same memoryRecordId lets you track the Create → Update lifecycle. The Deleted event contains only identifiers — no memoryRecordText, regardless of content level.
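Since memoryRecordId is stable across the lifecycle, a consumer can fold the event stream into a latest-state view. A minimal Python sketch of that fold:

```python
import json

def track(raw_events, state=None):
    """Fold a sequence of memoryStreamEvents into the latest text per record ID."""
    state = dict(state or {})
    for raw in raw_events:
        e = json.loads(raw)["memoryStreamEvent"]
        rid = e.get("memoryRecordId")
        if e["eventType"] in ("MemoryRecordCreated", "MemoryRecordUpdated"):
            state[rid] = e.get("memoryRecordText")
        elif e["eventType"] == "MemoryRecordDeleted":
            state.pop(rid, None)   # Deleted events carry only identifiers
    return state

events = [
    '{"memoryStreamEvent": {"eventType": "MemoryRecordCreated", "memoryRecordId": "mem-1", "memoryRecordText": "dark mode"}}',
    '{"memoryStreamEvent": {"eventType": "MemoryRecordUpdated", "memoryRecordId": "mem-1", "memoryRecordText": "dark mode, Catppuccin"}}',
    '{"memoryStreamEvent": {"eventType": "MemoryRecordDeleted", "memoryRecordId": "mem-1"}}',
]
print(track(events[:2]))  # {'mem-1': 'dark mode, Catppuccin'}
print(track(events))      # {}
```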

Streaming from Semantic Extraction

The tests above used direct API operations. In practice, long-term memory is extracted asynchronously from conversation data (short-term memory). I verified whether this extraction process also triggers streaming events.

Create a new memory with a semantic memory strategy.

Memory creation command with strategy
Terminal
aws bedrock-agentcore-control create-memory \
  --name "StreamingTestWithStrategy" \
  --description "Memory with strategy and streaming" \
  --event-expiry-duration 30 \
  --memory-execution-role-arn "arn:aws:iam::<ACCOUNT_ID>:role/AgentCoreMemoryStreamRole" \
  --memory-strategies '[
    {
      "semanticMemoryStrategy": {
        "name": "UserPreferences",
        "description": "Extract user preferences",
        "namespaceTemplates": ["{actorId}"]
      }
    }
  ]' \
  --stream-delivery-resources '{
    "resources": [
      {
        "kinesis": {
          "dataStreamArn": "arn:aws:kinesis:ap-northeast-1:<ACCOUNT_ID>:stream/agentcore-memory-stream",
          "contentConfigurations": [
            {
              "type": "MEMORY_RECORDS",
              "level": "FULL_CONTENT"
            }
          ]
        }
      }
    ]
  }' \
  --region ap-northeast-1

namespaceTemplates supports {actorId}, {sessionId}, and {memoryStrategyId} placeholders. Here {actorId} automatically isolates namespaces per user.
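The substitution happens server-side, but its effect can be illustrated with a small hypothetical Python helper (render_namespace is not a real API, just a model of the placeholder expansion):

```python
def render_namespace(template, **values):
    """Model how {actorId}-style placeholders resolve to concrete namespaces."""
    out = template
    for key, val in values.items():
        out = out.replace("{" + key + "}", val)
    return out

print(render_namespace("{actorId}", actorId="test-user-002"))
# prints: test-user-002
print(render_namespace("prefs/{actorId}/{sessionId}", actorId="u1", sessionId="s1"))
# prints: prefs/u1/s1
```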

Once the memory is ACTIVE, submit conversation data.

Terminal
aws bedrock-agentcore create-event \
  --memory-id "<MEMORY_ID>" \
  --actor-id "test-user-002" \
  --session-id "test-session-002" \
  --event-timestamp "$(date -u +"%Y-%m-%dT%H:%M:%S.%3NZ")" \
  --payload '[
    {
      "conversational": {
        "content": {"text": "I am learning Rust and I find the borrow checker challenging but rewarding. I also enjoy writing technical blog posts."},
        "role": "USER"
      }
    },
    {
      "conversational": {
        "content": {"text": "Rust borrow checker is a steep learning curve but it leads to safer code. Writing about your learning is excellent."},
        "role": "ASSISTANT"
      }
    }
  ]' \
  --region ap-northeast-1

About 25 seconds later, 3 MemoryRecordCreated events arrived in Kinesis.

#  memoryRecordText                                                            memoryStrategyType
1  The user is learning Rust programming language.                             SEMANTIC
2  The user finds Rust's borrow checker challenging but rewarding.             SEMANTIC
3  The user enjoys writing technical blog posts about their learning journey.  SEMANTIC

Key observations:

  • memoryStrategyType is SEMANTIC — distinguishable from direct creation (NONE). Consumers can filter by strategy type
  • Individual facts are semantically decomposed from the conversation — a single conversation turn produces multiple memory records
  • Extraction latency is ~25 seconds — real-time but not instant. The async extraction process must complete first
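Filtering on memoryStrategyType is straightforward on the consumer side. This Python sketch keeps only extraction-produced records and drops direct writes:

```python
import json

def extracted_texts(raw_events):
    """Keep the texts of records produced by semantic extraction only."""
    return [
        e["memoryRecordText"]
        for e in (json.loads(r)["memoryStreamEvent"] for r in raw_events)
        if e.get("memoryStrategyType") == "SEMANTIC"
    ]

events = [
    '{"memoryStreamEvent": {"memoryStrategyType": "SEMANTIC", "memoryRecordText": "The user is learning Rust programming language."}}',
    '{"memoryStreamEvent": {"memoryStrategyType": "NONE", "memoryRecordText": "User prefers dark mode"}}',
]
print(extracted_texts(events))
# prints: ['The user is learning Rust programming language.']
```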

FULL_CONTENT vs METADATA_ONLY

I switched the content level to METADATA_ONLY via update-memory and repeated the same operations.

Terminal
aws bedrock-agentcore-control update-memory \
  --memory-id "<MEMORY_ID>" \
  --stream-delivery-resources '{
    "resources": [
      {
        "kinesis": {
          "dataStreamArn": "arn:aws:kinesis:ap-northeast-1:<ACCOUNT_ID>:stream/agentcore-memory-stream",
          "contentConfigurations": [
            {
              "type": "MEMORY_RECORDS",
              "level": "METADATA_ONLY"
            }
          ]
        }
      }
    ]
  }' \
  --region ap-northeast-1

After submitting new conversation data, the extracted events had memoryRecordText set to null.

MemoryRecordCreated event (METADATA_ONLY)
{
  "memoryStreamEvent": {
    "eventType": "MemoryRecordCreated",
    "eventTime": "2026-03-20T13:56:09.765875198Z",
    "memoryRecordId": "mem-c9bf440b-d9d8-4a98-9267-851bdac3e6b8",
    "memoryRecordText": null,
    "memoryStrategyType": "SEMANTIC",
    "memoryId": "StreamingTestWithStrategy-47sXyZ92KK"
  }
}
Field                                     FULL_CONTENT                      METADATA_ONLY
memoryRecordText                          Contains record body              null
Metadata (ID, namespace, strategy, etc.)  Included                          Included
Deletion events                           ID only (same)                    ID only (same)
Use case                                  Downstream processing needs text  Change notification is sufficient

Another finding: changing the content level triggers a new StreamingEnabled event. This provides a built-in audit trail for streaming configuration changes.
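At METADATA_ONLY, a consumer that needs the record body must fetch it separately. In this Python sketch, fetch_fn stands in for that lookup (a GetMemoryRecord-style call; the exact API shape is an assumption here):

```python
import json

def record_text(raw, fetch_fn):
    """Return the record body, falling back to a lookup when the event omits it."""
    e = json.loads(raw)["memoryStreamEvent"]
    text = e.get("memoryRecordText")
    if text is None:
        text = fetch_fn(e["memoryRecordId"])  # separate API call at METADATA_ONLY
    return text

raw = ('{"memoryStreamEvent": {"eventType": "MemoryRecordCreated",'
       ' "memoryRecordId": "mem-9", "memoryRecordText": null}}')
print(record_text(raw, lambda rid: f"<body of {rid}>"))  # <body of mem-9>
```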

CloudWatch Monitoring

AgentCore Memory automatically records streaming delivery metrics under the AWS/Bedrock-AgentCore CloudWatch namespace.

Metric                   Meaning
StreamPublishingSuccess  Number of events successfully delivered to Kinesis
StreamPublishingFailure  Delivery failures (after retries exhausted)
StreamUserError          Failures caused by user-side configuration issues (IAM permissions, invalid KMS key, etc.)

In this verification, StreamPublishingSuccess totaled 11, while StreamPublishingFailure and StreamUserError were both 0.

Terminal
aws cloudwatch get-metric-statistics \
  --namespace "AWS/Bedrock-AgentCore" \
  --metric-name "StreamPublishingSuccess" \
  --dimensions Name=Operation,Value=MemoryStreamEvent \
  --start-time "2026-03-20T13:40:00Z" \
  --end-time "2026-03-20T14:00:00Z" \
  --period 60 --statistics Sum \
  --region ap-northeast-1

For production use, set CloudWatch Alarms on StreamPublishingFailure > 0 and StreamUserError > 0. To filter by a specific memory, add the Resource dimension with the Memory ARN. When delivery fails, CloudWatch Logs records the error code, error message, and affected memoryRecordId, making root cause analysis straightforward.

Takeaways

  • Event-driven without polling — The full memory record lifecycle (Create / Update / Delete) is pushed to Kinesis. No need to design polling intervals or implement retry logic.
  • Async extraction events are streamed too — Semantic strategy extraction triggers MemoryRecordCreated events. The memoryStrategyType field distinguishes direct creation (NONE) from extraction (SEMANTIC).
  • FULL_CONTENT and METADATA_ONLY are switchable at runtime: update-memory changes the level immediately, and a StreamingEnabled event is re-emitted on each change. Choose based on downstream processing requirements.
  • Observability is built into CloudWatch: StreamPublishingSuccess / StreamPublishingFailure / StreamUserError metrics and failure logs are recorded automatically. Set alarms on Failure > 0 to catch delivery issues immediately.

Cleanup

Terminal
# Delete memories (same command for strategy-enabled memories)
aws bedrock-agentcore-control delete-memory \
  --memory-id "<MEMORY_ID>" --region ap-northeast-1
 
# Delete Kinesis Data Stream
aws kinesis delete-stream \
  --stream-name agentcore-memory-stream --region ap-northeast-1
 
# Delete IAM role
aws iam delete-role-policy \
  --role-name AgentCoreMemoryStreamRole \
  --policy-name AgentCoreKinesisAccess
aws iam delete-role --role-name AgentCoreMemoryStreamRole


Shinya Tahara

Solutions Architect @ AWS

I'm a Solutions Architect at AWS, providing technical guidance primarily to financial industry customers. I share learnings about cloud architecture and AI/ML on this site. The views and opinions expressed here are my own and do not represent the official positions of my employer.
