diff --git a/s3-lambda-agentcore/Readme.md b/s3-lambda-agentcore/Readme.md new file mode 100644 index 000000000..16ded9a65 --- /dev/null +++ b/s3-lambda-agentcore/Readme.md @@ -0,0 +1,81 @@ +# Amazon S3 to AWS Lambda to Amazon Bedrock AgentCore +This pattern creates a Lambda function that invokes an agent in AgentCore Runtime when an object is uploaded to an S3 bucket. + +This Terraform template creates two S3 buckets (input and output), an AWS Lambda function, and an agent in AgentCore Runtime. + +Learn more about this pattern at Serverless Land Patterns: https://serverlessland.com/patterns/s3-lambda-agentcore + +Important: this application uses various AWS services and there are costs associated with these services after the Free Tier usage - please see the AWS Pricing page for details. You are responsible for any AWS costs incurred. No warranty is implied in this example. + +## Requirements + +* [Create an AWS account](https://portal.aws.amazon.com/gp/aws/developer/registration/index.html) if you do not already have one and log in. The IAM user that you use must have sufficient permissions to make necessary AWS service calls and manage AWS resources. + +* [AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2.html) installed and configured + +* [Git](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git) installed + +* [Terraform](https://developer.hashicorp.com/terraform/tutorials/aws-get-started/install-cli) installed + +## Deployment Instructions + +1. Create a new directory, navigate to that directory in a terminal and clone the GitHub repository: + +`git clone https://github.com/aws-samples/serverless-patterns` + +2. Change directory to the pattern directory: + +`cd serverless-patterns/s3-lambda-agentcore` + +3. From the command line, initialize Terraform to download and install the providers defined in the configuration: + +`terraform init` + +4. 
From the command line, apply the configuration in the deploy.tf file: + +`terraform apply` + +5. When prompted, enter `yes` to confirm the deployment + +6. Note the outputs from the deployment process; these contain the resource names and ARNs used for testing. + +## How it works + +S3 will invoke the Lambda function when an object is created or updated. It will pass metadata about the new object in the event argument of the Lambda invocation. + +The Lambda function will invoke the agent and pass a URI for the S3 object. + +The agent will categorize the file as architecture, operations, or other and identify some metadata. Then it will send the results back to the Lambda function as JSON. + +The Lambda function will write the metadata to the S3 output bucket. + +## Testing + +Ensure you're in the correct directory (`cd serverless-patterns/s3-lambda-agentcore`). Then run the following script to test with files in the `./test-files` folder. + +```bash +# upload test files to the input bucket +aws s3 cp ./test-files/ s3://$(terraform output -raw s3_input_bucket)/ --recursive +# wait for the agent to process the files +sleep 10 +# download the metadata from the output bucket +aws s3 cp s3://$(terraform output -raw s3_output_bucket)/ ./metadata/ --recursive +``` +You can view the metadata in `./metadata` + +## Cleanup + +1. Ensure you're in the correct directory (`cd serverless-patterns/s3-lambda-agentcore`) + +2. Delete all created resources: + +`terraform destroy` + +3. When prompted, enter `yes` to confirm the destruction + +4. Confirm all created resources have been deleted: + +`terraform show` + +--- + +Copyright 2025 Amazon.com, Inc. or its affiliates. All Rights Reserved. 
+ +SPDX-License-Identifier: MIT-0 \ No newline at end of file diff --git a/s3-lambda-agentcore/agent/Dockerfile b/s3-lambda-agentcore/agent/Dockerfile new file mode 100644 index 000000000..105f178e5 --- /dev/null +++ b/s3-lambda-agentcore/agent/Dockerfile @@ -0,0 +1,8 @@ +FROM python:3 + +COPY requirements.txt ./ +RUN pip install --no-cache-dir -r requirements.txt + +COPY . . + +CMD [ "opentelemetry-instrument", "python", "main.py" ] \ No newline at end of file diff --git a/s3-lambda-agentcore/agent/main.py b/s3-lambda-agentcore/agent/main.py new file mode 100644 index 000000000..5b1f399d8 --- /dev/null +++ b/s3-lambda-agentcore/agent/main.py @@ -0,0 +1,85 @@ +from strands import Agent +from strands_tools import use_aws, current_time +from strands.models import BedrockModel +from bedrock_agentcore.runtime import BedrockAgentCoreApp +from pydantic import BaseModel, Field +from typing import List, Literal +from datetime import datetime, timezone + +app = BedrockAgentCoreApp() + +# Define structured output schema +class FileMetadata(BaseModel): + filename: str = Field(description="The name of the file") + system: str = Field(description="The system or service the file relates to") + keywords: List[str] = Field(description="List of relevant keywords or subjects") + +class FileClassification(BaseModel): + category: Literal["architecture", "operations", "other"] = Field(description="The category of the file") + metadata: FileMetadata = Field(description="Metadata about the file") + reasoning: str = Field(description="The reasoning behind the categorization") + time: str = Field(description="The UTC timestamp of the categorization") + +model_id = "us.amazon.nova-pro-v1:0" +model = BedrockModel( + model_id=model_id, +) + +agent = Agent( + model=model, + tools=[use_aws, current_time], + system_prompt=""" +You are an IT documentation classifier. Your task is to categorize documentation files into one of three categories and extract relevant metadata. + +CATEGORIES: + +1. 
**architecture** - System design and technical architecture documentation including: + - System architecture diagrams and design documents + - Reference architectures + - API specifications and interface definitions + - Data models, database schemas, and ER diagrams + - Technology stack decisions and architecture decision records (ADRs) + - Component interaction diagrams and sequence diagrams + - Infrastructure architecture and network topology + - Security architecture and authentication flows + +2. **operations** - Operational procedures and runbooks including: + - Deployment procedures and release processes + - Troubleshooting guides and incident response playbooks + - Monitoring and alerting setup documentation + - Backup and recovery procedures + - Configuration management and environment setup + - Maintenance schedules and operational checklists + - On-call procedures and escalation paths + +3. **other** - All other documentation including: + - Meeting notes and minutes + - Project plans and timelines + - Training materials and user guides + - General reference documents + - Administrative documentation + +TASK: + +For each file, analyze its content and provide: +- **category**: One of "architecture", "operations", or "other" +- **metadata**: + - **filename**: The name of the file + - **system**: The primary system, service, or component the document relates to + - **keywords**: A list of relevant technical keywords or topics covered + +Base your categorization on the document's primary purpose and content. If a document covers multiple areas, choose the category that best represents its main focus. 
+""" +) + +@app.entrypoint +def strands_agent_bedrock(payload): + """ + Invoke the agent with a payload and return structured output + """ + user_input = payload.get("prompt") + response = agent(user_input, structured_output_model=FileClassification) + return response.structured_output.model_dump() + +if __name__ == "__main__": + app.run() \ No newline at end of file diff --git a/s3-lambda-agentcore/agent/requirements.txt b/s3-lambda-agentcore/agent/requirements.txt new file mode 100644 index 000000000..033423a00 --- /dev/null +++ b/s3-lambda-agentcore/agent/requirements.txt @@ -0,0 +1,7 @@ +strands-agents +strands-agents-tools +uv +boto3 +bedrock-agentcore<=0.1.5 +bedrock-agentcore-starter-toolkit==0.1.14 +aws-opentelemetry-distro>=0.10.0 diff --git a/s3-lambda-agentcore/bin/build.sh b/s3-lambda-agentcore/bin/build.sh new file mode 100755 index 000000000..cf5e23f91 --- /dev/null +++ b/s3-lambda-agentcore/bin/build.sh @@ -0,0 +1,41 @@ +#!/bin/bash + +# Fail fast +set -e + +# This is the order of arguments +ECR_BASE_ARN=${1} +BUILD_FOLDER=${2} +IMAGE_NAME=${3} +IMAGE_URI=${4} +TARGET_AWS_REGION=${5} +MYTAG=$(date +%Y%m%d%H%M%S) + +# Check that aws is installed +which aws >/dev/null || { + echo 'ERROR: aws-cli is not installed' + exit 1 +} + +# Check that docker is installed and running +which docker >/dev/null && docker ps >/dev/null || { + echo 'ERROR: docker is not running' + exit 1 +} + +# Connect into aws +aws ecr get-login-password --region ${TARGET_AWS_REGION} | docker login --username AWS --password-stdin ${ECR_BASE_ARN} || { + echo 'ERROR: aws ecr login failed' + exit 1 +} + +# Build image +docker build --no-cache -t ${IMAGE_NAME} ${BUILD_FOLDER} --platform linux/arm64 + +# Docker Tag and Push +docker tag ${IMAGE_NAME}:latest ${IMAGE_URI}:latest +docker tag ${IMAGE_URI}:latest ${IMAGE_URI}:${MYTAG} +docker push ${IMAGE_URI}:latest +docker push ${IMAGE_URI}:${MYTAG} + +echo "Tags Used for ${IMAGE_NAME} Image are ${MYTAG}" \ No newline at end of file diff 
--git a/s3-lambda-agentcore/deploy.tf b/s3-lambda-agentcore/deploy.tf new file mode 100644 index 000000000..49e63e191 --- /dev/null +++ b/s3-lambda-agentcore/deploy.tf @@ -0,0 +1,235 @@ +provider "aws" { + region = var.aws_region +} + +data "aws_region" "current" {} +data "aws_caller_identity" "current" {} +locals { + region = data.aws_region.current.region + account_id = data.aws_caller_identity.current.account_id + ecr_base_arn = "${local.account_id}.dkr.ecr.${local.region}.amazonaws.com" +} + +resource "random_id" "rand" { + keepers = { + first = "${timestamp()}" + } + byte_length = 8 +} + +################## +### AgentCore #### +################## + +# AgentCore IAM +data "aws_iam_policy_document" "agentcore_assume_role_policy" { + statement { + effect = "Allow" + actions = ["sts:AssumeRole"] + principals { + type = "Service" + identifiers = ["bedrock-agentcore.amazonaws.com"] + } + } +} + +data "aws_iam_policy_document" "agentcore_permissions_policy" { + statement { + actions = ["ecr:GetAuthorizationToken"] + effect = "Allow" + resources = ["*"] + } + statement { + actions = [ + "ecr:BatchGetImage", + "ecr:GetDownloadUrlForLayer" + ] + effect = "Allow" + resources = [aws_ecr_repository.agentcore_repo.arn] + } + statement { + actions = [ + "logs:CreateLogGroup", + "logs:CreateLogStream", + "logs:PutLogEvents", + ] + effect = "Allow" + resources = ["*"] + } + statement { + actions = [ + "xray:PutTraceSegments", + "xray:PutTelemetryRecords" + ] + effect = "Allow" + resources = ["*"] + } + statement { + actions = [ + "bedrock:InvokeModel", + "bedrock:InvokeModelWithResponseStream" + ] + effect = "Allow" + resources = ["*"] + } + statement { + actions = [ + "s3:GetObject", + "s3:ListBucket" + ] + effect = "Allow" + resources = ["*"] + } +} + +resource "aws_iam_role" "agentcore_role" { + name = "serverlessland-s3-lambda-agentcore-agentcore-role" + assume_role_policy = data.aws_iam_policy_document.agentcore_assume_role_policy.json +} + +resource "aws_iam_role_policy" 
"agentcore_role_policy" { + role = aws_iam_role.agentcore_role.id + policy = data.aws_iam_policy_document.agentcore_permissions_policy.json +} + +# Agent code repo +resource "aws_ecr_repository" "agentcore_repo" { + name = "serverlessland-s3-lambda-agentcore-repo" + image_tag_mutability = "MUTABLE" + image_scanning_configuration { + scan_on_push = true + } + force_delete = true +} + +# Build and push agent code +resource "null_resource" "agentcore_code" { + depends_on = [aws_ecr_repository.agentcore_repo] + triggers = { always_run = "${random_id.rand.hex}" } + provisioner "local-exec" { + command = "bash ${path.module}/bin/build.sh ${local.ecr_base_arn} ${path.module}/agent ${aws_ecr_repository.agentcore_repo.name} ${aws_ecr_repository.agentcore_repo.repository_url} ${local.region}" + } +} + +# Agentcore runtime +resource "aws_bedrockagentcore_agent_runtime" "agentcore_runtime" { + agent_runtime_name = substr("serverlessland_s3_lambda_agentcore_agent_${random_id.rand.hex}", 0, 47) + role_arn = aws_iam_role.agentcore_role.arn + + agent_runtime_artifact { + container_configuration { + container_uri = "${aws_ecr_repository.agentcore_repo.repository_url}:latest" + } + } + network_configuration { + network_mode = "PUBLIC" + } + environment_variables = { + LOG_LEVEL = "INFO" + } + depends_on = [null_resource.agentcore_code] +} + +########### +### S3 #### +########### + +# Input Bucket +resource "aws_s3_bucket" "input_bucket" { + bucket = "serverlessland-s3-lambda-agentcore-input-${data.aws_caller_identity.current.account_id}" + force_destroy = true +} + +# Output Bucket +resource "aws_s3_bucket" "output_bucket" { + bucket = "serverlessland-s3-lambda-agentcore-output-${data.aws_caller_identity.current.account_id}" + force_destroy = true +} + +# Lambda permission for S3 to invoke +resource "aws_lambda_permission" "allow_s3" { + statement_id = "AllowS3Invoke" + action = "lambda:InvokeFunction" + function_name = aws_lambda_function.s3_agent_lambda_function.function_name + 
principal = "s3.amazonaws.com" + source_arn = aws_s3_bucket.input_bucket.arn +} + +############## +### Lambda ### +############## + +# Lambda IAM +data "aws_iam_policy_document" "lambda_assume_role_policy" { + statement { + effect = "Allow" + actions = ["sts:AssumeRole"] + principals { + type = "Service" + identifiers = ["lambda.amazonaws.com"] + } + } +} + +data "aws_iam_policy_document" "lambda_permissions_policy" { + statement { + actions = ["ecr:GetAuthorizationToken"] + effect = "Allow" + resources = ["*"] + } + statement { + actions = [ + "logs:CreateLogGroup", + "logs:CreateLogStream", + "logs:PutLogEvents", + "s3:*", + "bedrock-agentcore:*" + ] + effect = "Allow" + resources = ["*"] + } +} + +resource "aws_iam_role" "lambda_role" { + name = "serverlessland-s3-lambda-agentcore-lambda-role" + assume_role_policy = data.aws_iam_policy_document.lambda_assume_role_policy.json +} + +resource "aws_iam_role_policy" "lambda_role_policy" { + role = aws_iam_role.lambda_role.id + policy = data.aws_iam_policy_document.lambda_permissions_policy.json +} + +# Lambda Code +data "archive_file" "zip" { + type = "zip" + source_file = "lambda/invoke_agent.py" + output_path = "lambda/invoke_agent.zip" +} + +# Lambda Function +resource "aws_lambda_function" "s3_agent_lambda_function" { + function_name = "serverlessland-s3-lambda-agentcore-func-${random_id.rand.hex}" + role = aws_iam_role.lambda_role.arn + handler = "invoke_agent.lambda_handler" + runtime = "python3.14" + timeout = 30 + filename = data.archive_file.zip.output_path + source_code_hash = data.archive_file.zip.output_base64sha256 + environment { + variables = { + AGENT_ARN = aws_bedrockagentcore_agent_runtime.agentcore_runtime.agent_runtime_arn + OUTPUT_BUCKET = aws_s3_bucket.output_bucket.id + } + } +} + +# S3->Lambda Trigger +resource "aws_s3_bucket_notification" "aws-lambda-trigger" { + bucket = aws_s3_bucket.input_bucket.id + lambda_function { + lambda_function_arn = aws_lambda_function.s3_agent_lambda_function.arn 
+ events = ["s3:ObjectCreated:*"] + } + depends_on = [aws_lambda_permission.allow_s3] +} \ No newline at end of file diff --git a/s3-lambda-agentcore/example-pattern.json b/s3-lambda-agentcore/example-pattern.json new file mode 100644 index 000000000..1c122c374 --- /dev/null +++ b/s3-lambda-agentcore/example-pattern.json @@ -0,0 +1,59 @@ +{ + "title": "Amazon S3 to AWS Lambda to Amazon Bedrock AgentCore", + "description": "This pattern creates an S3 bucket, which triggers an AWS Lambda function, which invokes an agent in AgentCore.", + "language": "Python", + "level": "200", + "framework": "Terraform", + "introBox": { + "headline": "How it works", + "text": [ + "S3 will invoke the Lambda function when an object is created or updated. It will pass metadata about the new object in the event argument of the Lambda invocation.", + "The Lambda function will invoke the agent and pass a URI for the S3 object.", + "The agent will categorize the file as architecture, operations, or other and identify some metadata. Then it will send the results back to the Lambda function as JSON.", + "The Lambda function will write the metadata to the S3 output bucket." + ] + }, + "gitHub": { + "template": { + "repoURL": "https://github.com/aws-samples/serverless-patterns/tree/main/s3-lambda-agentcore", + "templateURL": "serverless-patterns/s3-lambda-agentcore", + "projectFolder": "s3-lambda-agentcore", + "templateFile": "s3-lambda-agentcore/deploy.tf" + } + }, + "resources": { + "bullets": [ + { + "text": "Trigger AWS Lambda with Amazon S3", + "link": "https://docs.aws.amazon.com/lambda/latest/dg/with-s3.html" + }, + { + "text": "Invoke an AgentCore Runtime agent", + "link": "https://docs.aws.amazon.com/bedrock-agentcore/latest/devguide/runtime-invoke-agent.html" + } + ] + }, + "deploy": { + "text": [ + "terraform apply" + ] + }, + "testing": { + "text": [ + "See the GitHub repo for detailed testing instructions." 
+ ] + }, + "cleanup": { + "text": [ + "terraform destroy" + ] + }, + "authors": [ + { + "name": "Geoffrey Burdett", + "bio": "Sr. Solutions Architect at AWS", + "image": "http://www.geoffreyburdett.com.s3-website-us-east-1.amazonaws.com/images/headshot.jpeg", + "linkedin": "geoffreyburdett" + } + ] +} \ No newline at end of file diff --git a/s3-lambda-agentcore/lambda/invoke_agent.py b/s3-lambda-agentcore/lambda/invoke_agent.py new file mode 100644 index 000000000..2a6ed4598 --- /dev/null +++ b/s3-lambda-agentcore/lambda/invoke_agent.py @@ -0,0 +1,109 @@ +import json, boto3, os, uuid + +# Initialize clients +agent_core_client = boto3.client('bedrock-agentcore') +s3_client = boto3.client('s3') + +def preparePayload(event: dict) -> str: + bucket = event['Records'][0]['s3']['bucket']['name'] + object_key = event['Records'][0]['s3']['object']['key'] + prompt = f"Categorize and identify metadata for this file: s3://{bucket}/{object_key}" + payload = json.dumps({"prompt": prompt}) + return payload + +def lambda_handler(event, context): + print('### event ###') + print(event) + + # Get input file details + bucket = event['Records'][0]['s3']['bucket']['name'] + object_key = event['Records'][0]['s3']['object']['key'] + output_bucket = os.environ.get('OUTPUT_BUCKET') + + agent_arn = os.environ.get('AGENT_ARN') + print('### agent_arn ###') + print(agent_arn) + payload = preparePayload(event) + print('### payload ###') + print(payload) + session_id = str(uuid.uuid4()) + print('### session_id ###') + print(session_id) + + # Invoke the agent + response = agent_core_client.invoke_agent_runtime( + agentRuntimeArn=agent_arn, + runtimeSessionId=session_id, + payload=payload + ) + + # Process the response + raw_response = "" + if "text/event-stream" in response.get("contentType", ""): + # Handle streaming response + content = [] + for line in response["response"].iter_lines(chunk_size=10): + if line: + line = line.decode("utf-8") + if line.startswith("data: "): + line = line[6:] + 
print(line) + content.append(line) + raw_response = "\n".join(content) + print("\nComplete response:", raw_response) + + elif response.get("contentType") == "application/json": + # Handle standard JSON response + content = [] + for chunk in response.get("response", []): + content.append(chunk.decode('utf-8')) + raw_response = ''.join(content) + print(raw_response) + + else: + # Handle raw response + raw_response = str(response) + print(response) + + # Extract JSON from markdown code fences if present + if "```json" in raw_response: + start = raw_response.find("```json") + 7 + end = raw_response.find("```", start) + json_str = raw_response[start:end].strip() + try: + result = json.loads(json_str) + except json.JSONDecodeError: + result = {"response": raw_response} + elif "```" in raw_response: + start = raw_response.find("```") + 3 + end = raw_response.find("```", start) + json_str = raw_response[start:end].strip() + try: + result = json.loads(json_str) + except json.JSONDecodeError: + result = {"response": raw_response} + else: + try: + result = json.loads(raw_response) + except json.JSONDecodeError: + result = {"response": raw_response} + + # Save result to output bucket + output_key = f"{object_key}.json" + s3_client.put_object( + Bucket=output_bucket, + Key=output_key, + Body=json.dumps(result, indent=2), + ContentType='application/json' + ) + print(f"Saved result to s3://{output_bucket}/{output_key}") + + return { + 'statusCode': 200, + 'body': json.dumps({ + 'input': f"s3://{bucket}/{object_key}", + 'output': f"s3://{output_bucket}/{output_key}" + }) + } + + + # This code is provided on a best-effort basis. + # Note that this code has not been tested on edge cases and may cause issues in a production environment. + # Use this code for reference purposes only. 
\ No newline at end of file diff --git a/s3-lambda-agentcore/lambda/invoke_agent.zip b/s3-lambda-agentcore/lambda/invoke_agent.zip new file mode 100644 index 000000000..c3ed0b5fd Binary files /dev/null and b/s3-lambda-agentcore/lambda/invoke_agent.zip differ diff --git a/s3-lambda-agentcore/outputs.tf b/s3-lambda-agentcore/outputs.tf new file mode 100644 index 000000000..55e829cf0 --- /dev/null +++ b/s3-lambda-agentcore/outputs.tf @@ -0,0 +1,21 @@ +# Output value definitions + +output "lambda_arn" { + description = "ARN of the Lambda function" + value = aws_lambda_function.s3_agent_lambda_function.arn +} + +output "agent_arn" { + description = "ARN of the AgentCore runtime agent" + value = aws_bedrockagentcore_agent_runtime.agentcore_runtime.agent_runtime_arn +} + +output "s3_input_bucket" { + description = "Name of the S3 input bucket" + value = aws_s3_bucket.input_bucket.id +} + +output "s3_output_bucket" { + description = "Name of the S3 output bucket" + value = aws_s3_bucket.output_bucket.id +} \ No newline at end of file diff --git a/s3-lambda-agentcore/test-files/Failing over a Multi-AZ DB cluster for Amazon RDS - Amazon Relational Database Service.pdf b/s3-lambda-agentcore/test-files/Failing over a Multi-AZ DB cluster for Amazon RDS - Amazon Relational Database Service.pdf new file mode 100644 index 000000000..4fe501f0a Binary files /dev/null and b/s3-lambda-agentcore/test-files/Failing over a Multi-AZ DB cluster for Amazon RDS - Amazon Relational Database Service.pdf differ diff --git a/s3-lambda-agentcore/test-files/Haiku.txt b/s3-lambda-agentcore/test-files/Haiku.txt new file mode 100644 index 000000000..90abbc613 --- /dev/null +++ b/s3-lambda-agentcore/test-files/Haiku.txt @@ -0,0 +1,3 @@ +Silicon minds wake +Elastic clouds learn and grow +Innovation flows \ No newline at end of file diff --git a/s3-lambda-agentcore/test-files/multi-agent-orchestration-on-aws.pdf b/s3-lambda-agentcore/test-files/multi-agent-orchestration-on-aws.pdf new file mode 100644 index 000000000..ca575aeaa Binary files /dev/null and 
b/s3-lambda-agentcore/test-files/multi-agent-orchestration-on-aws.pdf differ diff --git a/s3-lambda-agentcore/variables.tf b/s3-lambda-agentcore/variables.tf new file mode 100644 index 000000000..2b3401e90 --- /dev/null +++ b/s3-lambda-agentcore/variables.tf @@ -0,0 +1,5 @@ +variable "aws_region" { + type = string + default = "us-east-1" + description = "AWS region in which to deploy the resources (e.g., us-east-1)" +} \ No newline at end of file
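
As a quick sanity check outside AWS, the code-fence extraction that `invoke_agent.py` applies to the agent's reply can be exercised on its own. Below is a hedged, standalone sketch of that logic (the helper name `extract_json` and the `FENCE` constant are illustrative and not part of the diff; the offsets mirror the handler's `+ 7` / `+ 3` fence-length arithmetic):

```python
import json

FENCE = "`" * 3  # a literal triple backtick, built here to avoid nesting fences in this document


def extract_json(raw_response: str) -> dict:
    """Parse a JSON object from an agent reply, tolerating markdown code fences."""
    if FENCE + "json" in raw_response:
        # Strip a ```json ... ``` fence around the payload
        start = raw_response.find(FENCE + "json") + len(FENCE + "json")
        end = raw_response.find(FENCE, start)
        candidate = raw_response[start:end].strip()
    elif FENCE in raw_response:
        # Strip a plain ``` ... ``` fence
        start = raw_response.find(FENCE) + len(FENCE)
        end = raw_response.find(FENCE, start)
        candidate = raw_response[start:end].strip()
    else:
        candidate = raw_response
    try:
        return json.loads(candidate)
    except json.JSONDecodeError:
        # Fall back to wrapping the raw text so the caller always receives a dict
        return {"response": raw_response}


reply = FENCE + 'json\n{"category": "operations"}\n' + FENCE
print(extract_json(reply))  # {'category': 'operations'}
```

This mirrors why the Lambda handler always produces valid JSON for the output bucket: whatever the model returns, the result is either the parsed object or a `{"response": ...}` wrapper.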