
Commit c46e7d0

Merge pull request #2702 from sidd130/sidd130-feature-sqs-lambda-s3

New pattern for SQS-Lambda-S3 using Terraform and Python

2 parents 5d3c42b + 47a54c9

File tree

6 files changed (+441, -0)
Lines changed: 117 additions & 0 deletions
# Amazon SQS to Amazon S3 integration using AWS Lambda

This pattern creates an SQS queue, a Lambda function, and an S3 bucket, along with an event source mapping for the Lambda function and the permissions needed for these resources to interface with one another.

An example of where this pattern could be useful is **handling a large number of deployment requests asynchronously**. Given that deployment requests can vary in application target and payload, this pattern can be employed as an _entry point_ component for deployment systems that receive and process a large number of requests across multiple applications. Requests can be processed in batches, and outcomes can be saved to the S3 bucket, which can in turn trigger notification workflows.
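For reference, the bundled test script (`send_sqs_event.py`) sends a minimal request message of the following shape, where `uniqueID` is a randomly generated UUID:

```
{
  "status": 200,
  "uniqueID": "1b9d6bcd-bbfd-4b2d-9b5d-ab8dfbbd4bed"
}
```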
Learn more about this pattern at Serverless Land Patterns: [SQS to Lambda to S3](https://serverlessland.com/patterns/sqs-lambda-s3)

**Important:** this application uses various AWS services and there are costs associated with these services after the Free Tier usage - please see the [AWS Pricing page](https://aws.amazon.com/pricing/) for details. You are responsible for any AWS costs incurred. No warranty is implied in this example.

## Requirements
* **AWS Resources**<br>
Creation of AWS resources requires the following:
    * [AWS account](https://portal.aws.amazon.com/gp/aws/developer/registration/index.html) - An AWS account is required for creating the various resources. If you do not already have one, create an account and log in. The IAM user that you use must have sufficient permissions to make the necessary AWS service calls and manage AWS resources.
    * [Git](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git) - Required for cloning this repo.
    * [Terraform](https://learn.hashicorp.com/tutorials/terraform/install-cli?in=terraform/aws-get-started) - Terraform is an IaC (Infrastructure as Code) tool used for creating and managing AWS resources through a declarative configuration language.

* **Test Setup**<br>
In order to test this integration, the following are required:
    * [Python](https://wiki.python.org/moin/BeginnersGuide/Download) is required to run the test script.
    * [AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html) is a prerequisite for using the boto3 module in the test script.
## Deployment Instructions

1. Create a new directory, navigate to that directory in a terminal and clone the GitHub repository:

    ```
    git clone https://github.com/aws-samples/serverless-patterns
    ```

1. Change directory to the pattern directory:

    ```
    cd sqs-lambda-s3-terraform-python
    ```
1. Pick a unique name for the target S3 bucket, e.g. `my-bucket-20250329`. Replace the bucket name and AWS region in `variables.tf`:

    ```
    variable "aws_region_name" {
      type        = string
      default     = "ap-south-1"
      description = "AWS Region"
    }

    variable "s3_bucket_name" {
      type        = string
      default     = "my-bucket-20250329"
      description = "S3 Bucket name"
    }
    ```
1. Deploy the AWS resources through Terraform:

    ```
    terraform init -upgrade
    terraform fmt
    terraform validate
    terraform apply -auto-approve
    ```
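    If the deployment succeeds, `terraform apply` finishes with Terraform's summary line, where `<n>` is the number of resources created:

    ```
    Apply complete! Resources: <n> added, 0 changed, 0 destroyed.
    ```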
## How it works

The AWS resources created as a part of this integration are as follows:

* Amazon SQS queue
* AWS Lambda function
* Amazon S3 bucket
* IAM policies and roles

The SQS queue is configured as a trigger for the Lambda function. Whenever a message is posted to the SQS queue, the Lambda function is invoked synchronously. This is useful in scenarios where the message requires some pre-processing before storage.
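For reference, the Lambda function receives each message wrapped in the standard SQS event structure. An abridged example, with placeholder values, is shown below; the `body` field carries the message exactly as sent by the producer:

```
{
  "Records": [
    {
      "messageId": "059f36b4-87a3-44ab-83d2-661975830a7d",
      "body": "{\"status\": 200, \"uniqueID\": \"1b9d6bcd-bbfd-4b2d-9b5d-ab8dfbbd4bed\"}",
      "eventSource": "aws:sqs",
      "eventSourceARN": "arn:aws:sqs:ap-south-1:123456789012:event-collector-queue",
      "awsRegion": "ap-south-1"
    }
  ]
}
```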
## Testing

1. Before the test script can be executed, a few prerequisite steps must be completed:

    1. IAM user creation - [https://docs.aws.amazon.com/IAM/latest/UserGuide/id_users_create.html](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_users_create.html)
    2. Grant permissions to the IAM user - [https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_manage-attach-detach.html](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_manage-attach-detach.html)
    3. Generate an access key pair for the IAM user - [https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html)
    4. Configure the AWS CLI - [https://boto3.amazonaws.com/v1/documentation/api/latest/guide/quickstart.html#configuration](https://boto3.amazonaws.com/v1/documentation/api/latest/guide/quickstart.html#configuration) (see the example session below)
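    A typical interactive `aws configure` session looks like the following; the access key values are placeholders:

    ```
    $ aws configure
    AWS Access Key ID [None]: <access-key-id>
    AWS Secret Access Key [None]: <secret-access-key>
    Default region name [None]: ap-south-1
    Default output format [None]: json
    ```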
1. Update the AWS region in the test script `send_sqs_event.py` with the region in which the SQS queue was created:

    ```
    config = Config(region_name='ap-south-1')
    ```

1. Run the test script:

    ```
    python send_sqs_event.py
    ```
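    The script prints the raw `send_message` response; a successful send includes a `MessageId` and an HTTP 200 status code (abridged here, values are placeholders):

    ```
    {'MD5OfMessageBody': '...', 'MessageId': '6f0d...', 'ResponseMetadata': {'HTTPStatusCode': 200, ...}}
    ```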
1. Check the S3 bucket to see if a new JSON object has been created:

    ```
    aws s3 ls [bucket-name]
    ```

Alternatively, the S3 bucket can be looked up on the AWS Console.
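The contents of a stored object can also be streamed to the terminal with `aws s3 cp`; replace the bucket name and object key with your own values:

```
aws s3 cp s3://[bucket-name]/request_[uniqueID].json -
```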
## Cleanup

1. Delete the AWS resources through Terraform:

    ```
    terraform apply -destroy -auto-approve
    ```
## Resources

* [Amazon Simple Queue Service (Amazon SQS)](https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/welcome.html)
* [AWS Lambda](https://docs.aws.amazon.com/lambda/latest/dg/welcome.html)
* [Amazon S3](https://docs.aws.amazon.com/AmazonS3/latest/userguide/Welcome.html)

----

Copyright 2025 Amazon.com, Inc. or its affiliates. All Rights Reserved.

SPDX-License-Identifier: MIT-0
Lines changed: 23 additions & 0 deletions
import json
import os

import boto3
from botocore.config import Config


def lambda_handler(event, context):
    # Target region and bucket name are injected as environment variables
    # by the Terraform configuration.
    aws_region = os.getenv('AWS_REGION_NAME')
    s3_bucket_ident = os.getenv('S3_BUCKET_NAME')
    config = Config(region_name=aws_region)
    s3_client = boto3.client('s3', config=config)

    # An SQS trigger can deliver a batch of records; store each message
    # body as its own JSON object in the S3 bucket.
    for record in event['Records']:
        request_body = record['body']
        file_name = 'request_' + json.loads(request_body)['uniqueID'] + '.json'
        resp = s3_client.put_object(
            Body=request_body.encode(encoding='utf-8'),
            Bucket=s3_bucket_ident,
            Key=file_name
        )
        print(resp)
Lines changed: 192 additions & 0 deletions
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.41"
    }
  }

  required_version = ">= 1.2.0"
}

provider "aws" {
  region = var.aws_region_name
}

data "archive_file" "lambda_handler_zip_file" {
  type        = "zip"
  source_file = "${path.module}/handler.py"
  output_path = "${path.module}/sqs-lambda-s3.zip"
}

# Lambda function
resource "aws_lambda_function" "event-processor" {
  function_name    = "event-processor"
  filename         = data.archive_file.lambda_handler_zip_file.output_path
  source_code_hash = filebase64sha256(data.archive_file.lambda_handler_zip_file.output_path)
  handler          = "handler.lambda_handler"
  runtime          = "python3.12"
  role             = aws_iam_role.event-processor-exec-role.arn
  environment {
    variables = {
      AWS_REGION_NAME = var.aws_region_name
      S3_BUCKET_NAME  = var.s3_bucket_name
    }
  }
}
# Lambda execution role
resource "aws_iam_role" "event-processor-exec-role" {
  name = "event-processor-exec-role"
  assume_role_policy = jsonencode({
    Version = "2012-10-17",
    Statement = [
      {
        Effect = "Allow"
        Principal = {
          Service = "lambda.amazonaws.com"
        }
        Action = [
          "sts:AssumeRole"
        ]
      }
    ]
  })
}
# Lambda exec role policy
resource "aws_iam_policy" "event-processor-policy" {
  name = "event-processor-policy"
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Effect = "Allow"
        Action = [
          "sqs:ReceiveMessage",
          "sqs:GetQueueAttributes",
          "sqs:DeleteMessage"
        ]
        Resource = aws_sqs_queue.event-collector.arn
      },
      {
        Effect = "Allow"
        Action = [
          "s3:PutObject"
        ]
        Resource = [
          aws_s3_bucket.event-storage.arn,
          "${aws_s3_bucket.event-storage.arn}/*",
        ]
      }
    ]
  })
}
# Attach policy to Lambda execution role for SQS permissions
resource "aws_iam_role_policy_attachment" "lambda-exec-role-policy" {
  policy_arn = aws_iam_policy.event-processor-policy.arn
  role       = aws_iam_role.event-processor-exec-role.name
}

# Attach policy to Lambda exec role for CloudWatch permissions
resource "aws_iam_role_policy_attachment" "lambda-policy" {
  policy_arn = "arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole"
  role       = aws_iam_role.event-processor-exec-role.name
}

# Event source mapping to create a trigger for Lambda to read from the SQS queue
resource "aws_lambda_event_source_mapping" "event-processor-event-src-map" {
  function_name    = aws_lambda_function.event-processor.arn
  event_source_arn = aws_sqs_queue.event-collector.arn
  enabled          = true
  depends_on = [
    aws_lambda_function.event-processor,
    aws_sqs_queue.event-collector,
    aws_sqs_queue_policy.event-collector-policy,
    aws_iam_policy.event-processor-policy
  ]
}
# SQS Queue
resource "aws_sqs_queue" "event-collector" {
  name             = "event-collector-queue"
  max_message_size = 2048
}

# SQS queue policy
resource "aws_sqs_queue_policy" "event-collector-policy" {
  queue_url = aws_sqs_queue.event-collector.url
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Effect = "Allow"
        Principal = {
          Service = "lambda.amazonaws.com"
        }
        Action = [
          "sqs:ReceiveMessage",
          "sqs:GetQueueAttributes",
          "sqs:DeleteMessage"
        ]
        Resource = aws_sqs_queue.event-collector.arn
        Condition = {
          ArnEquals = {
            "aws:SourceArn" = aws_lambda_function.event-processor.arn
          }
        }
      }
    ]
  })

  depends_on = [
    aws_sqs_queue.event-collector,
    aws_lambda_function.event-processor
  ]
}
# S3 bucket
resource "aws_s3_bucket" "event-storage" {
  bucket        = var.s3_bucket_name
  force_destroy = true
  tags = {
    Name = "event-storage"
  }
}

# Bucket policy document
data "aws_iam_policy_document" "bucket-policy" {
  statement {
    effect  = "Allow"
    actions = ["s3:PutObject"]
    principals {
      type = "Service"
      identifiers = [
        "lambda.amazonaws.com"
      ]
    }
    resources = [
      aws_s3_bucket.event-storage.arn,
      "${aws_s3_bucket.event-storage.arn}/*",
    ]
    condition {
      test     = "ArnEquals"
      variable = "aws:SourceArn"
      values   = [aws_lambda_function.event-processor.arn]
    }
  }
}

# Bucket policy
resource "aws_s3_bucket_policy" "event-storage-bucket-policy" {
  bucket = aws_s3_bucket.event-storage.id
  policy = data.aws_iam_policy_document.bucket-policy.json
}
Lines changed: 15 additions & 0 deletions
import json
import uuid

import boto3
from botocore.config import Config

# The region must match the one configured in variables.tf
config = Config(region_name='ap-south-1')
sqs_client = boto3.client('sqs', config=config)

# send_message requires the full queue URL, so resolve it from the queue name
queue_url = sqs_client.get_queue_url(QueueName='event-collector-queue')['QueueUrl']

uniq_id = str(uuid.uuid4())
response = sqs_client.send_message(
    QueueUrl=queue_url,
    MessageBody=json.dumps({"status": 200, "uniqueID": uniq_id})
)
print(response)
