S3 Presigned URL File Upload Demo

This project demonstrates how to use AWS S3 presigned URLs to upload files directly from a frontend application to an S3 bucket.

Workflow

Upload a file:

  1. The frontend calls the POST /file API.
  2. The backend receives the request and generates a presigned POST URL with conditions.
  3. The frontend uses the presigned POST URL to upload the file to the S3 bucket (see the sketch after this list).
  4. An S3 bucket event notification triggers the compressImg Lambda function to create a .webp version of the file.
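
A minimal sketch of steps 1 and 3 on the frontend, assuming the backend responds with the { url, fields } pair produced by createPresignedPost; the endpoint path and the request body fields (fileName, fileType) are assumptions, so adjust them to the actual API contract:

// Hypothetical frontend helper: request a presigned POST, then upload directly to S3.
async function uploadFile(file, apiBaseUrl) {
    // Step 1: ask the backend for a presigned POST (request body shape is an assumption).
    const res = await fetch(`${apiBaseUrl}/file`, {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ fileName: file.name, fileType: file.type }),
    });
    const { url, fields } = await res.json();

    // Step 3: S3 expects a multipart/form-data POST containing every presigned field,
    // with the file appended last.
    const formData = new FormData();
    Object.entries(fields).forEach(([key, value]) => formData.append(key, value));
    formData.append('file', file);

    const upload = await fetch(url, { method: 'POST', body: formData });
    if (!upload.ok) throw new Error(`Upload failed with status ${upload.status}`);
}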

Get the file:

  1. The frontend calls GET /file/:objectKey.
  2. The backend generates a presigned GetObjectCommand URL, returning the .webp version if it exists (see the sketch after this list).
  3. The frontend uses the presigned URL to download the file.
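
A minimal sketch of step 2 on the backend, using the AWS SDK for JavaScript v3; the `${objectKey}.webp` key naming and the HeadObject fallback are assumptions used to illustrate the "get .webp if it exists" behaviour, not the project's exact code:

import { S3Client, GetObjectCommand, HeadObjectCommand } from '@aws-sdk/client-s3';
import { getSignedUrl } from '@aws-sdk/s3-request-presigner';

const s3Client = new S3Client({});

async function getDownloadUrl(objectKey) {
    const bucket = process.env.AWS_S3_BUCKET_NAME;
    // Prefer the compressed .webp version if the compressImg Lambda has already produced it.
    const webpKey = `${objectKey}.webp`;
    let key = objectKey;
    try {
        await s3Client.send(new HeadObjectCommand({ Bucket: bucket, Key: webpKey }));
        key = webpKey;
    } catch {
        // .webp not found (or not created yet); fall back to the original object.
    }
    return getSignedUrl(s3Client, new GetObjectCommand({ Bucket: bucket, Key: key }), {
        expiresIn: 3600, // URL valid for one hour
    });
}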

How to run

# run on local environment
docker compose up -d --build

The local environment uses LocalStack, which does NOT support condition checking.


# deploy to aws
sam build && sam deploy --guided

The output will include an API Gateway URL. Update the frontend's .env file with this URL so file uploads go to the AWS S3 bucket.
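
For example, the .env entry might look like the following (the variable name is a hypothetical placeholder; use whatever key the frontend actually reads):

# frontend/.env (variable name is hypothetical)
API_URL=https://<api-id>.execute-api.<region>.amazonaws.com/<stage>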

# use terraform
cd terraform
make

Hints

  1. Include the Content-Type in Fields if it is used as a condition; the same applies to any other field referenced in Conditions.
import { S3Client } from '@aws-sdk/client-s3';
import { createPresignedPost } from '@aws-sdk/s3-presigned-post';

const s3Client = new S3Client({});

const { url, fields } = await createPresignedPost(s3Client, {
    Bucket: process.env.AWS_S3_BUCKET_NAME,
    Key: objectKey,
    Expires: 3600,
    Fields: {
        'Content-Type': fileType,
    },
    Conditions: [
        ['content-length-range', 0, 5 * 1024 * 1024], // up to 5 MB
        ['starts-with', '$Content-Type', 'image/'],
    ],
});
  2. Referring to the S3 bucket from the Lambda function that handles the bucket's event notifications can cause a circular dependency. See the AWS post "How do I resolve circular dependencies with AWS SAM templates in CloudFormation?". The workaround is to build the bucket name from constants instead of referencing the bucket resource:
Resources:
  ImageUploadBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: !Sub "${AWS::StackName}-${BucketName}-${AWS::AccountId}-${AWS::Region}"

  ImageProcessingFunction:
    Type: AWS::Serverless::Function
    Properties: 
      Policies:
        - S3ReadPolicy:
            BucketName: !Sub "${AWS::StackName}-${BucketName}-${AWS::AccountId}-${AWS::Region}"
        - S3WritePolicy:
            BucketName: !Sub "${AWS::StackName}-${BucketName}-${AWS::AccountId}-${AWS::Region}"
  3. If you encounter a 403 error during docker build when using Terraform, check the Docker setting "useContainerdSnapshotter"; the value should be false.
