- Professor: Miguel Xavier
- Course code: 46504-04
- Semester: 2024/2
- Grade: 9.0
This project is a simple API that describes an event-oriented ticketing system. It allows users to create events and tickets for those events, buy tickets for events created by other users, provide feedback on sellers, and view the feedback left by other users, among other features.
For more specific business rules and project description, please refer to the full document.
Let's talk about the main entities of the application:
The tenant represents a logical grouping of users, events, tickets and transactions. It is used to separate the data of different users, so that each user can only see and interact with their own data. There are three types of tenants: `Admin`, `Seller` and `Buyer`.
The user is one of the central components of the application; it is intrinsically linked to the tenant, as it must always be associated with one. This entity represents the users of the application, who can be `Administrators`, `Sellers` or `Buyers`. The user can create events, tickets and transactions, as well as provide feedback on other users, depending on their role.
The event is an entity that represents a gathering of people for a specific purpose. It is created by an administrator (tenant) and can have a title, description, location and a list of tickets associated with it. The event can be public or private, and only users who have the event's access code can buy tickets for it.
The ticket is an entity that represents the right to attend an event. It is created by a seller (tenant) and has a price, a verification code and a status. The ticket can be bought by a buyer (tenant) and can be used to attend the event. The ticket can be in one of the following statuses: `Available`, `Sold`, `Used` or `Refunded`.
The transaction is an entity that represents the purchase of a ticket by a buyer. It is created by the buyer and has a status, a timestamp and a reference to the ticket that was bought. The transaction can be in one of the following statuses: `Pending`, `Completed` or `Refunded`. The transaction is used to verify the validity of the ticket and to provide feedback on the seller.
The evaluation is an entity that represents the feedback provided by a user about another user. It is created by the buyer and has a rating and a comment. The evaluation is used to provide feedback on the seller and can be used to help other buyers make informed decisions.
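To make the data model more concrete, here is a minimal TypeORM sketch of the ticket entity and its status enum. This is an illustration only: the column names and options are assumptions, not the project's actual schema.

```typescript
import { Column, Entity, PrimaryGeneratedColumn } from 'typeorm';

// Hypothetical enum mirroring the ticket statuses described above.
export enum TicketStatus {
  Available = 'available',
  Sold = 'sold',
  Used = 'used',
  Refunded = 'refunded',
}

// Minimal sketch of the ticket entity; the real schema may differ.
@Entity()
export class Ticket {
  @PrimaryGeneratedColumn('uuid')
  id: string;

  // Stored as a decimal column; TypeORM returns decimals as strings by default.
  @Column('decimal')
  price: string;

  @Column()
  verificationCode: string;

  @Column({ type: 'enum', enum: TicketStatus, default: TicketStatus.Available })
  status: TicketStatus;
}
```

The other entities (user, event, transaction, evaluation) would follow the same pattern, with TypeORM relations such as `@ManyToOne` linking tickets to their events and sellers.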
This project was developed using NestJS with TypeScript and uses a PostgreSQL database managed through TypeORM to store the data. It is deployed on AWS using Docker containers, with Terraform creating the necessary resources and SAM handling the serverless application.
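For orientation, this is roughly how a NestJS root module wires TypeORM to a PostgreSQL database. The environment variable names and options below are illustrative assumptions, not the project's exact configuration:

```typescript
import { Module } from '@nestjs/common';
import { TypeOrmModule } from '@nestjs/typeorm';

// Sketch of the root module connecting NestJS to PostgreSQL via TypeORM.
// Variable names and options are illustrative; the project's config may differ.
@Module({
  imports: [
    TypeOrmModule.forRoot({
      type: 'postgres',
      host: process.env.DB_HOST ?? 'localhost',
      port: Number(process.env.DB_PORT ?? 5432),
      username: process.env.DB_USER,
      password: process.env.DB_PASSWORD,
      database: process.env.DB_NAME,
      autoLoadEntities: true, // register entities declared in feature modules
      synchronize: false, // never auto-sync the schema in production
    }),
  ],
})
export class AppModule {}
```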
More About Docker
Docker is used to create containers for the application and the database, providing a development environment that is as close as possible to the production environment. It is highly recommended to use Docker to run the application. We are also using Docker Hub to store the application image.
We have a `Dockerfile` that contains the configuration for the application container and two `docker-compose` files, one for development and one for production.
For some useful Docker commands, click here.
More About Terraform
Terraform is used to create the infrastructure on AWS. It creates the necessary resources for the application to run on the cloud. We are using Terraform to create a simple EC2 instance with a security group and a key pair, which can be used to log in to the instance.
We have a `main.tf` file that contains the configuration for the resources that will be created on AWS.
For some useful Terraform commands, click here.
More About AWS CLI
The AWS CLI is used to interact with AWS services; it can be used for finer control over the resources created by Terraform, although it is not necessary. It is being used to upload the `docker-compose` file to the S3 bucket.
More About SAM
The AWS Serverless Application Model (SAM) is used to deploy the application as a serverless application on AWS. It creates the necessary resources for the application to run on the cloud and is being used to deploy two lambda functions that will be invoked by an API Gateway.
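As a rough reference for what such a function can look like, here is a minimal TypeScript handler for an API Gateway invocation. This is a hypothetical sketch, not the code of the project's two Lambda functions:

```typescript
import { APIGatewayProxyEvent, APIGatewayProxyResult } from 'aws-lambda';

// Hypothetical handler invoked by API Gateway; returns a simple JSON payload.
export const handler = async (
  event: APIGatewayProxyEvent,
): Promise<APIGatewayProxyResult> => {
  return {
    statusCode: 200,
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ status: 'ok', path: event.path }),
  };
};
```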
Why are we using both Terraform and SAM? The answer is simple: Terraform is used to create the infrastructure on AWS, while SAM is used to deploy the application as a serverless application on AWS. They are used together to create a complete environment for the application to run on the cloud either as a server or as a serverless application.
Now that you have all the necessary tools installed, you can run the application. To start the application with Docker, simply run the following command:
$ docker compose up --build
That's it! The application should be running on http://localhost:8000.
You can access the API documentation built with Swagger on http://localhost:8000/docs.
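For context, the Swagger page at /docs is typically wired up in the NestJS bootstrap like this. The title and module path are assumptions; the project's actual bootstrap may differ:

```typescript
import { NestFactory } from '@nestjs/core';
import { DocumentBuilder, SwaggerModule } from '@nestjs/swagger';
import { AppModule } from './app.module'; // assumed root module path

async function bootstrap() {
  const app = await NestFactory.create(AppModule);

  // Build the OpenAPI document and serve it at /docs.
  const config = new DocumentBuilder()
    .setTitle('Ticketing API') // assumed title
    .setVersion('1.0')
    .build();
  const document = SwaggerModule.createDocument(app, config);
  SwaggerModule.setup('docs', app, document);

  await app.listen(8000);
}
bootstrap();
```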
Now, assuming that you have made changes to the application and want to publish them to the cloud, here's a step-by-step guide on how to do it:
Before exporting the application, you should have your updated AWS credentials in the default directory for your system, following the content in the `credentials.template` file. This is only necessary if you want to deploy the application on AWS (using Terraform and SAM); for local development you can ignore this step.
In order to set up the environment variables you need to have the AWS CLI installed and configured. You can set up your credentials by running the following command:
$ aws configure
# AWS Access Key ID [None]: YOUR_ACCESS_KEY
# AWS Secret Access Key [None]: YOUR_SECRET_KEY
# Default region name [us-east-1]: YOUR_REGION
# Default output format [json]: YOUR_OUTPUT_FORMAT
If you have an `AWS_SESSION_TOKEN`, you will have to manually update the `credentials` file with the necessary information.
On the terminal, you can run the following command to add this field:
# Linux
$ echo "aws_session_token = YOUR_SESSION_TOKEN" >> ~/.aws/credentials
# Windows (CMD)
$ echo aws_session_token = YOUR_SESSION_TOKEN >> %UserProfile%\.aws\credentials
# Windows (Powershell)
$ Add-Content -Path $env:UserProfile\.aws\credentials -Value "aws_session_token = YOUR_SESSION_TOKEN"
To verify if the credentials are correctly set, you can run the following command:
# Check the credentials
$ aws sts get-caller-identity
After setting up the AWS credentials, you should have the necessary values to run the Terraform script.
Inside the `infra` directory, you will find an `aws.auto.tfvars.example` file, which contains the necessary variables for the Terraform script. You can copy this file and rename it to `aws.auto.tfvars`, then fill in the necessary information, or simply run the following command:
$ scripts/set-aws-tfvars.sh
Sometimes we might also need these values as environment variables, in which case you can set them with the following command:
$ scripts/set-aws-env-variables.sh
To verify if the environment variables are correctly set, you can run the following commands:
# Change "TOKEN" to the name of the environment variable you want to check
# Available variables: AWS_PROFILE, AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_SESSION_TOKEN
# Check the environment variables (Linux)
$ echo $TOKEN
# Check the environment variables (Windows - CMD)
$ echo %TOKEN%
# Check the environment variables (Windows - Powershell)
$ echo $env:TOKEN
For the variables, you can simply check if the `aws.auto.tfvars` file is present and properly filled.
In order to deploy the application, you should upload these variables to the HashiCorp Vault. You can do this by accessing the Vault UI and adding the necessary variables.
Remember to also set these credentials on GitHub to allow the deployment to run smoothly.
Now you can proceed with the deployment:
- Update the application image on Docker Hub:
For this step you should have a Docker Hub account and be logged in on the terminal, which can be done with the following command:
$ docker login
Remember to change the image name and tag variables in the `update-image-dockerhub.sh` and `update-image-ecr.sh` scripts to your own image name and tag.
This is the most important step: you must update the image on Docker Hub / ECR so that the instance can download it and run the application with the latest changes.
$ scripts/update-image-dockerhub.sh
$ scripts/update-image-ecr.sh
- (Optional) Update the `docker-compose` file on the S3 bucket:
Remember to change the bucket name variable in the `upload-compose.sh` script to your own bucket name.
Run this only if you have made changes to the `docker-compose.prod.yml` file and want to update the one on the S3 bucket.
Although we are using a public bucket, if you update the file you must manually update the access control list (ACL) on the bucket to allow `Read` access to the `Object` for `Everyone (public access)`. This is necessary because the instance will download the file from the bucket.
$ scripts/upload-compose.sh
- Infrastructure as Code (IaC) with Terraform and SAM:
First, move to the `infra` directory:
$ cd infra
- Initialize the Terraform environment:
This command will download the necessary plugins to run the Terraform script; you should only need to run this once.
$ terraform init
- Push the changes to the cloud:
Since we are using a HashiCorp Vault to store the sensitive information, you should have the necessary permissions to access it. You can use the following command to log in to the Vault:
$ terraform login
Then update the Terraform (`.tf`) files with the necessary information, such as the image name and tag, the bucket name and your organization details.
This command will show you what will be created on AWS; you can review it and then confirm the changes.
$ terraform apply
(Optional) Access your instance (under the EC2 tab) on the AWS console to get the public IP address and access the application.
To enter the instance, move back to the root directory and run the following command:
# Linux
$ ./scripts/access-instance.sh
# Windows (Powershell)
$ .\scripts\access-instance.ps1
Known issues with PowerShell:
- If you get an error message saying that running scripts is disabled on your system, you can run the following command to enable it for the current session:
$ Set-ExecutionPolicy -ExecutionPolicy Bypass -Scope Process
- If you get an error saying permission is denied, open the file location in File Explorer, right-click the my-key.pem file, go to Properties > Security > Edit, and check that your user has at least Read & Execute permissions.
- Build the application (SAM):
This command will build the application using SAM; you should only need to run this once. Sometimes it fails with a long error message; if that happens, run it again.
$ sam build -m ../package.json
- Deploy the application (SAM):
This command will deploy the application on AWS using SAM, you will be asked to provide some information about the deployment.
$ sam deploy --guided
That's it! The serverless application should be running on AWS.
# Build and start the containers
$ docker compose up --build
# List all containers
$ docker ps
# Access the container
$ docker exec -it {{container_id}} sh
# Initialize terraform environment
$ terraform init
# See what will be setup
$ terraform plan
# Push changes to cloud
$ terraform apply
# "Rollback" Changes
$ terraform destroy
# Configure the AWS CLI
$ aws configure
# Verify the credentials
$ aws sts get-caller-identity
# Build the application
$ sam build
# Deploy the application
$ sam deploy --guided
# Remove the application
$ sam delete --stack-name {{stack_name}}
Tests can be run locally if you have Node.js installed. Alternatively, you can run them inside the container (recommended). A sketch of a typical unit test follows the commands below.
# unit tests
$ npm test
# test coverage
$ npm run test:cov
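As promised above, here is a sketch of a typical unit test. It uses Jest with the `@nestjs/testing` utilities; the `TicketsService` below is a hypothetical stand-in, not one of the project's actual services:

```typescript
import { Injectable } from '@nestjs/common';
import { Test } from '@nestjs/testing';

// Hypothetical service under test; the project's actual services differ.
@Injectable()
class TicketsService {
  findAll(): string[] {
    return [];
  }
}

describe('TicketsService', () => {
  let service: TicketsService;

  beforeEach(async () => {
    // Build a lightweight testing module that provides the service.
    const moduleRef = await Test.createTestingModule({
      providers: [TicketsService],
    }).compile();

    service = moduleRef.get(TicketsService);
  });

  it('returns an empty list when there are no tickets', () => {
    expect(service.findAll()).toEqual([]);
  });
});
```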
# Update the application image on Docker Hub
# Must be logged in to Docker Hub on the terminal
$ scripts/update-image-dockerhub.sh
# Update the docker-compose file on the s3 bucket
# Must be logged in to AWS on the terminal and have the necessary permissions/credentials
$ scripts/upload-compose.sh