Step 1: Launch EC2 (RHEL 9):
- Provision an EC2 instance on AWS running Red Hat Enterprise Linux 9.
- Use a t2.large instance type with a 25 GB root volume.
- Connect to the instance using SSH.
- Attach an Elastic IP address (a static public IP): EC2 console → Elastic IPs → Allocate Elastic IP address → name it (e.g., Netflix-eip) → Allocate → then Actions → Associate Elastic IP address → choose the instance → Associate. (A CLI equivalent is sketched below.)
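If you prefer the AWS CLI, here is a minimal sketch of the same Elastic IP steps (the instance and allocation IDs are placeholders you must replace; assumes your credentials and region are already configured):

```bash
# Allocate a new Elastic IP and tag it Netflix-eip
aws ec2 allocate-address --domain vpc \
  --tag-specifications 'ResourceType=elastic-ip,Tags=[{Key=Name,Value=Netflix-eip}]'

# Associate it with your instance (replace both placeholder IDs)
aws ec2 associate-address \
  --instance-id i-0123456789abcdef0 \
  --allocation-id eipalloc-0123456789abcdef0
```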
Step 2: Clone the Code:
- Update all the packages and then clone the code.
- Clone your application's code repository onto the EC2 instance:

# Update packages, install Git, and clone the repository
yum update -y
yum install git -y
git clone https://github.com/madhu123-4/Netflix-clone.git
Step 3: Install Docker and Run the App Using a Container:
- Set up Docker on the EC2 instance:

# Pre-requisite: install Java
sudo su
yum install java-11* -y

# Upgrade the packages (optional)
yum update -y

# Configure the Docker repository - https://download.docker.com/linux/rhel/
vi /etc/yum.repos.d/docker-ce.repo

[docker-ce-stable]
name=Docker CE Stable - $basearch
baseurl=https://download.docker.com/linux/centos/$releasever/$basearch/stable
enabled=1
gpgcheck=1
gpgkey=https://download.docker.com/linux/centos/gpg

# Install the Docker packages
yum install docker-ce docker-ce-cli containerd.io -y

# Enable and start the Docker service
systemctl enable docker
systemctl start docker

# Check the Docker service status
systemctl status docker

# Verify Docker is running
docker version
docker info
Let's break down the Dockerfile's first stage, which is responsible for building the application:
# Define the base image for this stage
FROM node:16.17.0-alpine as builder
# Set the working directory inside the container
WORKDIR /app
# Copy the package.json and yarn.lock files into the container
COPY ./package.json .
COPY ./yarn.lock .
# Install the project dependencies
RUN yarn install
# Copy the rest of the application code into the container
COPY . .
# Set the build argument TMDB_V3_API_KEY as an environment variable
ARG TMDB_V3_API_KEY
ENV VITE_APP_TMDB_V3_API_KEY=${TMDB_V3_API_KEY}
# Set the environment variable for the API endpoint URL
ENV VITE_APP_API_ENDPOINT_URL="https://api.themoviedb.org/3"
# Build the application
RUN yarn build

- FROM node:16.17.0-alpine as builder: This line specifies the base image for this stage. It uses Node.js 16.17.0-alpine, a lightweight Node.js image based on Alpine Linux, optimized for size.
- WORKDIR /app: Sets the working directory inside the container to /app. This is where the application code will be copied and where subsequent commands will be executed.
- COPY ./package.json . and COPY ./yarn.lock .: Copy the package.json and yarn.lock files from the host machine (your local file system) into the container. These files are used for dependency management.
- RUN yarn install: Installs the project dependencies using Yarn. This command reads the package.json and yarn.lock files and installs the necessary packages into the container.
- COPY . .: Copies the rest of the application code into the container. This includes all source code, configuration files, and any other assets needed to build the application.
- ARG TMDB_V3_API_KEY and ENV VITE_APP_TMDB_V3_API_KEY=${TMDB_V3_API_KEY}: Define a build argument TMDB_V3_API_KEY and expose it as the environment variable VITE_APP_TMDB_V3_API_KEY. This allows you to pass an API key to the container at build time, which can be used in your application code.
- ENV VITE_APP_API_ENDPOINT_URL="https://api.themoviedb.org/3": Sets the VITE_APP_API_ENDPOINT_URL environment variable to https://api.themoviedb.org/3, the base URL for the API endpoint used in the application.
- RUN yarn build: Builds the application using the build script defined in your package.json file. This command is typically responsible for transpiling code, bundling assets, and preparing the application for deployment.
Overall, this stage sets up the build environment, installs dependencies, copies the application code, sets environment variables, and builds the application, preparing it for the next stage in the Dockerfile.
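One practical note before the second stage: COPY . . copies everything in the build context, including any local node_modules or previous build output, unless excluded. The repository may already ship one; if not, a minimal .dockerignore (a suggested sketch, not confirmed to be in the repo) keeps the build context small:

```
node_modules
dist
.git
*.log
```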
FROM nginx:stable-alpine
WORKDIR /usr/share/nginx/html
RUN rm -rf ./*
COPY --from=builder /app/dist .
EXPOSE 80
ENTRYPOINT ["nginx", "-g", "daemon off;"]This Dockerfile sets up a multi-stage build for a web application. Here's a detailed explanation of each instruction:
- FROM nginx:stable-alpine: This sets the base image for the final stage. It uses the nginx:stable-alpine image, which is a lightweight Nginx image based on Alpine Linux. Alpine Linux is known for its small size and efficiency, making it a popular choice for Docker images.
- WORKDIR /usr/share/nginx/html: Sets the working directory inside the container to the directory the Nginx server serves files from. In this case, it's /usr/share/nginx/html, the default directory for serving static content in Nginx.
- RUN rm -rf ./*: Removes all existing files and directories in the Nginx HTML directory. This ensures that the directory is clean before copying files from the builder stage. The step is not always necessary but can be used to guarantee a clean state.
- COPY --from=builder /app/dist .: Copies files from the builder stage into the current directory (/usr/share/nginx/html). The --from=builder flag specifies that the files should be copied from the previous build stage named builder, which contains the built static assets of the web application.
- EXPOSE 80: Informs Docker that the container will listen on port 80 at runtime. This does not actually publish the port, but it serves as documentation for anyone running the container to know which ports to publish or map.
- ENTRYPOINT ["nginx", "-g", "daemon off;"]: Sets the default command to run when the container starts. It starts the Nginx server in the foreground (daemon off;). This is a common practice for Docker containers, as it allows Docker to manage the process and keeps the container running as long as the Nginx process is active.
- Build and run your application using Docker containers:
docker build -t netflix .
docker run -d --name netflix -p 8081:80 netflix:latest

# To stop the container and delete the image:
docker stop <containerid>
docker rmi -f netflix
The app will show an error at this point because the build needs a TMDB API key.
Step 4: Get the API Key:
- Open a web browser and navigate to the TMDB (The Movie Database) website.
- Click on "Login" and create an account.
- Once logged in, go to your profile and select "Settings."
- Click on "API" from the left-side panel.
- Create a new API key by clicking "Create" and accepting the terms and conditions.
- Provide the required basic details and click "Submit."
- You will receive your TMDB API key.
Now rebuild the Docker image with your API key:
docker build --build-arg TMDB_V3_API_KEY=<your-api-key> -t netflix .
Phase 2: Security
- SonarQube is a code inspection (static analysis) tool for code quality checks that detects bugs and vulnerabilities early in development.
- SonarQube supports multiple languages such as Java, Go, Ruby, .NET, Python, XML, and more.
- SonarQube detects code duplication.
- SonarQube also suggests what to improve and how to improve the code when bugs or vulnerability issues are found, via a set of rules.
- SonarQube provides criteria (quality gates) for setting project-level code quality checks.
- SonarQube also has proper user and access management features to track issues.
- Install SonarQube and Trivy:
Install SonarQube and Trivy on the EC2 instance to scan for vulnerabilities.

SonarQube:
docker run -d --name sonar -p 9000:9000 sonarqube:lts-community

To access:
publicIP:9000 (by default, username & password are admin)

Trivy:
sudo vim /etc/yum.repos.d/trivy.repo

[trivy]
name=Trivy repository
baseurl=https://aquasecurity.github.io/trivy-repo/rpm/releases/$releasever/$basearch/
gpgcheck=0
enabled=1

sudo yum -y update
sudo yum -y install trivy

To scan an image using Trivy:
trivy image <imageid>
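To focus the scan output, Trivy supports severity filtering; for example (assuming the image was tagged netflix earlier):

```bash
# Report only the most important findings for the locally built image
trivy image --severity HIGH,CRITICAL netflix:latest
```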
- Integrate and Configure SonarQube:
- Integrate SonarQube with your CI/CD pipeline.
- Configure SonarQube to analyze code for quality and security issues (a sample scanner configuration is sketched below).
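The Jenkins pipeline below passes the project name and key on the command line, but you can also keep them in a sonar-project.properties file at the repository root. A minimal sketch (the source path and server URL are assumptions; adjust to your project):

```properties
sonar.projectKey=Netflix
sonar.projectName=Netflix
sonar.sources=src
sonar.host.url=http://<your-ec2-ip>:9000
```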
Phase 3: CI/CD Setup
- Install Jenkins for Automation:
- Install Jenkins on the EC2 instance to automate deployment:

# Pre-requisite: install Java and wget
sudo su
yum install java-11* -y
yum install wget -y

# Download the rpm file
wget https://archives.jenkins-ci.org/redhat-stable/jenkins-2.426.2-1.1.noarch.rpm

# Download and install the key for the Jenkins repository
rpm --import http://pkg.jenkins-ci.org/redhat-rc/jenkins-ci.org.key

# Install the rpm and verify the package
rpm -ivh jenkins-2.426.2-1.1.noarch.rpm
rpm -qa | grep -i jenkins

# Start the Jenkins service
systemctl daemon-reload
systemctl enable jenkins
systemctl start jenkins
systemctl status jenkins

# Note: to stop or restart
systemctl stop jenkins
systemctl restart jenkins

# Hit the URL in a browser: http://<public-ip>:8080

- Access Jenkins in a web browser using the public IP of your EC2 instance:
publicIp:8080
- Install Necessary Plugins in Jenkins:
Go to Manage Jenkins → Plugins → Available Plugins and install the plugins below:
1. Eclipse Temurin Installer (install without restart)
2. SonarQube Scanner (install without restart)
3. NodeJS Plugin (install without restart)
4. Email Extension Plugin

Go to Manage Jenkins → Tools → install JDK (17) and NodeJS (16) → click Apply and Save.

Create the SonarQube token:
Go to Jenkins Dashboard → Manage Jenkins → Credentials → Add Credentials of kind "Secret text" and paste the token generated in SonarQube.
After adding the Sonar token, click Apply and Save.

The Configure System option in Jenkins is used to configure different servers; add your SonarQube server details there.
Global Tool Configuration is used to configure the different tools installed via plugins; we will add the sonar-scanner there.
Create a webhook so SonarQube can report analysis results back to Jenkins (see the sketch below).
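The waitForQualityGate step used in the pipeline below relies on SonarQube calling Jenkins back. In SonarQube, go to Administration → Configuration → Webhooks and add a webhook whose URL points at your Jenkins host (host is a placeholder):

```
http://<your-jenkins-ip>:8080/sonarqube-webhook/
```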
- Configure CI/CD Pipeline in Jenkins:
- Create a CI/CD pipeline in Jenkins to automate your application deployment.
pipeline {
agent any
tools {
jdk 'jdk17'
nodejs 'node16'
}
environment {
SCANNER_HOME = tool 'sonar-scanner'
}
stages {
stage('clean workspace') {
steps {
cleanWs()
}
}
stage('Checkout from Git') {
steps {
git branch: 'main', url: 'https://github.com/madhu123-4/Netflix-clone.git'
}
}
stage("Sonarqube Analysis") {
steps {
withSonarQubeEnv('sonar-server') {
sh '''$SCANNER_HOME/bin/sonar-scanner -Dsonar.projectName=Netflix \
-Dsonar.projectKey=Netflix'''
}
}
}
stage("quality gate") {
steps {
script {
waitForQualityGate abortPipeline: false, credentialsId: 'Sonar-token'
}
}
}
stage('Install Dependencies') {
steps {
sh "npm install"
}
}
}
}
Install Dependency-Check and Docker Tools in Jenkins
Install Dependency-Check Plugin:
- Go to "Dashboard" in your Jenkins web interface.
- Navigate to "Manage Jenkins" → "Manage Plugins."
- Click on the "Available" tab and search for "OWASP Dependency-Check."
- Check the checkbox for "OWASP Dependency-Check" and click on the "Install without restart" button.
Configure Dependency-Check Tool:
- After installing the Dependency-Check plugin, you need to configure the tool.
- Go to "Dashboard" → "Manage Jenkins" → "Global Tool Configuration."
- Find the section for "OWASP Dependency-Check."
- Add the tool's name, e.g., "DP-Check."
- Save your settings.
Install Docker Tools and Docker Plugins:
- Go to "Dashboard" in your Jenkins web interface.
- Navigate to "Manage Jenkins" → "Manage Plugins."
- Click on the "Available" tab and search for "Docker."
- Check the following Docker-related plugins:
- Docker
- Docker Commons
- Docker Pipeline
- Docker API
- docker-build-step
- Click on the "Install without restart" button to install these plugins.
Add DockerHub Credentials:
- To securely handle DockerHub credentials in your Jenkins pipeline, follow these steps:
- Go to "Dashboard" → "Manage Jenkins" → "Manage Credentials."
- Click on "System" and then "Global credentials (unrestricted)."
- Click on "Add Credentials" on the left side.
- Choose "Secret text" as the kind of credentials.
- Enter your DockerHub credentials (Username and Password) and give the credentials an ID (e.g., "docker").
- Click "OK" to save your DockerHub credentials.
Now, you have installed the Dependency-Check plugin, configured the tool, and added Docker-related plugins along with your DockerHub credentials in Jenkins. You can now proceed with configuring your Jenkins pipeline to include these tools and credentials in your CI/CD process.
pipeline{
agent any
tools{
jdk 'jdk17'
nodejs 'node16'
}
environment {
SCANNER_HOME=tool 'sonar-scanner'
}
stages {
stage('clean workspace'){
steps{
cleanWs()
}
}
stage('Checkout from Git'){
steps{
git branch: 'main', url: 'https://github.com/madhu123-4/Netflix-clone.git'
}
}
stage("Sonarqube Analysis "){
steps{
withSonarQubeEnv('sonar-server') {
sh ''' $SCANNER_HOME/bin/sonar-scanner -Dsonar.projectName=Netflix \
-Dsonar.projectKey=Netflix '''
}
}
}
stage("quality gate"){
steps {
script {
waitForQualityGate abortPipeline: false, credentialsId: 'Sonar-token'
}
}
}
stage('Install Dependencies') {
steps {
sh "npm install"
}
}
stage('OWASP FS SCAN') {
steps {
dependencyCheck additionalArguments: '--scan ./ --disableYarnAudit --disableNodeAudit', odcInstallation: 'DP-Check'
dependencyCheckPublisher pattern: '**/dependency-check-report.xml'
}
}
stage('TRIVY FS SCAN') {
steps {
sh "trivy fs . > trivyfs.txt"
}
}
stage("Docker Build & Push"){
steps{
script{
withDockerRegistry(credentialsId: 'docker', toolName: 'docker'){
sh "docker build --build-arg TMDB_V3_API_KEY=<yourapikey> -t netflix ."
sh "docker tag netflix madhu123-4/netflix:latest "
sh "docker push madhu123-4/netflix:latest "
}
}
}
}
stage("TRIVY"){
steps{
sh "trivy image madhu123-4/netflix:latest > trivyimage.txt"
}
}
stage('Deploy to container'){
steps{
sh 'docker run -d --name netflix -p 8081:80 madhu123-4/netflix:latest'
}
}
}
}
If you get a docker login failed error:
sudo su
sudo usermod -aG docker jenkins
sudo systemctl restart jenkins
Phase 4: Monitoring
- Install Prometheus and Grafana:
Set up Prometheus and Grafana to monitor your application.
Installing Prometheus:
First, create a dedicated Linux user for Prometheus and download Prometheus:
sudo useradd --system --no-create-home --shell /bin/false prometheus
wget https://github.com/prometheus/prometheus/releases/download/v2.47.1/prometheus-2.47.1.linux-amd64.tar.gz
Extract Prometheus files, move them, and create directories:
tar -xvf prometheus-2.47.1.linux-amd64.tar.gz
cd prometheus-2.47.1.linux-amd64/
sudo mkdir -p /data /etc/prometheus
sudo mv prometheus promtool /usr/local/bin/
sudo mv consoles/ console_libraries/ /etc/prometheus/
sudo mv prometheus.yml /etc/prometheus/prometheus.yml

Set ownership for the directories:
sudo chown -R prometheus:prometheus /etc/prometheus/ /data/
Create a systemd unit configuration file for Prometheus:
sudo nano /etc/systemd/system/prometheus.service
Add the following content to the prometheus.service file:

[Unit]
Description=Prometheus
Wants=network-online.target
After=network-online.target
StartLimitIntervalSec=500
StartLimitBurst=5

[Service]
User=prometheus
Group=prometheus
Type=simple
Restart=on-failure
RestartSec=5s
ExecStart=/usr/local/bin/prometheus \
  --config.file=/etc/prometheus/prometheus.yml \
  --storage.tsdb.path=/data \
  --web.console.templates=/etc/prometheus/consoles \
  --web.console.libraries=/etc/prometheus/console_libraries \
  --web.listen-address=0.0.0.0:9090 \
  --web.enable-lifecycle

[Install]
WantedBy=multi-user.target

Here's a brief explanation of the key parts in this prometheus.service file:
- User and Group specify the Linux user and group under which Prometheus will run.
- ExecStart is where you specify the Prometheus binary path, the location of the configuration file (prometheus.yml), the storage directory, and other settings.
- --web.listen-address configures Prometheus to listen on all network interfaces on port 9090.
- --web.enable-lifecycle allows management of Prometheus through API calls, such as reloading the configuration.
Enable and start Prometheus:
sudo systemctl enable prometheus
sudo systemctl start prometheus

Verify Prometheus's status:
sudo systemctl status prometheus
You can access Prometheus in a web browser using your server's IP and port 9090:
http://<your-server-ip>:9090

Installing Node Exporter:
Create a system user for Node Exporter and download Node Exporter:
sudo useradd --system --no-create-home --shell /bin/false node_exporter
wget https://github.com/prometheus/node_exporter/releases/download/v1.6.1/node_exporter-1.6.1.linux-amd64.tar.gz
Extract Node Exporter files, move the binary, and clean up:
tar -xvf node_exporter-1.6.1.linux-amd64.tar.gz
sudo mv node_exporter-1.6.1.linux-amd64/node_exporter /usr/local/bin/
rm -rf node_exporter*

Create a systemd unit configuration file for Node Exporter:
sudo nano /etc/systemd/system/node_exporter.service
Add the following content to the node_exporter.service file:

[Unit]
Description=Node Exporter
Wants=network-online.target
After=network-online.target
StartLimitIntervalSec=500
StartLimitBurst=5

[Service]
User=node_exporter
Group=node_exporter
Type=simple
Restart=on-failure
RestartSec=5s
ExecStart=/usr/local/bin/node_exporter --collector.logind

[Install]
WantedBy=multi-user.target

Replace --collector.logind with any additional flags as needed.

Enable and start Node Exporter:
sudo systemctl enable node_exporter
sudo systemctl start node_exporter

Verify the Node Exporter's status:
sudo systemctl status node_exporter
You can access Node Exporter metrics in Prometheus.
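You can also confirm the exporter is serving metrics directly on its default port (9100):

```bash
# Fetch the first few exposed metrics from Node Exporter
curl -s http://localhost:9100/metrics | head
```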
Configure Prometheus Plugin Integration:
Integrate Jenkins with Prometheus to monitor the CI/CD pipeline.
Prometheus Configuration:
To configure Prometheus to scrape metrics from Node Exporter and Jenkins, you need to modify the prometheus.yml file. Here is an example prometheus.yml configuration for your setup:

global:
  scrape_interval: 15s

scrape_configs:
  - job_name: 'node_exporter'
    static_configs:
      - targets: ['localhost:9100']

  - job_name: 'jenkins'
    metrics_path: '/prometheus'
    static_configs:
      - targets: ['<your-jenkins-ip>:<your-jenkins-port>']

Make sure to replace <your-jenkins-ip> and <your-jenkins-port> with the appropriate values for your Jenkins setup.

Check the validity of the configuration file:
promtool check config /etc/prometheus/prometheus.yml
Reload the Prometheus configuration without restarting:
curl -X POST http://localhost:9090/-/reload
You can access Prometheus targets at:
http://<your-prometheus-ip>:9090/targets
#### Grafana
Install Grafana and set it up to work with Prometheus. (The steps below use apt and target Ubuntu 22.04; if you are installing on the RHEL 9 instance used earlier, a yum-based sketch follows this note.)
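A minimal RHEL/CentOS equivalent, assuming Grafana's documented RPM repository (verify the URLs against grafana.com/docs before use):

```bash
# Add the Grafana OSS RPM repository
sudo tee /etc/yum.repos.d/grafana.repo <<'EOF'
[grafana]
name=grafana
baseurl=https://packages.grafana.com/oss/rpm
repo_gpgcheck=1
enabled=1
gpgcheck=1
gpgkey=https://packages.grafana.com/gpg.key
EOF

# Install, enable, and start Grafana
sudo yum install -y grafana
sudo systemctl enable --now grafana-server
```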
Step 1: Install Dependencies:
First, ensure that all necessary dependencies are installed:
sudo apt-get update
sudo apt-get install -y apt-transport-https software-properties-common

Step 2: Add the GPG Key:
Add the GPG key for Grafana:
wget -q -O - https://packages.grafana.com/gpg.key | sudo apt-key add -

Step 3: Add Grafana Repository:
Add the repository for Grafana stable releases:
echo "deb https://packages.grafana.com/oss/deb stable main" | sudo tee -a /etc/apt/sources.list.d/grafana.listStep 4: Update and Install Grafana:
Update the package list and install Grafana:
sudo apt-get update
sudo apt-get -y install grafana

Step 5: Enable and Start Grafana Service:
To automatically start Grafana after a reboot, enable the service:
sudo systemctl enable grafana-server

Then, start Grafana:
sudo systemctl start grafana-server

Step 6: Check Grafana Status:
Verify the status of the Grafana service to ensure it's running correctly:
sudo systemctl status grafana-server

Step 7: Access Grafana Web Interface:
Open a web browser and navigate to Grafana using your server's IP address. The default port for Grafana is 3000. For example:
http://<your-server-ip>:3000
You'll be prompted to log in to Grafana. The default username is "admin," and the default password is also "admin."
Step 8: Change the Default Password:
When you log in for the first time, Grafana will prompt you to change the default password for security reasons. Follow the prompts to set a new password.
Step 9: Add Prometheus Data Source:
To visualize metrics, you need to add a data source. Follow these steps:
- Click on the gear icon (⚙️) in the left sidebar to open the "Configuration" menu.
- Select "Data Sources."
- Click on the "Add data source" button.
- Choose "Prometheus" as the data source type.
- In the "HTTP" section:
  - Set the "URL" to http://localhost:9090 (assuming Prometheus is running on the same server).
  - Click the "Save & Test" button to ensure the data source is working.
Step 10: Import a Dashboard:
To make it easier to view metrics, you can import a pre-configured dashboard. Follow these steps:
- Click on the "+" (plus) icon in the left sidebar to open the "Create" menu.
- Select "Dashboard."
- Click on the "Import" dashboard option.
- Enter the dashboard code you want to import (e.g., 1860, the Node Exporter Full dashboard).
- Click the "Load" button.
- Select the data source you added (Prometheus) from the dropdown.
- Click on the "Import" button.
You should now have a Grafana dashboard set up to visualize metrics from Prometheus.
Grafana is a powerful tool for creating visualizations and dashboards, and you can further customize it to suit your specific monitoring needs.
That's it! You've successfully installed and set up Grafana to work with Prometheus for monitoring and visualization.
Phase 5: Notification
- Implement Notification Services:
- Set up email notifications in Jenkins or other notification mechanisms.
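With the Email Extension plugin installed earlier, one way to wire notifications is a post block appended after the stages block of the pipeline above; a hedged sketch (SMTP must be configured under Manage Jenkins → System, and the address is a placeholder):

```groovy
post {
    always {
        // Email the build result plus the Trivy reports generated by the scan stages
        emailext(
            subject: "${env.JOB_NAME} #${env.BUILD_NUMBER} - ${currentBuild.currentResult}",
            body: "Build URL: ${env.BUILD_URL}",
            to: 'you@example.com',
            attachmentsPattern: 'trivyfs.txt,trivyimage.txt'
        )
    }
}
```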
Phase 6: Kubernetes
- Create a Kubernetes Cluster with Node Groups:
In this phase, you'll set up a Kubernetes cluster with node groups. This will provide a scalable environment to deploy and manage your applications.
- Monitor Kubernetes with Prometheus:
Prometheus is a powerful monitoring and alerting toolkit, and you'll use it to monitor your Kubernetes cluster. Additionally, you'll install the node exporter using Helm to collect metrics from your cluster nodes.
To begin monitoring your Kubernetes cluster, you'll install the Prometheus Node Exporter. This component allows you to collect system-level metrics from your cluster nodes. Here are the steps to install the Node Exporter using Helm:
- Add the Prometheus Community Helm repository:
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts

- Create a Kubernetes namespace for the Node Exporter:
kubectl create namespace prometheus-node-exporter

- Install the Node Exporter using Helm:
helm install prometheus-node-exporter prometheus-community/prometheus-node-exporter --namespace prometheus-node-exporter
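You can check that the exporter's DaemonSet pods came up before wiring Prometheus to them:

```bash
# Each cluster node should run one prometheus-node-exporter pod
kubectl get pods -n prometheus-node-exporter -o wide
```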
Add a Job to Scrape Metrics on nodeIP:9100/metrics in prometheus.yml:
Update your Prometheus configuration (prometheus.yml) to add a new job for scraping metrics from the cluster node's Node Exporter (which listens on port 9100). You can do this by adding the following configuration to your prometheus.yml file:

- job_name: 'Netflix'
  metrics_path: '/metrics'
  static_configs:
    - targets: ['node1Ip:9100']

Replace 'Netflix' with a descriptive name for your job if you prefer. The static_configs section specifies the targets to scrape metrics from; here it's set to node1Ip:9100 (substitute the IP of one of your cluster nodes).
Don't forget to reload or restart Prometheus to apply these changes to your configuration.
To deploy an application with ArgoCD, follow these steps:
- Install ArgoCD:
You can install ArgoCD on your Kubernetes cluster by following the instructions provided in the EKS Workshop documentation.

- Set Your GitHub Repository as a Source:
After installing ArgoCD, you need to set up your GitHub repository as a source for your application deployment. This typically involves configuring the connection to your repository and defining the source for your ArgoCD application. The specific steps will depend on your setup and requirements.

- Create an ArgoCD Application (a sample manifest is sketched below):
  - name: Set the name for your application.
  - destination: Define the destination where your application should be deployed.
  - project: Specify the project the application belongs to.
  - source: Set the source of your application, including the GitHub repository URL, revision, and the path to the application within the repository.
  - syncPolicy: Configure the sync policy, including automatic syncing, pruning, and self-healing.
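A hedged example of such an Application manifest (the application name and the path to the manifests are assumptions about your repo layout; adjust them):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: netflix-clone          # assumed application name
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/madhu123-4/Netflix-clone.git
    targetRevision: HEAD
    path: Kubernetes           # assumed path to the manifests in the repo
  destination:
    server: https://kubernetes.default.svc
    namespace: default
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
```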
- Access your Application:
To access the app, make sure port 30007 is open in your security group, then open a new tab and go to <NodeIP>:30007; your app should be running. (A sample NodePort Service is sketched below.)
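Port 30007 implies the app is exposed through a NodePort Service; if your repo's manifests don't already define one, a minimal sketch (names and label selector are assumptions that must match your Deployment):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: netflix-service        # assumed service name
spec:
  type: NodePort
  selector:
    app: netflix               # must match your Deployment's pod labels
  ports:
    - port: 80                 # service port
      targetPort: 80           # container port (nginx)
      nodePort: 30007          # the port opened in the security group
```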
Phase 7: Cleanup
- Cleanup AWS EC2 Instances:
- Terminate AWS EC2 instances that are no longer needed.

