An advanced Gemini Clone built with Next.js, featuring enhanced functionality and faster response times.
Note
To read more about this Google Gemini Clone (its chat functionality, advanced features, and technology stack), see the ABOUT_APP.md file.
Follow this guide to set up a DevSecOps-ready Google Gemini Clone. Be aware that the AWS EKS cluster and associated resources incur real costs.
- Fork this repo on GitHub: https://github.com/Amitabh-DevOps/dev-gemini-clone.git
- Clone the forked repo to your system.
- Open the project in VSCode or your preferred code editor.
- Open the integrated terminal in VSCode.
- Log in to your VPS or EC2 instance via SSH.
- Clone the forked repo onto that VPS or EC2 instance as well.
- Switch to the `DevOps` branch and change into the project root `dev-gemini-clone` using the commands:
  git checkout DevOps
  cd dev-gemini-clone
- You're all set! Go ahead with this guide, and best of luck!
Tip
If you are a Windows user and don't know how to get Linux/Ubuntu working in your VSCode, don't worry; follow this guide: Setting up Linux/Ubuntu in Windows VSCode
- Refer to the ENV_SETUP.md file for detailed instructions on configuring the environment variables specified in `.env.sample`.
- Once you have collected all the required environment variables, create a `.env.local` file in the root directory of the project.
- Enter all the correct environment variable values in the `.env.local` file.
Note: This file will need to be uploaded to Jenkins during your CI/CD pipeline process, so please ensure that all values are accurate. Additionally, these environment variables are required at the time of the Docker build.
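Putting the pieces together, a `.env.local` might look like the following sketch. The keys are the ones this guide references later; every value is a placeholder, so substitute your own.

```env
# Sketch of .env.local — all values below are placeholders
GOOGLE_ID=<your-google-oauth-client-id>
GOOGLE_SECRET=<your-google-oauth-client-secret>
NEXTAUTH_URL=http://localhost:3000
NEXTAUTH_SECRET=<a-long-random-string>
NEXT_PUBLIC_API_KEY=<your-gemini-api-key>
MONGODB_URI=<your-mongodb-connection-string>
```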
Follow DOCKER_BUILD.md to Build → Tag → Push the Docker Image → Update the Kubernetes Deployment, then continue with the next steps.
Caution
- Ensure your `.env.local` file is present in the project root when running the `docker build` command. Next.js resolves environment variables at build time, and variables prefixed with `NEXT_PUBLIC_` are inlined into the client bundle during `next build`.
- In other words, your `.env.local` with the appropriate environment variables must exist before you run `docker build`.
- You do not need to pass `--env-file .env.local` to the `docker build` command (that flag belongs to `docker run`). Because `.env.local` is part of the build context and gets copied into the image, `next build` picks it up automatically.
- Keep your `.env.local` file with you.
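As a small pre-flight guard (a sketch, not part of the repo), you can fail fast when `.env.local` is missing before invoking `docker build`; the image tag shown in the comment is an example:

```shell
# Sketch: check for .env.local before building.
check_env_file() {
  # Prints "present" or "missing" for the project directory given as $1.
  [ -f "$1/.env.local" ] && echo "present" || echo "missing"
}

# On the real build host you would run something like:
#   [ "$(check_env_file .)" = "present" ] || { echo ".env.local missing" >&2; exit 1; }
#   docker build -t <your-dockerhub-user>/dev-gemini-clone:latest .
```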
- Provide your `NEXTAUTH_URL` in `kubernetes/configmap.yml` from the `.env.local` file.
- After that, put base64-encoded values in `kubernetes/secrets.yml` for the following keys:
  GOOGLE_ID, GOOGLE_SECRET, NEXTAUTH_SECRET, NEXT_PUBLIC_API_KEY, MONGODB_URI
- For encoding, you can use the command:
  echo -n "<STRING_TO_ENCODE>" | base64
- For decoding, you can use the command:
  echo -n "<STRING_TO_DECODE>" | base64 --decode
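The two commands above can be wrapped in tiny helpers; using `printf '%s'` (like `echo -n`) avoids accidentally encoding a trailing newline into the secret:

```shell
# Sketch: helpers for encoding/decoding secret values for kubernetes/secrets.yml.
encode() { printf '%s' "$1" | base64; }
decode() { printf '%s' "$1" | base64 --decode; }

encode "my-secret-value"               # prints the base64 form
decode "$(encode "my-secret-value")"   # round-trips back to the original string
```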
- Docker installed and configured
- eksctl (the CLI for Amazon Elastic Kubernetes Service)
- kubectl
- aws-cli (with `aws configure` completed)
Tip
If you want a one-stop solution to install the prerequisite tools above, follow this guide: 👇
Use it only for the tools listed above; do not follow other installation steps from that guide.
This README provides a complete step-by-step guide with all the commands required to set up ArgoCD on an AWS EKS cluster, deploy your applications, and configure GitOps.
- Go to the Terraform-EKS-Deployment directory, create the EKS cluster there, then come back and continue with the next steps.
kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
watch kubectl get pods -n argocd
sudo curl --silent --location -o /usr/local/bin/argocd \
https://github.com/argoproj/argo-cd/releases/download/v2.4.7/argocd-linux-amd64
sudo chmod +x /usr/local/bin/argocd
argocd version
kubectl patch svc argocd-server -n argocd -p '{"spec": {"type": "NodePort"}}'
kubectl get svc -n argocd
- In the AWS Console, update the security group for your EKS worker nodes to allow inbound traffic on the NodePort assigned to the `argocd-server` service.
- Open your browser and navigate to: http://<public-ip-of-worker-node>:<NodePort>
argocd login <public-ip-of-worker-node>:<NodePort> --username admin
kubectl get secret argocd-initial-admin-secret -n argocd \
-o jsonpath="{.data.password}" | base64 -d
argocd cluster list
kubectl config get-contexts
argocd cluster add <cluster-context-name> --name gemini-eks-cluster
- Replace `<cluster-context-name>` with your EKS cluster context name (e.g., `[email protected]`).
- Go to the ArgoCD UI.
- Add your repository in the settings.
# Download the Helm installation script
curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
# Make the script executable
chmod 700 get_helm.sh
# Run the installation script
./get_helm.sh
# Add the NGINX Ingress controller Helm repository
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
# Update the Helm repository to ensure you have the latest charts
helm repo update
# Install the ingress-nginx controller in the ingress-nginx namespace
helm install ingress-nginx ingress-nginx/ingress-nginx \
--namespace ingress-nginx --create-namespace
- Apply the components for the metrics server:
  kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
- Then edit the metrics server deployment to add the necessary arguments:
  kubectl -n kube-system edit deployment metrics-server
- Add these arguments under `spec.containers.args`:
  --kubelet-insecure-tls
  --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
- Save the changes, then restart and verify the deployment:
  kubectl -n kube-system rollout restart deployment metrics-server
  kubectl get pods -n kube-system
  kubectl top nodes
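After the edit, the relevant part of the Deployment spec should resemble this sketch (surrounding fields and the deployment's pre-existing arguments are elided):

```yaml
spec:
  template:
    spec:
      containers:
        - name: metrics-server
          args:
            # ...keep the existing args, then add:
            - --kubelet-insecure-tls
            - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
```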
kubectl apply -f \
https://github.com/cert-manager/cert-manager/releases/download/v1.16.2/cert-manager.yaml
After completing the setup, create a new application in ArgoCD with the following details:
- Application Name: Choose a name for your application.
- Project Name: Select default.
- Sync Policy: Set to Automatic.
- Enable Prune Resources and Self-Heal.
- Check Auto Create Namespace.
- Repo URL: Enter the URL of your Git repository.
- Revision: Select the branch (e.g., `DevOps`).
- Path: Specify the directory containing your Kubernetes manifests (e.g., `kubernetes`).
- Cluster: Select your desired cluster.
- Namespace: Use `gemini-namespace`.
Before clicking on Create App, ensure the following:
Caution
- Your `configmap.yml` file has `NEXTAUTH_URL` set to `<YOUR_DOMAIN_NAME>`.
- The Ingress configuration specifies the host and TLS settings using `<YOUR_DOMAIN_NAME>`.
- Ensure `cert-issuer.yml` has the correct email.
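For reference, `kubernetes/configmap.yml` might look roughly like this sketch. The metadata names here are assumptions for illustration; match them to the repo's actual file:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: gemini-config          # assumed name; use the one in the repo
  namespace: gemini-namespace
data:
  NEXTAUTH_URL: "https://<YOUR_DOMAIN_NAME>"
```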
Once the application is healthy, this step walks through exposing it to the outside world using an ALB (Application Load Balancer) with a CNAME record.
- Expose via ALB and CNAME:
  Run the following command to get the ALB External-IP of the ingress-nginx-controller:
  kubectl get svc -n ingress-nginx
- Copy the External-IP from the output and create a CNAME record on your domain, then update `gemini-ingress.yml` with your domain.
- After updating `gemini-ingress.yml`, sync the application in ArgoCD.
- Once synchronized, open your browser and access the application via your domain (e.g., amitabh.letsdeployit.com).
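The host/TLS portion of `gemini-ingress.yml` should end up looking roughly like this sketch. The resource names, issuer, service name, and port below are assumptions for illustration; keep the values from the repo's actual manifest:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: gemini-ingress                      # assumed name
  namespace: gemini-namespace
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod   # assumed issuer name
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - <YOUR_DOMAIN_NAME>
      secretName: gemini-tls                # assumed secret name
  rules:
    - host: <YOUR_DOMAIN_NAME>
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: gemini-service        # assumed service name
                port:
                  number: 3000              # assumed container port
```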
- Navigate to your Terraform directory.
- Generate an SSH key for the EC2 instances. Run the following command and name the key `gemini-instance-key` (or enter your own name):
  ssh-keygen
  Once generated, update the key name in `/terraform/variable.tf` accordingly.
- Initialize Terraform:
  terraform init
- Preview the Terraform execution plan:
  terraform plan
- Apply the Terraform plan. This will create two EC2 instances (one for the Jenkins Master and one for the Jenkins Agent) in the eu-west-1 region:
  terraform apply --auto-approve
- Connect to both instances via SSH, then run the following on both instances:
sudo apt update && sudo apt upgrade -y
Install Java (required by Jenkins) on each instance:
sudo apt install openjdk-17-jre -y
java -version
Install necessary dependencies:
sudo apt-get install -y ca-certificates curl gnupg
curl -fsSL https://pkg.jenkins.io/debian/jenkins.io-2023.key | sudo tee /usr/share/keyrings/jenkins-keyring.asc > /dev/null
echo deb [signed-by=/usr/share/keyrings/jenkins-keyring.asc] https://pkg.jenkins.io/debian binary/ | \
sudo tee /etc/apt/sources.list.d/jenkins.list > /dev/null
sudo apt-get update
sudo apt-get install jenkins -y
sudo systemctl enable jenkins
sudo systemctl start jenkins
Verify that Jenkins is running:
sudo systemctl status jenkins
sudo apt install docker.io -y
sudo usermod -aG docker $USER
Refresh your group membership:
newgrp docker
- Access the Jenkins UI:
  Navigate to http://<MASTER_PUBLIC_IP>:8080 in your browser.
- Retrieve the Jenkins Admin Password:
  Run the following command on the Jenkins Master:
  sudo cat /var/lib/jenkins/secrets/initialAdminPassword
- Complete the Setup:
  Use the retrieved password to set up your admin account and install the suggested plugins.
Run on the Jenkins Master (press Enter to accept the default options):
ssh-keygen
- On the Jenkins Master, navigate to the `~/.ssh` directory and copy the generated `.pub` file.
- On the Jenkins Agent, navigate to the `~/.ssh` directory.
- Append the public key from the master to the agent's `authorized_keys` file.
Copy the corresponding private key from the Jenkins Master (located in `~/.ssh`) for use when configuring the Jenkins Agent in the Jenkins UI.
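The key exchange can be sketched as follows; `SSH_DIR` stands in for the agent's `~/.ssh`, and the public key string is a placeholder you would replace with the real `.pub` contents:

```shell
# Sketch: append the master's public key to the agent's authorized_keys.
SSH_DIR="${SSH_DIR:-$(mktemp -d)/.ssh}"                # on the real agent this is ~/.ssh
PUBKEY="ssh-ed25519 AAAA...placeholder jenkins-master" # paste the real .pub line here

mkdir -p "$SSH_DIR" && chmod 700 "$SSH_DIR"
printf '%s\n' "$PUBKEY" >> "$SSH_DIR/authorized_keys"
chmod 600 "$SSH_DIR/authorized_keys"
```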
- Log in to the Jenkins UI and navigate to Manage Jenkins > Manage Nodes and Clouds.
- Click New Node, provide a name (e.g., `Gemini-server`), then choose Permanent Agent.
- Configure Node Settings:
  - Executors: 2 (to allow parallel execution when CI completes).
  - Remote Root Directory: `/home/ubuntu/gemini`
  - Labels: `dev-server`
  - Usage: Select "Only build jobs with label expressions matching this node."
  - Launch Method: Choose "Launch agents via SSH."
  - Host: Enter the Public IP of your Jenkins Agent instance.
  - Credentials:
    - Add a new credential of type SSH Username with Private Key.
    - Use `ubuntu` as the username.
    - Paste the private key copied from the Jenkins Master.
  - Host Key Verification Strategy: Select Non verifying Verification Strategy.
  - Availability: Set to "Keep this agent online as much as possible."
- Save the configuration.

After a successful connection, running `ls` in the agent's remote root (`/home/ubuntu/gemini`) should list a `gemini` directory.
Install the following plugins from Manage Jenkins > Plugin Manager and choose "Restart Jenkins when installation is complete and no jobs are running":
- OWASP Dependency-Check
- SonarQube Scanner
- Sonar Quality Gates
- Pipeline: Stage View
- Fork the Shared Library Repository:
  Fork Jenkins-shared-libraries to your GitHub account.
- Configure Global Trusted Pipeline Libraries in Jenkins:
  - Navigate to Manage Jenkins > System > Global Trusted Pipeline Libraries.
  - Click Add under Global Pipeline Libraries.
  - Library Configuration:
    - Name: `Shared` (to match `@Library('Shared')` in your Jenkinsfile).
    - Default Version: `main`
    - Retrieval Method: Modern SCM
    - Source Code Management: Choose Git and enter your fork's repository URL: https://github.com/<YOUR_GITHUB_USERNAME>/Jenkins-shared-library.git
    - Add credentials if your repository is private.
  - Save the configuration.
docker run -itd --name SonarQube-Server -p 9000:9000 sonarqube:lts-community
Access SonarQube via http://<MASTER_PUBLIC_IP>:9000.
Use `admin` as both the username and password (and change the password later).
- Log in to SonarQube.
- Navigate to Administration → Security → Users → Token.
- Use the following images as references during token creation:
- Go to Manage Jenkins > Credentials and add the SonarQube token as a new credential. Use the following image as a reference:
- Navigate to Manage Jenkins > Tools > SonarQube Scanner and then to Manage Jenkins > System > SonarQube installations. Use this image as a guide:
sudo apt-get install wget apt-transport-https gnupg lsb-release -y
wget -qO - https://aquasecurity.github.io/trivy-repo/deb/public.key | sudo apt-key add -
echo deb https://aquasecurity.github.io/trivy-repo/deb $(lsb_release -sc) main | sudo tee -a /etc/apt/sources.list.d/trivy.list
sudo apt-get update -y
sudo apt-get install trivy -y
- Log in to your Gmail account and go to Manage your Google Account → Security.
- Ensure that 2-Step Verification is enabled.
- Create an App Password for Jenkins. Use the image below for reference:
- Then, generate the App Password for Jenkins. Refer to these images:
- Navigate to Manage Jenkins > Credentials and add a new credential (Username with password) for email notifications using your Gmail address and the generated App Password. Reference:
- Go to Manage Jenkins > System and search for Extended E-mail Notification. Configure the settings under the Advanced section with your Gmail App Password. See the images below:
- Scroll down to E-mail Notification and set up email notification.
Important
In E-mail Notification > Advanced, enter the Gmail App Password you generated earlier in the password field.
- Navigate to Manage Jenkins > Security > Credentials > System > Global credentials (unrestricted).
- Click Add Credentials.
- Set the Kind to Username with password.
- Enter `dockerHub` as the ID.
- Add your Docker Hub username and a Personal Access Token (PAT) as the password.
- Again, navigate to Manage Jenkins > Security > Credentials > System > Global credentials (unrestricted).
- Click Add Credentials.
- Set the Kind to Username with password.
- Enter `Github` as the ID.
- Add your GitHub username and a GitHub Personal Access Token as the password.
- Navigate to Manage Jenkins > Security > Credentials > System > Global credentials (unrestricted).
- Click Add Credentials.
- Set the Kind to Secret file.
- Enter `.env.local` as the ID.
- Upload your `.env.local` file and save.
- From the Jenkins dashboard, click New Item.
- Enter the name `Gemini-CI`, select Pipeline, and click OK.
- General Section:
- Check GitHub project and provide the repository URL.
- Pipeline Section:
- Select Pipeline script from SCM.
- Set SCM to Git and provide the repository URL.
- Add GitHub credentials if the repository is private.
- Choose the `DevOps` branch and set Script Path to `Jenkinsfile`.
- From the Jenkins dashboard, click New Item.
- Enter the name `Gemini-CD`, select Pipeline, and click OK.
- General Section:
- Check GitHub project and provide the repository URL.
- Pipeline Section:
- Select Pipeline script from SCM.
- Set SCM to Git and provide the repository URL.
- Add GitHub credentials if necessary.
- Choose the `DevOps` branch and set Script Path to `GitOps/Jenkinsfile`.
- Trigger the Gemini-CI job:
  Run this job manually the first time (even though it is parameterized); subsequent triggers will prompt for parameters.
- Automated CD Trigger:
  When the `Gemini-CI` job completes successfully, the `Gemini-CD` job is automatically triggered. It updates the application image version in the `gemini-deployment`, pushes the changes to GitHub, and triggers ArgoCD to update the deployment.
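Conceptually, the image-version update the CD job performs is similar to this sketch; the manifest path, image name, and `sed` pattern here are illustrative, and the actual shared-library step may differ:

```shell
# Sketch: bump the image tag in a Kubernetes deployment manifest.
bump_image_tag() {  # $1 = manifest file, $2 = new tag
  sed -i "s|\(image: .*/dev-gemini-clone:\).*|\1$2|" "$1"
}

# Example (paths are illustrative):
#   bump_image_tag kubernetes/gemini-deployment.yml v42
```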
Note
- The first run of the OWASP Dependency-Check may take 20–25 minutes to download required resources (if you don't have an NVD API key); subsequent runs should complete in under a minute.
Tip
- If the last stage (Declarative: Post Actions) of the pipeline takes too long, check that the agent's Number of executors is set to `2`.
After your CI/CD pipeline is in place, proceed with setting up observability tools to monitor application performance and security.
Caution
Go to the server on which you created the EKS cluster and follow the guide below.
Start by adding the Prometheus Helm repository:
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
Create a dedicated namespace for Prometheus:
kubectl create namespace prometheus
Install the Prometheus and Grafana stack using Helm in the `prometheus` namespace:
helm install stable prometheus-community/kube-prometheus-stack -n prometheus
To view the services running in the `prometheus` namespace, use the following command:
kubectl get svc -n prometheus
Expose Grafana through NodePort by patching the service:
kubectl patch svc stable-grafana -n prometheus -p '{"spec": {"type": "NodePort"}}'
kubectl get svc -n prometheus
kubectl port-forward --address 0.0.0.0 svc/stable-grafana <NODEPORT>:80 -n prometheus &
Important
Open it in your browser using http://<INSTANCE_PUBLIC_IP>:<NODEPORT>, where `<INSTANCE_PUBLIC_IP>` is the public IP of the server from which you ran the port-forward (the one managing your EKS cluster).
To access Grafana, use the admin username and retrieve the password by running:
kubectl get secret --namespace prometheus stable-grafana -o jsonpath="{.data.admin-password}" | base64 --decode ; echo
Now that Prometheus and Grafana are set up, you can use Grafana to monitor your application metrics. Grafana will pull metrics from Prometheus, allowing you to create dashboards to visualize various aspects of your application’s performance.
You are all set—you have successfully completed this Google Gemini Clone project. All components have been configured and integrated, including:
- Infrastructure Provisioning: Terraform-based provisioning of EC2 instances for Jenkins master and agent.
- CI/CD Pipeline: Jenkins master/agent setup, pipeline jobs, and integration of code quality/security tools such as OWASP, SonarQube, and Trivy.
- Observability: Monitoring setup using Prometheus and Grafana.
- Kubernetes Integration: EKS cluster creation and ArgoCD setup for automated deployments.
This comprehensive configuration establishes a robust DevSecOps workflow ready for production environments.
Watch this video for a quick walkthrough of the DevSecOps workflow in the Google Gemini Clone (skip the intro and outro):
Untitled.video.-.Made.with.Clipchamp.1.mp4
In case you cannot access it, here is the YouTube video link: https://youtu.be/CCWsMZtri2I?si=teF9ThDoXBWp_AmO