This part of the repository contains the implementation of the Kubernetes node autoscaler Karpenter. Karpenter serves as an alternative to the Cluster Autoscaler and scales nodes up according to demand. In contrast to the Cluster Autoscaler, new nodes are provisioned independently of the existing node groups and can vary in size based on the requirements of the workload.
- Provisioned EKS Cluster: Baseline Architecture.
- Connection to the cluster (via `aws eks --region us-east-2 update-kubeconfig --name eks-cluster`).
- AWS CLI - A command line tool for interacting with AWS services.
- kubectl - A command line tool for working with Kubernetes clusters.
- eksctl - A command line tool for working with EKS clusters.
- Helm 3.7+ - A tool for installing and managing Kubernetes applications.
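The tool installations and the cluster connection can be verified with the tools' standard version commands:

```sh
aws --version            # AWS CLI
kubectl version --client # kubectl
eksctl version           # eksctl
helm version             # should report version 3.7 or newer
kubectl get nodes        # succeeds once the kubeconfig points at the EKS cluster
```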
- Initialize the repository with `terraform init`.
- Install Karpenter on the AWS EKS cluster with `terraform apply` and confirm with `yes`.
- Since the AWS EKS cluster is created separately, the aws-auth ConfigMap has to be updated: `kubectl edit configmap aws-auth -n kube-system`. The ConfigMap should resemble the following; ensure that the account ID and the node group role names are replaced accordingly.
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    # Role for the nodes provisioned by Karpenter
    - groups:
        - system:bootstrappers
        - system:nodes
      rolearn: arn:aws:iam::ACCOUNTID:role/KarpenterNodeRole
      username: system:node:{{EC2PrivateDNSName}}
    # Roles of the existing node groups
    - groups:
        - system:bootstrappers
        - system:nodes
      rolearn: arn:aws:iam::ACCOUNTID:role/node-group-1-eks-node-group-20230930153533048600000002
      username: system:node:{{EC2PrivateDNSName}}
    - groups:
        - system:bootstrappers
        - system:nodes
      rolearn: arn:aws:iam::ACCOUNTID:role/node-group-2-eks-node-group-20230930153533048100000001
      username: system:node:{{EC2PrivateDNSName}}
```
- Deploy the Provisioner resource: `kubectl create -f provisioner.yaml` (a sketch of such a manifest follows this list).
- Deploy the application, for example the ALB deployment of the TeaStore.
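The repository's `provisioner.yaml` is the source of truth; as a minimal sketch, assuming the `karpenter.sh/v1alpha5` API and an `AWSNodeTemplate` named `default`, a Provisioner that leaves the instance type open can look like this (all values are illustrative, not taken from this repository):

```sh
# Sketch of a minimal Provisioner; the actual provisioner.yaml may differ.
cat <<EOF | kubectl apply -f -
apiVersion: karpenter.sh/v1alpha5
kind: Provisioner
metadata:
  name: default
spec:
  requirements:
    # No instance-type requirement, so Karpenter may pick any size that
    # fits the pending pods; only the capacity type is pinned here.
    - key: karpenter.sh/capacity-type
      operator: In
      values: ["on-demand"]
  limits:
    resources:
      cpu: "100"             # cap on the total CPU Karpenter may provision
  providerRef:
    name: default            # AWSNodeTemplate with subnet/security group selectors
  ttlSecondsAfterEmpty: 30   # deprovision empty nodes after 30 seconds
EOF
```

Because no instance type is pinned, Karpenter is free to choose a machine size that fits the pending pods, which is how the provisioned nodes can differ from the fixed node groups.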
To test the Karpenter node provisioner, the replicas of a service can be scaled up so that the demand exceeds the current cluster capacity, for example `kubectl -n teastore-namespace scale --replicas=5 deployment/teastore-webui`.
The behavior of Karpenter can be monitored via `kubectl get nodes` and `kubectl logs -f -n karpenter -c controller -l app.kubernetes.io/name=karpenter`.
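Karpenter only reacts to pods that the scheduler marks as unschedulable, which in turn requires the pods to declare resource requests. To see which pods are driving a scale-up (the pod name in the second command is a placeholder):

```sh
# List pods that are still waiting for a node; these drive Karpenter's decisions.
kubectl -n teastore-namespace get pods --field-selector=status.phase=Pending
# Show the scheduling events of one pending pod (replace the placeholder name):
kubectl -n teastore-namespace describe pod teastore-webui-<pod-name>
```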
- Delete the application via `kubectl delete -f Teastore\teastore-alb.yaml` from within the baseline architecture directory.
- Delete the policies: run `terraform destroy` within this folder and confirm with `yes`.
- Delete the cluster: run `terraform destroy` within the baseline architecture folder and confirm with `yes`.
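As a final sanity check (using the region and cluster name from the prerequisites), it can be verified that nothing is left behind:

```sh
# Nodes provisioned by Karpenter should be gone before the cluster itself is destroyed.
kubectl get nodes
# After the cluster is destroyed, it should no longer appear here:
aws eks list-clusters --region us-east-2
```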