# Home Project Stack
The stack is deployed on a Kubernetes cluster provided by microk8s (https://microk8s.io/docs). microk8s is installed using the snap package manager; the package is published by Canonical (the publisher of Ubuntu).
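For reference, a minimal install sketch for each node, assuming the default snap channel (the exact channel and addons used may differ):

```
# install microk8s via snap (Canonical-published package)
sudo snap install microk8s --classic

# allow the current user to run microk8s without sudo
sudo usermod -a -G microk8s $USER
newgrp microk8s

# wait until the node reports Ready
microk8s status --wait-ready
microk8s kubectl get nodes
```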
- Resources: quad-core ARM64 processor with 8GB RAM
- Kernel: GNU/Linux 6.8.0-1015-raspi aarch64
- OS: Ubuntu 24.04.1
As of now it is deployed as a 2-node cluster.
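Joining the second node presumably followed the standard microk8s flow; a sketch (the actual join token and IP are printed by add-node and are not reproduced here):

```
# on the first node: print a one-time join command with a token
microk8s add-node

# on the second node: run the printed join command, e.g.
# microk8s join <first-node-ip>:25000/<token>

# verify both nodes are Ready
microk8s kubectl get nodes
```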
Author: Alok Singh
## Table of contents
- Prerequisites
- Deployment of home-stack Kubernetes Stack
- Create Namespaces
- Role Binding for cluster admin user
- Node Taint
- Kubernetes Dashboard
- Kubernetes Metrics Server
- Create ConfigMap
- Create Secrets
- Create Network policy
- MySQL Service - Pod/Deployment/Service
- Home Network Troubleshoot - Pod/Statefulset/Service
- Home API Service - Pod/Deployment/Service
- Home Email Service - Pod/Deployment/Service
- Home Auth Service - Pod/Deployment/Service
- Home Analytics Service - Pod/Deployment/Service
- Home Search Service - Pod/Deployment/Service
- Home Event Service - Pod/Deployment/Service
- Home ETL Service - Pod/Statefulset/Service
- Home GIT Commit CronJob (retired)
- Dashboard Service - Pod/Deployment/Service
- Jaeger Service
- Mosquitto MQTT Service
- IoT Telemetry Service
- Delete Stack
- Ingress
- Horizontal Autoscaling
- Miscellaneous commands
- Client and Server version
- API Resources
- Get Node Details
- Get Cluster Dump
- Get all from all namespaces
- Get all Services
- Describe a Service
- Get Pod Log
- Describe a Pod
- top a pod
- Get All Pods under All Namespaces
- Describe a spec
- List all Docker images in Microk8s cluster (within the cluster node)
- Prune Docker Images from Microk8s Cluster
- Service Mesh - Istio
- Backup
- Network Monitoring
- Deployment Architecture
ssh alok@jgte "mkdir yaml"scp yaml/namespace.yaml alok@jgte:yaml/ssh alok@jgte "kubectl apply -f yaml/namespace.yaml"So that cluster operation can be performed by running kubectl remotely
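yaml/namespace.yaml is not reproduced in this document; based on the namespaces referenced below (home-stack, home-stack-db, home-stack-dmz, home-stack-iot), it presumably looks roughly like this sketch:

```
kubectl apply -f - <<EOF
apiVersion: v1
kind: Namespace
metadata:
  name: home-stack
---
apiVersion: v1
kind: Namespace
metadata:
  name: home-stack-db
---
apiVersion: v1
kind: Namespace
metadata:
  name: home-stack-dmz
---
apiVersion: v1
kind: Namespace
metadata:
  name: home-stack-iot
EOF
```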
## Role Binding for cluster admin user

```
scp yaml/home-user-rback-cluster-admin-user.yaml alok@jgte:yaml/
ssh alok@jgte "kubectl apply -f yaml/home-user-rback-cluster-admin-user.yaml"
```
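yaml/home-user-rback-cluster-admin-user.yaml is not shown here; a minimal sketch of what such a binding typically looks like, assuming it grants the cluster-admin ClusterRole to the alok user (the actual subject and names may differ):

```
kubectl apply -f - <<EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: home-user-cluster-admin
subjects:
- kind: User
  name: alok
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
EOF
```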
## Node Taint

At the end - to remove the taints:

```
kubectl taint nodes jgte nodeType=master:NoSchedule-
kubectl taint nodes khbr nodeType=worker:NoSchedule-
```

## Kubernetes Dashboard

```
kubectl apply -f yaml/kubernetes-dashboard.yaml
```

Note: the dashboard service type is LoadBalancer and a static host IP is assigned, so the Dashboard can be accessed directly at https://jgte:8443/
```
kubectl delete -f yaml/kubernetes-dashboard.yaml
kubectl get all --namespace kubernetes-dashboard
kubectl get svc --namespace kubernetes-dashboard
kubectl apply -f yaml/kubernetes-dashboard-rback-dashboard-admin-user.yaml
kubectl create token k8s-dashboard-admin-user --duration=999999h -n kubernetes-dashboard
kubectl apply -f yaml/kubernetes-dashboard-rback-cluster-admin-user.yaml
kubectl create token k8s-dashboard-cluster-admin-user --duration=999999h -n kubernetes-dashboard
```

Notes:
- the last one doesn't have the workloads get role
- use one of these tokens for the Kubernetes Dashboard login
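To check what a given service account can actually do (useful when a token appears to be missing a role, as noted above), kubectl auth can-i supports impersonating the service account; a sketch assuming the account names created above:

```
# can the dashboard admin service account read workloads?
kubectl auth can-i get deployments \
  --as=system:serviceaccount:kubernetes-dashboard:k8s-dashboard-admin-user \
  --namespace home-stack

kubectl auth can-i get deployments \
  --as=system:serviceaccount:kubernetes-dashboard:k8s-dashboard-cluster-admin-user \
  --namespace home-stack
```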
## Kubernetes Metrics Server

```
kubectl apply -f yaml/metrix-server.yaml
kubectl delete -f yaml/metrix-server.yaml
kubectl get deployment metrics-server -n kube-system
kubectl top nodes
```

## Create ConfigMap

```
kubectl apply -f yaml/config-map.yaml
```

Note: add/update the below config items from the backup in ~/k8s

- home-api-cofig (home-stack)
  - iot-secure-keystore-password
  - iot-secure-truststore-password
- home-auth-cofig (home-stack)
  - application-security-jwt-secret
  - oauth-google-client-id
  - logging-level-com-alok
- home-etl-cofig (home-stack)
  - git-bearer-token
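A sketch of how those values can be restored or updated in place, assuming the backed-up copies under ~/k8s described in the Backup section below:

```
# re-apply a backed-up ConfigMap
kubectl apply -f ~/k8s/home-api-cofig.yaml

# or edit a single value in place
kubectl edit configmap home-auth-cofig --namespace=home-stack

# confirm the current contents
kubectl get configmap home-api-cofig --namespace=home-stack -o yaml
```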
## Create Secrets

```
kubectl apply -f yaml/secrets.yaml
```

## Create Network policy

```
kubectl apply -f yaml/networkpolicy.yaml
```

## MySQL Service - Pod/Deployment/Service

```
ssh alok@jgte mkdir -p /home/alok/data/mysql
kubectl apply --validate=true --dry-run=client -f yaml/mysql-service.yaml
kubectl apply -f yaml/mysql-service.yaml
kubectl delete -f yaml/mysql-service.yaml
kubectl exec -it pod/mysql-0 --namespace home-stack-db -- mysql -u root -p<<password>>
```

```
CREATE DATABASE `home-stack`;
```

```
kubectl exec -it pod/mysql-0 --namespace home-stack-db -- mysql -u root -p home-stack
kubectl logs pod/mysql-0 --namespace home-stack-db
mysql -u root -p home-stack --host 127.0.0.1 --port 32306
```

Note:
- Run Liquibase to create the batch tables and add application users and roles
- Follow the link to configure SQL Developer on Mac to connect to the MySQL server remotely
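MySQL is reached on NodePort 32306 above; as an alternative that avoids the NodePort, kubectl port-forward can tunnel the pod's standard port to the workstation (a sketch, assuming local port 32306 is free):

```
# forward local port 32306 to the MySQL pod's 3306
kubectl port-forward pod/mysql-0 32306:3306 --namespace home-stack-db

# then connect as before
mysql -u root -p home-stack --host 127.0.0.1 --port 32306
```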
## Home Network Troubleshoot - Pod/Statefulset/Service

```
kubectl apply --validate=true --dry-run=client -f yaml/home-nw-tshoot.yaml
kubectl apply -f yaml/home-nw-tshoot.yaml --namespace=home-stack
kubectl delete -f yaml/home-nw-tshoot.yaml --namespace=home-stack
kubectl exec -it pod/home-nw-tshoot-deployment-0 --namespace home-stack -- zsh
```

## Home API Service - Pod/Deployment/Service

```
kubectl apply --validate=true --dry-run=client -f yaml/home-api-service.yaml
kubectl apply -f yaml/home-api-service.yaml --namespace=home-stack
kubectl delete -f yaml/home-api-service.yaml --namespace=home-stack
kubectl exec -it pod/home-api-deployment-0 --namespace home-stack -- bash
kubectl exec -it pod/home-api-deployment-0 --namespace home-stack -- tail -f /opt/logs/application.log
kubectl logs pod/home-api-deployment-0 --namespace home-stack
kubectl rollout restart statefulset.apps/home-api-deployment -n home-stack
```

## Home Email Service - Pod/Deployment/Service

```
kubectl apply --validate=true --dry-run=client -f yaml/home-email-service.yaml
kubectl apply -f yaml/home-email-service.yaml --namespace=home-stack
kubectl delete -f yaml/home-email-service.yaml --namespace=home-stack
```

## Home Auth Service - Pod/Deployment/Service

```
kubectl apply --validate=true --dry-run=client -f yaml/home-auth-service.yaml
kubectl apply -f yaml/home-auth-service.yaml --namespace=home-stack
kubectl delete -f yaml/home-auth-service.yaml --namespace=home-stack
kubectl exec -it pod/home-auth-deployment-0 --namespace home-stack -- bash
read instance
kubectl exec -it pod/home-auth-deployment-$instance --namespace home-stack -- tail -f /opt/logs/application.log
kubectl logs pod/home-auth-deployment-$instance --namespace home-stack
kubectl rollout restart statefulset.apps/home-api-deployment -n home-stack
```

## Home Analytics Service - Pod/Deployment/Service

```
kubectl apply --validate=true --dry-run=client -f yaml/home-analytics-service.yaml
kubectl apply -f yaml/home-analytics-service.yaml --namespace=home-stack
kubectl delete -f yaml/home-analytics-service.yaml --namespace=home-stack
read instance
kubectl logs pod/home-analytics-deployment-$instance --namespace home-stack
kubectl exec -it pod/home-analytics-deployment-$instance --namespace home-stack -- bash
```

## Home Search Service - Pod/Deployment/Service

```
kubectl apply --validate=true --dry-run=client -f yaml/home-search-service.yaml
kubectl apply -f yaml/home-search-service.yaml --namespace=home-stack
kubectl delete -f yaml/home-search-service.yaml --namespace=home-stack
read instance
kubectl logs pod/home-search-deployment-$instance --namespace home-stack
kubectl exec -it pod/home-search-deployment-$instance --namespace home-stack -- bash
```

## Home Event Service - Pod/Deployment/Service

```
kubectl apply --validate=true --dry-run=client -f yaml/home-event-service.yaml
kubectl apply -f yaml/home-event-service.yaml --namespace=home-stack
kubectl delete -f yaml/home-event-service.yaml --namespace=home-stack
read instance
kubectl logs pod/home-event-deployment-$instance --namespace home-stack
kubectl exec -it pod/home-event-deployment-$instance --namespace home-stack -- bash
```

## Home ETL Service - Pod/Statefulset/Service

```
kubectl apply --validate=true --dry-run=client -f yaml/home-etl-service.yaml
kubectl apply -f yaml/home-etl-service.yaml --namespace=home-stack
kubectl delete -f yaml/home-etl-service.yaml --namespace=home-stack
kubectl exec -it pod/home-etl-deployment-0 --namespace home-stack -- bash
kubectl exec -it pod/home-etl-deployment-0 --namespace home-stack -- tail -f /opt/logs/application.log
kubectl logs pod/home-etl-deployment-0 --namespace home-stack
kubectl rollout restart statefulset.apps/home-api-deployment -n home-stack
```

## Home GIT Commit CronJob (retired)

```
kubectl apply --validate=true --dry-run=client -f yaml/git-commit-cronjob.yaml
kubectl apply -f yaml/git-commit-cronjob.yaml --namespace=home-stack
kubectl delete -f yaml/git-commit-cronjob.yaml --namespace=home-stack
```
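Each of the service sections above repeats the same validate → apply → verify cycle; purely as an illustration (this helper is hypothetical and not part of the repository), the pattern can be wrapped like this:

```
# hypothetical helper: dry-run validate, apply, then show the pods
deploy_service() {
  local manifest="$1"
  kubectl apply --validate=true --dry-run=client -f "$manifest" || return 1
  kubectl apply -f "$manifest" --namespace=home-stack
  kubectl get pods --namespace=home-stack
}

# usage
deploy_service yaml/home-api-service.yaml
```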
## Dashboard Service - Pod/Deployment/Service

```
kubectl apply -f yaml/dashboard-nginx-config-map.yaml
kubectl apply --validate=true --dry-run=client -f yaml/dashboard-service.yaml
kubectl apply -f yaml/dashboard-service.yaml
kubectl delete -f yaml/dashboard-service.yaml
kubectl exec -it deployment.apps/dashboard-deployment --namespace home-stack-dmz -- /bin/sh
kubectl logs deployment.apps/dashboard-deployment --namespace home-stack-dmz
```

## Jaeger Service

```
kubectl apply --validate=true --dry-run=client -f yaml/jaeger-all-in-one-template.yml
kubectl apply -f yaml/jaeger-all-in-one-template.yml --namespace=home-stack
kubectl delete -f yaml/jaeger-all-in-one-template.yml --namespace=home-stack
```

## Mosquitto MQTT Service

```
kubectl apply -f yaml/iot-config-map.yaml
kubectl apply --validate=true --dry-run=client -f yaml/mosquitto-service.yaml
kubectl create secret tls mosquitto-secret --cert=../iot-home-stack/secret/server.crt --key=../iot-home-stack/secret/server.key --namespace=home-stack-iot
kubectl create secret generic mosquitto-ca-secret --from-file=../iot-home-stack/secret/mqtt-signer-ca.crt --namespace=home-stack-iot
kubectl delete secret mosquitto-acl-secret --namespace=home-stack-iot
kubectl create secret generic mosquitto-acl-secret --from-file=../iot-home-stack/secret/acl.conf --namespace=home-stack-iot
kubectl apply -f yaml/iot-mosquitto-service.yaml --namespace=home-stack-iot
kubectl delete -f yaml/iot-mosquitto-service.yaml --namespace=home-stack-iot
```

## IoT Telemetry Service

```
kubectl apply -f yaml/iot-telemetry-config-map.yaml
kubectl create secret generic iot-telemetry-secret --from-file=keystore.jks=../iot-home-stack/secret/mqtt.client.home-telemetry-svc.jks --namespace=home-stack-iot
kubectl apply --validate=true --dry-run=client -f yaml/iot-telemetry-service.yaml
kubectl apply -f yaml/iot-telemetry-service.yaml --namespace=home-stack-iot
kubectl delete -f yaml/iot-telemetry-service.yaml --namespace=home-stack-iot
```
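A quick way to confirm the broker secrets and workloads landed in the IoT namespace, plus a hedged TLS subscribe test with the mosquitto client tools (the broker port and topic filter below are assumptions, not taken from this document):

```
# confirm secrets and workloads in the IoT namespace
kubectl get secrets --namespace=home-stack-iot
kubectl describe secret mosquitto-secret --namespace=home-stack-iot
kubectl get all --namespace=home-stack-iot

# hypothetical smoke test from a workstation with mosquitto-clients installed;
# port 8883 and the topic filter are assumptions
mosquitto_sub -h jgte -p 8883 --cafile ../iot-home-stack/secret/mqtt-signer-ca.crt -t 'home/#'
```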
## Delete Stack

```
kubectl delete namespace home-stack-dmz
kubectl delete namespace home-stack
kubectl delete namespace home-stack-db
```
## Ingress

```
kubectl apply -f yaml/ingress.yaml
kubectl delete -f yaml/ingress.yaml
kubectl get ingress -n home-stack-dmz
kubectl describe ingress -n home-stack-dmz
kubectl describe ingress ingress-home-jgte --namespace home-stack-dmz
kubectl get all --namespace ingress
kubectl describe daemonset.apps/nginx-ingress-microk8s-controller --namespace ingress
kubectl describe pod/nginx-ingress-microk8s-controller-8wmwc --namespace ingress
kubectl get all --namespace ingress
kubectl logs nginx-ingress-microk8s-controller-8wmwc --namespace ingress
```

## Horizontal Autoscaling

```
kubectl apply --validate=true --dry-run=client -f yaml/home-hpa.yaml
kubectl apply -f yaml/home-hpa.yaml --namespace=home-stack
kubectl get hpa
kubectl describe hpa home-auth-hpa
kubectl describe hpa home-api-hpa
kubectl describe hpa home-analytics-hpa
kubectl autoscale deployment dashboard-deployment --min=2 --max=3 -n home-stack
kubectl get hpa --namespace home-stack
kubectl edit hpa dashboard-deployment --namespace home-stack
kubectl scale -n home-stack deployment dashboard-deployment --replicas=1
```
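The `kubectl autoscale` command above has a declarative equivalent; a sketch of the HPA it creates for the dashboard deployment (the CPU target is an assumption, and yaml/home-hpa.yaml itself is not reproduced in this document):

```
kubectl apply -f - <<EOF
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: dashboard-deployment
  namespace: home-stack
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: dashboard-deployment
  minReplicas: 2
  maxReplicas: 3
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 80
EOF
```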
## Miscellaneous commands

### Client and Server version

```
kubectl version --output=json
```

### API Resources

```
kubectl api-resources
```

### Get Node Details

This gives details about the nodes, including the images present locally on each node:

```
kubectl get nodes -o yaml
kubectl describe nodes
kubectl get ResourceQuota
```

### Get Cluster Dump

This gives a cluster dump including all pod logs:

```
kubectl cluster-info dump > ~/k8s/cluster-dump.log
```

### Get all from all namespaces

```
kubectl get all --all-namespaces
```

### Get all Services

```
kubectl get svc --all-namespaces
```

### Describe a Service

```
kubectl describe svc dashboard-service --namespace home-stack-dmz
kubectl describe svc kubernetes-dashboard --namespace kubernetes-dashboard
```

### Get Pod Log

```
kubectl logs pod/dashboard-deployment-65cf5b8858-7x8z8 --namespace home-stack
```

### Describe a Pod

```
kubectl describe pod home-etl-deployment-0 --namespace=home-stack
```

### top a pod

```
kubectl top pods
kubectl top pod home-etl-deployment-0 --containers
```

### Get All Pods under All Namespaces

```
kubectl get po -A -o wide
```

### Describe a spec

```
kubectl api-resources
kubectl explain --api-version="networking.k8s.io/v1" NetworkPolicy.spec
kubectl explain --api-version="networking.k8s.io/v1" NetworkPolicy.spec.ingress
kubectl explain --api-version="batch/v1beta1" cronjobs.spec
kubectl get crd
kubectl explain --api-version="apiregistration.k8s.io/v1" APIService
kubectl explain --api-version="apiextensions.k8s.io/v1" CustomResourceDefinition
```

### List all Docker images in Microk8s cluster (within the cluster node)

```
sudo microk8s ctr images ls
```

### Prune Docker Images from Microk8s Cluster

```
VERSION="v1.26.0" # check latest version in /releases page
curl -L https://github.com/kubernetes-sigs/cri-tools/releases/download/$VERSION/crictl-${VERSION}-linux-arm64.tar.gz --output crictl-${VERSION}-linux-arm64.tar.gz
sudo tar zxvf crictl-$VERSION-linux-arm64.tar.gz -C /usr/local/bin
rm -f crictl-$VERSION-linux-arm64.tar.gz
```

```
sudo vim /etc/crictl.yaml
```

/etc/crictl.yaml contents:

```
runtime-endpoint: unix:///var/snap/microk8s/common/run/containerd.sock
image-endpoint: unix:///var/snap/microk8s/common/run/containerd.sock
timeout: 10
debug: true
```

```
sudo crictl rmi --prune
```

kubectl cheat sheet - https://kubernetes.io/docs/reference/kubectl/cheatsheet/
## Service Mesh - Istio

To be explored - it seems the microk8s Istio addon is not supported on the ARM64 architecture, whereas the same is supported on minikube.
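If the addon does become usable on this architecture, enabling it would presumably follow the usual microk8s addon flow (a sketch; addon availability on ARM64 should be checked with microk8s status first):

```
# list addons and their enabled/disabled state
microk8s status

# newer microk8s releases keep istio in the community addon repository
microk8s enable community
microk8s enable istio
```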
## Backup

This is needed as some config items are directly updated in the cluster through the Kubernetes Dashboard for security reasons:

```
kubectl get configmap --namespace=home-stack stmt-parser-cofig -o yaml > ~/k8s/stmt-parser-cofig.yaml
kubectl get configmap --namespace=home-stack home-etl-cofig -o yaml > ~/k8s/home-etl-cofig.yaml
kubectl get configmap --namespace=home-stack home-api-cofig -o yaml > ~/k8s/home-api-cofig.yaml
kubectl get configmap --namespace=home-stack home-auth-cofig -o yaml > ~/k8s/home-auth-cofig.yaml
kubectl get configmap --namespace=home-stack dashboard-cofig -o yaml > ~/k8s/dashboard-cofig.yaml
kubectl get configmap --namespace=home-stack home-common-cofig -o yaml > ~/k8s/home-common-cofig.yaml
kubectl get configmap --namespace=home-stack-dmz nginx-conf -o yaml > ~/k8s/nginx-conf.yaml
kubectl get configmap --namespace=home-stack home-email-cofig -o yaml > ~/k8s/home-email-cofig.yaml
```

This is needed as some secret items are directly updated in the cluster through the Kubernetes Dashboard for security reasons:

```
kubectl get secrets --namespace=home-stack mysql-secrets -o yaml > ~/k8s/mysql-secrets.yaml
kubectl get secrets --namespace=home-stack-db mysql-secrets -o yaml > ~/k8s/mysql-secrets-db.yaml
```
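The same backup can be taken for every ConfigMap in the namespace in one pass; a small sketch (the output directory ~/k8s matches the commands above):

```
# dump every ConfigMap in home-stack to ~/k8s, one file per ConfigMap
for cm in $(kubectl get configmap --namespace=home-stack -o name); do
  kubectl get "$cm" --namespace=home-stack -o yaml > ~/k8s/"${cm#configmap/}".yaml
done
```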
## Network Monitoring

```
kubeshark tap
```

Kubeshark dashboard is accessible at http://localhost:8899

```
kubeshark clean
```

## Deployment Architecture

| Application | Description | Service Type | Deployment/StatefulSet/CronJob/DaemonSet | URL | Comments |
|---|---|---|---|---|---|
| Home ETL Service | ETL for bank statement and other sources | ClusterIP (Headless) | StatefulSet | /home/etl | NA |
| Home Auth Service | Home AuthN and AuthZ service | ClusterIP | Deployment | /home/api | GraalVM based native Image |
| Home API Service | API for Bank/Expense/Tax/Investment/etc... | ClusterIP | Deployment | /home/api | GraalVM based native Image |
| Home Analytics Service | gRPC interface to categorize expense | ClusterIP | Deployment | /home/api | GraalVM based native Image |
| Home Email Service | IMAP to read bank transactions and SMTP to send mail | ClusterIP | Deployment | /home/api | GraalVM based native Image |
| Home Dashboard | ReactJS App on Nginx | NodePort | Deployment | http://jgte:30080 or https://jgte | For a multi-node deployment the interface has to be changed to ClusterIP and put behind Ingress; externalTrafficPolicy: Local to disable SNAT |
| Home GIT Cronjob | Cronjob to update GIT with uploaded statement (not in use) | None | CronJob | NA | NA |
| Database | MySQL | NodePort | StatefulSet | jdbc:mysql://mysql:3306/home-stack | - NodePort because I want to access SQL from outside of the cluster |
| Kubernetes Dashboard | | LoadBalancer (static IP) | Deployment | https://jgte:8443/ | |
| Kubernetes Metrics Server | Generates resource utilization metrics | ClusterIP | Deployment | NA | |
| Kubernetes Metrics Scraper | Metrics scraper for pods | ClusterIP | Deployment | NA | |
| Jaeger Dashboard | | NodePort | Deployment | http://jgte:31686/ | |
| Ingress Controller | Nginx Ingress Controller | NodePort | DaemonSet | Port: 443 | API/ETL/Dashboard are behind Nginx, but the Dashboard is still accessible directly (from mobile the host name can't be resolved - requires a local DNS server) |
```mermaid
graph LR
    A[Write Code] --> B{Does it work?}
    B -- Yes --> C[Great!]
    B -- No --> D[Google]
    D --> A
```
