This repository contains resources related to my homelab, currently called `torvalds`. I give each of my servers a unique name so that I can keep track of them over time.
Currently my server is managed with Kubernetes. I've used Docker, Ansible, and bash scripts in the past. Kubernetes has been an interesting experiment, and I think it's worthwhile overall since the ecosystem is so rich.
I've spent a lot of time making this project pleasant to work with. Here are some things I'm proud of:
- Close to zero host setup
  - It's literally just a few commands to deploy my entire cluster
- Entirely written in TypeScript, built with cdk8s and Deno
- Automated backups
  - The applications I care about are regularly backed up to BorgBase
- HTTPS ingress with Tailscale
- All secrets managed with 1Password
- Jenkins CI w/ Earthly, used by my open-source projects
- Entirely automated deployment for updates, upgrades, etc.
  - Commit-to-deployment takes about 1 minute
- Automated dependency updates
  - For Docker images (w/ pinned SHAs)
  - For Helm charts
  - For Jenkins plugins
  - For Deno dependencies
  - This approach allows all of my dependencies to be pinned and updated regularly
- Create `secrets.yaml`
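  The original doesn't show the command for this step; presumably it uses Talos's secrets generator (an assumption on my part):

  ```shell
  # Generate cluster secrets (assumed command; writes secrets.yaml
  # in the current directory for use by `talosctl gen config`)
  talosctl gen secrets
  ```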
- Create the configuration file:

  ```shell
  talosctl gen config \
    --with-secrets secrets.yaml \
    --config-patch-control-plane @patches/scheduling.yaml \
    --config-patch @patches/image.yaml \
    --config-patch @patches/tailscale.yaml \
    torvalds https://192.168.1.81:6443 --force
  ```
- Configure `endpoints` in `talosconfig`
  - This allows commands to be run without the `--endpoints` argument
- Move the talosconfig:

  ```shell
  mv talosconfig ~/.talos/config
  ```

  - This allows commands to be run without the `--talosconfig` argument
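  The original doesn't show the command used to set the endpoints; one way is `talosctl`'s built-in config subcommand, assuming the node IP used elsewhere in this doc:

  ```shell
  # Record the endpoint in the talosconfig so --endpoints can be omitted
  # (192.168.1.81 is the control-plane IP used throughout this doc)
  talosctl config endpoint 192.168.1.81
  ```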
- Apply the configuration:

  ```shell
  talosctl apply-config --insecure --nodes 192.168.1.81 --file controlplane.yaml
  ```
- If needed, update:

  ```shell
  talosctl apply-config --nodes 192.168.1.81 --file controlplane.yaml
  ```

  Upgrade:

  ```shell
  talosctl upgrade --nodes 192.168.1.81 --image <image>
  ```
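  After an upgrade it's worth confirming the node is on the expected release; a quick check (not in the original, using the same node IP):

  ```shell
  # Report the Talos version running on the node vs. the local client
  talosctl version --nodes 192.168.1.81
  ```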
- Bootstrap the Kubernetes cluster:

  ```shell
  talosctl bootstrap --nodes 192.168.1.81
  ```
- Create a Kubernetes configuration:

  ```shell
  talosctl kubeconfig --nodes 192.168.1.81
  ```
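  A sanity check I'd add here (not in the original) to confirm the merged kubeconfig works before continuing:

  ```shell
  # The control-plane node should appear once the cluster is bootstrapped
  kubectl get nodes
  ```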
- Install `helm`:

  ```shell
  brew install helm
  ```
- Install Argo CD manually:

  > [!NOTE]
  > This will be imported into Argo CD itself as part of the CDK8s manifest

  ```shell
  kubectl create namespace argocd
  helm repo add argo https://argoproj.github.io/argo-helm
  helm install argocd argo/argo-cd --namespace argocd
  ```
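  To verify the install before moving on, one could watch the pods come up (my addition, not in the original):

  ```shell
  # All Argo CD components should reach Running/Ready
  kubectl get pods -n argocd --watch
  ```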
- Set the credentials in the `secrets` directory:
  - Be sure not to commit any changes to these files so that secrets don't leak.
  - These should be the only credentials that are manually set. Everything else can be retrieved from 1Password.
  - Annoyingly, the credential in `1password-secret.yaml` must be base64 encoded:

    ```shell
    cat 1password-credentials.json | base64 -w 0
    ```

  ```shell
  kubectl create namespace 1password
  kubectl apply -f secrets/1password-secret.yaml
  kubectl apply -f secrets/1password-token.yaml
  ```
- Build and deploy the manifests in this repo:

  ```shell
  cd cdk8s && deno task up
  ```
- Get the initial Argo CD `admin` password:

  ```shell
  kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d
  ```
- Change the Argo CD `admin` password.
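  The original doesn't show how to reach the Argo CD UI to do this; a common approach is a port-forward (my assumption — the service name `argocd-server` follows from the `argocd` release name used above):

  ```shell
  # Expose the Argo CD server locally, then log in as `admin`
  # at https://localhost:8080 and change the password there
  kubectl port-forward svc/argocd-server -n argocd 8080:443
  ```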
Adapted from https://www.roosmaa.net/blog/2024/setting-up-zfs-on-talos/
- Create a shell with `pods/shell.yaml`:

  ```shell
  kubectl apply -f pods/shell.yaml
  ```
- Try to run a ZFS command:

  ```shell
  kubectl exec pod/shell -n maintenance -- \
    nsenter --mount=/proc/1/ns/mnt -- \
    zpool status
  ```
- Create a ZFS pool:

  ```shell
  # for nvme storage
  kubectl exec pod/shell -n maintenance -- \
    nsenter --mount=/proc/1/ns/mnt -- \
    zpool create -m legacy -f zfspv-pool-nvme \
    /dev/disk/by-id/nvme-Samsung_SSD_990_PRO_4TB_S7KGNU0X511734N

  # for hdd storage
  kubectl exec pod/shell -n maintenance -- \
    nsenter --mount=/proc/1/ns/mnt -- \
    zpool create -m legacy -f zfspv-pool-hdd raidz2 \
    /dev/sda \
    /dev/sdb \
    /dev/sdc \
    /dev/sdd \
    /dev/sde
  ```
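  A quick verification step (my addition, not in the original), reusing the same maintenance-shell pattern as above:

  ```shell
  # List the pools just created; both zfspv-pool-nvme and
  # zfspv-pool-hdd should appear with their sizes
  kubectl exec pod/shell -n maintenance -- \
    nsenter --mount=/proc/1/ns/mnt -- \
    zpool list
  ```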
- Install OpenEBS:

  ```shell
  helm repo add openebs https://openebs.github.io/openebs
  helm install openebs --namespace openebs openebs/openebs \
    --set engines.replicated.mayastor.enabled=false \
    --set engines.local.lvm.enabled=false \
    --set zfs-localpv.zfsNode.encrKeysDir=/var \
    --create-namespace
  ```
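  To confirm the ZFS provisioner is usable, one could check that its StorageClasses were registered (my addition, not in the original):

  ```shell
  # The zfs-localpv engine registers StorageClasses backed by zfs.csi.openebs.io
  kubectl get storageclass
  ```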