Commit e9b09fa (parent 9b9f038): Improve readme

1 file changed: README.md (+87 −82 lines)
# 🚀 K8s Sandbox

A GitOps repository with Kubernetes tools simulating a real enterprise project.

---

## 🔥 Features

***Implemented:***
- **Development & Cluster Setup:**
  - Devbox
  - K3d with two clusters (control and apps)
  - SOPS for secrets management
  - Ingress Nginx
- **GitOps & Deployment:**
  - ArgoCD with SSO
  - ApplicationSet for PR preview environments
  - Argo Rollouts (Blue-Green & Canary Deployments)
  - Kargo for progressive rollouts between environments
- **Monitoring & Logging:**
  - VM Metrics
  - Alertmanager
  - Grafana (Dashboards & Alerts as Code)
  - Metrics Collectors:
    - Grafana Alloy (k8s-monitoring-stack)
    - kube-prometheus-stack
    - OpenTelemetry kube stack
  - Grafana Tempo (Tracing)
  - Logs:
    - Grafana Loki
    - VM Logs
    - Kubernetes Events
- **Automation & Security:**
  - Renovate
  - Keda
  - Windmill (just for testing)
  - Trivy
  - Teleport

***To-do:***
- [vmalertmanagerconfig](https://docs.victoriametrics.com/operator/resources/vmalertmanagerconfig/)
- Crossplane
- Github action runner

---

## 🏗️ How to set up this beauty on a local machine

### Install Dependencies

Ensure Nix and Devbox are installed. If you prefer not to install Nix, manually install the dependencies listed in [./devbox.json](./devbox.json).

```sh
devbox shell
```

### Create a K3d Cluster

```sh
k3d cluster create --config k3d-control-config.yaml
```

### Configure SOPS Locally

1. Generate an encryption key pair:
   ```sh
   age-keygen -o ./ignore/key.txt
   ```
2. Copy the **public key** into the [.sops.yaml](.sops.yaml) file under the `age:` attribute.
3. Create a K8s secret with the generated key so the sops-operator can decrypt the secrets:
   ```sh
   kubectl create namespace sops-operator
   kubectl create secret generic sops-age-key -n sops-operator --from-file=./ignore/key.txt
   ```
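As a hedged illustration of what the operator consumes, the snippet below drafts a plaintext manifest you would then encrypt with `sops` before committing. The `apiVersion`, file name, and secret values here are assumptions; verify the exact `SopsSecret` schema against the operator version deployed in this repo.

```shell
# Sketch only: apiVersion/kind follow the sops-secrets-operator CRD shape,
# but check them against the deployed operator version.
# The file name and secret values are hypothetical placeholders.
cat > sops-secret-example.yaml <<'EOF'
apiVersion: isindevelopment.com/v1alpha3
kind: SopsSecret
metadata:
  name: example-sops-secret
spec:
  secretTemplates:
    - name: example-secret
      stringData:
        username: admin
        password: change-me
EOF
```

After encrypting it in place (`sops encrypt --in-place sops-secret-example.yaml`), the file is safe to commit; the operator decrypts it in-cluster using the age key created above.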

### Deploy Infrastructure Resources

1. Create a **GitHub OAuth Application** if you want to use [SSO with ArgoCD](https://argo-cd.readthedocs.io/en/stable/operator-manual/user-management/#dex). Paste the client ID and secret into the encrypted [argocd-secrets.yaml](infra/argocd/argocd-secrets.yaml) file.
2. Deploy ArgoCD:
   ```sh
   kubectl apply -k infra/argocd
   kubectl apply -f infra/init-argo-apps.yaml
   ```

Access the ArgoCD UI at [https://argocd.127.0.0.1.nip.io](https://argocd.127.0.0.1.nip.io)

### Add the Second Application Cluster

Create the k3d cluster:
```sh
k3d cluster create --config k3d-apps-config.yaml --volume $(pwd)/tmp:/tmp/k3dvol
```
```sh
kubectl create namespace sops-operator
kubectl create secret generic sops-age-key -n sops-operator --from-file=./ignore/key.txt
```

Add the SA, Role, and RB to the target cluster:
```sh
argocd cluster add k3d-sandbox-apps --name k3d-sandbox-apps --kube-context k3d-sandbox-control
```

Encrypt the secret:
```sh
sops encrypt --in-place infra/argocd/clusters/k3d-apps-secret.yaml
```

### Deploy Applications & Monitoring Components

Deploy all applications:
```sh
kubectl apply -f apps/init-argo-apps.yaml --context k3d-sandbox-apps
```

Deploy individual applications:
```sh
kubectl apply -f apps/argo-apps/<app-name> --context k3d-sandbox-apps
```

## 📂 Project Structure

### **Apps** - Our Own Applications

We use this space to define applications developed by us. Applications are deployed to the `k3d-sandbox-apps` cluster. We have defined `production`, `staging`, and `dev` environments, as well as `ephemeral environments` created from PRs.

The application deployed here is just an example of a basic backend/frontend service. The code and Helm chart can be found in the [playground-sandbox repo](https://github.com/Utwo/playground-sandbox).

Available URLs:
* http://dev.127.0.0.1.nip.io:8000
* http://staging.127.0.0.1.nip.io:8000
* http://127.0.0.1.nip.io:8000
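As a quick sanity check once everything has synced, a small smoke-test script can probe the URLs above. This sketch only generates the script (nothing in it comes from the repo itself); running the generated script assumes `curl` is installed and the clusters are up.

```shell
# Generate a hypothetical smoke-test script for the environment URLs above.
cat > smoke-test.sh <<'EOF'
#!/bin/sh
for url in \
  http://dev.127.0.0.1.nip.io:8000 \
  http://staging.127.0.0.1.nip.io:8000 \
  http://127.0.0.1.nip.io:8000
do
  # -f: fail on HTTP errors; --max-time keeps the check from hanging.
  if curl -fsS -o /dev/null --max-time 5 "$url"; then
    echo "OK   $url"
  else
    echo "FAIL $url"
  fi
done
EOF
chmod +x smoke-test.sh
```

Run it with `./smoke-test.sh` after the ingress is reachable.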

### **Infra** - Infrastructure Services

#### ArgoCD

[ArgoCD](./infra/argo-apps/argocd.yaml) is a continuous deployment tool for GitOps workflows.

SSO with GitHub login is enabled for the ArgoCD web UI using a GitHub OAuth App.
Another GitHub App is used to:
* read the content of the playground-sandbox repo and bypass the rate limit rule.
* send ArgoCD notifications on open pull requests when we deploy an ephemeral environment.

It uses one repo to deploy to multiple clusters. Registered clusters can be found in the [clusters](./infra/argocd/clusters/) folder. For each cluster, we define a couple of labels that are later used to deploy or fill in information in the CD pipeline.

Visit the ArgoCD UI at https://argocd.127.0.0.1.nip.io

#### SOPS operator

The [SOPS operator](./infra/argo-apps/sops-secret-operator.yaml) is used to decrypt the SOPS secrets stored in the git repository and transform them into Kubernetes secrets.

Below is an example of how to create a secret with SOPS to safely store it in git.

Create a YAML SopsSecret:

```sh
SOPS_AGE_KEY_FILE=./ignore/key.txt sops test.enc.yaml
```

#### Argo Rollouts

[Argo Rollouts](./infra/argo-apps/argo-rollouts.yaml) is a Kubernetes controller and set of CRDs that provide advanced deployment capabilities such as blue-green, canary, canary analysis, experimentation, and progressive delivery features to Kubernetes.

We use Argo Rollouts to enable canary deployments for the `playground-sandbox` app using `ingress-nginx`.
To expose the Argo Rollouts web UI locally, run:
```sh
kubectl port-forward services/argo-rollouts-k3d-apps-dashboard 3100:3100 -n argo
```
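For reference, a canary strategy on a `Rollout` resource looks roughly like the sketch below. The name, image, weights, and pause durations are hypothetical placeholders, not the values used by `playground-sandbox`.

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: example-rollout        # hypothetical name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: example
  template:
    metadata:
      labels:
        app: example
    spec:
      containers:
        - name: example
          image: nginx:1.27    # placeholder image
  strategy:
    canary:
      # Shift 20% of traffic, wait for manual promotion, then finish.
      steps:
        - setWeight: 20
        - pause: {}
        - setWeight: 50
        - pause: {duration: 1m}
```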

#### Kargo

[Kargo](./infra/argo-apps/kargo.yaml) is a continuous promotion orchestration layer that complements Argo CD for Kubernetes. With Kargo we can define and promote the steps necessary to deploy to `dev`, `staging`, and `production`. The project is defined [here](./infra/kargo/projects/playground-sandbox/). It listens to both this repo and the application repo, and when there is a new change, it generates the plain manifests from the app Helm chart. The output is then pushed to the [stage/dev](https://github.com/Utwo/k8s-playground/tree/stage/dev), [stage/staging](https://github.com/Utwo/k8s-playground/tree/stage/staging), or [stage/production](https://github.com/Utwo/k8s-playground/tree/stage/production) branch and applied by ArgoCD.

https://docs.victoriametrics.com/guides/multi-regional-setup-dedicated-regions/

[Loki](./monitoring/argo-apps/loki.yaml) is used for storing logs. This service is exposed so that other clusters can send logs here.

#### Victoria Metrics logs
[vmlogs](./monitoring/argo-apps/victoria-metrics.yaml) is used for collecting logs. It was added just as a proof of concept. It does not have backup/recovery solutions and it cannot offload old logs to a bucket. For now, we will rely on Loki for storing logs.

#### Kube Prometheus Stack
[kube-prometheus-stack](./monitoring/argo-apps/kube-prometheus-stack.yaml) is used just for deploying Prometheus rules, alerts, and Grafana dashboards. Everything else is disabled because Grafana Alloy is used for collecting metrics and ServiceMonitors.

#### Grafana Alloy
[Alloy](./monitoring/argo-apps/alloy.yaml) is an OpenTelemetry collector distribution, used to collect metrics, logs, traces, and profiles. It is installed on every cluster that has the label `alloy: true`. Alloy also installs Prometheus CRDs to collect metrics from `ServiceMonitor` resources. It is also used to collect Kubernetes events. Logs are sent to Loki, traces to Tempo, and metrics to Victoria Metrics. All the data can be visualized in Grafana.

#### Opentelemetry Kube Stack
[Opentelemetry Kube Stack](./monitoring/argo-apps/opentelemetry-kube-stack.yaml) is an OpenTelemetry collector distribution, used to collect metrics, logs, and traces. It is similar to Grafana Alloy, but I couldn't make it work with the kube-prometheus-stack dashboards. We are missing the `job` label, and because of that the dashboards are not populated. There is an open [issue](https://github.com/open-telemetry/opentelemetry-helm-charts/issues/1545#issuecomment-2694671722).

Alerts defined in kube-prometheus-stack are sent to alert-manager.

Alerts from Grafana can be sent directly to external services like PagerDuty, or to our own Alertmanager. One option would be to have a contact point defined in Grafana for every application/service, with every contact point mapped to a different PagerDuty service.

---
Alert Manager and Grafana can be installed using kube-prometheus-stack, but I prefer to handle them as a separate Argo application. This approach makes it easier to swap or update components.
In Grafana, datasources, providers, dashboards, alerting configurations, and plugins can be loaded through the Helm values file or from ConfigMaps using a sidecar. We can place the ConfigMaps in any namespace; for example, we can store the Victoria Metrics datasource ConfigMap in the Victoria Metrics folder.
Note that Promtail has been discontinued, and Alloy is now the recommended option for collecting logs.
Kubernetes Event Exporter is no longer necessary, as Alloy also collects Kubernetes events.
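The sidecar mechanism described above can be sketched as a labeled ConfigMap. The label name assumes the Grafana Helm chart default (`grafana_datasource`), and the ConfigMap name, namespace, and datasource URL are hypothetical placeholders.

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: victoria-metrics-datasource
  namespace: victoria-metrics      # any namespace works if the sidecar searches cluster-wide
  labels:
    grafana_datasource: "1"        # default label the Grafana sidecar watches (chart default)
data:
  victoria-metrics.yaml: |
    apiVersion: 1
    datasources:
      - name: VictoriaMetrics
        type: prometheus
        url: http://vmsingle-victoria-metrics.victoria-metrics.svc:8429  # hypothetical service URL
        access: proxy
        isDefault: true
```

The sidecar picks the ConfigMap up at runtime and provisions the datasource without a Grafana restart.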

### Services - other services and deployments

Services that cannot be added to any other category.

[Trivy](./services/argo-apps/trivy.yaml) is a security scanner that finds vulnerabilities, misconfigurations, secrets, and SBOMs in containers.

[Windmill](./services/argo-apps/windmill.yaml) is a developer platform and workflow engine. Turn scripts into auto-generated UIs, APIs, and cron jobs. Compose them as workflows or data pipelines.
