- Crossplane
- Github action runner

---

## 🏗️ How to set up this beauty on the local machine

### Install Dependencies

Ensure Nix and Devbox are installed. If you prefer not to install Nix, manually install the dependencies listed in [./devbox.json](./devbox.json).
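
With Devbox installed, the quickest way to get a shell containing all the pinned tools is sketched below. The install URL is taken from the upstream Devbox docs and is an assumption here; check those docs if it has moved.

```sh
# Install Devbox (assumed upstream install script location)
curl -fsSL https://get.jetify.com/devbox | bash

# From the repo root, open a shell with the dependencies pinned in ./devbox.json
devbox shell
```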

1. Create a **GitHub OAuth Application** if you want to use [SSO with ArgoCD](https://argo-cd.readthedocs.io/en/stable/operator-manual/user-management/#dex). Paste the client ID and secret into the [argocd-secrets.yaml](infra/argocd/argocd-secrets.yaml) encrypted file.

2. Deploy ArgoCD:

```sh
kubectl apply -k infra/argocd
kubectl apply -f infra/init-argo-apps.yaml
```

Access the ArgoCD UI at [https://argocd.127.0.0.1.nip.io](https://argocd.127.0.0.1.nip.io)
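
If you skip the SSO setup, you can sign in as `admin`. Retrieving the initial admin password from the secret ArgoCD generates is standard ArgoCD behaviour; the `argocd` namespace is assumed here.

```sh
# Print the initial admin password (only needed when not using SSO)
kubectl -n argocd get secret argocd-initial-admin-secret \
  -o jsonpath="{.data.password}" | base64 -d; echo
```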

We use this space for defining the applications developed by us. Applications are deployed to the `k3d-sandbox-apps` cluster. We have defined `production`, `staging`, and `dev` environments here, as well as `ephemeral environments` created from PR environments.

The application deployed here is just an example of a basic backend/frontend service. The code and helm chart can be found on the [playground-sandbox repo](https://github.com/Utwo/playground-sandbox).

Available URLs:

* http://dev.127.0.0.1.nip.io:8000
* http://staging.127.0.0.1.nip.io:8000
* http://127.0.0.1.nip.io:8000

### **Infra** - Infrastructure Services

#### ArgoCD

[ArgoCD](./infra/argo-apps/argocd.yaml) is a continuous deployment tool for gitOps workflows.

SSO with Github Login is enabled for the ArgoCD web UI using the Github OAuth App.

Another Github App is used to:

* read the content of the playground-sandbox repo and bypass the rate limit rule.
* send ArgoCD notifications on open pull requests when we deploy an ephemeral environment.

It uses one repo to deploy to multiple clusters. Registered clusters can be found in the [clusters](./infra/argocd/clusters/) folder. For each cluster, we define a couple of labels that are later used to deploy or fill information in the CD pipeline.
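
To see which clusters are registered and what labels they carry, you can list the ArgoCD cluster secrets. The `argocd.argoproj.io/secret-type=cluster` label is the standard ArgoCD convention; the `argocd` namespace is assumed.

```sh
# List registered clusters and their labels
kubectl -n argocd get secrets -l argocd.argoproj.io/secret-type=cluster --show-labels
```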

Visit the ArgoCD UI at https://argocd.127.0.0.1.nip.io

#### SOPS operator

[SOPS operator](./infra/argo-apps/sops-secret-operator.yaml) is used to decrypt the SOPS secrets stored in the git repository and transform them into Kubernetes secrets.

Below is an example of how to create a secret with SOPS so it can be safely stored in git.
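
As a rough sketch (the secret name and values are hypothetical, and the exact key setup and manifest shape depend on this repo's `.sops.yaml` rules and on the operator's CRD), encrypting a plain Kubernetes Secret could look like this:

```sh
# Generate a plain Secret manifest locally; never commit it unencrypted
kubectl create secret generic my-app-secret \
  --from-literal=API_KEY=super-secret \
  --dry-run=client -o yaml > my-app-secret.yaml

# Encrypt it in place with SOPS, using the creation rules from .sops.yaml
sops --encrypt --in-place my-app-secret.yaml
```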

[Argo Rollouts](./infra/argo-apps/argo-rollouts.yaml) is a Kubernetes controller and set of CRDs that provide advanced deployment capabilities such as blue-green, canary, canary analysis, experimentation, and progressive delivery features to Kubernetes.

We use Argo Rollouts to enable canary deployments for the `playground-sandbox` app using `ingress-nginx`.

[Kargo](./infra/argo-apps/kargo.yaml) is a continuous promotion orchestration layer that complements Argo CD for Kubernetes. With Kargo we can define and promote the steps necessary to deploy to `dev`, `staging`, and `production`. The project is defined [here](./infra/kargo/projects/playground-sandbox/). It listens to both this repo and the application repo, and when there is a new change, it generates the plain manifests from the app helm chart. The output is then pushed to the [stage/dev branch](https://github.com/Utwo/k8s-playground/tree/stage/dev), [stage/staging branch](https://github.com/Utwo/k8s-playground/tree/stage/staging), or [stage/production branch](https://github.com/Utwo/k8s-playground/tree/stage/production) and applied by ArgoCD.
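
If the Kargo CLI is installed and logged in to the Kargo API server, the project's promotion pipeline can typically be inspected like this (the commands are a sketch based on the Kargo quickstart; the project name matches the folder above):

```sh
# Inspect the stages and available freight for the playground-sandbox project
kargo get stages --project playground-sandbox
kargo get freight --project playground-sandbox
```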

[Loki](./monitoring/argo-apps/loki.yaml) is used for storing logs. This service is exposed so that other clusters can send logs here.

#### Victoria Metrics logs

[vmlogs](./monitoring/argo-apps/victoria-metrics.yaml) is used for collecting logs. It was added just as a proof of concept. It does not have backup/recovery solutions and it cannot offload old logs to a bucket. For now, we will rely on Loki for storing logs.

#### Kube Prometheus Stack

[kube-prometheus-stack](./monitoring/argo-apps/kube-prometheus-stack.yaml) is used just for deploying Prometheus rules, alerts and Grafana dashboards. Everything else is disabled because Grafana Alloy is used for collecting metrics and ServiceMonitors.

#### Grafana Alloy

[Alloy](./monitoring/argo-apps/alloy.yaml) is an OpenTelemetry collector distribution, used to collect metrics, logs, traces, and profiles. It is installed on every cluster that has the label `alloy: true`. Alloy also installs Prometheus CRDs to collect metrics from `ServiceMonitor`. It is also used to collect Kubernetes events. Logs are sent to Loki, traces to Tempo, and metrics to Victoria Metrics. All the data can be visualized in Grafana.

#### Opentelemetry Kube Stack

[Opentelemetry Kube Stack](./monitoring/argo-apps/opentelemetry-kube-stack.yaml) is an OpenTelemetry collector distribution, used to collect metrics, logs, and traces. It is similar to Grafana Alloy, but I couldn't make it work with the kube-prometheus-stack dashboards. We miss the `job` label and because of that the dashboards are not populated. Open [issue](https://github.com/open-telemetry/opentelemetry-helm-charts/issues/1545#issuecomment-2694671722).

Alerts defined in kube-prometheus-stack are sent to alert-manager.

Alerts from Grafana can be sent directly to external services like Pagerduty or to our own Alertmanager. One option would be to have a contact point defined in Grafana for every application/service, and every contact point to be mapped to a different PagerDuty service.

---

Alert Manager and Grafana can be installed using kube-prometheus-stack, but I prefer to handle them as a separate Argo application. This approach makes it easier to swap or update components.

In Grafana, datasources, providers, dashboards, alerting configurations, and plugins can be loaded through the Helm values file or from ConfigMaps using a sidecar. We can place the ConfigMaps in any namespace; for example, we can store the Victoria Metrics datasource in the Victoria Metrics folder.
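
To illustrate the sidecar mechanism, a ConfigMap only needs the label the sidecar watches for. The `grafana_dashboard` label is the chart's default sidecar label, and the `monitoring` namespace is an assumption; check the Grafana values in this repo for the actual ones.

```sh
# Publish a dashboard JSON as a ConfigMap that the Grafana sidecar can pick up
kubectl -n monitoring create configmap my-dashboard --from-file=my-dashboard.json
kubectl -n monitoring label configmap my-dashboard grafana_dashboard=1
```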

Note that Promtail has been discontinued, and Alloy is now the recommended option for collecting logs.

Kubernetes Event Exporter is no longer necessary, as Alloy also collects Kubernetes events.

### Services - other services and deployments

Services that cannot be added to any other category.

[Trivy](./services/argo-apps/trivy.yaml) is a security scanner that finds vulnerabilities, misconfigurations, secrets, and SBOM in containers.
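
Assuming this Argo app installs the Trivy operator (rather than the standalone CLI), the scan results are exposed as CRDs and can be browsed with kubectl:

```sh
# List vulnerability and configuration-audit reports produced by the Trivy operator
kubectl get vulnerabilityreports -A
kubectl get configauditreports -A
```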

[Windmill](./services/argo-apps/windmill.yaml) is a developer platform and workflow engine. Turn scripts into auto-generated UIs, APIs, and cron jobs. Compose them as workflows or data pipelines.