GitOps Bridge Enhancement Proposal (GBEP) v2 #73
Based on the POC, I think there are some definite wins with the improvements made here, but I have a few thoughts and concerns we should think about.

Extensibility of Addons

When dealing with external charts, there is usually a need to add additional resources alongside your chart. I am struggling to see how, in the current implementation, I could point to a locally managed Helm chart and say "add this additional Helm chart to my addon". I have found this to be a common need, where I may not want to add X additional Applications just to create an HTTPRoute resource.

Other General Comments/Questions

I think this makes sense for single-cluster deployments, but I also want to be cognizant of the overhead that another file to manage can add, even for the initial install of an addon to a cluster.

I wonder why we can't solve the dependency problem by making addons that rely on each other part of the same Application and managing them with ArgoCD sync waves. For Karpenter, for example, you have the Helm chart go first, then add a source for your provisioner with an annotation placing it on a later sync wave.

Final Note

To me, the biggest problem with the proposal is that we still have a 1:1 mapping of ApplicationSet to a single Helm source. I wonder why we wouldn't want to consolidate the number of Applications we ultimately create by saying "I want to deploy a Helm chart for Karpenter, my provisioner Helm chart, and my CNI all in one ApplicationSet". Then the Application becomes a "Stack" that can be redeployed with slightly different config to X number of clusters.
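The sync-wave idea above could be sketched as a single multi-source Application (a hedged illustration only; the repo URLs, chart versions, and paths are placeholders, not part of the proposal):

```yaml
# Sketch: one Application combining the upstream Karpenter chart with a
# locally managed provisioner chart, ordered via sync waves. The
# provisioner chart's resources would each carry an
# argocd.argoproj.io/sync-wave: "1" annotation so they sync after the
# controller (wave 0, the default). All names/URLs are illustrative.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: karpenter-stack
  namespace: argocd
spec:
  project: default
  destination:
    server: https://kubernetes.default.svc
    namespace: karpenter
  sources:
    # Wave 0 (default): the upstream Karpenter chart
    - repoURL: public.ecr.aws/karpenter
      chart: karpenter
      targetRevision: "1.0.0"
    # Later wave: local chart templating NodePool/EC2NodeClass resources
    - repoURL: https://github.com/example-org/platform-charts
      path: charts/karpenter-provisioners
      targetRevision: main
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
```

Note that sync-wave annotations live on the templated resources themselves, not on the `sources` entries.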
Thanks for the feedback @blakeromano. Yes, I'm looking into adding additional resources to an addon, like a Kubernetes NodeClass. I intend to add this under "8. Deploying resources before and after the addon".
Is this something you are implementing for yourself, or as part of the Argo project? For my use cases, I was going to use sync waves with the app-of-ApplicationSets pattern, but it seemed a bit limiting. I try to use established tools for dependency management because it can get messy quickly. Is there a problem with the app-of-ApplicationSets pattern here?
I'd love to see this for sure!
I tend to agree with this. Something simple like a small script to push to a repo might be OK for the first implementation, but a more guided, framework-like approach might be needed in the future.
@nabuskey I want to implement an experimental validation webhook to delay the admission of an app until its dependencies are met. ArgoCD upstream is working on a related feature; see @crenshaw-dev's review in argoproj/argo-cd#15280 (review).
What Michael said makes a lot of sense to me. This is something the Crossplane project has struggled with for a long time as well. Often, the most significant use case for defined ordering is resource deletion; creations are usually fine because of the reconciliation loop. For this project, can you not get away with eventual consistency? Because adding dependency management can become very messy, I must ask: is a validation webhook the correct place for this kind of thing?
I agree with @nabuskey that ensuring eventual consistency and configuring ArgoCD apps to retry should be the best practice. This means that using sync waves should be avoided at all costs. The goal of the GitOps Bridge is to help users learn GitOps best practices while working with a specific list of popular addons. The framework will not come with a dependency graph out of the box, but it will provide a way to implement dependencies using different strategies. This will be especially useful in situations where eventual consistency is impossible when bootstrapping a new cluster. We will document and provide examples of these strategies. I agree that we don't want to promote ordering in ArgoCD, but we can't ignore that users face these problems and might think ordering is the only solution. We need to document these corner cases and explore whether they can be solved with eventual consistency or whether one of the strategies is needed.
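The "configure apps to retry" approach can be sketched with Argo CD's sync retry policy; the chart reference is taken from this thread, but the retry limits and backoff durations below are illustrative, not project recommendations:

```yaml
# Sketch: lean on eventual consistency instead of sync waves by letting
# the Application retry failed syncs with exponential backoff.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: external-secrets
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://charts.external-secrets.io
    chart: external-secrets
    targetRevision: "0.10.0"
  destination:
    server: https://kubernetes.default.svc
    namespace: external-secrets
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    retry:
      limit: 10          # give up after 10 attempts
      backoff:
        duration: 15s    # initial wait between retries
        factor: 2        # double the wait each attempt
        maxDuration: 5m  # cap the wait
```

With retries in place, an addon that fails because a dependency (e.g. a CRD or webhook) is not yet ready simply converges on a later attempt.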
One major complaint I have about the GitOps Bridge repo structure is that there is no workflow for provisioning the infrastructure itself from a Kubernetes control plane (e.g. Crossplane or Cluster API). It predominantly assumes you will manually provision all clusters with Terraform, which doesn't scale well. It also makes the folder structure confusing: for instance, "clusters" is not used to provision new clusters for a team, but to define what gets installed on specific clusters, overriding the environment.
Good point @dtzar. I agree the project is missing IaC examples using tools other than Terraform, and that we should add them as part of the enhancement proposal; I created the folder structure as placeholders, thinking of building them out. Which IaC would you prioritize? I'm thinking of implementing CAPI/CAPA first.
Our implementation went public yesterday: https://github.com/Azure-Samples/aks-platform-engineering It might help to have a live chat/sync on this subject |
Nice @dtzar! Yes, let's connect on Slack.
The enhancement proposal makes sense to me, particularly the idea of having an addon stack tied to the Kubernetes version and a single Helm chart to facilitate addon management. I have started working on a single ApplicationSet Helm chart. Here is a brief summary of the concept I want to establish:

```yaml
addons:
  aws_load_balancer_controller:
    enabled: false
    releaseName: aws-load-balancer-controller
    chart: aws-load-balancer-controller
    repoUrl: https://aws.github.io/eks-charts
    targetRevision: "1.8.1"
    namespace: kube-system
    values:
      vpcId: '{{.metadata.annotations.aws_vpc_id}}'
      clusterName: '{{.metadata.annotations.aws_cluster_name}}'
      serviceAccount:
        name: '{{.metadata.annotations.aws_load_balancer_controller_service_account}}'
        annotations:
          eks.amazonaws.com/role-arn: '{{.metadata.annotations.aws_load_balancer_controller_iam_role_arn}}'
  external_secrets:
    enabled: false
    releaseName: external-secrets
    namespace: external-secrets
    chart: external-secrets
    repoUrl: https://charts.external-secrets.io
    targetRevision: "0.10.0"
    values:
      serviceAccount:
        name: '{{.metadata.annotations.external_secrets_service_account}}'
        annotations:
          eks.amazonaws.com/role-arn: '{{.metadata.annotations.external_secrets_iam_role_arn}}'
  aws_ingress_nginx:
    enabled: false
    releaseName: ingress-nginx
    namespace: ingress-nginx
    chart: ingress-nginx
    repoUrl: https://kubernetes.github.io/ingress-nginx
    targetRevision: "4.11.1"
    values:
      controller:
        service:
          type: LoadBalancer
          annotations:
            service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
            service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
          loadBalancerClass: service.k8s.aws/nlb
```
The chart then templates one ApplicationSet per enabled addon, plus a parent ApplicationSet that feeds the per-cluster labels into the chart values:

```yaml
{{- range $name, $addon := .Values.addons }}
{{- /* `enabled` may arrive as the string "true" from a cluster label,
       so compare its string form instead of relying on truthiness */}}
{{- if eq (toString $addon.enabled) "true" }}
{{- $nameNormalize := printf "%s" $name | replace "_" "-" | trunc 63 | trimSuffix "-" -}}
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: {{ $nameNormalize }}
  namespace: argocd
spec:
  goTemplate: true
  goTemplateOptions: ["missingkey=error"]
  generators:
    - clusters:
        selector:
          matchExpressions:
            - key: enable_{{ $name }}
              operator: In
              values: ['true']
  template:
    metadata:
      name: addon-{{ $nameNormalize }}
      namespace: argocd
    spec:
      destination:
        server: "https://kubernetes.default.svc"
        namespace: {{ $addon.namespace }}
      project: default
      source:
        repoURL: {{ $addon.repoUrl }}
        chart: {{ $addon.chart }}
        targetRevision: {{ $addon.targetRevision }}
        helm:
          releaseName: {{ $addon.releaseName }}
          {{- with $addon.values }}
          valuesObject:
            {{- toYaml . | nindent 12 }}
          {{- end }}
      syncPolicy:
        automated:
          prune: true
          selfHeal: true
        syncOptions:
          - CreateNamespace=true
---
{{- end }}
{{- end }}
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: gitops-addons-appset
  namespace: argocd
spec:
  goTemplate: true
  goTemplateOptions: ["missingkey=error"]
  generators:
    - clusters:
        selector:
          matchExpressions:
            - key: cluster_name
              operator: In
              values: [in-cluster]
  template:
    metadata:
      name: gitops-addons
    spec:
      destination:
        server: "https://kubernetes.default.svc"
        namespace: argocd
      project: default
      source:
        repoURL: 'https://benmalekarim.github.io/helm-scenarios-charts/'
        chart: 'gitops-addons'
        targetRevision: '1.30.5'
        helm:
          releaseName: gitops-addons
          valuesObject:
            addons:
              aws_load_balancer_controller:
                enabled: '{{ .metadata.labels.enable_aws_load_balancer_controller }}'
              external_secrets:
                enabled: '{{ .metadata.labels.enable_external_secrets }}'
              aws_ingress_nginx:
                enabled: '{{ .metadata.labels.enable_ingress_nginx }}'
      syncPolicy:
        automated:
          prune: true
          selfHeal: true
        syncOptions:
          - CreateNamespace=true
```

By following these specifications, we will effectively address the first four points of improvement. I have a small example that reproduces these specifications, and I am open to discussing and contributing further. Please let me know your thoughts and any suggestions for additional scenarios or improvements.
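For reference, here is a sketch of the Argo CD cluster secret these generators would match on. The label and annotation keys are taken from the templates above; every value is an illustrative placeholder:

```yaml
# Sketch: a cluster secret carrying the enable_* labels the generators
# select on and the annotations the addon values reference. Cluster
# name, server URL, VPC ID, and IAM role ARN are all placeholders.
apiVersion: v1
kind: Secret
metadata:
  name: workload-cluster-1
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: cluster
    enable_aws_load_balancer_controller: "true"
    enable_external_secrets: "true"
    enable_ingress_nginx: "false"
  annotations:
    aws_vpc_id: vpc-0123456789abcdef0
    aws_cluster_name: workload-cluster-1
    external_secrets_service_account: external-secrets-sa
    external_secrets_iam_role_arn: arn:aws:iam::111122223333:role/external-secrets
type: Opaque
stringData:
  name: workload-cluster-1
  server: https://workload-cluster-1.example.com
```

Flipping an `enable_*` label to `"true"` is all it takes for the matching ApplicationSet to deploy that addon to the cluster.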
@BENMALEKarim thanks for the proposal. I think we can do better than having a different chart version per cluster version. Here is a gitops-bridge Helm chart that addresses the versions via overrides; it's a POC we are testing in a workshop environment.
GitOps Bridge Enhancement Proposal
Background
The GitOps Bridge is a project that collects examples and patterns demonstrating how to use ArgoCD Application Sets, particularly when working with Cloud Provider Managed Kubernetes. Based on feedback from field engagements, several improvements have been identified for the next major version.
Objectives
Proposed Improvements
1. Implement Addon Stacks
2. Version Promotion via Stacks
3. Centralized Version Management
4. Reduce ApplicationSet Duplication
5. CNOE Compatibility
6. Single Cluster Deployment Improvements
enable_$addon_name
7. Applications from ApplicationSet Dependency (order/sync-wave)
depends_on: "otherapp1,otherapp2"
8. Deploying resources before and after the addon
9. Complete Airgapped environments
10. Cross-Account configurations (AWS specific)
11. Additional examples for other IaC tools
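Items 6 and 7 above might surface on the cluster secret roughly as follows. This is a hedged sketch only: `depends_on` is the key proposed in item 7, not an existing Argo CD feature, where exactly it lives (cluster secret vs. addon config) is my assumption, and all names are placeholders:

```yaml
# Sketch of the proposed per-cluster switches: an enable_$addon_name
# label toggles the addon (item 6), and a depends_on annotation
# declares ordering on other addons (item 7). Neither key is
# implemented yet; this only illustrates the proposal's shape.
apiVersion: v1
kind: Secret
metadata:
  name: workload-cluster-1
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: cluster
    enable_karpenter: "true"
  annotations:
    depends_on: "otherapp1,otherapp2"
type: Opaque
```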
Feedback and Discussion
Please comment on this issue with your thoughts, suggestions, or concerns regarding these proposed improvements. We welcome all feedback to ensure the GitOps Bridge v2 meets the needs of our users and aligns with best practices in GitOps and Kubernetes management.