# [Question] clarifications on `live` commands' relationship with kustomize #1545
Hi @marshall007, first of all thank you for being an active user of our toolchain. There is a lot to unpack here.

**upstream remotes**

First of all, you are correct that remotes have a stability gap compared to pulling packages locally. We have seen that from customers, and it is one of the key motivations for kpt (even though remotes are going to be hard to unwind because of how popular kustomize is). For the purposes of analyzing your situation further I am going to use the term […]. So in your example of having […]:

**kpt live apply**

In this workflow I would consider a slightly different design than what I understand you currently have. First of all, our recommendation is that you have a "wet repo" where the resources are fully hydrated. Why? This gives you a stable checkpoint, which is useful for a lot of reasons. The new version of kpt live apply introduces a resourcegroup that connects your KRM on disk with what is in the cluster (you can read a bit about it here: https://googlecontainertools.github.io/kpt/reference/live/init-alpha/). So if you have an out-of-place hydration tool like kustomize, what you can consider is piping the output to a directory which is then versioned in git. The power of keeping the configuration data separate from the transformation instructions or template languages is that you can use a number of tools, or even your own scripts that modify YAML, and the package contents are always in an interoperable state.
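To make that concrete, here is a minimal sketch of the out-of-place hydration loop described above; the directory names and overlay path are illustrative placeholders, not anything prescribed in this thread:

```sh
# Hydrate the kustomize output into a "wet" directory that is committed to git.
kustomize build overlays/prod > hydrated/prod/manifests.yaml
git add hydrated/prod && git commit -m "Re-hydrate prod manifests"

# One-time setup: create the inventory object kpt uses to track what it applied.
kpt live init hydrated/prod

# Apply the fully hydrated, version-controlled config.
kpt live apply hydrated/prod
```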
@mikebz thanks for the detailed response, much appreciated! The part that feels burdensome to us is the piping of the kustomize output to a separate directory which is also maintained in the git repo. It feels like following all the other best practices (mainly unwinding remote refs and not doing anything that references external state in generators/transformers) results in kustomizations that are completely deterministic and thus "good enough" to be considered the source of truth without hydration.

Here is our WIP packages repo to provide some concrete examples: https://appdat.jsc.nasa.gov/appdat/kpt-packages

If you look you will see that our kustomizations are virtually all static resource manifests anyway. Thus, requiring our consumers to maintain both a copy of the kpt package as well as a separate fully hydrated directory in their git repo seems like a lot to ask and undermines the value prop of […].

It feels like […]

Thanks again!
@marshall007 I took a quick look at your packages and it seems like almost all of the changes you are expecting the package consumers to make are simple string value updates via setters. Is that correct? If you are asking people to just set some setters and then output a fully hydrated config here: […]

When you don't mix and match kustomize and kpt, your life around […]

Your end user instructions can be something along the lines of: […]
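A hedged sketch of what such end-user instructions might look like with kpt alone (kpt v0.x CLI; the package path and setter names are hypothetical):

```sh
# Discover the setters the package exposes.
kpt cfg list-setters packages/ingress-nginx

# Set string values in place; the package stays fully hydrated KRM on disk.
kpt cfg set packages/ingress-nginx namespace appdat-system
kpt cfg set packages/ingress-nginx replicas 2

# Apply directly, with no separate hydration step.
kpt live init packages/ingress-nginx
kpt live apply packages/ingress-nginx
```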
Is the only issue the reuse of a component that might go into several parts of the application?
I'm following along here, because we have quite a bit of kustomize. I am gathering that the easiest path forward is to convert all the kustomizations to kpt, i.e. they really are not complementary - more like two different approaches to the problem. I think that's OK, but would suggest the docs cover this and perhaps provide a migration guide.
We are exposing setters as a convenience for three primary use cases: […]
The reason we want/need […]. We are also recommending that customers compose and deploy this entire stack as a single […].

As a result, a typical customer onboarding looks something like:

```sh
kpt live init .
kpt pkg get "https://appdat.jsc.nasa.gov/appdat/kpt-packages.git@$PKG_VERSION" packages

# optional static setter overrides
kpt cfg set ...

cat <<EOF > appdat-system.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: appdat-system
EOF

cat <<EOF > kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - appdat-system.yaml
  - inventory-template.yaml
  - packages/gitlab-admin
  - packages/cert-manager
  - packages/ingress-nginx
components:
  - packages/cert-manager/account-key
  - packages/ingress-nginx/aws
EOF
```

... and a CI deployment looks like:

```sh
# optional deploy-time setter overrides
kpt cfg set ...

# optional (depending on what components are included) deploy-time secret injection
cp "$LETSENCRYPT_ACCOUNT_KEY" packages/cert-manager/account-key/secrets/tls.key

kustomize build . | kpt live apply
```
@mikebz this will be an issue for us as we add more packages that are intended to be deployed as multiple instances with different configurations, but this is really just a special case of our general problem: it is not easy for customers to compose their configuration from the packages we provide. With that in mind, I don't see how we could simply ditch kustomize and avoid "mix and match" with kpt. In particular, we are putting a lot of work into the […]

Thanks again, and I look forward to any further recommendations you may have.
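P.S. for concreteness, one shape the multiple-instance case could take on the kustomize side is a thin overlay per instance; the paths and names below are hypothetical:

```yaml
# instances/team-a/kustomization.yaml
# One overlay per instance of the shared package, differing only in
# namespace and name prefix.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: team-a
namePrefix: team-a-
resources:
  - ../../packages/ingress-nginx
```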
It might be better to connect live and walk through some of these use cases. I am [email protected] (fill in the blank :) )
@mikebz email sent, thanks!
I think the final action here is to ensure that we have some good documentation for the kpt+kustomize mix. We are going to make sure we add this to the doc refresh task.
This is an interesting read! […] for our users. I'm curious if there are other ways in the kpt ecosystem to standardize such a workflow. Can we use kpt fn to do that? /cc @zijianjoy
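(For illustration, the imperative form of that in kpt v0.x would be something like the sketch below; the function image and its argument are made-up placeholders, not a real catalog function:)

```sh
# Run a containerized KRM function over the package directory in place.
kpt fn run manifests/ --image example.com/fn/standardize-pipelines:v1 -- env=prod
```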
@Bobgy thanks for chiming in! Coincidentally we just ran into the same problem you brought up in kubeflow/pipelines#5368 and had to migrate all our patch files to JSON as a workaround (thanks for the suggestion!). Anyway, it seems like a real minefield for […]
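For anyone hitting the same issue: the workaround works because kpt's configuration commands rewrite YAML files while leaving JSON patch files alone; a minimal JSON6902 patch file along these lines (target path and value illustrative) can be referenced from a kustomization just like a YAML patch:

```json
[
  { "op": "replace", "path": "/spec/replicas", "value": 3 }
]
```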
Sorry I didn't do an explicit update, but #2136 outlined a way that we think both tools play nice together. You can check out the sample and the way it works with […].

I'd love to hear feedback on that sample, and if there is none, maybe we can close this issue and open new ones for specific scenarios.
Hey @mikebz, I saw this new example when I was digging through the v1 migration docs and forgot to chime in here myself. I do not think the approach of requiring the […] You would also have to include custom […]

It is no longer possible for different tooling to operate on the same set of resources. See kubernetes-sigs/cli-utils#364, where we are trying to find a portable way for the GitLab Kubernetes Agent (which already uses […]).
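For context, the approach under discussion declares transformations in the package's Kptfile pipeline, which `kpt fn render` executes; a rough sketch of that shape (the function image and values here are illustrative):

```yaml
# Kptfile (kpt v1): resources are mutated by the declared pipeline
# when `kpt fn render` runs, rather than by external tooling.
apiVersion: kpt.dev/v1
kind: Kptfile
metadata:
  name: example-pkg
pipeline:
  mutators:
    - image: gcr.io/kpt-fn/set-namespace:v0.2.0
      configMap:
        namespace: example
```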
Proposal for the design is here: #2576
We have recently adopted `kpt`/`kustomize` as our preferred mechanism for maintaining and distributing a consistent baseline cluster configuration. The tooling is all great, but we are still trying to understand best practices and continue developing our own.

Discussion in issues like #1447 makes it seem like we're going about it all wrong... our understanding of `kpt`'s primary role is that it provides strong advantages over referencing remotes from your `kustomization.yaml` files inline.

In practice, there is little consistency in terms of whether the upstream remote is:

- a `kustomize` package with good overlays/patches/components
- a `kustomize` package that only provides you a "kitchen sink" deployment

Since our goal is to provide a consistent set of sub-packages that play well together, it seemed like our best option was to wrap the upstream of each sub-package with overlays to support consumer use cases, plus a `kustomization.yaml` at the root representing a default installation. We also create an `upstream/kustomization.yaml` if it doesn't exist so that (1) it can be easily referenced from another `kustomization` and (2) we can include necessary `patches`/`nameReference` overrides/etc. so that everything works as expected when further kustomizations are applied.
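Concretely, each wrapped sub-package ends up with a layout along these lines (an illustrative sketch, not our exact tree):

```
packages/ingress-nginx/
├── kustomization.yaml        # root: default installation
├── upstream/
│   └── kustomization.yaml    # created if missing; carries patches/nameReference fixes
└── aws/                      # consumer-facing component/overlay
    └── kustomization.yaml
```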
This feels right as both a publisher and consumer, but only because we are putting in the extra work to guarantee to the consumer that `kustomize build <dir> | kpt live apply` always works to deploy any sub-package or all of them at once.

It would be much easier for us to use, easier to explain to consumers, and safer default behavior if all `kpt live` commands were aware of `Kustomization` resources and built them appropriately.

I am still getting up to speed on the project, but I've seen several issues (like #407 (comment)) where it seems like this is loosely planned; however, it is not actually mentioned in the roadmap and similar issues have been closed as won't fix.

Thanks in advance for any feedback on our approach and/or clarifications around future integration with `kustomize`!