
ClusterLoader2 run-e2e.sh script does not work if kube context is set to a namespace other than default #3102

Open
Jont828 opened this issue Jan 14, 2025 · 1 comment · May be fixed by #3101
Labels
kind/bug Categorizes issue or PR as related to a bug.

Comments

@Jont828
Contributor

Jont828 commented Jan 14, 2025

What happened: The kubectl commands in ClusterLoader2 do not explicitly specify a namespace and will inherit whatever namespace the kube context sets. However, this secret is hard-coded to be created in the default namespace, so if a different namespace is set in the kube context, ClusterLoader2 will look for the cluster-loader secret in the wrong namespace. This wasn't an issue in existing CI tests because they already use the default namespace automatically when looking at the kube context:

CURRENT   NAME                                           CLUSTER                                        AUTHINFO                                       NAMESPACE
*         k8s-infra-e2e-boskos-scale-29_e2e-3101-62db2   k8s-infra-e2e-boskos-scale-29_e2e-3101-62db2   k8s-infra-e2e-boskos-scale-29_e2e-3101-62db2  
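
A minimal sketch of the failure mode (the secret name cluster-loader comes from the report above; the exact commands in run-e2e.sh may differ):

  # Point the current context at a non-default namespace, as a user or CI setup might:
  kubectl config set-context --current --namespace=test-pods

  # A kubectl command with no explicit namespace flag now targets test-pods,
  # so a secret created in "default" is not found:
  kubectl get secret cluster-loader

  # Pinning the namespace makes the lookup independent of the context:
  kubectl get secret cluster-loader --namespace=default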

For my use case, I was trying to build a cluster with Azure, and I ran into an error saying error: failed to create serviceaccount: namespaces "test-pods" not found in these logs. However, that namespace isn't being passed anywhere to clusterloader2; it seems to be picking it up implicitly.
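
For what it's worth, you can check which namespace the current context implies with the command below (empty output means kubectl falls back to default):

  kubectl config view --minify --output 'jsonpath={..namespace}'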

Environment:

  • Kubernetes version (use kubectl version):
  • Cloud provider or hardware configuration:
  • OS (e.g: cat /etc/os-release):
  • Kernel (e.g. uname -a):
  • Install tools:
  • Network plugin and version (if this is a network-related bug):
  • Others:
Jont828 added the kind/bug label on Jan 14, 2025
@BenTheElder
Member

test-pods is seemingly coming from https://kubernetes.io/docs/tasks/run-application/access-api-from-pod/#directly-accessing-the-rest-api

The jobs on Azure mount a Kubernetes service account in Prow, and Prow runs jobs in the test-pods namespace.
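
A quick illustration of where that namespace comes from inside a pod with a mounted service account (this is the standard Kubernetes mount path, per the doc linked above):

  # The pod's own namespace is exposed at a well-known path,
  # and in-cluster clients pick it up as their default:
  cat /var/run/secrets/kubernetes.io/serviceaccount/namespace
  # prints e.g.: test-pods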

Most previous CI jobs did not mount a Kubernetes service account, and we opt them out of the automounted default service account by default, which avoids this behavior.
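
Kubernetes exposes this opt-out via the automountServiceAccountToken field; a sketch of disabling it on a namespace's default service account (how Prow actually opts jobs out is not shown here, so treat this as illustrative):

  # Disable token automounting for pods that use this service account
  # (individual pods can still opt back in explicitly in their spec):
  kubectl patch serviceaccount default \
    -p '{"automountServiceAccountToken": false}'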

But ideally we wouldn't rely on the context's default namespace; we don't in e.g. test/e2e?
