
Conversation

@RayyanSeliya
Contributor

@RayyanSeliya RayyanSeliya commented Aug 16, 2025

Summary

This PR fixes issue #453 where the kubeflex-controller-manager pod was not updating with new images after running make install-local-chart. The root cause was a problematic sed command in the Makefile's chart target that caused image references to remain as <placeholder> instead of the actual image name.

Related issue(s)

Fixes #453

Problem

After running make install-local-chart, the kubeflex-controller-manager pod was not restarted with the new image: the Deployment object was never updated with the new image reference, so the pod kept running the old version.

Evidence from the original issue:

  • New image was built: ko.local/manager:f72a332
  • Helm upgrade completed successfully
  • But the pod name remained the same: kubeflex-controller-manager-699d869487-p6wsx
  • No new pod was created, indicating the Deployment spec didn't change

Root Cause

The issue was in the Makefile's chart target:

# BROKEN - This sed command was failing silently
cd config/manager && $(KUSTOMIZE) edit set image controller=$(shell echo ${IMG} | sed 's/\(:.*\)v/\1/')

The sed command was intended to remove the 'v' from version tags, but IMAGE_TAG uses git commit hashes (like f72a332), not version tags prefixed with 'v'. This caused the image reference to remain as <placeholder> in the generated chart.

Solution

1. Fixed Makefile chart target

# FIXED - Simplified to use the full image reference
cd config/manager && $(KUSTOMIZE) edit set image controller=${IMG}
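
For illustration, kustomize edit set image records this override in config/manager/kustomization.yaml. A minimal sketch of what that looks like, using the tag from the original issue report (run from config/manager; the Makefile actually invokes its own pinned kustomize binary):

kustomize edit set image controller=ko.local/manager:f72a332
grep -A3 '^images:' kustomization.yaml
# Expected output (approximately):
# images:
# - name: controller
#   newName: ko.local/manager
#   newTag: f72a332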

2. Enhanced Deployment configuration

  • Added an explicit RollingUpdate strategy for better update control (a quick check is sketched below)
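
As a quick, hedged check (not part of the PR itself), the strategy can be read back from the live Deployment after an upgrade:

kubectl get deployment kubeflex-controller-manager -n kubeflex-system -o jsonpath='{.spec.strategy.type}'; echo
# Expected output: RollingUpdate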

3. Added comprehensive E2E test

  • Created test/e2e/test-controller-image-update.sh to verify the fix
  • Tests image reference updates, pod recreation, and deployment stability
  • Ensures all pods are running and the deployment is available (core steps are sketched below)
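
In outline, the script's verification follows this pattern (simplified here; the full trace is shown under Testing below):

OLD_IMAGE=$(kubectl get deployment kubeflex-controller-manager -n kubeflex-system \
  -o 'jsonpath={.spec.template.spec.containers[?(@.name=="manager")].image}')
export IMAGE_TAG=e2e-test-$(date +%s)   # unique tag so the Deployment spec must change
make install-local-chart
kubectl rollout status deployment/kubeflex-controller-manager -n kubeflex-system --timeout=180s
NEW_IMAGE=$(kubectl get deployment kubeflex-controller-manager -n kubeflex-system \
  -o 'jsonpath={.spec.template.spec.containers[?(@.name=="manager")].image}')
[ "$OLD_IMAGE" != "$NEW_IMAGE" ] || { echo "image did not change"; exit 1; }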

Testing

The fix has been thoroughly tested with the following results:

  • Image Update: Image reference updates correctly with new tags
  • Pod Recreation: New pods are created during deployment
  • Deployment Stability: All pods are running and ready
  • E2E Test Passes: Comprehensive test verifies the complete workflow

Test Output:

rayyan@rayyan-seliya:/mnt/c/Users/RAYYAN/Desktop/kubeflex$ ./test/e2e/test-controller-image-update.sh
+ set -e
+++ dirname ./test/e2e/test-controller-image-update.sh
++ cd ./test/e2e
++ pwd
+ SRC_DIR=/mnt/c/Users/RAYYAN/Desktop/kubeflex/test/e2e
+ source /mnt/c/Users/RAYYAN/Desktop/kubeflex/test/e2e/setup-shell.sh
++ export -f wait-for-cmd
++ export -f expect-cmd-output
++ export -f wait-for-secret
+ :
+ : -------------------------------------------------------------------------
+ : Test that controller manager image updates properly with make install-local-chart       
+ : This test verifies the fix for issue
+ : with new images after running make install-local-chart
+ :
+ echo '=== Testing Controller Manager Image Update Fix ==='
=== Testing Controller Manager Image Update Fix ===
+ echo '1. Verifying kubeflex installation...'
1. Verifying kubeflex installation...
+ kubectl get namespace kubeflex-system
+ echo '2. Getting current controller manager image...'
2. Getting current controller manager image...
++ kubectl get deployment kubeflex-controller-manager -n kubeflex-system -o 'jsonpath={.spec.template.spec.containers[?(@.name=="manager")].image}'
+ CURRENT_IMAGE=ko.local/manager:e2e-test-1755340795
+ '[' -z ko.local/manager:e2e-test-1755340795 ']'
+ echo 'Current image: ko.local/manager:e2e-test-1755340795'
Current image: ko.local/manager:e2e-test-1755340795
+ echo '3. Getting current pod names...'
3. Getting current pod names...
++ kubectl get pods -n kubeflex-system -l control-plane=controller-manager -o 'jsonpath={.items[*].metadata.name}'
+ CURRENT_PODS=kubeflex-controller-manager-5b6cc584d4-8hrrs
+ echo 'Current pods: kubeflex-controller-manager-5b6cc584d4-8hrrs'
Current pods: kubeflex-controller-manager-5b6cc584d4-8hrrs
+ echo '4. Setting unique image tag for testing...'
4. Setting unique image tag for testing...
++ date +%s
+ export IMAGE_TAG=e2e-test-1755341657
+ IMAGE_TAG=e2e-test-1755341657
+ echo 'Using image tag: e2e-test-1755341657'
Using image tag: e2e-test-1755341657
+ echo '5. Running make install-local-chart...'
5. Running make install-local-chart...
+ make install-local-chart
test -s /mnt/c/Users/RAYYAN/Desktop/kubeflex/bin/controller-gen && /mnt/c/Users/RAYYAN/Desktop/kubeflex/bin/controller-gen --version | grep -q v0.15.0 || \
GOBIN=/mnt/c/Users/RAYYAN/Desktop/kubeflex/bin go install sigs.k8s.io/controller-tools/cmd/controller-gen@v0.15.0
/mnt/c/Users/RAYYAN/Desktop/kubeflex/bin/controller-gen rbac:roleName=manager-role crd webhook paths="./..." output:crd:artifacts:config=config/crd/bases
cd config/manager && /mnt/c/Users/RAYYAN/Desktop/kubeflex/bin/kustomize edit set image controller=ko.local/manager:e2e-test-1755341657
/mnt/c/Users/RAYYAN/Desktop/kubeflex/bin/kustomize build config/default > chart/templates/operator.yaml
# Warning: 'patchesStrategicMerge' is deprecated. Please use 'patches' instead. Run 'kustomize edit fix' to update your Kustomization automatically.
/mnt/c/Users/RAYYAN/Desktop/kubeflex/bin/kustomize build config/crd > chart/crds/crds.yaml
KO_DOCKER_REPO=ko.local ko build -B ./cmd/manager -t e2e-test-1755341657 --platform linux/amd64
2025/08/16 16:24:33 Using base cgr.dev/chainguard/static:latest@sha256:6a4b683f4708f1f167ba218e31fcac0b7515d94c33c3acf223c36d5c6acd3783 for github.com/kubestellar/kubeflex/cmd/manager 
2025/08/16 16:24:38 git is in a dirty state
Please check in your pipeline what can be changing the following files:
 M .ci-operator.yaml
 M .dockerignore
 M .github/ISSUE_TEMPLATE/bug_report.yaml
 M .github/ISSUE_TEMPLATE/epic.yaml
 M .github/ISSUE_TEMPLATE/feature_request.yaml
 M .github/dependabot.yaml
 M .github/pull_request_template.md
 M .github/spellcheck/.spellcheck.yml
 M .github/spellcheck/.wordlist.txt
 M .github/workflows/OWNERS
 M .github/workflows/ci.yaml
 M .github/workflows/dco.yaml
 M .github/workflows/goreleaser.yml
 M .github/workflows/pr-verifier.yaml
 M .github/workflows/spellcheck_action.yml
 M .github/workflows/test-e2e.yaml
 M .gitignore
 M .goreleaser.yaml
 M .prow.yaml
 M CODE_OF_CONDUCT.md
 M CONTRIBUTING.md
 M DCO
 M Dockerfile
 M LICENSE
MM Makefile
 M OWNERS
 M PROJECT
 M README.md
 M api/v1alpha1/conditions.go
 M api/v1alpha1/conditions_test.go
 M api/v1alpha1/controlplane_types.go
 M api/v1alpha1/groupversion_info.go
 M api/v1alpha1/postcreatehook_types.go
 M api/v1alpha1/zz_generated.deepcopy.go
 M chart/.helmignore
 M chart/Chart.yaml
 M chart/crds/crds.yaml
 M chart/templates/NOTES.txt
 M chart/templates/_helpers.tpl
 M chart/templates/builtin-hooks.yaml
 M chart/templates/install-hooks.yaml
 M chart/templates/operator.yaml
 M chart/values.yaml
 M cmd/cmupdate/main.go
 M cmd/kflex/adopt/adopt.go
 M cmd/kflex/common/cp.go
 M cmd/kflex/common/flags.go
 M cmd/kflex/config/config.go
 M cmd/kflex/config/config_test.go
 M cmd/kflex/config/diagnose.go
 M cmd/kflex/config/set_hosting_cluster_ctx.go
 M cmd/kflex/config/set_hosting_cluster_ctx_test.go
 M cmd/kflex/create/create.go
 M cmd/kflex/ctx/ctx.go
 M cmd/kflex/ctx/ctx_test.go
 M cmd/kflex/ctx/delete.go
 M cmd/kflex/ctx/delete_test.go
 M cmd/kflex/ctx/get.go
 M cmd/kflex/ctx/list.go
 M cmd/kflex/ctx/rename.go
 M cmd/kflex/ctx/rename_test.go
 M cmd/kflex/delete/delete.go
 M cmd/kflex/init/cluster/kind.go
 M cmd/kflex/init/config.go
 M cmd/kflex/init/init.go
 M cmd/kflex/list/list.go
 M cmd/kflex/main.go
 M cmd/kflex/version/version.go
 M cmd/manager/main.go
 M config/crd/bases/tenancy.kflex.kubestellar.org_controlplanes.yaml
 M config/crd/bases/tenancy.kflex.kubestellar.org_postcreatehooks.yaml
 M config/crd/kustomization.yaml
 M config/crd/kustomizeconfig.yaml
 M config/crd/patches/cainjection_in_controlplanes.yaml
 M config/crd/patches/webhook_in_controlplanes.yaml
 M config/default/kustomization.yaml
 M config/default/manager_auth_proxy_patch.yaml
 M config/default/manager_config_patch.yaml
 M config/manager/config.yaml
 M config/manager/kustomization.yaml
MM config/manager/manager.yaml
 M config/prometheus/kustomization.yaml
 M config/prometheus/monitor.yaml
 M config/rbac/auth_proxy_client_clusterrole.yaml
 M config/rbac/auth_proxy_role.yaml
 M config/rbac/auth_proxy_role_binding.yaml
 M config/rbac/auth_proxy_service.yaml
 M config/rbac/controlplane_editor_role.yaml
 M config/rbac/controlplane_viewer_role.yaml
 M config/rbac/kustomization.yaml
 M config/rbac/leader_election_role.yaml
 M config/rbac/leader_election_role_binding.yaml
 M config/rbac/role.yaml
 M config/rbac/role_binding.yaml
 M config/rbac/service_account.yaml
 M config/samples/kustomization.yaml
 M config/samples/postcreate-hooks/hello.yaml
 M config/samples/postcreate-hooks/openshift-crds.yaml
 M config/samples/postcreate-hooks/postgres.yaml
 M config/samples/tenancy_v1alpha1_controlplane.yaml
 M docs/architecture.md
 M docs/contributors.md
 M docs/debugging.md
 M docs/postgresql-architecture-decision.md
 M docs/users.md
 M go.mod
 M go.sum
 M hack/boilerplate.go.txt
 M hack/verify-go-versions.sh
 M internal/controller/controlplane_controller.go
 M internal/controller/suite_test.go
 M kflex.rb
 M pkg/certs/certgen.go
 M pkg/certs/certgen_test.go
 M pkg/certs/kconfig_gen.go
 M pkg/client/client.go
 M pkg/client/client_test.go
 M pkg/helm/installer.go
 M pkg/kubeconfig/extensions.go
 M pkg/kubeconfig/extensions_test.go
 M pkg/kubeconfig/kubeconfig.go
 M pkg/kubeconfig/kubeconfig_test.go
 M pkg/reconcilers/external/kubeconfig.go
 M pkg/reconcilers/external/reconciler.go
 M pkg/reconcilers/host/rbac.go
 M pkg/reconcilers/host/reconciler.go
 M pkg/reconcilers/host/secret.go
 M pkg/reconcilers/host/service_account.go
 M pkg/reconcilers/k8s/deployment.go
 M pkg/reconcilers/k8s/reconciler.go
 M pkg/reconcilers/k8s/secret.go
 M pkg/reconcilers/k8s/service.go
 M pkg/reconcilers/ocm/chart.go
 M pkg/reconcilers/ocm/reconciler.go
 M pkg/reconcilers/ocm/service.go
 M pkg/reconcilers/shared/config.go
 M pkg/reconcilers/shared/ingress.go
 M pkg/reconcilers/shared/job.go
 M pkg/reconcilers/shared/namespace.go
 M pkg/reconcilers/shared/postcreate_hook.go
 M pkg/reconcilers/shared/rbac.go
 M pkg/reconcilers/shared/reconciler.go
 M pkg/reconcilers/shared/route.go
 M pkg/reconcilers/vcluster/chart.go
 M pkg/reconcilers/vcluster/reconciler.go
 M pkg/reconcilers/vcluster/secret.go
 M pkg/reconcilers/vcluster/service.go
 M pkg/util/clusterscoped_refs.go
 M pkg/util/errors.go
 M pkg/util/ocp.go
 M pkg/util/pg.go
 M pkg/util/print.go
 M pkg/util/status_check.go
 M pkg/util/unstructured.go
 M pkg/util/util.go
 M scripts/install-kubeflex.sh
 M test/e2e/README.md
 M test/e2e/cleanup.sh
 M test/e2e/kind-config.yaml
 M test/e2e/list-controller-pch.yaml
 M test/e2e/manage-ctx.sh
 M test/e2e/manage-type-external.sh
 M test/e2e/manage-type-k8s.sh
 M test/e2e/manage-type-vcluster.sh
 M test/e2e/nginx-patch.yaml
 M test/e2e/run.sh
 M test/e2e/setup-kubeflex.sh
 M test/e2e/setup-shell.sh
A  test/e2e/test-controller-image-update.sh
 M test/e2e/test-postcreate-completion.sh

2025/08/16 16:24:38 Building github.com/kubestellar/kubeflex/cmd/manager for linux/amd64
2025/08/16 16:24:50 Loading ko.local/manager:a11efa475d1edc257a9e49d68c3da104c80ee4461196b978e66251c6fc078582
2025/08/16 16:24:50 Loaded ko.local/manager:a11efa475d1edc257a9e49d68c3da104c80ee4461196b978e66251c6fc078582
2025/08/16 16:24:50 Adding tag e2e-test-1755341657
2025/08/16 16:24:50 Added tag e2e-test-1755341657
ko.local/manager:a11efa475d1edc257a9e49d68c3da104c80ee4461196b978e66251c6fc078582
kind load docker-image ko.local/manager:e2e-test-1755341657 --name kubeflex
Image with ID: sha256:7fb1e7632e765b23c737b647f7bcbf2f00e14dd3a475f3afbf7baf6b774cbef4 already present on the node kubeflex-control-plane but is missing the tag ko.local/manager:e2e-test-1755341657. re-tagging...
helm upgrade --install --create-namespace -n kubeflex-system kubeflex-operator ./chart
Release "kubeflex-operator" has been upgraded. Happy Helming!
NAME: kubeflex-operator
LAST DEPLOYED: Sat Aug 16 16:24:52 2025
NAMESPACE: kubeflex-system
STATUS: deployed
REVISION: 13
TEST SUITE: None
NOTES:

+ echo '6. Waiting for deployment to update...'
6. Waiting for deployment to update...
+ kubectl rollout status deployment/kubeflex-controller-manager -n kubeflex-system --timeout=180s
Waiting for deployment "kubeflex-controller-manager" rollout to finish: 1 old replicas are pending termination...
Waiting for deployment "kubeflex-controller-manager" rollout to finish: 1 old replicas are pending termination...
deployment "kubeflex-controller-manager" successfully rolled out
+ echo '7. Getting new controller manager image...'
7. Getting new controller manager image...
++ kubectl get deployment kubeflex-controller-manager -n kubeflex-system -o 'jsonpath={.spec.template.spec.containers[?(@.name=="manager")].image}'
+ NEW_IMAGE=ko.local/manager:e2e-test-1755341657
+ '[' -z ko.local/manager:e2e-test-1755341657 ']'
+ echo 'New image: ko.local/manager:e2e-test-1755341657'
New image: ko.local/manager:e2e-test-1755341657
+ echo '8. Getting new pod names...'
8. Getting new pod names...
++ kubectl get pods -n kubeflex-system -l control-plane=controller-manager -o 'jsonpath={.items[*].metadata.name}'
+ NEW_PODS=kubeflex-controller-manager-7967d45876-68xhm
+ echo 'New pods: kubeflex-controller-manager-7967d45876-68xhm'
New pods: kubeflex-controller-manager-7967d45876-68xhm
+ echo '9. Waiting for deployment rollout to complete...'
9. Waiting for deployment rollout to complete...
+ kubectl rollout status deployment/kubeflex-controller-manager -n kubeflex-system --timeout=300s
deployment "kubeflex-controller-manager" successfully rolled out
+ echo '10. Waiting for all pods to be ready...'
10. Waiting for all pods to be ready...
+ kubectl wait --for=condition=Ready pods -l control-plane=controller-manager -n kubeflex-system --timeout=120s
pod/kubeflex-controller-manager-7967d45876-68xhm condition met
+ echo '11. Verifying the fix...'
11. Verifying the fix...
+ '[' ko.local/manager:e2e-test-1755340795 = ko.local/manager:e2e-test-1755341657 ']'
+ '[' kubeflex-controller-manager-5b6cc584d4-8hrrs = kubeflex-controller-manager-7967d45876-68xhm ']'
+ [[ ko.local/manager:e2e-test-1755341657 != *\e\2\e\-\t\e\s\t\-\1\7\5\5\3\4\1\6\5\7* ]]
++ kubectl get pods -n kubeflex-system -l control-plane=controller-manager -o 'jsonpath={.items[*].status.phase}'
+ POD_STATUS=Running
+ [[ Running != *\R\u\n\n\i\n\g* ]]
+ echo '12. Verifying deployment is available...'
12. Verifying deployment is available...
+ kubectl wait --for=condition=Available deployment/kubeflex-controller-manager -n kubeflex-system --timeout=60s
deployment.apps/kubeflex-controller-manager condition met
+ echo '   SUCCESS: Controller manager image update test passed!'
   SUCCESS: Controller manager image update test passed!
+ echo '   Image changed from '\''ko.local/manager:e2e-test-1755340795'\'' to '\''ko.local/manager:e2e-test-1755341657'\'''
   Image changed from 'ko.local/manager:e2e-test-1755340795' to 'ko.local/manager:e2e-test-1755341657'
+ echo '   New pods created: kubeflex-controller-manager-7967d45876-68xhm'
   New pods created: kubeflex-controller-manager-7967d45876-68xhm
+ echo '   All pods are running'
   All pods are running
+ echo '   Deployment is available'
   Deployment is available
+ :
+ : -------------------------------------------------------------------------
+ : SUCCESS: Controller manager image updates properly with make install-local-chart
+ :

@kubestellar-prow kubestellar-prow bot added dco-signoff: yes Indicates the PR's author has signed the DCO. needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. labels Aug 16, 2025
@kubestellar-prow
Contributor

Hi @RayyanSeliya. Thanks for your PR.

I'm waiting for a kubestellar member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.


Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@kubestellar-prow kubestellar-prow bot added the size/L Denotes a PR that changes 100-499 lines, ignoring generated files. label Aug 16, 2025
@RayyanSeliya RayyanSeliya changed the title 🐛 fix: resolve controller manager image update issue (#453) 🐛 fix: resolve controller manager image update issue Aug 16, 2025
@pdettori
Contributor

pdettori commented Sep 8, 2025

/ok-to-test

@kubestellar-prow kubestellar-prow bot added ok-to-test Indicates a non-member PR verified by an org member that is safe to test. and removed needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. labels Sep 8, 2025
fi

# Check if pods changed (new pods created)
if [ "$CURRENT_PODS" = "$NEW_PODS" ]; then
Contributor

Comparing pod names as space-separated strings might not reliably detect pod changes if pod ordering changes or there are extra pods. Please do comparison with sorted lists or a more robust set comparison.
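
A minimal sketch of such a comparison, sorting the space-separated names before comparing (illustrative only, not the exact code adopted in the PR):

OLD_SORTED=$(tr ' ' '\n' <<< "$CURRENT_PODS" | sort)
NEW_SORTED=$(tr ' ' '\n' <<< "$NEW_PODS" | sort)
if [ "$OLD_SORTED" = "$NEW_SORTED" ]; then
  echo "ERROR: pod set unchanged - no new pods were created"
  exit 1
fi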

Contributor Author

Sure @pdettori, will use a set approach for this! Thanks for pointing it out!

@RayyanSeliya RayyanSeliya force-pushed the fix-controller-image-update branch from 03b1c25 to fd557af on September 8, 2025 18:57
@pdettori
Contributor

pdettori commented Sep 9, 2025

/lgtm

@kubestellar-prow kubestellar-prow bot added the lgtm Indicates that a PR is ready to be merged. label Sep 9, 2025
@kubestellar-prow
Contributor

LGTM label has been added.

Git tree hash: 3528c0c0dae40b77f1776ff3773a4219f1002efc

@pdettori
Contributor

pdettori commented Sep 9, 2025

/approve

@kubestellar-prow
Contributor

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: pdettori

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@kubestellar-prow kubestellar-prow bot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Sep 9, 2025
@kubestellar-prow kubestellar-prow bot merged commit 81bb206 into kubestellar:main Sep 9, 2025
9 checks passed
@MikeSpreitzer
Contributor

This PR introduced bug #588


.PHONY: chart
chart: manifests kustomize
cd config/manager && $(KUSTOMIZE) edit set image controller=$(shell echo ${IMG} | sed 's/\(:.*\)v/\1/')
Contributor

This sed command is not the source of the problem reported in #453 .

bash-5.3$ echo foo:1234 | sed 's/\(:.*\)v/\1/'
foo:1234

Contributor

Nothing invokes this test.

bash-5.3$ find * .github/workflows/* -type f -exec grep test-controller-image-update \{\} \; -print -exec echo \; 
bash-5.3$ 

Contributor

This file is not marked "executable".

bash-5.3$ ls -l test/e2e/test-c*
-rw-r--r--  1 mspreitz  staff  6175 Oct 21 14:24 test/e2e/test-controller-image-update.sh
-rwxr-xr-x  1 mspreitz  staff  2124 Dec 14 21:37 test/e2e/test-custom-cluster-name.sh
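
For reference, the usual fix is to mark the script executable and record the mode bit in git (a sketch, not something this PR did):

chmod +x test/e2e/test-controller-image-update.sh
git update-index --chmod=+x test/e2e/test-controller-image-update.sh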

Contributor
@MikeSpreitzer MikeSpreitzer Dec 15, 2025

Testing this file in the current edition of main (commit a6acd41) fails. I have attached a log.

fail1214a.log

Oddly, that helm get manifest output includes the following:

---
# Source: kubeflex-operator/templates/operator.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app.kubernetes.io/component: manager
    app.kubernetes.io/created-by: kubeflex
    app.kubernetes.io/instance: controller-manager
    app.kubernetes.io/managed-by: kustomize
    app.kubernetes.io/name: deployment
    app.kubernetes.io/part-of: kubeflex
    control-plane: controller-manager
  name: kubeflex-controller-manager
  namespace: kubeflex-system

Contributor

Aha! The trick is that the helm get manifest output does NOT include the kubeflex-system Namespace.
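
One way to confirm that, using the release and namespace names from the logs above (sketch):

helm get manifest kubeflex-operator -n kubeflex-system | grep 'kind: Namespace' || echo "no Namespace object in the release"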

echo "Pod status:"
kubectl get pods -n kubeflex-system -l control-plane=controller-manager
echo "Pod events:"
kubectl describe pods -n kubeflex-system -l control-plane=controller-manager
Contributor

This does not show all the events about Pods. In particular, it does not show events about Pods that no longer exist when this command starts to run.
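
A hedged alternative that also captures events for pods that no longer exist is to list the namespace's events directly, for example:

kubectl get events -n kubeflex-system --sort-by=.metadata.creationTimestamp
# or restrict to Pod events only:
kubectl get events -n kubeflex-system --field-selector involvedObject.kind=Pod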


# Wait for all pods to be ready
echo "10. Waiting for all pods to be ready..."
if ! kubectl wait --for=condition=Ready pods -l control-plane=controller-manager -n kubeflex-system --timeout=120s; then
Contributor

I tested commit 81bb206 --- the one created by merging this PR into main --- hacked to run this test as part of the E2E suite. This test failed here because it waited on a Pod from the old Deployment.

10. Waiting for all pods to be ready...
+ kubectl wait --for=condition=Ready pods -l control-plane=controller-manager -n kubeflex-system --timeout=120s
pod/kubeflex-controller-manager-57d5b6b7cb-x7mpp condition met
error: timed out waiting for the condition on pods/kubeflex-controller-manager-656c9ddc78-2xmc5

I have attached the full log.

fail1215a.log
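
One hedged way to avoid waiting on pods from the old ReplicaSet is to restrict the wait to the newest ReplicaSet's pod-template-hash (a sketch under that assumption, not code from this PR):

HASH=$(kubectl get rs -n kubeflex-system -l control-plane=controller-manager \
  --sort-by=.metadata.creationTimestamp \
  -o jsonpath='{.items[*].metadata.labels.pod-template-hash}' | awk '{print $NF}')
kubectl wait --for=condition=Ready pods \
  -l control-plane=controller-manager,pod-template-hash="$HASH" \
  -n kubeflex-system --timeout=120s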

@MikeSpreitzer
Contributor

Issues resolved in #590 and #592

