Cleanup orphaned ManagedCluster resources after setting hubAcceptsClient to false #816
Comments
@zhiweiyin318 @elgnay thoughts on this?
We have
We want to trigger unjoin/deregister from the spoke. If my understanding is correct, the spoke doesn't have permission to delete the ManagedCluster resource on the hub. So we thought the spoke could set the hubAcceptsClient flag to false, which will trigger the whole cleanup. The reason we want to trigger this from the spoke is that we will have a long-lived hub, but many spokes can come up and go down during a week. So before we destroy a spoke, we want to set the flag to false, so that all spoke-specific resources are cleaned up from the hub.
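For illustration, here is a minimal sketch of how a spoke teardown script could flip that flag using the OCM cluster clientset. The kubeconfig path and the cluster name `spoke-to-remove` are placeholders, and this assumes the credentials used are allowed to update the ManagedCluster spec on the hub:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/tools/clientcmd"

	clusterclientset "open-cluster-management.io/api/client/cluster/clientset/versioned"
)

func main() {
	// Placeholder hub kubeconfig path; adjust for the real environment.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/hub-kubeconfig")
	if err != nil {
		panic(err)
	}
	clusterClient, err := clusterclientset.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// JSON merge patch that flips hubAcceptsClient to false, which triggers the
	// hub-side cleanup of the cluster namespace and RBAC described in this issue.
	patch := []byte(`{"spec":{"hubAcceptsClient":false}}`)
	mc, err := clusterClient.ClusterV1().ManagedClusters().Patch(
		context.TODO(), "spoke-to-remove", types.MergePatchType, patch, metav1.PatchOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("hubAcceptsClient is now %v\n", mc.Spec.HubAcceptsClient)
}
```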
I think the gc controller is the place where we do resource cleanup; we can put the cleanup code there.
Ok thanks, will look at it.
@zhiweiyin318 I took a look at the gc controller. It looks like it is meant to delete resources like ManifestWork, RBAC resources, etc. However, my question was about deleting the ManagedCluster resource itself once everything else is deleted. That is why I proposed adding a CronJob, or we can add a new controller to delete it.
Describe the enhancement
Currently, if the spoke sets the hubAcceptsClient field to false on the ManagedCluster CR, the hub will clean up all RBAC resources and the cluster namespace on the hub.
https://github.com/open-cluster-management-io/ocm/blob/main/pkg/registration/hub/managedcluster/controller.go#L162
We will also update this controller to clean up IAM resources created by the hub for this specific spoke. The only leftover resource will be the ManagedCluster itself. We have a lot of temporary clusters that are created and destroyed every week for testing, which will lead to a lot of orphaned ManagedCluster resources.
To clean them up, we are planning to add a CronJob to the OCM hub. @qiujian16 also suggested creating a controller on the hub side instead of a CronJob. Creating this issue to get a recommendation from qiujian and other maintainers.
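For discussion purposes, here is a minimal sketch of the sync logic such a hub-side controller (or a CronJob-driven binary) might run. The function name `syncOrphanedCluster` and the namespace-existence check are assumptions for illustration, not existing OCM code; the idea is simply to delete the ManagedCluster only after hubAcceptsClient is false and the rest of the cleanup has finished:

```go
package main

import (
	"context"

	"k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"

	clusterclientset "open-cluster-management.io/api/client/cluster/clientset/versioned"
)

// syncOrphanedCluster is a hypothetical reconcile step run for each ManagedCluster.
// It deletes the ManagedCluster object once hubAcceptsClient is false and the
// per-cluster namespace is gone, i.e. the existing hub-side cleanup has finished.
func syncOrphanedCluster(ctx context.Context, clusterClient clusterclientset.Interface,
	kubeClient kubernetes.Interface, clusterName string) error {

	mc, err := clusterClient.ClusterV1().ManagedClusters().Get(ctx, clusterName, metav1.GetOptions{})
	if errors.IsNotFound(err) {
		return nil // already gone, nothing to do
	}
	if err != nil {
		return err
	}
	if mc.Spec.HubAcceptsClient {
		return nil // still accepted by the hub, leave it alone
	}

	// Only delete after the cluster namespace has been removed, so this does not
	// race with the existing registration/gc cleanup.
	if _, err := kubeClient.CoreV1().Namespaces().Get(ctx, clusterName, metav1.GetOptions{}); err == nil {
		return nil // namespace still being cleaned up; retry on the next resync
	} else if !errors.IsNotFound(err) {
		return err
	}

	return clusterClient.ClusterV1().ManagedClusters().Delete(ctx, clusterName, metav1.DeleteOptions{})
}
```

Whether this runs as a new hub controller resyncing on ManagedCluster events or as a periodic CronJob is exactly the open question of this issue; the deletion condition would be the same in either case.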