
Minor refactor to scale-up orchestrator for more re-usability #7649

Open · kawych wants to merge 1 commit into master from dws-htn

Conversation

@kawych (Contributor) commented on Dec 30, 2024:

What type of PR is this?

What this PR does / why we need it:

It's a minor refactor that makes it easier to re-use parts of the core scale-up logic while replacing others:

  • More methods are made public for re-usability.
  • The async node group initializer is extracted from the CreateNodeGroup() function so that it can be substituted more easily (see the sketch below).
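
A minimal sketch of the idea behind the second bullet, using hypothetical names rather than the actual cluster-autoscaler types: once the caller constructs the initializer instead of CreateNodeGroup() building it internally, an alternative orchestrator can plug in its own implementation.

```go
package main

import "fmt"

// nodeGroupInitializer is a stand-in for the initializer abstraction; the
// real interface and constructor in cluster-autoscaler look different.
type nodeGroupInitializer interface {
	InitializeNodeGroup(nodeGroupID string) error
}

// asyncInitializer mimics the default asynchronous behavior.
type asyncInitializer struct{}

func (asyncInitializer) InitializeNodeGroup(id string) error {
	fmt.Printf("initializing %s asynchronously\n", id)
	return nil
}

// createNodeGroup no longer builds the initializer itself; it uses whatever
// implementation the caller passes in.
func createNodeGroup(id string, init nodeGroupInitializer) error {
	return init.InitializeNodeGroup(id)
}

func main() {
	// The default orchestrator would pass the async initializer; an
	// alternative implementation can substitute its own.
	_ = createNodeGroup("ng-1", asyncInitializer{})
}
```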

Special notes for your reviewer:

Does this PR introduce a user-facing change?

NONE

@k8s-ci-robot added the cncf-cla: yes (Indicates the PR's author has signed the CNCF CLA) and area/cluster-autoscaler labels on Dec 30, 2024
@k8s-ci-robot added the size/S (Denotes a PR that changes 10-29 lines, ignoring generated files) label on Dec 30, 2024
@kawych force-pushed the dws-htn branch 2 times, most recently from 79f141d to 86d6ac1, on December 30, 2024 at 14:06
@@ -188,7 +188,8 @@ func (e *scaleUpExecutor) executeScaleUp(
return nil
}

func combineConcurrentScaleUpErrors(errs []errors.AutoscalerError) errors.AutoscalerError {
// CombineConcurrentScaleUpErrors returns a combined scale-up error to report after multiple concurrent scale-ups might have failed.
func CombineConcurrentScaleUpErrors(errs []errors.AutoscalerError) errors.AutoscalerError {
Member commented:
Wouldn't it make more sense as a part of the errors package?

@kawych (author) replied:
Done
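
For context, combining concurrent errors generally means reducing a slice of errors to a single reportable error. Below is a simplified, hypothetical sketch in plain Go; the upstream function operates on AutoscalerError values and handles more cases.

```go
package main

import (
	"errors"
	"fmt"
	"strings"
)

// combineErrors reduces a slice of errors to one error: nil for an empty
// slice, the error itself for a single element, and a joined summary
// otherwise.
func combineErrors(errs []error) error {
	switch len(errs) {
	case 0:
		return nil
	case 1:
		return errs[0]
	}
	msgs := make([]string, 0, len(errs))
	for _, err := range errs {
		msgs = append(msgs, err.Error())
	}
	return fmt.Errorf("%d concurrent failures: %s", len(errs), strings.Join(msgs, "; "))
}

func main() {
	combined := combineErrors([]error{
		errors.New("scale-up of ng-1 failed"),
		errors.New("scale-up of ng-2 failed"),
	})
	fmt.Println(combined)
}
```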

@@ -222,7 +222,9 @@ func (o *ScaleUpOrchestrator) ScaleUp(
return buildNoOptionsAvailableStatus(markedEquivalenceGroups, skippedNodeGroups, nodeGroups), nil
}
var scaleUpStatus *status.ScaleUpStatus
createNodeGroupResults, scaleUpStatus, aErr = o.CreateNodeGroup(bestOption, nodeInfos, schedulablePodGroups, podEquivalenceGroups, daemonSets, allOrNothing)
oldId := bestOption.NodeGroup.Id()
initializer := NewAsyncNodeGroupInitializer(bestOption.NodeGroup, nodeInfos[oldId], o.scaleUpExecutor, o.taintConfig, daemonSets, o.processors.ScaleUpStatusProcessor, o.autoscalingContext, allOrNothing)
Member commented:
Creation of the initializer used to be flag-guarded, but here that's no longer the case. Is that intentional? If not, can you keep the flag guard?

@kawych (author) replied:
It may not be ideal, but I preferred this over the alternatives:

  • passing around a nil
  • creating a dummy initializer implementation for the case when the flag is not flipped

Overall, creation of the initializer doesn't really do anything yet.

One obvious option that might make more sense (please let me know what you think) is to split off the orchestrator's CreateNodeGroupAsync method.

Member replied:
Can you elaborate a bit on why you rejected passing around a nil? CreateNodeGroup could be doing a nil check instead of o.autoscalingContext.AsyncNodeGroupsEnabled check. Right now there's not a lot of logic in NewAsyncNodeGroupInitializer, but still it is code we execute even when o.autoscalingContext.AsyncNodeGroupsEnabled is false.

@kawych (author) replied:
Mainly because of the nil pointer dereference risk (if the path between initialization and usage gets longer and more complicated, someone could submit a small change without testing that specific scenario...).

Regarding your suggestion, I prefer checking the "AsyncNodeGroupsEnabled" option explicitly each time. Please let me know if you're OK with the current solution. An added benefit is that the "CreateNodeGroup()" function now doesn't change behavior relative to the state before "async mode" was introduced.
So if someone creates an alternative orchestrator implementation that re-uses the old CreateNodeGroup() function but doesn't support async mode (hint: that's exactly what we did), it should work correctly.
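
To make the trade-off in this thread concrete, here is a hypothetical sketch of the three options discussed: passing a nil initializer and nil-checking it, passing a no-op "dummy" initializer, or always constructing a real one and gating the async path on an explicit flag. The names below are illustrative and do not match the cluster-autoscaler API.

```go
package main

import "fmt"

type nodeGroupInitializer interface {
	InitializeNodeGroup(nodeGroupID string)
}

type asyncInitializer struct{}

func (asyncInitializer) InitializeNodeGroup(id string) { fmt.Println("async init of", id) }

// noopInitializer is the rejected "dummy" alternative: it satisfies the
// interface but does nothing when async mode is off.
type noopInitializer struct{}

func (noopInitializer) InitializeNodeGroup(string) {}

// Reviewer's suggestion: accept nil and nil-check at the point of use.
func createNodeGroupNilCheck(id string, init nodeGroupInitializer) {
	if init != nil {
		init.InitializeNodeGroup(id)
	}
	fmt.Println("created", id)
}

// Author's preference: always receive a valid initializer, but gate the
// async path on an explicit flag so that, with the flag off, behavior
// matches the pre-async code.
func createNodeGroupFlagCheck(id string, init nodeGroupInitializer, asyncEnabled bool) {
	if asyncEnabled {
		init.InitializeNodeGroup(id)
	}
	fmt.Println("created", id)
}

func main() {
	createNodeGroupNilCheck("ng-1", nil)
	createNodeGroupFlagCheck("ng-2", asyncInitializer{}, true)
	createNodeGroupFlagCheck("ng-3", noopInitializer{}, false)
}
```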

@k8s-ci-robot (Contributor) commented:

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: kawych
Once this PR has been reviewed and has the lgtm label, please assign towca for approval. For more information see the Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot added the size/L (Denotes a PR that changes 100-499 lines, ignoring generated files) label and removed the size/S label on Jan 10, 2025
@@ -131,3 +133,63 @@ func (e autoscalerErrorImpl) Type() AutoscalerErrorType {
func (e autoscalerErrorImpl) AddPrefix(msg string, args ...interface{}) AutoscalerError {
return autoscalerErrorImpl{errorType: e.errorType, wrappedErr: e, msg: fmt.Sprintf(msg, args...)}
}

// CombineConcurrentScaleUpErrors returns a combined scale-up error to report after multiple concurrent scale-ups might have failed.
func CombineConcurrentScaleUpErrors(errs []AutoscalerError) AutoscalerError {
Member commented:
I don't think there's anything scale-up-specific here; can you rename this to just CombineConcurrentErrors? Or even CombineErrors, if you also remove the word "concurrent" from line 173?

@kawych (author) replied:
Done

@kawych force-pushed the dws-htn branch 4 times, most recently from 6344f45 to c936440, on January 14, 2025 at 15:32