
Improve frequent loops when only one of activities is productive #7679

Conversation

@macsko (Member) commented on Jan 9, 2025

What type of PR is this?

/kind feature

What this PR does / why we need it:

In a single iteration of the autoscaler, only one of two activities can occur, and they alternate: scaling up or processing ProvisioningRequests. If the activity is productive (the scale-up attempt was successful or a ProvisioningRequest was processed), the next iteration starts immediately. Unfortunately, if only ProvisioningRequest processing is succeeding and there is no scale-up activity (i.e., the only unschedulable pods are ones that a CA scale-up won't help), the autoscaler is put to sleep unnecessarily. This PR changes the logic slightly to compare with the start of the iteration before last, rather than with the start of the last iteration, so that alternating activities are handled well.

For example, before this PR:
successful provreq processing -> immediate next loop (because a provreq was processed) -> unsuccessful scale-up attempt -> sleep for scanInterval (because the scale-up attempt was not successful) -> next provreq processing

After:
successful provreq processing -> immediate next loop (because a provreq was processed) -> unsuccessful scale-up attempt -> immediate next loop (because a provreq was processed before the scale-up attempt) -> next provreq processing
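
To make the timing change concrete, below is a minimal, self-contained Go sketch of the idea. It is not the actual Cluster Autoscaler code; runLoop, runIteration, scanInterval, lastProductive, previousStart and lastStart are hypothetical names used only for illustration. The loop starts the next iteration immediately whenever a productive event happened after a reference timestamp, and the change described here moves that reference from the start of the last iteration to the start of the one before it:

```go
// A minimal, self-contained sketch of the loop-timing idea described above.
// This is NOT the actual Cluster Autoscaler code; all names here are
// hypothetical and used only for illustration.
package main

import (
	"fmt"
	"time"
)

// runLoop starts the next iteration immediately if a productive event
// (a successful scale-up or a processed ProvisioningRequest) happened after
// the reference timestamp; otherwise it sleeps out the rest of scanInterval.
// Before this change the reference was lastStart (the start of the last
// iteration); after it, the reference is previousStart (the start of the
// iteration before last).
func runLoop(runIteration func(i int) bool, scanInterval time.Duration, iterations int) {
	var lastProductive time.Time                   // when the last productive iteration finished
	previousStart := time.Now().Add(-scanInterval) // pretend old runs so the first iteration fires at once
	lastStart := previousStart
	for i := 0; i < iterations; i++ {
		// The change: compare with previousStart instead of lastStart.
		if !lastProductive.After(previousStart) {
			if remaining := scanInterval - time.Since(lastStart); remaining > 0 {
				fmt.Printf("iteration %d: sleeping %v\n", i, remaining.Round(time.Millisecond))
				time.Sleep(remaining)
			}
		} else {
			fmt.Printf("iteration %d: starting immediately\n", i)
		}
		previousStart, lastStart = lastStart, time.Now()
		if runIteration(i) {
			lastProductive = time.Now()
		}
	}
}

func main() {
	// Simulate the alternating pattern from the description: productive
	// ProvisioningRequest processing on even iterations, an unproductive
	// scale-up attempt on odd ones.
	runLoop(func(i int) bool { return i%2 == 0 }, 100*time.Millisecond, 4)
}
```

With the lastStart reference, the unproductive scale-up attempt in the middle of the alternating pattern forces a full scanInterval sleep; with the previousStart reference, the productive ProvisioningRequest iteration just before it keeps the loop running back to back.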

Which issue(s) this PR fixes:

Fixes #

Special notes for your reviewer:

Does this PR introduce a user-facing change?

Improved loop frequency when only one of the two activities (scale-up or ProvisioningRequest processing) is productive.

Additional documentation e.g., KEPs (Kubernetes Enhancement Proposals), usage docs, etc.:


@k8s-ci-robot k8s-ci-robot added kind/feature Categorizes issue or PR as related to a new feature. cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. labels Jan 9, 2025
@macsko (Member, Author) commented on Jan 9, 2025

/cc @aleksandra-malinowska

@k8s-ci-robot k8s-ci-robot added the size/XS Denotes a PR that changes 0-9 lines, ignoring generated files. label Jan 9, 2025
@gabesaba (Contributor) left a comment

/lgtm

@k8s-ci-robot k8s-ci-robot added the lgtm "Looks good to me", indicates that a PR is ready to be merged. label Jan 13, 2025
@aleksandra-malinowska (Contributor) commented:

/lgtm
/approve

@k8s-ci-robot (Contributor) commented:

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: aleksandra-malinowska, macsko

The full list of commands accepted by this bot can be found here.

The pull request process is described here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot k8s-ci-robot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Jan 23, 2025
@k8s-ci-robot k8s-ci-robot merged commit 0b3c289 into kubernetes:master Jan 23, 2025
6 checks passed
Labels
approved, area/cluster-autoscaler, cncf-cla: yes, kind/feature, lgtm, size/XS