
VPA: Implement in-place updates support #7673

Open

maxcao13 wants to merge 15 commits into master
Conversation

@maxcao13 maxcao13 commented Jan 7, 2025

What type of PR is this?

/kind feature
/kind api-change

What this PR does / why we need it:

This PR is an attempt to implement VPA in-place vertical scaling according to AEP-4016. It uses the VPA updater to actuate recommendations by sending resize patch requests to pods, which allows in-place resizes as enabled by the InPlacePodVerticalScaling feature gate (alpha in Kubernetes 1.27.0 and above, or after its eventual graduation).
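For illustration, here is a minimal sketch of the kind of resize patch involved; the pod/container names and the `resizePod` helper are hypothetical, not the PR's actual code:

```go
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
)

// resizePod patches only the target container's resources. With the
// InPlacePodVerticalScaling feature gate enabled, the kubelet can apply this
// without recreating the pod (subject to the container's resize policy).
func resizePod(ctx context.Context, c kubernetes.Interface) error {
	patch := []byte(`{"spec":{"containers":[{"name":"app","resources":{"requests":{"cpu":"500m","memory":"256Mi"}}}]}}`)
	_, err := c.CoreV1().Pods("default").Patch(ctx, "app-pod",
		types.StrategicMergePatchType, patch, metav1.PatchOptions{})
	return err
}
```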

It currently includes some e2e tests per the AEP, but we will probably need more.

This PR is a continuation of #6652, started by @jkyros, with a cleaner git commit history.

Which issue(s) this PR fixes:

Fixes #4016

Special notes for your reviewer:

Notable general areas of concern:

  • We just kind of hacked the in-place logic into the eviction limiter. Maybe it should have been its own thing, or maybe we need a "disruption limiter", but in-place and eviction needed to know about each other because they share the same "disruption limit".
  • For now, there are many TODOs littered throughout the code which need attention from reviewers/maintainers, largely because of design decisions I probably shouldn't make on my own. I resolved some of John's TODOs, but he still has relevant comments that need to be addressed as well. I am using the TODOs as the "special notes for your reviewer" section; if people would like a single comment somewhere which lays them all out nicely, I'm more than happy to make one.
  • This requires a lot more unit testing, but since much of the architecture may still change, I chose to delay writing tests until I get feedback.
  • There are additional comments by John in the earlier commit descriptions which can help aid review.

Does this PR introduce a user-facing change?

In-place VPA scaling has been implemented. It can be enabled by setting `updateMode` on your VPA to `InPlaceOrRecreate` (depends on the `InPlacePodVerticalScaling` feature gate being enabled or having graduated).
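As a usage sketch, assuming the `UpdateModeInPlaceOrRecreate` constant added in this PR (the VPA object below is hypothetical; the YAML equivalent sets `spec.updatePolicy.updateMode: "InPlaceOrRecreate"`):

```go
package main

import (
	autoscalingv1 "k8s.io/api/autoscaling/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	vpav1 "k8s.io/autoscaler/vertical-pod-autoscaler/pkg/apis/autoscaling.k8s.io/v1"
)

// exampleVPA opts a (hypothetical) Deployment into the new update mode.
func exampleVPA() *vpav1.VerticalPodAutoscaler {
	mode := vpav1.UpdateModeInPlaceOrRecreate // constant added by this PR
	return &vpav1.VerticalPodAutoscaler{
		ObjectMeta: metav1.ObjectMeta{Name: "my-vpa", Namespace: "default"},
		Spec: vpav1.VerticalPodAutoscalerSpec{
			TargetRef: &autoscalingv1.CrossVersionObjectReference{
				APIVersion: "apps/v1", Kind: "Deployment", Name: "my-app",
			},
			UpdatePolicy: &vpav1.PodUpdatePolicy{UpdateMode: &mode},
		},
	}
}
```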

Additional documentation e.g., KEPs (Kubernetes Enhancement Proposals), usage docs, etc.:

[AEP] https://github.com/kubernetes/autoscaler/tree/09954b6741cbb910971916c079f45f6e8878d192/vertical-pod-autoscaler/enhancements/4016-in-place-updates-support
Depends on: 
[KEP] https://github.com/kubernetes/enhancements/tree/25e53c93e4730146e4ae2f22d0599124d52d02e7/keps/sig-node/1287-in-place-update-pod-resources

@k8s-ci-robot added the kind/feature and kind/api-change labels on Jan 7, 2025
@k8s-ci-robot (Contributor)

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: maxcao13
Once this PR has been reviewed and has the lgtm label, please assign voelzmo for approval. For more information see the Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot added the area/vertical-pod-autoscaler and needs-ok-to-test labels on Jan 7, 2025
@k8s-ci-robot (Contributor)

Hi @maxcao13. Thanks for your PR.

I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@k8s-ci-robot added the cncf-cla: yes and size/XXL labels on Jan 7, 2025
@k8s-triage-robot

This PR may require API review.

If so, when the changes are ready, complete the pre-review checklist and request an API review.

Status of requested reviews is tracked in the API Review project.

@adrianmoisey (Member)

/ok-to-test

@k8s-ci-robot added the ok-to-test and needs-rebase labels and removed the needs-ok-to-test label on Jan 7, 2025
@omerap12 (Member) left a comment

This is just my first review after going through all the changes. I will go over it multiple times, but these are my initial comments for now.

vertical-pod-autoscaler/hack/run-e2e-locally.sh (outdated; resolved)

package annotations

// TODO(maxcao13): This annotation currently doesn't do anything. Do we want an annotation to show vpa inplace resized only for cosmetic reasons?
@omerap12 (Member):
I'm not a fan of embedding logic into annotations. So, +1 for making this change purely for cosmetic reasons.

@@ -172,6 +172,10 @@ const (
// using any available update method. Currently this is equivalent to
// Recreate, which is the only available update method.
UpdateModeAuto UpdateMode = "Auto"
// UpdateModeInPlaceOrRecreate means that autoscaler tries to assign resources in-place
// first, and if it cannot ( resize takes too long or is Infeasible ) it falls back to the Recreate update mode.
@omerap12 (Member):
nit:

// If this is not possible (e.g., resizing takes too long or is infeasible), it falls back to the

"evicted", singleGroupStats.evicted,
"updating", singleGroupStats.inPlaceUpdating)

if singleGroupStats.running-(singleGroupStats.evicted+(singleGroupStats.inPlaceUpdating-1)) > shouldBeAlive {
@omerap12 (Member):
Can you explain this logic? Why do we have `singleGroupStats.inPlaceUpdating-1` and not just `singleGroupStats.inPlaceUpdating`? Is it because the pod is already in the in-place-updating phase (because of this line: `if IsInPlaceUpdating(pod)`)?

//result := []*apiv1.Pod{}
result := []*PrioritizedPod{}
for num, podPrio := range calc.pods {
if admission.Admit(podPrio.pod, podPrio.recommendation) {
@omerap12 (Member):
Can you please explain what we check here? I thought the purpose of this function was to sort pods based on priority.


for _, container := range pod.Spec.Containers {
// If we don't have a resize policy, we can't check it
if len(container.ResizePolicy) == 0 {
@omerap12 (Member):
Shouldn't we check that it's not nil?

@maxcao13 (Author):

I'm by no means a Go expert, but the `len` of a nil slice is defined to be 0, since a nil slice behaves like an empty one. So this should be okay?
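A quick demonstration of that point:

```go
package main

import "fmt"

func main() {
	var s []int           // nil slice; no allocation
	fmt.Println(s == nil) // true
	fmt.Println(len(s))   // 0 -- so len(container.ResizePolicy) == 0 also covers nil
}
```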

jkyros and others added 14 commits January 14, 2025 08:56
This just adds the UpdateModeInPlaceOrRecreate mode to the types so we
can use it. I did not add InPlaceOnly, as that seemed contentious and it
didn't seem like we had a good use case for it yet.
So because of InPlacePodVerticalScaling, we can have a pod object whose
resource spec is correct, but whose status is not, because that pod may
have been updated in-place after the original admission.

This would have been ignored until now because "the spec looks correct",
but we need to take the status into account as well if a resize is in
progress.

This commit:
- takes status resources into account for pods/containers that are being
  in-place resized
- makes sure that any pods that are "stuck" in-place updating (i.e. the
node doesn't have enough resources either temporarily or permanently)
will still show up in the list as having "wrong" resources so they can
still get queued for eviction and be re-assigned to nodes
that do have enough resources
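A rough sketch of the spec-vs-status distinction this commit describes (field usage follows the InPlacePodVerticalScaling API; this is not the PR's actual code):

```go
package sketch

import corev1 "k8s.io/api/core/v1"

// actualRequests prefers what the kubelet has actually applied (status) over
// what was requested (spec), since the two diverge while a resize is in flight.
func actualRequests(pod *corev1.Pod, i int) corev1.ResourceList {
	if i < len(pod.Status.ContainerStatuses) {
		if st := pod.Status.ContainerStatuses[i]; st.Resources != nil {
			return st.Resources.Requests
		}
	}
	return pod.Spec.Containers[i].Resources.Requests
}
```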
This commit makes the eviction restrictor in-place update aware. While
this possibly could be a separate restrictor or refactored into a shared
"disruption restrictor", I chose not to do that at this time.

I don't think eviction/in-place update can be completely separate as
they can both cause disruption (albeit in-place less so) -- they both
need to factor in the total disruption -- so I just hacked the in-place
update functions into the existing evictor and added some additional
counters for disruption tracking.

While we have the pod lifecycle to look at to figure out "where we are"
in eviction, we don't have that luxury with in-place, so that's why we
need the additional "IsInPlaceUpdating" helper.
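A plausible sketch of that helper, based on the `pod.Status.Resize` field exposed by InPlacePodVerticalScaling (values such as Proposed, InProgress, Deferred, Infeasible); the PR's exact implementation may differ:

```go
package sketch

import corev1 "k8s.io/api/core/v1"

// IsInPlaceUpdating reports whether the kubelet considers a resize to be in
// flight for this pod; an empty Resize status means no resize is pending.
func IsInPlaceUpdating(pod *corev1.Pod) bool {
	return pod.Status.Resize != ""
}
```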
The updater logic wasn't in-place aware, so I tried to make it so.

The thought here is that we try to update in-place if we can; if we can't,
or if the update gets stuck or can't satisfy the recommendation, then we
fall back to eviction.

I tried to keep the "blast radius" small by stuffing the in-place logic
in its own function and then falling back to eviction if it's not
possible.

It would be nice if we had some sort of "can the node support an
in-place resize with the current recommendation" but that seemed like a
whole other can of worms and math.
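In pseudocode terms, the fallback flow described above looks roughly like this (the restrictor interface and helper arguments are hypothetical):

```go
package sketch

import corev1 "k8s.io/api/core/v1"

type restrictor interface {
	InPlaceUpdate(pod *corev1.Pod) error
	Evict(pod *corev1.Pod) error
}

// actuate tries an in-place resize first and falls back to eviction when the
// resize is not possible or has gotten stuck.
func actuate(r restrictor, pod *corev1.Pod, canInPlace, stuck bool) error {
	if canInPlace && !stuck {
		if err := r.InPlaceUpdate(pod); err == nil {
			return nil // resize requested; the kubelet takes it from here
		}
		// the in-place request failed; fall through to eviction
	}
	return r.Evict(pod)
}
```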
We might want to add a few more that are combined disruption counters,
e.g. in-place + eviction totals, but for now just add some separate
counters to keep track of what in-place updates are doing.
For now, this just updates the mock with the new functions I added to
the eviction interface. We need some in-place test cases.
TODO(jkyros): come back here and look at this after you get it working
So far this is just:
- Make sure it scales when it can

But we still need a bunch of other ones like
- Test fallback to eviction
- Test timeout/eviction when it gets stuck, etc
In the event that we can't perform the whole update, this calculates a
set of updates that should be disruptionless and only queues that
partial set, omitting the parts that would cause disruption.
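A sketch of what "disruptionless" could mean here: keep only the resource changes whose container resize policy is NotRequired, i.e. no container restart. This is a hypothetical helper, not the PR's code; note the kubelet also defaults an unspecified policy to NotRequired, which a real implementation would account for.

```go
package sketch

import corev1 "k8s.io/api/core/v1"

// disruptionlessResources filters a desired resource set down to the resources
// this container can resize without being restarted.
func disruptionlessResources(c corev1.Container, desired corev1.ResourceList) corev1.ResourceList {
	restartFree := map[corev1.ResourceName]bool{}
	for _, p := range c.ResizePolicy {
		restartFree[p.ResourceName] = p.RestartPolicy == corev1.NotRequired
	}
	out := corev1.ResourceList{}
	for name, qty := range desired {
		if restartFree[name] {
			out[name] = qty
		}
	}
	return out
}
```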
@k8s-ci-robot removed the needs-rebase label on Jan 15, 2025
@maxcao13 (Author)

Appreciate the review! I will respond to the other comments tomorrow; I just wanted to get the easy stuff out of the way and keep things less cluttered.
