
VPA: Implement in-place updates support #6652

Draft

jkyros wants to merge 23 commits into master from vpa-implement-in-place-updates-support

Conversation

@jkyros jkyros commented Mar 25, 2024

What type of PR is this?

/kind feature

What this PR does / why we need it:

This is a "hack and slash" attempt at getting in-place scaling working according to AEP-4016. It mostly works, but it's still a mess and missing some details, so don't take it too seriously just yet.

The TL;DR is that it seems to work okay as long as the pod specifies limits, but if no limits are specified, there seems to be a high likelihood that the in-place resize gets stuck in InProgress seemingly indefinitely (well, or until we fall back and evict -- but I haven't implemented that fallback yet).

Which issue(s) this PR fixes:

Fixes #4016

Special notes for your reviewer:

Don't spend a bunch of time on actual review yet: it's a mess, it's littered with TODOs, parts of it are...hmm...questionable, and it needs tests. I just wanted to have something tangible to reference conceptually when I bring this up in the sig-autoscaling meeting.

Notable general areas of concern:

  • I just kind of hacked the in-place support into the eviction limiter. Maybe it should have been its own thing, or maybe we need a "disruption limiter", but in-place and eviction need to know about each other because they share the same "disruption limit".
  • I'm letting the admission controller do the patching since it's good at that; the updater just annotates the pod (and admission removes the annotation immediately). That might be a terrible idea.
  • I need to think about the "priority processor" stuff more; I made kind of a mess in there and probably missed some corners.

Does this PR introduce a user-facing change?

In-place VPA scaling has been implemented. It can be enabled by setting `updateMode` on your VPA to `InPlaceOrRecreate` (this depends on the `InPlacePodVerticalScaling` feature gate being enabled or having graduated).
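For illustration, a minimal Go sketch of a VPA object opting into the new mode. The `UpdateModeInPlaceOrRecreate`-style constant and exact field wiring are assumptions based on this PR, not merged API:

```go
package example

import (
	autoscalingv1 "k8s.io/api/autoscaling/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	vpa_types "k8s.io/autoscaler/vertical-pod-autoscaler/pkg/apis/autoscaling.k8s.io/v1"
)

// buildInPlaceVPA returns a VPA that asks for in-place updates with eviction
// as the fallback. The mode string comes from this PR's release note.
func buildInPlaceVPA(deploymentName string) *vpa_types.VerticalPodAutoscaler {
	mode := vpa_types.UpdateMode("InPlaceOrRecreate")
	return &vpa_types.VerticalPodAutoscaler{
		ObjectMeta: metav1.ObjectMeta{Name: deploymentName + "-vpa"},
		Spec: vpa_types.VerticalPodAutoscalerSpec{
			TargetRef: &autoscalingv1.CrossVersionObjectReference{
				APIVersion: "apps/v1", Kind: "Deployment", Name: deploymentName,
			},
			UpdatePolicy: &vpa_types.PodUpdatePolicy{UpdateMode: &mode},
		},
	}
}
```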

Additional documentation e.g., KEPs (Kubernetes Enhancement Proposals), usage docs, etc.:

[AEP] https://github.com/kubernetes/autoscaler/tree/09954b6741cbb910971916c079f45f6e8878d192/vertical-pod-autoscaler/enhancements/4016-in-place-updates-support
Depends on: 
[KEP] https://github.com/kubernetes/enhancements/tree/25e53c93e4730146e4ae2f22d0599124d52d02e7/keps/sig-node/1287-in-place-update-pod-resources

@k8s-ci-robot k8s-ci-robot added do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. kind/feature Categorizes issue or PR as related to a new feature. labels Mar 25, 2024

linux-foundation-easycla bot commented Mar 25, 2024

CLA Signed

The committers listed above are authorized under a signed CLA.

@k8s-ci-robot k8s-ci-robot added the cncf-cla: no Indicates the PR's author has not signed the CNCF CLA. label Mar 25, 2024
@k8s-ci-robot
Contributor

Welcome @jkyros!

It looks like this is your first PR to kubernetes/autoscaler 🎉. Please refer to our pull request process documentation to help your PR have a smooth ride to approval.

You will be prompted by a bot to use commands during the review process. Do not be afraid to follow the prompts! It is okay to experiment. Here is the bot commands documentation.

You can also check if kubernetes/autoscaler has its own contribution guidelines.

You may want to refer to our testing guide if you run into trouble with your tests not passing.

If you are having difficulty getting your pull request seen, please follow the recommended escalation practices. Also, for tips and tricks in the contribution process you may want to read the Kubernetes contributor cheat sheet. We want to make sure your contribution gets all the attention it needs!

Thank you, and welcome to Kubernetes. 😃

@k8s-ci-robot
Contributor

Hi @jkyros. Thanks for your PR.

I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot added needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. size/XXL Denotes a PR that changes 1000+ lines, ignoring generated files. labels Mar 25, 2024
@k8s-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: jkyros
Once this PR has been reviewed and has the lgtm label, please assign krzysied for approval. For more information see the Kubernetes Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot k8s-ci-robot added area/vertical-pod-autoscaler cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. and removed cncf-cla: no Indicates the PR's author has not signed the CNCF CLA. labels Mar 25, 2024
@jkyros jkyros force-pushed the vpa-implement-in-place-updates-support branch from 0211fd9 to db872fc on April 1, 2024
@k8s-ci-robot k8s-ci-robot added the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label May 30, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all PRs.

This bot triages PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the PR is closed

You can:

  • Mark this PR as fresh with /remove-lifecycle stale
  • Close this PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Aug 28, 2024
@jkyros jkyros force-pushed the vpa-implement-in-place-updates-support branch from db872fc to 2a1040e on August 30, 2024
@k8s-ci-robot k8s-ci-robot removed the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Aug 30, 2024
@jkyros jkyros force-pushed the vpa-implement-in-place-updates-support branch from 2a1040e to 6be1549 on August 30, 2024
@jkyros
Author

jkyros commented Aug 30, 2024

Rebased, and I had to adjust a couple of the newly-added tests to account for in-place.

This isn't abandoned, but I was kind of hoping to resolve kubernetes/kubernetes#124712 first, so we could use in-place generally without having to test/document a bunch of corners where it doesn't work.

@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all PRs.

This bot triages PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the PR is closed

You can:

  • Mark this PR as fresh with /remove-lifecycle rotten
  • Close this PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Oct 25, 2024
This just adds the UpdateModeInPlaceOrRecreate mode to the types so we
can use it. I did not add InPlaceOnly, as that seemed contentious and it
didn't seem like we had a good use case for it yet.
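For context, a rough sketch of what that type addition might look like next to the existing constants in the VPA API types (the comment wording here is mine, not the PR's):

```go
// UpdateMode controls when autoscaler applies changes to the pod resources.
type UpdateMode string

const (
	// UpdateModeRecreate already exists: pods are evicted so they come back
	// with the recommended resources.
	UpdateModeRecreate UpdateMode = "Recreate"
	// UpdateModeInPlaceOrRecreate is the new mode: try an in-place resize
	// first, and fall back to eviction when that isn't possible.
	UpdateModeInPlaceOrRecreate UpdateMode = "InPlaceOrRecreate"
)
```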
With the InPlacePodVerticalScaling feature, we can now update the
resource requests and limits of pods in-place rather than having to
evict the pod to rewrite it.

We do have to make sure, though (because apps could react badly to an
update or require container restarts), that we limit the amount of
disruption we can introduce at once, so we limit our updates to only the
ones that the updater has okayed.

(And then, over in the updater, we're going to meter them so they don't
all get sent to the admission-controller at once.)

This commit:
- allows the admission-controller to monitor update operations
- adds the new InPlaceOrRecreate update mode to the list of possible
  update modes
- makes sure the admission-controller only patches pod update requests
  that were triggered by the updater (by using a special annotation)
- makes sure the admission-controller removes the annotation upon
  patching to signify that it is done
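Roughly, the admission-controller side of that handshake could look like the sketch below; the annotation key is hypothetical and only stands in for whatever this PR actually uses:

```go
package example

import corev1 "k8s.io/api/core/v1"

// inPlaceUpdateAnnotation is a placeholder key for the updater -> admission
// handshake; the real key in this PR may differ.
const inPlaceUpdateAnnotation = "vpa-inplace-update-requested"

// shouldPatchInPlace: only patch pod update requests the updater marked.
func shouldPatchInPlace(pod *corev1.Pod) bool {
	_, requested := pod.Annotations[inPlaceUpdateAnnotation]
	return requested
}

// markHandled removes the marker as part of the patch, signalling that the
// admission controller has applied the recommendation for this request.
func markHandled(pod *corev1.Pod) {
	delete(pod.Annotations, inPlaceUpdateAnnotation)
}
```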
So because of InPlacePodVerticalScaling, we can have a pod object whose
resource spec is correct, but whose status is not, because that pod may
have been updated in-place after the original admission.

This would have been ignored until now because "the spec looks correct",
but we need to take the status into account as well if a resize is in
progress.

This commit:
- takes status resources into account for pods/containers that are being
  in-place resized
- makes sure that any pods that are "stuck" in-place updating (i.e. the node
  doesn't have enough resources, either temporarily or permanently) still show
  up in the list as having "wrong" resources, so they can still get queued for
  eviction and be re-assigned to nodes that do have enough resources
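As a sketch of the idea (helper names are illustrative, and only CPU is compared to keep it short; the real code would compare all resources):

```go
package example

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

// requestMismatch reports whether the CPU request in the given list differs
// from the recommended value.
func requestMismatch(have corev1.ResourceList, want resource.Quantity) bool {
	got, ok := have[corev1.ResourceCPU]
	if !ok {
		return true
	}
	return got.Cmp(want) != 0
}

// containerNeedsUpdate compares the recommendation against the spec *and*
// against the resources reported in the container status, which is where an
// in-place resize (or a stuck one) actually shows up.
func containerNeedsUpdate(spec corev1.Container, status corev1.ContainerStatus, wantCPU resource.Quantity) bool {
	if requestMismatch(spec.Resources.Requests, wantCPU) {
		return true
	}
	// Status.Resources is only populated when InPlacePodVerticalScaling is
	// enabled; if it lags behind the spec, the pod still counts as "wrong"
	// and can be queued for eviction onto a node with enough room.
	return status.Resources != nil && requestMismatch(status.Resources.Requests, wantCPU)
}
```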
This commit makes the eviction restrictor in-place update aware. While
this possibly could be a separate restrictor or refactored into a shared
"disruption restrictor", I chose not to do that at this time.

I don't think eviction/in-place update can be completely separate as
they can both cause disruption (albeit in-place less so) -- they both
need to factor in the total disruption -- so I just hacked the in-place
update functions into the existing evictor and added some additional
counters for disruption tracking.

While we have the pod lifecycle to look at to figure out "where we are"
in eviction, we don't have that luxury with in-place, so that's why we
need the additional "IsInPlaceUpdating" helper.
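A minimal version of such a helper, assuming the alpha `Status.Resize` field from the InPlacePodVerticalScaling KEP is what gets inspected:

```go
package example

import corev1 "k8s.io/api/core/v1"

// IsInPlaceUpdating reports whether the kubelet is (or should be) acting on a
// resize for this pod. There is no lifecycle phase to lean on as with
// eviction, so the pod's resize status is the signal.
func IsInPlaceUpdating(pod *corev1.Pod) bool {
	return pod.Status.Resize != ""
}
```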
The updater logic wasn't in-place aware, so I tried to make it so.

The thought here is that we try to update in place if we can; if we can't,
or if the update gets stuck or can't satisfy the recommendation, then we
fall back to eviction.

I tried to keep the "blast radius" small by stuffing the in-place logic
in its own function and then falling back to eviction if it's not
possible.

It would be nice if we had some sort of "can the node support an
in-place resize with the current recommendation" check, but that seemed
like a whole other can of worms and math.
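Put together, the updater-side flow might look like the sketch below; the restrictor interface and method names are illustrative, not this PR's exact API, and the resize-status constants are from the alpha InPlacePodVerticalScaling API:

```go
package example

import corev1 "k8s.io/api/core/v1"

// restrictor stands in for the (in-place aware) eviction restrictor.
type restrictor interface {
	CanInPlaceUpdate(*corev1.Pod) bool
	InPlaceUpdate(*corev1.Pod) error
	Evict(*corev1.Pod) error
}

// attemptInPlaceOrEvict tries the in-place path first and falls back to
// eviction when in-place isn't allowed or a previous resize is stuck.
func attemptInPlaceOrEvict(pod *corev1.Pod, r restrictor) error {
	stuck := pod.Status.Resize == corev1.PodResizeStatusDeferred ||
		pod.Status.Resize == corev1.PodResizeStatusInfeasible
	if !stuck && r.CanInPlaceUpdate(pod) {
		return r.InPlaceUpdate(pod)
	}
	// Fall back: evict so the pod can be rescheduled onto a node that can
	// satisfy the recommendation.
	return r.Evict(pod)
}
```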
We might want to add a few more that are combined disruption counters,
e.g. in-place + eviction totals, but for now just add some separate
counters to keep track of what in-place updates are doing.
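For example, counters along these lines (metric names are made up for illustration; the PR's actual names and labels may differ):

```go
package example

import "github.com/prometheus/client_golang/prometheus"

var (
	// inPlaceUpdateAttempts counts resizes the updater asked for.
	inPlaceUpdateAttempts = prometheus.NewCounter(prometheus.CounterOpts{
		Namespace: "vpa",
		Subsystem: "updater",
		Name:      "in_place_updated_pods_total",
		Help:      "Number of pods the updater tried to resize in place.",
	})
	// inPlaceFallbackEvictions counts resizes that fell back to eviction.
	inPlaceFallbackEvictions = prometheus.NewCounter(prometheus.CounterOpts{
		Namespace: "vpa",
		Subsystem: "updater",
		Name:      "in_place_fallback_evictions_total",
		Help:      "Number of in-place resizes that fell back to eviction.",
	})
)

func init() {
	prometheus.MustRegister(inPlaceUpdateAttempts, inPlaceFallbackEvictions)
}
```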
For now, this just updates the mock with the new functions I added to
the eviction interface. We need some in-place test cases.
TODO(jkyros): come back here and look at this after you get it working
The updater now needs to be able to update pods. In the current
approach, that's so it can add an annotation marking the pod as needing an
in-place update. The admission controller is still doing the resource
updating as part of patching; the updater is not updating resources
directly. I wonder if it should?
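The updater-side half of that handshake could be as small as the sketch below, using the same hypothetical annotation key as in the admission-controller sketch earlier:

```go
package example

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
)

// markPodForInPlaceUpdate patches a marker annotation onto the pod; the
// admission controller then applies the actual resource changes and removes
// the marker. The annotation key is hypothetical.
func markPodForInPlaceUpdate(ctx context.Context, c kubernetes.Interface, pod *corev1.Pod) error {
	patch := []byte(`{"metadata":{"annotations":{"vpa-inplace-update-requested":"true"}}}`)
	_, err := c.CoreV1().Pods(pod.Namespace).Patch(
		ctx, pod.Name, types.StrategicMergePatchType, patch, metav1.PatchOptions{})
	return err
}
```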
So far this is just:
- Make sure it scales when it can

But we still need a bunch of other ones, like:
- Test fallback to eviction
- Test timeout/eviction when it gets stuck, etc.
In the event that we can't perform the whole update, this calculates a
set of updates that should be disruptionless and only queues that
partial set, omitting the parts that would cause disruption.
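A sketch of that filtering, keyed off the container's resize policy (the helper name and the "queue the rest later" framing are assumptions, not this PR's code):

```go
package example

import corev1 "k8s.io/api/core/v1"

// disruptionlessRequests keeps only the recommended resources whose resize
// policy does not require a container restart; the rest would be applied in a
// later, disruptive pass (or via eviction).
func disruptionlessRequests(container corev1.Container, recommended corev1.ResourceList) corev1.ResourceList {
	restartNeeded := map[corev1.ResourceName]bool{}
	for _, p := range container.ResizePolicy {
		restartNeeded[p.ResourceName] = p.RestartPolicy == corev1.RestartContainer
	}
	out := corev1.ResourceList{}
	for name, quantity := range recommended {
		// The default policy is NotRequired, so resources without an explicit
		// policy are treated as disruptionless.
		if !restartNeeded[name] {
			out[name] = quantity
		}
	}
	return out
}
```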
go get -u k8s.io/autoscaler/vertical-pod-autoscaler
go mod tidy
go mod vendor
@jkyros jkyros force-pushed the vpa-implement-in-place-updates-support branch from 6be1549 to 6d16d41 on November 20, 2024
@k8s-ci-robot k8s-ci-robot removed the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Nov 20, 2024