
Add namespace label selector #1501

Draft
wants to merge 4 commits into master

Conversation

RomanenkoDenys

Added a namespace label selector for filtering pods in the DefaultEvictor plugin.
This is useful when the descheduler should operate only in, for example, stage or dev namespaces.

@k8s-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by:
Once this PR has been reviewed and has the lgtm label, please assign ingvagabund for approval. For more information see the Kubernetes Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot k8s-ci-robot added the cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. label Aug 25, 2024
@k8s-ci-robot
Contributor

Welcome @RomanenkoDenys!

It looks like this is your first PR to kubernetes-sigs/descheduler 🎉. Please refer to our pull request process documentation to help your PR have a smooth ride to approval.

You will be prompted by a bot to use commands during the review process. Do not be afraid to follow the prompts! It is okay to experiment. Here is the bot commands documentation.

You can also check if kubernetes-sigs/descheduler has its own contribution guidelines.

You may want to refer to our testing guide if you run into trouble with your tests not passing.

If you are having difficulty getting your pull request seen, please follow the recommended escalation practices. Also, for tips and tricks in the contribution process you may want to read the Kubernetes contributor cheat sheet. We want to make sure your contribution gets all the attention it needs!

Thank you, and welcome to Kubernetes. 😃

@k8s-ci-robot k8s-ci-robot added the needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. label Aug 25, 2024
@k8s-ci-robot
Contributor

Hi @RomanenkoDenys. Thanks for your PR.

I'm waiting for a kubernetes-sigs member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@k8s-ci-robot k8s-ci-robot added the size/L Denotes a PR that changes 100-499 lines, ignoring generated files. label Aug 25, 2024
@a7i
Contributor

a7i commented Aug 29, 2024

/ok-to-test

would you please squash your commits?

@k8s-ci-robot k8s-ci-robot added ok-to-test Indicates a non-member PR verified by an org member that is safe to test. and removed needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. labels Aug 29, 2024
@a7i
Contributor

a7i commented Aug 29, 2024

@RomanenkoDenys great feature! would you be open to adding an e2e test for this?

}

if err := nsInformer.AddIndexers(cache.Indexers{
indexName: func(obj interface{}) ([]string, error) {
Contributor

@ingvagabund ingvagabund Aug 29, 2024

I wonder whether it would be computationally less expensive to get a namespace object from the cache and check whether defaultEvictorArgs.NamespaceLabelSelector matches the namespace. Compared to adding a new index.

Author

I've simplified the code (use Lister instead of Indexer).
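
For illustration only, a minimal sketch (not the PR's actual code) of what a Lister-based check could look like: list the namespaces matching the selector straight from the informer cache and keep their names in a set, with no custom index involved. The function name and parameters are hypothetical.

package example

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/sets"
	listersv1 "k8s.io/client-go/listers/core/v1"
)

// matchingNamespaces lists namespaces from the informer cache that match the
// configured label selector and returns their names as a set, so pods can later
// be filtered with a simple set lookup instead of a custom informer index.
func matchingNamespaces(nsLister listersv1.NamespaceLister, nsSelector *metav1.LabelSelector) (sets.Set[string], error) {
	selector, err := metav1.LabelSelectorAsSelector(nsSelector)
	if err != nil {
		return nil, err
	}
	namespaces, err := nsLister.List(selector)
	if err != nil {
		return nil, err
	}
	names := sets.New[string]()
	for _, ns := range namespaces {
		names.Insert(ns.Name)
	}
	return names, nil
}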

@@ -157,6 +159,25 @@ func New(args runtime.Object, handle frameworktypes.Handle) (frameworktypes.Plug
})
}

// check pod by namespace label filter
if defaultEvictorArgs.NamespaceLabelSelector != nil {
Contributor

@ingvagabund ingvagabund Aug 29, 2024

E.g. the LowNodeUtilization plugin will not like it. The plugin needs to evaluate all the pods so it can properly compute the resource utilization. There's an EvictableNamespaces complement field which matches the namespaces right before eviction. Putting the namespace filtering here would break the plugin's functionality. The same goes for HighNodeUtilization or any plugin that balances pods at the cluster scope.

This needs to go under the PreEvictionFilter extension point. If each plugin is to decide whether it wants to perform namespace filtering before balancing/descheduling pods, the filtering needs to be added in each plugin's New function, or, where namespaces are iterated explicitly, in the corresponding methods (if it cannot be done as part of pod filtering). E.g.:

var includedNamespaces, excludedNamespaces sets.Set[string]
if d.args.Namespaces != nil {
	includedNamespaces = sets.New(d.args.Namespaces.Include...)
	excludedNamespaces = sets.New(d.args.Namespaces.Exclude...)
}

Author

@RomanenkoDenys RomanenkoDenys Aug 30, 2024

Hi, I think you are right, but I have one question. Right before my code there is a constraint for podLabelSelector, and that selector theoretically already prevents evaluating all pods, because we can label pods and remove them from the computation using the pod label selector.

Including/excluding namespaces by a name list is not acceptable for us, because in that case we cannot work with dynamically created namespaces (e.g. for dev environments). Using a label selector is our preferred method.

Contributor

@ingvagabund ingvagabund Aug 30, 2024

Is this part of #1499? Or is your PR a parallel/independent effort?

Contributor

But I have one question.

Which question is it? I see only statements.

Author

Yes, it's a part of #1499. The question is: if you can select pods using a label selector, how will the computational resources of all pods work? And why does selecting pods by a namespace label selector differ from selecting them by a pod label selector in that case?

Contributor

how will the computational resources of all pods work?

I did not get this part. Can you elaborate on this more? Are you asking how each plugin gets a list of all pods in the cheapest way possible? Not sure if this is related to your question, but the default evictor's label selector is used for rejecting pods that are not expected to be evicted. So there's no direct relation between the (pod) label selector and listing all the pods.

Author

Thank you for your reply. When reading the code I came to the same conclusion.
The LowNodeUtilization plugin lists all pods for a node and calls the pod eviction filter function.
I want to restrict evicted pods by namespace. For my use case, restricting namespaces by include/exclude lists is not suitable because namespaces are created dynamically. So I added a namespace label selector. This selector restricts the namespaces pods can be evicted from, in the same way the pod label selector restricts pods.

Do you think this is the wrong way?

Contributor

@ingvagabund ingvagabund Sep 3, 2024

So the namespace matching (either through an included/excluded list or a label selector) can be done at two stages. The first (through the Filter extension point) is when a plugin constructs the list of pods it wants to process (the more pods are excluded at the beginning, the easier/faster a plugin can operate). The second (through the PreEvictionFilter extension point) is when an evictor (currently we have only one) checks which pods are allowed to be evicted. Some plugins like LowNodeUtilization or HighNodeUtilization cannot exclude pods at the beginning since they first need to compute the overall resource utilization. For them, only the pod matching during the eviction step makes sense.

Back to your question. Yes, this is the right way for LowNodeUtilization. The namespace label selector needs to be implemented through the PreEvictionFilter extension point; every plugin can use the new namespace label selector this way. In addition, there are plugins like PodLifeTime that can perform the namespace matching sooner, which is preferable. However, when it is done sooner, each plugin takes responsibility for making sure the matching does not break the plugin's intention.
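
For reference, a rough sketch of how such a check could sit behind the PreEvictionFilter extension point, assuming the evictor plugin keeps a namespace lister and the configured selector around; the type and field names below are illustrative, not the PR's actual code:

package example

import (
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/labels"
	listersv1 "k8s.io/client-go/listers/core/v1"
	"k8s.io/klog/v2"
)

// namespaceFilteringEvictor is a hypothetical stand-in for the evictor plugin's state.
type namespaceFilteringEvictor struct {
	nsLister               listersv1.NamespaceLister
	namespaceLabelSelector *metav1.LabelSelector
}

// PreEvictionFilter returns false when the pod's namespace does not match the
// configured label selector, so the pod is protected from eviction while the
// balancing plugins still see every pod during their computation phase.
func (e *namespaceFilteringEvictor) PreEvictionFilter(pod *v1.Pod) bool {
	if e.namespaceLabelSelector == nil {
		return true // no namespace restriction configured
	}
	selector, err := metav1.LabelSelectorAsSelector(e.namespaceLabelSelector)
	if err != nil {
		klog.ErrorS(err, "invalid namespace label selector")
		return false
	}
	ns, err := e.nsLister.Get(pod.Namespace)
	if err != nil {
		klog.ErrorS(err, "unable to get namespace from cache", "namespace", pod.Namespace)
		return false
	}
	return selector.Matches(labels.Set(ns.Labels))
}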

Author

@RomanenkoDenys RomanenkoDenys Sep 3, 2024

OK, thanks, I'll move the label selector code to the PreEvictionFilter. Now I understand your logic. It is a pity that this is not also explained in detail in the documentation. :)

Author

Code moved to the PreEvictionFilter. Please check it again.

@k8s-ci-robot k8s-ci-robot added the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Sep 2, 2024
@k8s-ci-robot k8s-ci-robot removed the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Sep 4, 2024
@k8s-ci-robot
Contributor

@RomanenkoDenys: The following test failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

Test name Commit Required Rerun command
pull-descheduler-verify-master 1b6910e true /test pull-descheduler-verify-master

Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.

@RomanenkoDenys RomanenkoDenys marked this pull request as draft September 4, 2024 12:26
@k8s-ci-robot k8s-ci-robot added the do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. label Sep 4, 2024
@RomanenkoDenys
Author

/ok-to-test

would you please squash your commits?

Yes, when I stabilize the code, I will squash the commits. Thank you!

NodeFit bool `json:"nodeFit,omitempty"`
MinReplicas uint `json:"minReplicas,omitempty"`
MinPodAge *metav1.Duration `json:"minPodAge,omitempty"`
NodeSelector string `json:"nodeSelector"`
Contributor

Please keep omitempty as it was.
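
Presumably that means restoring the tag to something like the following (the reviewer's comment implies the field previously carried omitempty):

NodeSelector string `json:"nodeSelector,omitempty"`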

return ret, err
}

if len(ns) == 0 {
Contributor

This path will get executed every time the selector does not match any namespace, which will produce a lot of API requests to the apiserver, something we want to avoid. What case are you addressing with this? All the informers are expected to be synced with the apiserver at this point.

Instead, if a namespace label selector is provided/enabled but the list of matched namespaces is empty, we should return an error. That way a user can check whether there is some transient error, disable the namespace selector, label the namespaces, or take another action. This avoids cases where the pre-eviction protection does not work properly and pods are unexpectedly evicted.
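
As an illustration of the suggested behavior, a minimal sketch of a construction-time check, assuming a namespace lister is available from the shared informer factory; the function and variable names are hypothetical:

package example

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/labels"
	listersv1 "k8s.io/client-go/listers/core/v1"
)

// validateNamespaceSelector fails fast when the configured selector matches no
// namespaces, instead of silently letting the pre-eviction protection become a
// no-op (or repeatedly falling back to apiserver requests at eviction time).
func validateNamespaceSelector(nsLister listersv1.NamespaceLister, nsSelector *metav1.LabelSelector) (labels.Selector, error) {
	selector, err := metav1.LabelSelectorAsSelector(nsSelector)
	if err != nil {
		return nil, fmt.Errorf("invalid namespace label selector: %v", err)
	}
	namespaces, err := nsLister.List(selector) // served from the synced informer cache
	if err != nil {
		return nil, fmt.Errorf("listing namespaces from cache: %v", err)
	}
	if len(namespaces) == 0 {
		// The user can then fix the namespace labels, disable the selector, or retry
		// after a transient error, rather than having pods unexpectedly evicted.
		return nil, fmt.Errorf("namespace label selector %q matched no namespaces", selector.String())
	}
	return selector, nil
}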

Author

Good points, thanks. I'll rewrite the code soon.

@k8s-ci-robot
Contributor

PR needs rebase.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@k8s-ci-robot k8s-ci-robot added the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Sep 20, 2024
Labels
cncf-cla: yes Indicates the PR's author has signed the CNCF CLA.
do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress.
needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD.
ok-to-test Indicates a non-member PR verified by an org member that is safe to test.
size/L Denotes a PR that changes 100-499 lines, ignoring generated files.

4 participants