
Want option for kind delete cluster to not update kubeconfig #3781

Open
MikeSpreitzer opened this issue Nov 7, 2024 · 4 comments
Labels
kind/bug · kind/feature

Comments

@MikeSpreitzer

MikeSpreitzer commented Nov 7, 2024

What would you like to be added:
I would like an option on the kind delete cluster command to skip any updates to the kubeconfig.

Why is this needed:
I want to delete multiple clusters concurrently. When I try it (see kubestellar/kubestellar#2558 (comment); I used kind v0.22.0 go1.22.0 darwin/arm64), the clusters do get deleted, but all but one of the kind commands fail because the command that succeeds is holding the kubeconfig lock file.
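For illustration, a minimal sketch of that scenario (the cluster names here are hypothetical):

```bash
# Delete several kind clusters in parallel; the clusters are all removed, but
# all but one of the invocations can fail while the winner holds the
# kubeconfig lock file.
for name in cluster1 cluster2 cluster3; do
  kind delete cluster --name "$name" &
done
wait
```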

I can run kubectl config delete-context myself. Or, in the case just referenced, not even care, because a subsequent kind create cluster is going to write the context.
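For example (assuming the default naming convention, where kind prefixes context names with "kind-"):

```bash
# Manually remove the stale context after the cluster itself is gone.
kubectl config delete-context kind-cluster1
```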

Another way to solve my problem would be for kind delete cluster to have a wait-and-retry loop around acquiring that kubeconfig lock.

Another way to solve my problem would be for kind create cluster to have an option to overwrite a pre-existing cluster of the same name if it happens to already exist. In other words, an option on kind create cluster that means "first do kind delete cluster (but do not bother with the kubeconfig update) if the given cluster already exists". In still other words, an option indicating that I want a freshly created cluster with the given name regardless of whether a cluster with that name already existed.
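A shell sketch of that last behavior (the cluster name is hypothetical):

```bash
# Ensure a fresh cluster with the given name, whether or not one already exists.
name=cluster1
kind delete cluster --name "$name"   # effectively a no-op if the cluster is absent
kind create cluster --name "$name"
```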

MikeSpreitzer added the kind/feature label Nov 7, 2024
@stmcginnis
Contributor

Having a retry loop seems reasonable.

There may be a workaround you can use for this: both kind create cluster and kind delete cluster have a --kubeconfig argument. This is useful for keeping kind cluster settings separate from the default kubeconfig, or for organizing kubeconfigs into individual files that are explicitly used for connecting to different clusters.

You could use that during creation to keep these clusters separate. But even if you don't use it with kind create cluster, I believe you should be able to do something like kind delete cluster --name test123 --kubeconfig $(mktemp).
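For concreteness, a sketch of the per-cluster-kubeconfig approach (the file path is hypothetical):

```bash
# Create and delete against a dedicated kubeconfig file, so parallel
# operations never touch (or lock) the shared default kubeconfig.
kind create cluster --name test123 --kubeconfig "$HOME/.kube/kind-test123.yaml"
kind delete cluster --name test123 --kubeconfig "$HOME/.kube/kind-test123.yaml"
```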

@aojea
Contributor

aojea commented Nov 7, 2024

Wow, this looks wrong; I expect delete to actually delete the context.

@BenTheElder you are the most familiar with the kubectl config code, can you please take a look?

aojea added the kind/bug label Nov 7, 2024
@BenTheElder
Member

Set KUBECONFIG=/dev/null ?

We could add retries, but they would still have to be time-bounded, and the caller may have to call it again.

kind delete cluster is idempotent and reentrant, so you can call it again.
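For concreteness, a sketch of that suggestion (the cluster name is hypothetical, and whether /dev/null is accepted as a kubeconfig target here is an assumption worth verifying):

```bash
# Point KUBECONFIG at a throwaway location so the delete never edits the real
# kubeconfig, and therefore never contends for its lock file.
KUBECONFIG=/dev/null kind delete cluster --name cluster1
```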

@BenTheElder
Member

The current implementation of interacting with KUBECONFIG closely mirrors client-go, so we need to be careful about that part. But we can add a bounded retry around the entire KUBECONFIG edit call when we get a lock failure.

That said, if you want to guarantee the call succeeds, your immediate options are:

  • set KUBECONFIG to a no-op location when calling this
  • retry calling delete cluster
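A caller-side sketch of the second option (retry count and sleep interval are arbitrary):

```bash
# kind delete cluster is idempotent, so retrying after a lock failure is safe.
for attempt in 1 2 3 4 5; do
  kind delete cluster --name cluster1 && break
  sleep 1
done
```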
