
kubeadm init “--apiserver-advertise-address=publicIP” not working, private IP works 1.13 version #1390

Closed
phanikumarp opened this issue Feb 6, 2019 · 42 comments
Labels
priority/awaiting-more-evidence Lowest priority. Possibly useful, but not yet enough support to actually get it done. sig/network Categorizes an issue or PR as relevant to SIG Network.

Comments

@phanikumarp

phanikumarp commented Feb 6, 2019

BUG REPORT

Versions

kubeadm version: v1.13
Environment: Ubuntu 16.04.5 LTS (Xenial Xerus)

  • Kubernetes version: v1.13.3
  • Cloud provider or hardware configuration: GCP
  • OS (e.g. from /etc/os-release): Ubuntu 16.04
  • Kernel (e.g. uname -a): Linux ubuntu 4.15.0-1026-gcp #27~16.04.1-Ubuntu

What happened?

"sudo kubeadm init --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=32.xxx.xx.xxx (public IP)"
fails when using the public IP, but succeeds when using the private IP.

What you expected to happen?

Expected kubeadm init to succeed with the public IP, but it failed.

How to reproduce it (as minimally and precisely as possible)?

After installing the packages with "apt-get install -y kubelet kubeadm kubectl", I tried to set up a single-node cluster with kubeadm.

Anything else we need to know?

After installing Docker 18.09.1, I tried to create a single-node cluster.

@neolit123
Member

This hints at a problem with your networking setup.
Do you have connectivity to the remote machine?

What is the output of kubeadm if you add --v=2?

@neolit123 neolit123 added the priority/awaiting-more-evidence Lowest priority. Possibly useful, but not yet enough support to actually get it done. label Feb 6, 2019
@phanikumarp
Author

phanikumarp commented Feb 7, 2019

This hints at a problem with your networking setup.
Do you have connectivity to the remote machine?

What is the output of kubeadm if you add --v=2?

No, I'm not connecting remotely; I'm using one of the GCP VMs directly. You said the issue is in my network setup.
I've added my log below. How do I resolve this issue?
https://pastebin.com/nNstHxvx

My network route:

Kernel IP routing table
Destination   Gateway      Genmask          Flags  Metric  Ref  Use  Iface
default       10.140.0.1   0.0.0.0          UG     0       0    0    ens4
10.140.0.1    *            255.255.255.255  UH     0       0    0    ens4
172.17.0.0    *            255.255.0.0      U      0       0    0    docker0

@neolit123
Member

I cannot see the log for some reason.
But if you can't connect using tools like curl and ping, then this is not a kubeadm bug.

@phanikumarp
Author

phanikumarp commented Feb 7, 2019 via email

@neolit123
Member

/sig network

@k8s-ci-robot k8s-ci-robot added the sig/network Categorizes an issue or PR as relevant to SIG Network. label Feb 7, 2019
@spockmang
Hi,
I have reproduced your issue and found the root cause. Please find my analysis below.

During kubeadm init, the kubelet will try to create kube-proxy, kube-apiserver, kube-controller-manager, and kube-scheduler, and it will try to bind all of these services to the public IP address (GCP-assigned) of the VM.

But the problem in GCP is that the public IP address does not reside on the VM; it is instead a NAT function. The tricky part to understand is that if you send a packet to the NAT address it is forwarded to the VM, and vice versa, but your process/application cannot bind to that NAT IP address. The IP address with which you intend to create the cluster has to reside on the VM.

That is why it works with the internal IP address but not with the public IP address.

You can verify this by checking 'tail -f /var/log/syslog' while creating the cluster.
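The "cannot bind" behavior is easy to demonstrate outside of Kubernetes. Below is a minimal Python sketch (an illustration, not kubeadm's actual code): binding to an address that is not assigned to any local interface fails immediately, which is exactly what the control-plane components hit when told to use the NAT'ed public IP. The address 192.0.2.1 (from the reserved TEST-NET-1 range) stands in for a public IP that does not reside on the machine.

```python
import socket

def can_bind(ip: str) -> bool:
    """Return True if `ip` resides on this machine, i.e. a socket can bind to it."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        s.bind((ip, 0))  # port 0 = any free port; only the address matters here
        return True
    except OSError:
        # On Linux this is errno EADDRNOTAVAIL ("Cannot assign requested address")
        return False
    finally:
        s.close()

print(can_bind("127.0.0.1"))  # True: loopback resides on the machine
print(can_bind("192.0.2.1"))  # False: stands in for the NAT'ed public IP
```

This is the same reason the API server can only bind to an address on the VM (or 0.0.0.0), while a NAT'ed public IP can at best appear in certificates and advertised endpoints.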

Please let me know if this addressed your issue.

-M

@sensre

sensre commented Feb 18, 2019

Thanks, @spockmang, for the insight. Do you have a recommendation or steps to make it work with a public IP address?

@phanikumarp
Author

What is the final solution for getting a public IP working with Kubernetes?

@bcheerla4509

Hi,
I have the exact same problem trying this on Azure VMs (Red Hat 7 OS). Does anyone have suggestions to resolve the issue?

This is what I get every time I run kubeadm init. I tried re-installing kubelet and kubeadm, ran yum update, and restarted the VM.

[kubelet-check] Initial timeout of 40s passed.

Unfortunately, an error has occurred:
timed out waiting for the condition

This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster

Thanks
Sagar

@phanisowjanyavutukuri

I am facing the same issue as well.
kubeadm init fails with the public IP, reporting that the kubelet is misconfigured:
This error is likely caused by:

  • The kubelet is not running
  • The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

@fabriziopandini
Member

I'm for closing the issue here, because this is not a kubeadm problem.

The IP address with which you intend to create the cluster has to reside on the VM.

Probably the best option for getting a suggestion is to reopen the issue in k/k and tag SIG Network and SIG Cloud Provider.
@neolit123 opinions?

@neolit123
Member

It's definitely not a kubeadm problem, because kubeadm does not manage the node networking.

For the GCP case, this seems to be the root of the problem, as explained by @spockmang:

But your process/application cannot bind to that NAT IP address. The IP address with which you intend to create the cluster has to reside on the VM

I think we should not track this issue here, but rather in the kubernetes/kubernetes tracker,
and tag /sig gcp, /sig aws, /sig network.

@phanikumarp please re-open the ticket in k/k if you'd like, and reference this one.
Thank you.

@bobbydeveaux

Having the same issue, @phanikumarp. Did you find a solution for using the external IP?

@phanikumarp
Author

phanikumarp commented Mar 27, 2019 via email

@swapblue

Thank you, @spockmang, for the explanation. Just for the record, I faced the same issue on OpenStack.

@x22n

x22n commented Apr 14, 2019

@phanikumarp What was the solution?

Experiencing this issue on Scaleway too.

@x22n

x22n commented Apr 17, 2019

I found a solution.

Instead of using --apiserver-advertise-address=publicIp, use --apiserver-cert-extra-sans=publicIp.

Don't forget to replace the private IP with the public IP in your .kube/config if you use kubectl from a remote machine.
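That last kubeconfig step can be sketched as a tiny script. This is a hedged illustration: the file fragment and IP addresses below are hypothetical placeholders (kubeadm writes the real file to /etc/kubernetes/admin.conf, usually copied to ~/.kube/config). Since only the single server: URL needs to change, a plain text substitution is enough:

```python
def point_at_public_ip(kubeconfig_text: str, private_ip: str, public_ip: str) -> str:
    """Swap the API server address in a kubeconfig's `server:` URL."""
    return kubeconfig_text.replace(
        f"https://{private_ip}:6443", f"https://{public_ip}:6443"
    )

# Hypothetical kubeconfig fragment with placeholder addresses:
sample = "clusters:\n- cluster:\n    server: https://10.140.0.2:6443\n"
print(point_at_public_ip(sample, "10.140.0.2", "203.0.113.7"))
```

Note that kubectl only trusts the rewritten address if the public IP was included in the serving certificate, which is what --apiserver-cert-extra-sans provides.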

@ASLanin

ASLanin commented May 28, 2019

--apiserver-cert-extra-sans=publicIp does not solve the problem.
Yes, it adds the public IP to the certs, but it does not affect the connection procedure.
The worker nodes will look up the apiserver-advertise-address during join, so they will not connect to the private IP if no route exists.
The API server itself has two parameters,
--advertise-address ip and --bind-address ip, which looks reasonable. But how can these addresses be configured during kubeadm init?
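For reference, both of those kube-apiserver settings can be driven from a kubeadm configuration file instead of command-line flags. The sketch below is hedged: the IP is a placeholder, and the API version shown (v1beta2) matches kubeadm releases of roughly this era, so check `kubeadm config print init-defaults` for your version. localAPIEndpoint.advertiseAddress becomes the API server's --advertise-address, and apiServer.extraArgs passes --bind-address straight through:

```yaml
# Used as: kubeadm init --config kubeadm.yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 10.140.0.2   # placeholder: must be an address that resides on the VM
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
  extraArgs:
    bind-address: "0.0.0.0"      # listen on all interfaces
```

This separates what the API server binds to from what it advertises, but it does not change the fact that the advertised address must be routable from the workers.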

@pankajcheema

I am trying on a different cloud provider but no luck. Any suggestions or solutions?

@atshakil

atshakil commented Jul 2, 2019

I found a solution.

Instead of using --apiserver-advertise-address=publicIp, use --apiserver-cert-extra-sans=publicIp.

Don't forget to replace the private IP with the public IP in your .kube/config if you use kubectl from a remote machine.

@x22n Your solution results in a verification error.

$ kubectl get nodes
Unable to connect to the server: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

Update

It turns out that the verification issue is due to leftover credentials (in $HOME/.kube) from the previous control plane, from before kubeadm reset was performed.

This can be resolved with:

mv  $HOME/.kube $HOME/.kube.bak
mkdir $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

And, the following seems to be a clean solution.

kubeadm init --pod-network-cidr=192.168.0.0/16 --apiserver-advertise-address=0.0.0.0 --apiserver-cert-extra-sans=<PRIVATE_IP>[,<PUBLIC_IP>,...]

I just tested the solution above on Compute Engine.

@asymness

I solved this problem by forwarding the private IP of the master node to the public IP of the master node, on the worker node. Specifically, this is the command I ran on the worker node before running kubeadm join:
sudo iptables -t nat -A OUTPUT -d <Private IP of master node> -j DNAT --to-destination <Public IP of master node>

@Noobgam

Noobgam commented Jul 20, 2019

sudo iptables -t nat -A OUTPUT -d <Private IP of master node> -j DNAT --to-destination <Public IP of master node>

Thanks a lot, this solved the issue for me. But keep in mind that you'll also have to forward the workers' private IPs the same way on the master node to make everything work correctly (if they suffer from the same issue of being behind the cloud provider's NAT).
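Since the rule has to be repeated for every peer (the master's rule on each worker, and each worker's rule on the master), a small generator keeps the address pairs straight. This is a hedged sketch with hypothetical placeholder IPs; it only renders the commands shown above, it does not apply them:

```python
def dnat_rules(peers: dict) -> list:
    """Render one OUTPUT-chain DNAT rule per peer, mapping private IP -> public IP."""
    return [
        f"iptables -t nat -A OUTPUT -d {private} -j DNAT --to-destination {public}"
        for private, public in peers.items()
    ]

# Hypothetical addresses: on a worker, map the master; on the master, map each worker.
for rule in dnat_rules({"10.140.0.2": "203.0.113.7"}):
    print(rule)
```

Remember that rules added this way are not persistent across reboots; they would need to be saved with the distribution's iptables persistence mechanism.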

@KevinWang15

Using iptables seems promising, but I still cannot get kube-proxy to work, and I cannot use ClusterIP services that way.

@kioyong

kioyong commented Aug 29, 2019

I solved this problem by forwarding the private IP of the master node to the public IP of the master node on the worker node. Specifically, this was the command that I ran on worker node before running kubeadm join:
sudo iptables -t nat -A OUTPUT -d <Private IP of master node> -j DNAT --to-destination <Public IP of master node>

Thanks for your solution, it solved this issue for me!

@Zhang21

Zhang21 commented Sep 24, 2019

@x22n

I found a solution.
Instead of using --apiserver-advertise-address=publicIp use --apiserver-cert-extra-sans=publicIp
Don't forget to replace the private ip for the public ip in your .kube/config if you use kubectl from remote.

This solution only covers kubectl using ~/.kube/config; it does not help nodes joining across VPCs with kubeadm join.


@asymness The iptables approach is also only a temporary solution.

# on node
sudo iptables -t nat -A OUTPUT -d <Private IP of master> -j DNAT --to-destination <Public IP of master>


# on master
sudo iptables -t nat -A OUTPUT -d <Private IP of WAN node> -j DNAT --to-destination <Public IP of WAN node>

Is there no final solution?

Thank you!

@Zhang21

Zhang21 commented Sep 24, 2019

But it also causes a new issue: flannel on the WAN node fails to come up and can't connect to the kubernetes service (10.96.0.1).

What should I do?

@janjangao

But it also causes a new issue: flannel on the WAN node fails to come up and can't connect to the kubernetes service (10.96.0.1).

What should I do?

Same problem; the flannel pod always errors with: dial tcp 10.96.0.1:443: i/o timeout

@Zhang21

Zhang21 commented Sep 27, 2019

@hayond
Creating a k8s cluster whose nodes span a WAN is difficult; I just gave up on that. Instead I created a k8s cluster whose nodes are all on a LAN, but which can still be used with kubectl (and its kubeconfig) from a remote client.

@janjangao

janjangao commented Oct 9, 2019

@Zhang21
After a lot of days and nights of research... I finally found a way to make k8s work over a WAN.

short answer

  1. --apiserver-advertise-address=publicIP is necessary; this tells the k8s workers to communicate with the master over the public IP. The default is the private IP, which leads to 10.96.0.1:443: i/o timeout.
  2. The node annotation flannel.alpha.coreos.com/public-ip-overwrite=publicIP is necessary; this sets the flannel pod's node IP to the public IP.

full answer

  1. First, run ifconfig on the master and check whether the public IP appears on one of the master's interfaces. Some cloud providers use an Elastic IP, and no interface carries the public IP; in that case you must add the public IP to an interface yourself (follow Cloud_floating_IP_persistent). If you don't, kubeadm init --apiserver-advertise-address=publicIP will not succeed.
  2. Run kubeadm init with --apiserver-advertise-address=publicIP. I used --control-plane-endpoint=publicIP --upload-certs --apiserver-advertise-address=publicIP myself; I think just --apiserver-advertise-address would also be OK.
  3. Apply flannel.yaml and join the worker nodes.
  4. On each worker node, add the kubelet arg --node-ip:
vim /var/lib/kubelet/kubeadm-flags.env
KUBELET_KUBEADM_ARGS=--cgroup-driver=systemd --network-plugin=cni --pod-infra-container-image=k8s.gcr.io/pause:3.1 --node-ip=publicIP
systemctl restart kubelet
  5. Set the worker node's public-ip-overwrite annotation, and restart the worker's kube-flannel pod:
kubectl annotate node NODE_NAME flannel.alpha.coreos.com/public-ip-overwrite=publicIP
# on the worker node
systemctl restart docker
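Step 4 above is a one-line edit to /var/lib/kubelet/kubeadm-flags.env; scripted across many workers it looks roughly like this (a hedged sketch: the file format is assumed to be as shown in the comment above, without shell quoting, and the IP is a placeholder):

```python
def add_node_ip(env_line: str, node_ip: str) -> str:
    """Append --node-ip to a KUBELET_KUBEADM_ARGS line unless it is already set."""
    key, _, value = env_line.rstrip("\n").partition("=")  # split at the first '='
    if key != "KUBELET_KUBEADM_ARGS" or "--node-ip" in value:
        return env_line
    return f"{key}={value} --node-ip={node_ip}\n"

line = "KUBELET_KUBEADM_ARGS=--cgroup-driver=systemd --network-plugin=cni\n"
print(add_node_ip(line, "203.0.113.7"), end="")
# -> KUBELET_KUBEADM_ARGS=--cgroup-driver=systemd --network-plugin=cni --node-ip=203.0.113.7
```

After writing the modified line back, systemctl restart kubelet picks it up, as in the steps above.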

@heangratha

Hi,
I have tried this on AWS EC2:

  1. on master:
    iptables -t nat -A OUTPUT -d <Private IP of node> -j DNAT --to-destination <Public IP of node>
  2. on node:
    iptables -t nat -A OUTPUT -d <Private IP of master> -j DNAT --to-destination <Public IP of master>
  3. kubeadm join
    Everything works well; I can see the master and the node:

ubuntu@ip-200-200-20-10:~$ kubectl get nodes
NAME               STATUS   ROLES    AGE     VERSION
ip-200-200-20-10   Ready    master   3m29s   v1.18.1
node-ohio          Ready    <none>   66s     v1.18.1

But when I install Calico and ingress, both stay in STATUS ContainerCreating:

ubuntu@ip-200-200-20-10:~$ kubectl get po --all-namespaces -o wide
NAMESPACE     NAME                                       READY   STATUS
kube-system   calico-kube-controllers-5b8b769fcd-dlncq   0/1     ContainerCreating
kube-system   calico-node-94xr2                          1/1     Running
kube-system   calico-node-p9gnr                          0/1     CrashLoopBackOff
kube-system   coredns-66bff467f8-26fc4                   1/1     Running
kube-system   coredns-66bff467f8-4l2tb                   1/1     Running
kube-system   etcd-ip-200-200-20-10                      1/1     Running
kube-system   kube-apiserver-ip-200-200-20-10            1/1     Running
kube-system   kube-controller-manager-ip-200-200-20-10   1/1     Running
kube-system   kube-proxy-85l2s                           1/1     Running
kube-system   kube-proxy-r9zmv                           1/1     Running
kube-system   kube-scheduler-ip-200-200-20-10            1/1     Running
kube-system   traefik-ingress-controller-zlxst           0/1     ContainerCreating

And in the log:

81" network for pod "traefik-ingress-controller-58nmh": networkPlugin cni failed to set up pod "traefik-ingress-controller-58nmh_kube-system" network: stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/
Warning FailedCreatePodSandBox 21m kubelet, node-ohio Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "edd1f8697da91c99e9498cee624f6ff0343cd469b66e363224527696b069769d" network for pod "traefik-ingress-controller-58nmh": networkPlugin cni failed to set up pod "traefik-ingress-controller-58nmh_kube-system" network: stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/
Normal SandboxChanged 6m26s (x508 over 21m) kubelet, node-ohio Pod sandbox changed, it will be killed and re-created.
Warning FailedCreatePodSandBox 86s (x747 over 21m) kubelet, node-ohio (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "fbbb8840cee381f881ffddfb0c3afeb6f13a606cfe7b75022a214339e926e36a" network for pod "traefik-ingress-controller-58nmh": networkPlugin cni failed to set up pod "traefik-ingress-controller-58nmh_kube-system" network: stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/

@s977120

s977120 commented Apr 20, 2020

(quoting @neolit123's earlier comment that this is definitely not a kubeadm problem and should be tracked in the kubernetes/kubernetes tracker)

Please reopen this issue. Binding to a public IP is not a kubeadm problem, but there is still a problem with binding to 0.0.0.0.
Perhaps new options could be added to distinguish between the bind IP and the advertise IP.

@saarram

saarram commented Jun 3, 2020

@phanikumarp - What's the solution you found for this problem? We are facing the same issue and need a way to advertise an external IP for the kube-apiserver.

@hocgin

hocgin commented Sep 30, 2021

(quoting @janjangao's full WAN answer above: --apiserver-advertise-address=publicIP plus the flannel public-ip-overwrite annotation)

⚠️ I tried this solution.
It can solve the apiserver timeout problem,
but it prevents flannel packets from being sent to their destination on Alibaba Cloud.
I suggest not trying [Cloud_floating_IP_persistent].

@adamwithit

(quoting @janjangao's full WAN answer and @hocgin's Alibaba Cloud warning above)

I spent a lot of time on this solution. It worked great in the beginning: I could create nodes on two VMs across two WANs (neither has a public IP; everything relies on port forwarding), and flannel and ingress came up successfully. However, when it came to UDP connections between the two nodes, the UDP packets seemed to be dropped by the router because the source IP was a WAN IP. So I believe that unless you have two machines directly connected to the WAN, or you can get around the anti-spoofing rule in your router, this is not a good approach. I also wonder whether there is any way to send a UDP packet with a WAN source IP from a machine on a LAN. Any tricks?

@janjangao

janjangao commented Oct 12, 2021

(quoting @janjangao's full WAN answer, @hocgin's Alibaba Cloud warning, and @adamwithit's report of UDP packets being dropped, all above)

Yes... still having problems. I am tired; it's not worth spending so much time on it. I finally bought another machine on the LAN. I found that spending money makes me happy.

@janjangao

(quoting @janjangao's full WAN answer and @hocgin's Alibaba Cloud warning above)

Still having problems on Alibaba Cloud. I gave up and finally bought another machine on the LAN ^_^. I found I should spend more time on my life rather than on this stupid damn k8s.

@neolit123
Member

neolit123 commented Oct 12, 2021 via email

@adamwithit

adamwithit commented Nov 3, 2021

Finally I made it work. First I followed @hayond's steps to init the two nodes across two WANs. After init and applying flannel, I changed the interface back to the LAN IP address, e.g. 192.168.x.x (using netplan), and did the iptables trick to route the traffic to the actual WAN IP. That makes the UDP packets get sent correctly with flannel. Now my flannel uses the LAN IP address and works perfectly, so I suspect kubeadm init writes the interface IP down somewhere and uses it for health checks, while the current interface IP is used for UDP VXLAN communication.

@mk2134226

@adamwithit Did your setup keep working, or did it cause other issues down the track?

@Ryder05

Ryder05 commented Sep 17, 2022

I managed to solve the problem by enabling inbound traffic on port 6443 and using the flag --control-plane-endpoint instead of --apiserver-advertise-address.

@HairukanLin

I managed to solve the problem by enabling inbound traffic on port 6443 and using the flag --control-plane-endpoint instead of --apiserver-advertise-address.

Thank you. This helped me. I am using a GCP virtual machine and your instructions worked. I hope other people will find this as well.

@Tuanshu

Tuanshu commented Nov 1, 2023

I managed to solve the problem by enabling inbound traffic on port 6443 and using the flag --control-plane-endpoint instead of --apiserver-advertise-address.

Thank you a lot. This solved my problem as well. I hope this gets better documented in the tutorials.
