kubeadm init `--apiserver-advertise-address=publicIP` not working, private IP works (v1.13) #1390
Comments
This hints at a problem with your networking setup. What is the output of kubeadm if you add
No, I'm not connecting remotely; I'm using one of the GCP VMs directly. You said there is an issue in my network setup. My network route
I cannot see the log for some reason.
I am able to use curl and ping.
…On Thu, 7 Feb 2019, 7:37 PM, Lubomir I. Ivanov wrote:

> i cannot see the log for some reason. but if you can't connect using tools like curl and ping then this is not a kubeadm bug.
/sig network
Hi. During `kubeadm init`, kubelet will try to create kube-proxy, kube-apiserver, kube-controller-manager, and kube-scheduler, and will try to bind these services to the public IP address (GCP-assigned) of the VM. The problem on GCP is that the public IP address does not reside on the VM; it is rather provided by a NAT function. The tricky part to understand is that if you send a packet to the NAT address, it will forward the packet to the VM and vice versa, but your process/application cannot bind to that NAT IP address. The IP address with which you intend to create the cluster has to reside on the VM. That is why it works with the internal IP address but not with the public IP address. You can verify this by watching `tail -f /var/log/syslog` while creating the cluster. Please let me know if this addressed your issue. -M
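A quick way to see this for yourself (a minimal sketch, not from the thread; the `ip_is_local` helper name and the example IPs are made up): an address that a process can bind to must appear on a local interface, and on a GCP VM the external IP will not.

```shell
# Check whether a given IP is actually configured on this machine.
# On a GCP VM the NAT-provided external IP will NOT show up here.
ip_is_local() {
  { ip -o addr show || ifconfig -a || cat /proc/net/fib_trie; } 2>/dev/null \
    | grep -qw "$1" \
    && echo "$1 is bound to a local interface" \
    || echo "$1 is NOT on this VM (likely cloud NAT)"
}

ip_is_local 127.0.0.1     # loopback: always bound
ip_is_local 203.0.113.7   # example 'public' IP: typically absent
```

If the address is absent locally, `kubeadm init --apiserver-advertise-address=<that IP>` will fail, matching the explanation above.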
Thanks, @spockmang, for the insight. Do you have a recommendation or steps to make it work with the public IP address?
What is the final solution for getting the public IP working for Kubernetes?
Hi, this is what I get every time I run `kubeadm init`. I tried re-installing kubelet and kubeadm, ran `yum update`, and restarted the VM:

[kubelet-check] Initial timeout of 40s passed. Unfortunately, an error has occurred: This error is likely caused by: If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands: Additionally, a control plane component may have crashed or exited when started by the container runtime.

Thanks
I am facing the same issue as well.
I'm for closing the issue here, because this is not a kubeadm problem.
Probably the best option to get a suggestion is to reopen the issue in k/k and tag SIG Network and SIG Cloud Provider.
It's definitely not a kubeadm problem, because kubeadm does not manage the node networking. For the GCP case, this seems to be the root of the problem as explained by @spockmang:
I think we should not track this issue here but rather in the kubernetes/kubernetes tracker. @phanikumarp, please re-open the ticket in k/k if you'd like and reference this one.
Having the same issue, @phanikumarp — did you find a solution to use the external IP?
I have a solution for fixing this issue.
Thank you, @spockmang, for the explanation. Just for the record, I faced the same issue on OpenStack.
@phanikumarp What was the solution? Experiencing this issue on Scaleway too.
I found a solution. Instead of using `--apiserver-advertise-address=publicIp`, use `--apiserver-cert-extra-sans=publicIp`. Don't forget to replace the private IP with the public IP in your `.kube/config` if you use kubectl remotely.
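A minimal sketch of that approach (the IP addresses here are placeholder examples, not values from the thread). The init command is shown as a comment since it needs a real VM; the kubeconfig edit is demonstrated on a throwaway stand-in file so it can be run safely:

```shell
# Example values only -- substitute your own.
PRIVATE_IP=10.128.0.2     # VM's internal (actually bound) address
PUBLIC_IP=203.0.113.7     # cloud-NAT external address

# 1) On the control-plane VM (sketch, not executed here):
#    sudo kubeadm init --apiserver-advertise-address=$PRIVATE_IP \
#                      --apiserver-cert-extra-sans=$PUBLIC_IP

# 2) For remote kubectl, swap the private IP for the public one in the
#    kubeconfig. Shown on a stand-in file instead of ~/.kube/config:
cat > /tmp/kubeconfig.demo <<EOF
clusters:
- cluster:
    server: https://${PRIVATE_IP}:6443
EOF

sed -i "s|${PRIVATE_IP}|${PUBLIC_IP}|" /tmp/kubeconfig.demo
grep 'server:' /tmp/kubeconfig.demo
```

The extra SAN matters because the API server's TLS certificate must list the public IP, otherwise remote kubectl connections fail certificate verification even when the NAT forwards the traffic correctly.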
I am trying on a different cloud provider but no luck. Any suggestions or solutions?
@x22n Your solution results in a verification error.
Update: it turns out that the verification issue is due to leftover credentials (in `$HOME/.kube`). This can be resolved by:

```shell
mv $HOME/.kube $HOME/.kube.bak
mkdir $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```

And the following seems to be a clean solution:

```shell
kubeadm init --pod-network-cidr=192.168.0.0/16 --apiserver-advertise-address=0.0.0.0 --apiserver-cert-extra-sans=<PRIVATE_IP>[,<PUBLIC_IP>,...]
```

I just tested the solution above on Compute Engine.
I solved this problem by forwarding the private IP of the master node to the public IP of the master node on the worker node. Specifically, this was the command that I ran on the worker node before running
Thanks a lot, this solved the issue for me. But keep in mind that you'll also have to forward the worker private IPs the same way on the master node to make everything work correctly (if they suffer from the same issue of being hidden behind the cloud provider's NAT).
Using iptables seems promising, but I still cannot get kube-proxy to work, and I cannot use ClusterIP services that way.
Thanks for your solution, I solved this issue!
This solution is just for kubectl use, @asymness. iptables is also only a temporary solution:

```shell
# on the worker node
sudo iptables -t nat -A OUTPUT -d <Private IP of master> -j DNAT --to-destination <Public IP of master>

# on the master
sudo iptables -t nat -A OUTPUT -d <Private IP of WAN node> -j DNAT --to-destination <Public IP of WAN node>
```

Is there no final solution? Thank you!
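A side note, not from the thread: rules appended with `iptables -t nat -A OUTPUT` live only in the running kernel and vanish on reboot. A hedged sketch of inspecting and persisting them (shown as comments because the commands need root; the `iptables-persistent` package name assumes Debian/Ubuntu):

```shell
# Inspect the DNAT rules added above (requires root):
#   sudo iptables -t nat -L OUTPUT -n --line-numbers
#
# Persist them across reboots (assumption: Debian/Ubuntu with the
# iptables-persistent package available):
#   sudo apt-get install iptables-persistent
#   sudo netfilter-persistent save
```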
But it also causes a new issue: the flannel pod on the WAN node fails to be created and can't connect to the kubernetes service. What should I do?
Same problem: the flannel pod always errors with `dial tcp 10.96.0.1:443: i/o timeout`.
@hayond
@Zhang21 short answer
full answer
Hi. But when I install calico and ingress, both STATUS
And in the long
Please reopen this issue. Binding to the public IP is not a kubeadm problem, but there is still a problem with binding to 0.0.0.0.
@phanikumarp What's the solution you found for this problem? We are facing the same issue and need a way to advertise an external IP for the kube-apiserver.
I spent a lot of time on this solution. It worked great in the beginning: I could create nodes on two VMs (which do not have public IPs; everything relies on port forwarding) across two WANs, and flannel and ingress were created successfully. However, when it came to UDP connections between the two nodes, UDP packets seemed to be dropped by the router because the source IP is a WAN IP. So I believe that unless you have two machines directly connected to the WAN, or you can get around the anti-spoofing rule in your router, this is not a good approach. I also wonder whether there is any way to send a UDP packet with a WAN source IP from a machine on the LAN. Any tricks?
Yes... still having problems... I am tired. It's not worth spending so much time on it. I finally bought another machine inside the LAN; I found spending money makes me happy.
Still have the problem on Alibaba Cloud. I gave up and finally bought another machine inside the LAN ^_^. I found I should spend more time on my life rather than on this stupid damn k8s.
Alternatively, try another CNI plugin. The kubeadm team no longer recommends flannel due to bugs in the past.
Also, passing a custom IP to all components is difficult to maintain; instead, the host network / default route can be customized... but there are no docs for that on the k8s.io website yet.
Finally I made it work. First I followed @hayond's steps to init two nodes across two WANs. After init and applying flannel, I changed the interface back to the LAN IP address, e.g. 192.168.x.x (using netplan), and did the iptables trick to route the traffic to the actual WAN IP. That makes UDP packets get sent correctly with flannel. Now my flannel uses the LAN IP address and works perfectly. So I suspect kubeadm records the interface IP somewhere during `kubeadm init` and uses it for health checks, but uses the current interface IP for UDP VXLAN communication.
@adamwithit Did your setup keep working, or did it cause some other issues down the track?
I managed to solve the problem by enabling inbound traffic on port 6443 and using the flag `--control-plane-endpoint` instead of `--apiserver-advertise-address`.
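For reference, a sketch of that approach (the endpoint value is a placeholder, not from the thread): open TCP 6443 in the cloud firewall, then pass the public endpoint via `--control-plane-endpoint` while letting kubeadm bind to the private address it finds locally. The init command is only assembled and printed here, since running it requires root on a fresh VM:

```shell
# Placeholder: use your VM's external IP, or a DNS name resolving to it.
PUBLIC_ENDPOINT="203.0.113.7:6443"

# Sketch of the init command (not executed here):
INIT_CMD="kubeadm init --control-plane-endpoint=${PUBLIC_ENDPOINT} --pod-network-cidr=10.244.0.0/16"
echo "$INIT_CMD"
```

This works with a NAT'd public address because `--control-plane-endpoint` is added to the API server certificate SANs and written into the generated kubeconfigs, so clients can reach the NAT IP while the apiserver still binds only to a local interface.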
Thank you, this helped me. I am using a GCP virtual machine and your instructions worked. Hope other people will find this as well.
Thank you a lot, this solved my problem as well. Hope this can be better documented in a tutorial.
BUG REPORT
Versions
kubeadm version: v1.13
Environment: Ubuntu 16.04.5 LTS (Xenial Xerus)
uname -a: Linux ubuntu 4.15.0-1026-gcp #27~16.04.1-Ubuntu
What happened?
`sudo kubeadm init --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=32.xxx.xx.xxx` (public IP)
When using the public IP I got an error; when using the private IP it succeeded.
What did you expect to happen?
Expected success when given the public IP, but it failed.
How to reproduce it (as minimally and precisely as possible)?
After installing with `apt-get install -y kubelet kubeadm kubectl`, try to create a single-node cluster with kubeadm.
Anything else we need to know?
After installing Docker version 18.09.1, I tried a single-node cluster.