Versions
kubeadm version (use kubeadm version):
kubeadm version: &version.Info{Major:"1", Minor:"29", GitVersion:"v1.29.5", GitCommit:"59755ff595fa4526236b0cc03aa2242d941a5171", GitTreeState:"clean", BuildDate:"2024-05-14T10:44:51Z", GoVersion:"go1.21.9", Compiler:"gc", Platform:"linux/amd64"}
Environment:
- Kubernetes version (use kubectl version):
Client Version: v1.29.5
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.29.4
- Cloud provider or hardware configuration:
Bare metal server
- OS (e.g. from /etc/os-release):
NAME="Ubuntu"
VERSION="20.04.6 LTS (Focal Fossa)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 20.04.6 LTS"
VERSION_ID="20.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=focal
UBUNTU_CODENAME=focal
- Kernel (e.g. uname -a):
Linux changsha-master02 5.4.0-186-generic #206-Ubuntu SMP Fri Apr 26 12:31:10 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
- Container runtime (CRI) (e.g. containerd, cri-o):
Docker with cri-dockerd
- Container networking plugin (CNI) (e.g. Calico, Cilium):
Calico
- Others:
Highly available virtual IP provided through kube-vip
What happened?
When I ran the kubeadm join command to add a new control plane node, kubeadm did not request the cluster configuration from the virtual IP; it mistakenly sent the request to the node that was still being joined.
Virtual IP: 10.10.2.243, node being joined: 10.10.2.192
The cluster originally had a single control plane without a virtual IP; I later added the virtual IP in order to add more control plane nodes.
kubeadm join 10.10.2.243:6443 --token 8fpq7x.pak0z6qw5woh156r --discovery-token-ca-cert-hash sha256:8fc5d90922c8b6b5d9851a280c5ee50a07b284ee6d6e7cb481f2c6ee874d7042 --apiserver-advertise-address 10.10.2.192 --apiserver-bind-port 6443 --control-plane --node-name changsha-master01 --cri-socket unix:///var/run/cri-dockerd.sock --certificate-key a231bbfceaef39a7f6f5cbfaa9e0a45b28d0146d1869f374eaa07814f901e602
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
error execution phase preflight: unable to fetch the kubeadm-config ConfigMap: failed to get config map: Get "https://10.10.2.192:6443/api/v1/namespaces/kube-system/configmaps/kubeadm-config?timeout=10s": dial tcp 10.10.2.192:6443: connect: connection refused
To see the stack trace of this error execute with --v=5 or higher
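As a quick sanity check, these commands can be run from the existing control plane node (a sketch only; I am assuming controlPlaneEndpoint is empty in the ClusterConfiguration because the cluster was initialized before the VIP existed, and the nc probes just illustrate reachability):

# Check whether controlPlaneEndpoint is set in the cluster's ClusterConfiguration
kubectl -n kube-system get cm kubeadm-config -o yaml | grep controlPlaneEndpoint

# The VIP answers on 6443, while the joining node does not (it has no API server yet)
nc -vz 10.10.2.243 6443   # reachable via kube-vip
nc -vz 10.10.2.192 6443   # connection refused, matching the error above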
Here is the same join with verbose logging (-v 5):
kubeadm join 10.10.2.243:6443 --token 9s9wem.rtrd6h045qnswtfh --discovery-token-ca-cert-hash sha256:8fc5d90922c8b6b5d9851a280c5ee50a07b284ee6d6e7cb481f2c6ee874d7042 --apiserver-advertise-address 10.10.2.192 --apiserver-bind-port 6443 --control-plane --node-name changsha-master01 --cri-socket unix:///var/run/cri-dockerd.sock -v 5
[preflight] Running pre-flight checks
I0626 11:59:46.027967 4077 preflight.go:93] [preflight] Running general checks
I0626 11:59:46.030949 4077 checks.go:280] validating the existence of file /etc/kubernetes/kubelet.conf
I0626 11:59:46.030991 4077 checks.go:280] validating the existence of file /etc/kubernetes/bootstrap-kubelet.conf
I0626 11:59:46.031011 4077 checks.go:104] validating the container runtime
I0626 11:59:46.066893 4077 checks.go:639] validating whether swap is enabled or not
I0626 11:59:46.067032 4077 checks.go:370] validating the presence of executable crictl
I0626 11:59:46.067087 4077 checks.go:370] validating the presence of executable conntrack
I0626 11:59:46.067117 4077 checks.go:370] validating the presence of executable ip
I0626 11:59:46.067141 4077 checks.go:370] validating the presence of executable iptables
I0626 11:59:46.067166 4077 checks.go:370] validating the presence of executable mount
I0626 11:59:46.067189 4077 checks.go:370] validating the presence of executable nsenter
I0626 11:59:46.067212 4077 checks.go:370] validating the presence of executable ebtables
I0626 11:59:46.067239 4077 checks.go:370] validating the presence of executable ethtool
I0626 11:59:46.067261 4077 checks.go:370] validating the presence of executable socat
I0626 11:59:46.067284 4077 checks.go:370] validating the presence of executable tc
I0626 11:59:46.067305 4077 checks.go:370] validating the presence of executable touch
I0626 11:59:46.067328 4077 checks.go:516] running all checks
I0626 11:59:46.081491 4077 checks.go:401] checking whether the given node name is valid and reachable using net.LookupHost
I0626 11:59:46.081903 4077 checks.go:605] validating kubelet version
I0626 11:59:46.148757 4077 checks.go:130] validating if the "kubelet" service is enabled and active
I0626 11:59:46.164437 4077 checks.go:203] validating availability of port 10250
I0626 11:59:46.164638 4077 checks.go:430] validating if the connectivity type is via proxy or direct
I0626 11:59:46.164682 4077 checks.go:329] validating the contents of file /proc/sys/net/bridge/bridge-nf-call-iptables
I0626 11:59:46.164741 4077 checks.go:329] validating the contents of file /proc/sys/net/ipv4/ip_forward
I0626 11:59:46.164782 4077 join.go:532] [preflight] Discovering cluster-info
I0626 11:59:46.164822 4077 token.go:80] [discovery] Created cluster-info discovery client, requesting info from "10.10.2.243:6443"
I0626 11:59:46.175968 4077 token.go:118] [discovery] Requesting info from "10.10.2.243:6443" again to validate TLS against the pinned public key
I0626 11:59:46.185048 4077 token.go:135] [discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "10.10.2.243:6443"
I0626 11:59:46.185102 4077 discovery.go:52] [discovery] Using provided TLSBootstrapToken as authentication credentials for the join process
I0626 11:59:46.185119 4077 join.go:546] [preflight] Fetching init configuration
I0626 11:59:46.185137 4077 join.go:592] [preflight] Retrieving KubeConfig objects
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
Get "https://10.10.2.192:6443/api/v1/namespaces/kube-system/configmaps/kubeadm-config?timeout=10s": dial tcp 10.10.2.192:6443: connect: connection refused
failed to get config map
k8s.io/kubernetes/cmd/kubeadm/app/util/config.getInitConfigurationFromCluster
k8s.io/kubernetes/cmd/kubeadm/app/util/config/cluster.go:75
k8s.io/kubernetes/cmd/kubeadm/app/util/config.FetchInitConfigurationFromCluster
k8s.io/kubernetes/cmd/kubeadm/app/util/config/cluster.go:56
k8s.io/kubernetes/cmd/kubeadm/app/cmd.fetchInitConfiguration
k8s.io/kubernetes/cmd/kubeadm/app/cmd/join.go:623
k8s.io/kubernetes/cmd/kubeadm/app/cmd.fetchInitConfigurationFromJoinConfiguration
k8s.io/kubernetes/cmd/kubeadm/app/cmd/join.go:593
k8s.io/kubernetes/cmd/kubeadm/app/cmd.(*joinData).InitCfg
k8s.io/kubernetes/cmd/kubeadm/app/cmd/join.go:547
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/join.runPreflight
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/join/preflight.go:98
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run.func1
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:259
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).visitAll
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:446
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:232
k8s.io/kubernetes/cmd/kubeadm/app/cmd.newCmdJoin.func1
k8s.io/kubernetes/cmd/kubeadm/app/cmd/join.go:180
github.com/spf13/cobra.(*Command).execute
github.com/spf13/[email protected]/command.go:940
github.com/spf13/cobra.(*Command).ExecuteC
github.com/spf13/[email protected]/command.go:1068
github.com/spf13/cobra.(*Command).Execute
github.com/spf13/[email protected]/command.go:992
k8s.io/kubernetes/cmd/kubeadm/app.Run
k8s.io/kubernetes/cmd/kubeadm/app/kubeadm.go:50
main.main
k8s.io/kubernetes/cmd/kubeadm/kubeadm.go:25
runtime.main
runtime/proc.go:267
runtime.goexit
runtime/asm_amd64.s:1650
unable to fetch the kubeadm-config ConfigMap
k8s.io/kubernetes/cmd/kubeadm/app/cmd.fetchInitConfiguration
k8s.io/kubernetes/cmd/kubeadm/app/cmd/join.go:625
k8s.io/kubernetes/cmd/kubeadm/app/cmd.fetchInitConfigurationFromJoinConfiguration
k8s.io/kubernetes/cmd/kubeadm/app/cmd/join.go:593
k8s.io/kubernetes/cmd/kubeadm/app/cmd.(*joinData).InitCfg
k8s.io/kubernetes/cmd/kubeadm/app/cmd/join.go:547
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/join.runPreflight
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/join/preflight.go:98
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run.func1
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:259
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).visitAll
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:446
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:232
k8s.io/kubernetes/cmd/kubeadm/app/cmd.newCmdJoin.func1
k8s.io/kubernetes/cmd/kubeadm/app/cmd/join.go:180
github.com/spf13/cobra.(*Command).execute
github.com/spf13/[email protected]/command.go:940
github.com/spf13/cobra.(*Command).ExecuteC
github.com/spf13/[email protected]/command.go:1068
github.com/spf13/cobra.(*Command).Execute
github.com/spf13/[email protected]/command.go:992
k8s.io/kubernetes/cmd/kubeadm/app.Run
k8s.io/kubernetes/cmd/kubeadm/app/kubeadm.go:50
main.main
k8s.io/kubernetes/cmd/kubeadm/kubeadm.go:25
runtime.main
runtime/proc.go:267
runtime.goexit
runtime/asm_amd64.s:1650
error execution phase preflight
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run.func1
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:260
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).visitAll
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:446
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:232
k8s.io/kubernetes/cmd/kubeadm/app/cmd.newCmdJoin.func1
k8s.io/kubernetes/cmd/kubeadm/app/cmd/join.go:180
github.com/spf13/cobra.(*Command).execute
github.com/spf13/[email protected]/command.go:940
github.com/spf13/cobra.(*Command).ExecuteC
github.com/spf13/[email protected]/command.go:1068
github.com/spf13/cobra.(*Command).Execute
github.com/spf13/[email protected]/command.go:992
k8s.io/kubernetes/cmd/kubeadm/app.Run
k8s.io/kubernetes/cmd/kubeadm/app/kubeadm.go:50
main.main
k8s.io/kubernetes/cmd/kubeadm/kubeadm.go:25
runtime.main
runtime/proc.go:267
runtime.goexit
runtime/asm_amd64.s:1650
What you expected to happen?
kubeadm should fetch the cluster configuration through the virtual IP, not through the node that has not yet been joined.
How to reproduce it (as minimally and precisely as possible)?
Deploy a single-control-plane Kubernetes cluster without a virtual IP first, then upgrade it to a multi-control-plane cluster behind a virtual IP and join a new control plane node; a rough sketch of the steps follows.
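A minimal sketch of that conversion, assuming the common approach of editing the kubeadm-config ConfigMap to add the VIP as controlPlaneEndpoint and re-uploading the certificates (the placeholder <first-node-ip> and the kube-vip setup itself are not shown; the exact steps in my environment may have differed slightly):

# 1. Create a single control plane cluster without a controlPlaneEndpoint
kubeadm init --apiserver-advertise-address <first-node-ip> --cri-socket unix:///var/run/cri-dockerd.sock

# 2. Set up kube-vip so 10.10.2.243 fronts the existing control plane, then add
#    the VIP to the ClusterConfiguration stored in the kubeadm-config ConfigMap:
kubectl -n kube-system edit cm kubeadm-config
#    under data.ClusterConfiguration add:
#      controlPlaneEndpoint: "10.10.2.243:6443"

# 3. Re-upload the control plane certificates and note the new certificate key
kubeadm init phase upload-certs --upload-certs

# 4. From the new node (10.10.2.192), run the kubeadm join shown above;
#    the kubeadm-config fetch then goes to 10.10.2.192:6443 instead of the VIP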