First of all, please bear with me if my English is not correct.
Let me explain the situation I'm dealing with.
I created three virtual machines using KVM on one physical server.
The physical server has a NIC called enp11s0, and I wanted the virtual machines to communicate via a bridge rather than NAT.
So I created a bridge, br0, and moved the IP address originally assigned to enp11s0 over to br0.
host server: 192.168.0.45
vm1 (master): 192.168.0.46
vm2 (worker1): 192.168.0.47
vm3 (worker2): 192.168.0.48
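For reference, this is roughly how the bridge is defined on my host. This is an illustrative sketch assuming an Ubuntu host using netplan; the gateway address (192.168.0.1) and DNS server are assumptions, so adjust them to your network:

```yaml
# /etc/netplan/01-br0.yaml -- illustrative netplan config for the host
network:
  version: 2
  ethernets:
    enp11s0:
      dhcp4: false          # the physical NIC no longer holds the IP
  bridges:
    br0:
      interfaces: [enp11s0] # enslave the NIC to the bridge
      addresses: [192.168.0.45/24]
      routes:
        - to: default
          via: 192.168.0.1  # assumed gateway, replace with yours
      nameservers:
        addresses: [8.8.8.8]
```

Applied with `sudo netplan apply`; the VMs are then attached to br0 in their libvirt domain definitions.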
Then I set up a Kubernetes cluster on the virtual machines,
and installed Calico as the CNI network plugin.
But a problem occurred here.
When I check the pod status, the calico-node pods that should be running on the worker nodes are in CrashLoopBackOff.
When I ran `kubectl describe` on the pod, the following event appeared:
Warning Unhealthy 37s (x1141 over 3h31m) kubelet (combined from similar events): Readiness probe failed: 2024-05-08 04:01:30.042 [INFO][38202] confd/health.go 180: Number of node(s) with BGP peering established = 0
calico/node is not ready: BIRD is not ready: BGP not established with 192.168.0.47,192.168.0.48
Searching on this log, I found advice to open port 179 (the BGP port Calico uses) in the firewall, but ufw is already disabled on my nodes (the virtual machines). However, ufw IS enabled on the physical host server.
From further searching, I also suspected Calico might be failing to detect the correct NIC, but I don't know how to address that.
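One thing worth checking, since ufw is still active on the physical host: with a Linux bridge, VM-to-VM traffic can be pushed through the host's iptables/ufw rules when `br_netfilter` is loaded. The commands below are a diagnostic sketch to run on the host; the subnet and worker IP are taken from my setup above, adjust as needed:

```shell
# On the physical host (where ufw is enabled):

# 1. Check whether bridged frames are passed through iptables/ufw.
#    If this prints 1, the host firewall can drop VM-to-VM BGP packets.
sysctl net.bridge.bridge-nf-call-iptables

# 2. Either allow BGP (TCP 179) between the node IPs...
sudo ufw allow proto tcp from 192.168.0.0/24 to any port 179

# 3. ...or stop filtering bridged frames entirely (common for KVM bridges):
echo 'net.bridge.bridge-nf-call-iptables = 0' | sudo tee /etc/sysctl.d/99-bridge.conf
sudo sysctl --system

# 4. From the master VM, verify the BGP port on a worker is now reachable:
nc -zv 192.168.0.47 179
```

If step 1 prints 1 and step 4 fails before the change but succeeds after, the host firewall was the culprit.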
I downloaded the calico.yaml manifest, added the IP_AUTODETECTION_METHOD environment variable as shown in the picture below, and reapplied it, but the calico pods are still in CrashLoopBackOff.
What steps can I take to get Calico working again? Please help me.
Here is some additional information about the NICs that might be helpful.
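For context, this is where I added the setting in calico.yaml. It goes in the env section of the calico-node container in the calico-node DaemonSet; the interface regex `enp.*` is an assumption on my part, meant to match the VMs' NIC names:

```yaml
# calico.yaml excerpt: env of the calico-node container (DaemonSet)
- name: IP_AUTODETECTION_METHOD
  value: "interface=enp.*"   # regex must match the node NIC name
```

I understand the same setting can also be applied to a running cluster with `kubectl -n kube-system set env daemonset/calico-node IP_AUTODETECTION_METHOD=interface=enp.*`, which restarts the calico-node pods with the new value.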
[master Node – ip a]
[worker1 Node – ip a]
[worker2 Node – ip a]