I’m deploying keepalived + haproxy as load balancers and I’m struggling with some connection issues.
The architecture is the following:
- Server-A is a log server that forwards logs to LBR-VIP.
- LBR-1 runs keepalived in MASTER state and haproxy.
- LBR-2 runs keepalived in BACKUP state and haproxy.
- Both load balancers listen on TCP port 1518 and forward to two backend servers listening on the same port.
When I telnet from Server-A to LBR-VIP, telnet cannot connect, but a tcpdump on LBR-1 shows both incoming and outgoing traffic.
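For reference, the failing test is simply this, run from Server-A (10.100.23.100 is the VIP, 1518 the frontend port):

telnet 10.100.23.100 1518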
This is my keepalived configuration file on LBR-1:
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 101
    priority 101
    advert_int 1
    accept
    garp_master_refresh 5
    garp_master_refresh_repeat 1
    unicast_src_ip 10.100.23.79    # this is the IP of LBR-1
    unicast_peer {
        10.100.23.80               # this is the IP of LBR-2
    }
    virtual_ipaddress {
        10.100.23.100              # this is the VIP
    }
}
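LBR-2 has essentially the mirror image of this file; a minimal sketch, assuming only the state, the priority and the unicast addresses differ:

vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 101
    priority 100
    advert_int 1
    accept
    unicast_src_ip 10.100.23.80    # this is the IP of LBR-2
    unicast_peer {
        10.100.23.79               # this is the IP of LBR-1
    }
    virtual_ipaddress {
        10.100.23.100              # this is the VIP
    }
}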
This is my haproxy configuration:
frontend f_logstash_1518_tcp
    bind 10.100.23.100:1518
    default_backend b_logstash_1518_tcp

backend b_logstash_1518_tcp
    server logstash1 10.100.23.72:1518 check
    server logstash2 10.100.23.73:1518 check
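These sections run in TCP mode; the rest of my haproxy.cfg is not shown here, but assume a defaults block roughly like this (a sketch, the exact timeouts may differ):

defaults
    mode    tcp
    timeout connect 5s
    timeout client  30s
    timeout server  30s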
I can see that the VIP is assigned and that haproxy is listening on it:
# netstat -tanpu | grep 1518
tcp 0 0 10.100.23.100:1518 0.0.0.0:* LISTEN 427840/haproxy
# ip a
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 00:1d:d8:8c:60:2a brd ff:ff:ff:ff:ff:ff
inet 10.100.23.79/26 metric 100 brd 10.100.23.127 scope global eth0
valid_lft forever preferred_lft forever
inet 10.100.23.100/32 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::21d:d8ff:fe8c:602a/64 scope link
valid_lft forever preferred_lft forever
These are the tests I have done:
- Telnet from LBR-1 to LBR-VIP on port 1518 = YES
- Telnet from SERVER-A to LBR-1 on port 1518 = YES (with a change in haproxy)
- Telnet from SERVER-A to LBR-VIP on port 1518 = NO
For the failing test I also tried:
- keepalived in active-active
- bind in transparent mode in haproxy
- net.ipv4.ip_nonlocal_bind=1 in /etc/sysctl.conf (applied as shown below)
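The non-local bind change amounts to the following (a sketch, assuming it is applied with sysctl -p):

# /etc/sysctl.conf
net.ipv4.ip_nonlocal_bind = 1

# apply and verify
sysctl -p
sysctl net.ipv4.ip_nonlocal_bind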
The thing is, when I do the telnet from SERVER-A to LBR-VIP and run a tcpdump on LBR-1, I see traffic:
tcpdump -nn -i any port 1518 and src 10.28.12.5 or dst 10.28.12.5
listening on any, link-type LINUX_SLL2 (Linux cooked v2), snapshot length 262144 bytes
14:02:36.176820 eth0 In IP 10.28.12.5.40874 > 10.100.23.100.1518: Flags [S], seq 2581026039, win 64240, options [mss 1418,sackOK,TS val 3210703685 ecr 0,nop,wscale 7], length 0
14:02:36.176841 eth0 Out IP 10.100.23.100.1518 > 10.28.12.5.40874: Flags [R.], seq 0, ack 2581026040, win 0, length 0
14:02:37.186300 eth0 In IP 10.28.12.5.40874 > 10.100.23.100.1518: Flags [S], seq 2581026039, win 64240, options [mss 1418,sackOK,TS val 3210704695 ecr 0,nop,wscale 7], length 0
14:02:37.186340 eth0 Out IP 10.100.23.100.1518 > 10.28.12.5.40874: Flags [R.], seq 0, ack 1, win 0, length 0
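(Side note on the capture filter: without parentheses it groups left to right as "(port 1518 and src 10.28.12.5) or dst 10.28.12.5". The packets shown above match either way, but the stricter form would be:)

tcpdump -nn -i any 'port 1518 and (src 10.28.12.5 or dst 10.28.12.5)'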
Some considerations:
- Servers are Ubuntu 22.04
- There is only one interface: eth0
- Server A is in Azure Cloud
- LBRs are in Azure Stack Edge (on-prem)
Summary: telnet to the VIP on the load balancer doesn’t work, even though on the destination I can see both the incoming SYN and an outgoing reply (a TCP RST).
I’ve run out of ideas; I hope you can shed some light on this.
Thanks very much
Regards.