I’m trying to get an AKS cluster with API Server VNet Integration up and running in a hub-spoke environment. AKS runs in a spoke, with a dedicated subnet for the API server; the firewall runs in one of the hub’s subnets. When I configure a default route (0.0.0.0/0) on the API subnet with the firewall as the next hop, Kubernetes stops working. The only way I can make this work is to use a narrower route, the hub virtual network’s CIDR, with the firewall as the next hop – then everything magically starts working. 0 packets are dropped on the firewall, and basically nothing should be leaving the spoke over the peering anyway, since those subnets are local in the routing table.
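To make the routing setup concrete, here’s roughly what the failing configuration looks like in az cli terms (resource names, region, firewall IP, and the hub CIDR are placeholders, not my actual values):

```shell
# Route table attached to the dedicated API server subnet in the spoke.
az network route-table create \
  --resource-group rg-spoke \
  --name rt-aks-api \
  --location westeurope

# Broken variant: default route 0.0.0.0/0 via the hub firewall.
az network route-table route create \
  --resource-group rg-spoke \
  --route-table-name rt-aks-api \
  --name default-to-fw \
  --address-prefix 0.0.0.0/0 \
  --next-hop-type VirtualAppliance \
  --next-hop-ip-address 10.0.1.4

# Working variant: only the hub vNet CIDR via the firewall
# (used instead of the 0.0.0.0/0 route above).
# az network route-table route create \
#   --resource-group rg-spoke \
#   --route-table-name rt-aks-api \
#   --name hub-to-fw \
#   --address-prefix 10.0.0.0/16 \
#   --next-hop-type VirtualAppliance \
#   --next-hop-ip-address 10.0.1.4

# Associate the route table with the API server subnet.
az network vnet subnet update \
  --resource-group rg-spoke \
  --vnet-name vnet-spoke \
  --name snet-aks-api \
  --route-table rt-aks-api
```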
- Tried every available CNI – nothing changed; it doesn’t seem to affect the behavior in any way.
- Examined all logs on the K8s system node pool – a local block device crashed during the route change; I’m not sure whether that’s a cause or a result.
- Removed all NSGs.
- Tried single vs. multiple route tables – only the API subnet is affected by this problem; if I configure the same route on the node pool subnets, they work just fine.
- Deployed the cluster via az cli to confirm the problem isn’t caused by Terraform.
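For reference, the az cli deployment from the last bullet was roughly along these lines (names and subnet IDs are placeholders; note that `--enable-apiserver-vnet-integration` and `--apiserver-subnet-id` come from the aks-preview extension, so this assumes that extension is installed):

```shell
# Sketch of the cluster deployment with API Server VNet Integration,
# pointing the API server at its dedicated spoke subnet.
az aks create \
  --resource-group rg-spoke \
  --name aks-spoke \
  --network-plugin azure \
  --vnet-subnet-id "/subscriptions/<sub>/resourceGroups/rg-spoke/providers/Microsoft.Network/virtualNetworks/vnet-spoke/subnets/snet-aks-nodes" \
  --enable-apiserver-vnet-integration \
  --apiserver-subnet-id "/subscriptions/<sub>/resourceGroups/rg-spoke/providers/Microsoft.Network/virtualNetworks/vnet-spoke/subnets/snet-aks-api"
```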