I was wondering if someone could help me with a complex Kubernetes ingress routing question.
The high-level objective is to receive unicast UDP traffic from outside the Kubernetes cluster on a pod's Multus network interface.
I deploy a Multus network interface to a pod and am looking to direct UDP unicast traffic to it via my Nginx ingress controller, using either a NodePort or a HostPort.
Outbound unicast UDP traffic from inside the pod to outside the k8s cluster works just fine; the current issue I'm facing is with ingress unicast traffic to my pod via a NodePort or HostPort.
Please see below for details:
Kubernetes distro: Tried Kubeadm and Microk8s
Kubernetes version: 1.29
Kubernetes worker IP: 192.168.33.163
OS version: Ubuntu 20.04
Kernel version: 5.15.0-102-generic
Networking: ens2f0np0 is connected to a separate physical network from the default NIC eno1 and uses Layer 2 switching throughout. The host-level network is 192.168.33.0/24 and the NATed Multus network is 10.10.5.0/24
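For reference, here is how the relevant host-side state can be inspected with plain iproute2 (the interface name and addresses are the ones from my setup above):

```shell
# Details of the parent NIC that the ipvlan interfaces attach to
ip -d link show ens2f0np0

# The route the host kernel would pick toward a pod's Multus IP
ip route get 10.10.5.2
```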
Below is an example of my attempt to expose the pod using a NodePort:
net-attach-def.yaml

apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: unicast-ipvlan
  namespace: default
spec:
  config: '{
    "cniVersion": "0.3.0",
    "type": "ipvlan",
    "master": "ens2f0np0",
    "mode": "l2",
    "ipam": {
      "type": "host-local",
      "subnet": "10.10.5.0/24",
      "rangeStart": "10.10.5.2",
      "rangeEnd": "10.10.5.80",
      "gateway": "10.10.5.1",
      "routes": [
        { "dst": "0.0.0.0/0" },
        { "dst": "192.168.33.0/24", "gw": "10.10.5.1" }
      ]
    }
  }'
pod-and-service.yaml

apiVersion: v1
kind: Pod
metadata:
  name: ipvlan-test
  namespace: default
  annotations:
    k8s.v1.cni.cncf.io/networks: '[{ "name": "unicast-ipvlan", "ips": [ "10.10.5.2/32" ] }]'
  labels:
    app: net-tools
spec:
  containers:
  - name: net-tools
    image: george7522/net-tools:ubuntu
    command: [ "/bin/sh", "-c" ]
    args: [ "sleep 1000000" ]
    imagePullPolicy: Always
    securityContext:
      privileged: true
  dnsPolicy: ClusterFirst
  enableServiceLinks: true
  preemptionPolicy: PreemptLowerPriority
  priority: 0
  restartPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
  name: net-tools-test
  namespace: default
spec:
  type: NodePort
  ports:
  - port: 30003
    targetPort: 30003
    protocol: UDP
    nodePort: 30003
  selector:
    app: net-tools
---
apiVersion: v1
kind: Endpoints
metadata:
  name: sriov-test
  namespace: hpsample
subsets:
- addresses:
  - ip: "10.10.5.2"   # This is the IP of the secondary Multus interface of ipvlan-test
  ports:
  - port: 30003
    protocol: UDP
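After applying the manifests, the Multus attachment and the Service wiring can be double-checked like this (resource names as in the YAML above):

```shell
kubectl apply -f net-attach-def.yaml -f pod-and-service.yaml

# Confirm Multus attached the secondary interface with the requested IP
kubectl get pod ipvlan-test \
  -o jsonpath='{.metadata.annotations.k8s\.v1\.cni\.cncf\.io/network-status}'

# Confirm the NodePort Service and the endpoints behind it
kubectl get svc net-tools-test -o wide
kubectl get endpoints -A | grep 30003
```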
Here’s the output of the relevant iptables commands on the host:
- kube-proxy nat rules
$ sudo iptables -t nat -L -n -v | grep -i 30003
1 36 DNAT udp -- * * 0.0.0.0/0 0.0.0.0/0 /* hpsample/sriov-test */ udp to:10.10.5.2:30003
1 36 KUBE-SEP-GIVWTBWIZR56HMIN all -- * * 0.0.0.0/0 0.0.0.0/0 /* hpsample/sriov-test -> 10.10.5.2:30003 */
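To trace where the DNATed datagrams go after kube-proxy translates them, conntrack events and a capture on the ipvlan parent NIC can be watched while sending (conntrack-tools and tcpdump assumed installed on the node):

```shell
# Watch the UDP conntrack entry created by the NodePort DNAT
sudo conntrack -E -p udp --dport 30003

# In parallel, capture on the ipvlan parent NIC for the translated destination
sudo tcpdump -ni ens2f0np0 udp port 30003
```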
- unicast routes
$ ip route
default via 192.168.42.254 dev eno1 proto static
default via 192.168.33.254 dev ens2f0np0 proto static metric 101
10.10.5.0/24 via 192.168.33.254 dev ens2f0np0
10.96.0.0/12 dev cni0 proto kernel scope link src 10.96.0.1
I’ve tried:
- HostPort instead of NodePort
- mode=l3 instead of mode=l2
What works?
I can successfully send UDP packets from another host on the 192.168.33.0/24 subnet to my Kubernetes worker host 192.168.33.163, so I know host-level networking is correct.
- How am I testing the pod networking?
Set up a listener inside the pod, bound to the Multus NIC:
$ kubectl exec -it -n hpsample sriov-test -- bash
$ socat UDP4-RECVFROM:30003,bind=10.10.5.2,fork STDOUT
Then, from another physical server on the same network, add a unicast route and send a UDP packet:
$ ssh [email protected]
$ sudo ip route add 10.10.5.0/24 via 192.168.33.254 dev ens34
$ echo "Hello unicast" | socat - UDP4-DATAGRAM:10.10.5.2:30003,bind=192.168.33.111
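For completeness, the socat listener/sender pair above exercises the same datagram pattern as this minimal Python loopback exchange (no cluster involved; port 30003 is assumed to be free locally), which is a quick way to validate the test procedure itself on a box without socat:

```shell
python3 - <<'EOF'
import socket

# Listener: stand-in for "socat UDP4-RECVFROM:30003,bind=10.10.5.2"
recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv.bind(("127.0.0.1", 30003))
recv.settimeout(5)

# Sender: stand-in for "socat - UDP4-DATAGRAM:10.10.5.2:30003"
send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send.sendto(b"Hello unicast", ("127.0.0.1", 30003))

data, addr = recv.recvfrom(1024)
print(data.decode())  # prints: Hello unicast
EOF
```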