I'm using an AWS Network Load Balancer (NLB) as a Kubernetes Service for my deployment. The reason I use an NLB instead of an ALB is that I need a static hostname to expose the service running inside my Elastic Kubernetes Service (EKS) cluster. The NLB I deployed uses the following YAML:
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
    service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
    service.beta.kubernetes.io/aws-load-balancer-type: external
  name: my-nlb
  namespace: my-ns
spec:
  ports:
    - name: webport
      port: 80
      protocol: TCP
      targetPort: 80
  selector:
    app: my-app
  type: LoadBalancer
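For reference, this is roughly how I confirmed the Service picks up both pods (commands reconstructed from memory, names as in the manifest above):

# list the pod IPs registered as Service endpoints (should show both replicas)
kubectl -n my-ns get endpoints my-nlb
# cross-check against the actual pod IPs
kubectl -n my-ns get pods -l app=my-app -o wide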
The problem: I made the deployment with 2 replicas and confirmed both pods are selected by the NLB (as above). Then I ran a test like:
ab -c 100 -n 3000 http://AWS-NLB-Hostname/
It seems that all the requests went to the same pod.
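A rough way to see how the requests were distributed (this assumes each pod writes one access-log line per request to stdout; adjust for your app's log format):

# count log lines per pod; --prefix tags each line with its pod name
kubectl -n my-ns logs -l app=my-app --prefix --tail=-1 | awk '{print $1}' | sort | uniq -c

The sort | uniq -c at the end groups the lines by the pod-name prefix that --prefix adds, giving a per-pod request count.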
I know the NLB uses a hash-based (flow hash) algorithm for load balancing. Does that mean it routes all requests from the same source IP to the same target? Either way, what should I do to fix this?
Because my service is stateless, I want the requests to be balanced evenly across all the pods.
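While searching I came across the target group attributes annotation of the AWS Load Balancer Controller. I'm not sure whether disabling stickiness or client IP preservation is actually the right fix here (treat this as a guess), but it would look roughly like:

metadata:
  annotations:
    # guess: turn off source-IP stickiness and client IP preservation on the NLB target group
    service.beta.kubernetes.io/aws-load-balancer-target-group-attributes: preserve_client_ip.enabled=false,stickiness.enabled=false

Would adjusting these attributes be the right direction, or is this unrelated to the hashing behavior?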