Why is a workload running on EKS Fargate unable to resolve DNS names, while the same workload can resolve them when run on an EC2 node? On Fargate, the workload can't resolve a Service running in the cluster, a record in a private hosted zone, or a public DNS name.
/ $ nslookup my-api.my-namespace.svc.cluster.local
;; connection timed out; no servers could be reached
/ $ nslookup my-record.my-domain.com
;; connection timed out; no servers could be reached
/ $ nslookup www.google.co.uk
;; connection timed out; no servers could be reached
/ $
- The cluster is private, and the EC2 nodes are in the same VPC subnets as the Fargate profile.
- DNS resolution and DNS hostnames are enabled for the VPC.
- The private hosted zone is associated with the VPC that contains the private subnets.
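For completeness, this is a sketch of how the VPC DNS settings can be verified with the AWS CLI (the VPC ID below is a placeholder; substitute the cluster's actual VPC):

```shell
# Confirm DNS resolution and DNS hostnames are enabled on the VPC.
# vpc-0123456789abcdef0 is a placeholder for the cluster's VPC ID.
aws ec2 describe-vpc-attribute \
  --vpc-id vpc-0123456789abcdef0 \
  --attribute enableDnsSupport

aws ec2 describe-vpc-attribute \
  --vpc-id vpc-0123456789abcdef0 \
  --attribute enableDnsHostnames
```

Both commands should report a `Value` of `true` for Fargate pods to resolve names through the VPC resolver.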
I've reviewed the AWS Fargate considerations in the EKS documentation, and I don't believe any of them apply here.
Is anyone able to suggest why EKS Fargate is unable to resolve DNS names?
In case it's relevant, here is an example manifest I'm using to deploy a Job that gets scheduled on Fargate.
apiVersion: batch/v1
kind: Job
metadata:
  name: test-fargate
spec:
  backoffLimit: 0
  ttlSecondsAfterFinished: 600
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: test-import-aixm
          image: my-container/image:latest
          command: [ "/bin/sh", "-c", "--" ]
          args: [ "while true; do sleep 30; done;" ]
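These are the checks I can run against the Job's pod to narrow things down (the pod name is a placeholder for whatever name the Job generates; `kube-dns` is the standard name of the CoreDNS Service in `kube-system`):

```shell
# Confirm the pod was actually scheduled on Fargate (NODE column shows a
# fargate-* node) and note its generated name.
kubectl get pods -o wide

# The cluster DNS Service IP that the pod's /etc/resolv.conf should point at.
kubectl get svc kube-dns -n kube-system

# Inside the pod: check which nameserver the container is configured to use,
# and whether the in-cluster DNS name resolves at all.
# <pod-name> is a placeholder for the Job's pod.
kubectl exec -it <pod-name> -- cat /etc/resolv.conf
kubectl exec -it <pod-name> -- nslookup kubernetes.default.svc.cluster.local
```

If the `nameserver` in `/etc/resolv.conf` matches the `kube-dns` ClusterIP but queries still time out, that would point at the network path between the Fargate ENI and CoreDNS (e.g. security groups or CoreDNS not running on compute the Fargate pod can reach) rather than at the pod's DNS configuration.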