I’m using Helm with Kubernetes and trying to set up a local cluster with a single PostgreSQL database. However, the service’s DNS name does not seem to work when connecting from any pod in the same namespace.
I’m using this for PostgreSQL: https://artifacthub.io/packages/helm/bitnami/postgresql
Some short findings:
- A pod connecting to the Cluster IP of the postgresql service does not work.
- A pod connecting to the postgresql pod IP directly works.

Below is how I set it up.
```shell
helm upgrade -i postgresql bitnami/postgresql \
  --create-namespace \
  --namespace postgres \
  --values ./postgresql.yaml
```
My postgresql.yaml (values file):
```yaml
image:
  registry: docker.io
  repository: bitnami/postgresql
  tag: 16.2.0-debian-12-r15
  digest: ""
  pullPolicy: IfNotPresent
  pullSecrets: []
  debug: false
global:
  postgresql:
    service:
      ports:
        postgresql: "5432"
nameOverride: "postgresql"
fullnameOverride: "postgresql"
service:
  type: ClusterIP
  ports:
    postgresql: 5432
auth:
  enablePostgresUser: true
  postgresPassword: "postgres"
  username: "root"
  password: "root"
  database: "test_db"
  replicationUsername: repl
  replicationPassword: "repl"
  secretKeys:
    adminPasswordKey: postgres-password
    userPasswordKey: password
  usePasswordFiles: false
architecture: standalone
containerPorts:
  postgresql: 5432
audit:
  logHostname: false
  logConnections: false
  logDisconnections: false
  pgAuditLog: ""
  pgAuditLogCatalog: "off"
  clientMinMessages: error
  logLinePrefix: ""
  logTimezone: ""
postgresqlDataDir: /bitnami/postgresql/data
resources:
  limits:
    cpu: 1
    memory: 1000Mi
  requests:
    cpu: 500m
    memory: 500Mi
metrics:
  enabled:
```
I then deployed a dnsutils pod to test DNS resolution:

```shell
kubectl apply -f dns-test.yaml
```
With the dns-test.yaml:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: dnsutils
  namespace: postgres
spec:
  containers:
    - name: dnsutils
      image: registry.k8s.io/e2e-test-images/jessie-dnsutils:1.3
      command:
        - sleep
        - "infinity"
      imagePullPolicy: IfNotPresent
  restartPolicy: Always
```
I then tried using the dig command inside the dnsutils pod (within the postgres namespace):
```shell
kubectl exec -n postgres -i -t dnsutils -- dig postgresql.wikijs.svc.cluster.local
```
Result:

```
; <<>> DiG 9.9.5-9+deb8u15-Debian <<>> postgresql.wikijs.svc.cluster.local
;; global options: +cmd
;; connection timed out; no servers could be reached
command terminated with exit code 9
```
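For reference, my understanding is that in-cluster Service names follow the pattern `<service>.<namespace>.svc.<cluster-domain>`; a tiny sketch of that naming scheme (the function name is mine, not part of any API):

```python
def service_fqdn(service: str, namespace: str, cluster_domain: str = "cluster.local") -> str:
    """Build the in-cluster DNS name for a Kubernetes Service
    following the <service>.<namespace>.svc.<cluster-domain> convention."""
    return f"{service}.{namespace}.svc.{cluster_domain}"

# The chart installs a Service named "postgresql" into the "postgres" namespace:
print(service_fqdn("postgresql", "postgres"))  # postgresql.postgres.svc.cluster.local
```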
I checked /etc/resolv.conf in the dnsutils pod to compare with the postgresql pod:

```shell
kubectl exec -ti dnsutils -n postgres -- cat /etc/resolv.conf
```
Result:

```
nameserver 10.96.0.10
search postgres.svc.cluster.local svc.cluster.local cluster.local
options ndots:5
```
/etc/resolv.conf in the postgresql pod is identical:

```
nameserver 10.96.0.10
search postgres.svc.cluster.local svc.cluster.local cluster.local
options ndots:5
```
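As far as I understand, with `ndots:5` the resolver expands any query with fewer than five dots through the `search` domains before trying it literally. A simplified model of that expansion (my own sketch, not the actual glibc algorithm, which also handles trailing dots and retries differently):

```python
def candidate_names(query: str, search_domains: list[str], ndots: int = 5) -> list[str]:
    """Simplified model of resolv.conf search-domain expansion: names with
    fewer than `ndots` dots are tried with each search suffix first, then
    as-is; names with >= ndots dots are tried as-is first."""
    name = query.rstrip(".")
    expansions = [f"{name}.{d}" for d in search_domains]
    if name.count(".") >= ndots:
        return [name] + expansions
    return expansions + [name]

# With the resolv.conf shown above, a lookup of the short name "postgresql"
# would be tried as these candidates, in order:
for candidate in candidate_names("postgresql",
                                 ["postgres.svc.cluster.local",
                                  "svc.cluster.local",
                                  "cluster.local"]):
    print(candidate)
# first candidate: postgresql.postgres.svc.cluster.local
```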
I then checked the Endpoints of the postgresql service:

```
NAME         ENDPOINTS         AGE
postgresql   10.1.0.120:5432   107m
```
So the service does list the pod as an endpoint on port 5432.
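To separate DNS failures from raw connectivity failures, I’ve been using a plain TCP check from inside a pod. A minimal sketch of such a check (the helper name and the commented-out addresses are mine; the ClusterIP placeholder is not filled in):

```python
import socket

def can_connect(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Trying the pod IP and the Service's ClusterIP separately shows whether
# the problem is name resolution or the virtual IP itself, e.g.:
# can_connect("10.1.0.120", 5432)    # pod IP (this works for me)
# can_connect("<cluster-ip>", 5432)  # Service ClusterIP (this does not)
```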
So, two questions:
- Why can’t I connect through the service’s Cluster IP to reach the pod behind it?
- Why does the DNS name not resolve to the service’s Cluster IP? As far as I can tell, no record is being served for it.