We recently added a worker node to our production Kubernetes cluster. One of the pods in a namespace requires NFS mounts to be available. We defined a Persistent Volume (PV) with the NFS mount information, and a Persistent Volume Claim (PVC) was created and bound to it. On all of the existing worker nodes, the NFS share is mounted locally, but on the new worker node it is not. Still, the pod that requires this mount is working as expected on the new node. When we exec into the pod, we see that the NFS share is mounted inside the pod, even though it is not available locally on the worker node.
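For reference, the PV/PVC pair is roughly along the lines of the sketch below (the server address, export path, names, and sizes are placeholders, not our actual values):

```yaml
# Simplified sketch of the PV/PVC pair; server, path, names and
# sizes are placeholders, not the real production values.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: nfs.example.com      # NFS server address (placeholder)
    path: /exports/app-data      # exported path on the server (placeholder)
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc
  namespace: app-namespace
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""           # bind to the statically created PV above
  resources:
    requests:
      storage: 10Gi
```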
How is this happening?
Does it pose any issues?
If there is no requirement for the mount to be present locally, why are we mounting it on the other worker nodes?
We have another namespace with pods that hang in a Pending state, complaining that the NFS mount is not present locally on the worker node.
We have not yet mounted the NFS share on the new worker node, yet the application is working as expected.
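The pod consumes the claim in the usual way, roughly like the sketch below (the names, image, and mount path are placeholders); note that nothing in it refers to a path on the worker node itself:

```yaml
# Simplified sketch of how the pod references the claim; names and
# image are placeholders, not the actual workload definition.
apiVersion: v1
kind: Pod
metadata:
  name: app-pod
  namespace: app-namespace
spec:
  containers:
    - name: app
      image: registry.example.com/app:latest   # placeholder image
      volumeMounts:
        - name: nfs-data
          mountPath: /data        # mount point inside the container
  volumes:
    - name: nfs-data
      persistentVolumeClaim:
        claimName: nfs-pvc        # the bound PVC defined earlier
```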