On an AKS Windows-based node, a pod gets stuck in the Terminating state during deletion/replacement whenever I add two volume mounts with subPath. This does not happen with a single volume mount with subPath.
Steps to reproduce the behavior:
Create a deployment with the volumes and volumeMounts below. Make sure to create the ConfigMaps as well.
volumeMounts:
- name: configs
  mountPath: C:\inetpub\wwwroot\web.config
  subPath: web.config
- name: cloudconfig
  mountPath: C:\CloudConfig
  subPath: credentials
volumes:
- name: configs
  configMap:
    name: test-report-executor-cm
- name: cloudconfig
  configMap:
    name: cloud-config
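For completeness, the two referenced ConfigMaps might look like the sketch below. The names and key names come from the manifest above (the keys must match the subPath values); the data values are placeholders, not the actual contents used.

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: test-report-executor-cm
data:
  # Key must match the subPath "web.config" above
  web.config: |
    <!-- placeholder web.config contents -->
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: cloud-config
data:
  # Key must match the subPath "credentials" above
  credentials: |
    # placeholder credentials contents
```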
Then run the command below:
kubectl replace -n namespace -f filename.yaml --force
Expected behavior
The pod should not be stuck in the Terminating state. It has now been stuck in Terminating for more than 5 days.
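While the pod is stuck, the commands below can help narrow down whether a finalizer or the kubelet is blocking deletion (pod name and namespace are placeholders; these are generic kubectl diagnostics, not steps from this report):

```shell
# Show events and status for the stuck pod
kubectl describe pod <pod-name> -n <namespace>

# Check for finalizers that could block deletion
kubectl get pod <pod-name> -n <namespace> -o jsonpath='{.metadata.finalizers}'

# Last resort: remove the API object without waiting for kubelet confirmation
kubectl delete pod <pod-name> -n <namespace> --grace-period=0 --force
```

Note that the forced delete only removes the pod from the API server; if the kubelet is wedged on volume teardown, the underlying containers may still need cleanup on the node.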
Environment (please complete the following information):
CLI Version – v1.27.7
Kubernetes version – 1.27.7
Additional context
Tried the command below to check the kubelet logs, but it does not show anything relevant, just general info about the kubelet:
more C:\k\kubelet.log