I'm having some trouble mounting a single shared Premium_LRS disk in Azure Kubernetes Service (AKS).
In the past I used an Azure FileShare to mount the same volume into different pods, so they could all access the same drive/folder. However, writes and other file operations on that share are very slow, so I'm trying to switch to a shared mounted disk, but I cannot access it from within the pods. According to the pod events the disk is mounted, but I can't access or write to it.
My setup:
First Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: xx-import
  namespace: ns
spec:
  replicas: 1
  selector:
    matchLabels:
      app: xx-import
  template:
    metadata:
      labels:
        app: xx-import
    spec:
      containers:
        - name: xx-import
          image: xxx
          imagePullPolicy: Always
          ports:
            - containerPort: 8083
              protocol: TCP
          volumeDevices:
            - name: my-data
              devicePath: /dev/xvda
      volumes:
        - name: my-data
          persistentVolumeClaim:
            claimName: my-pvc
Second Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: yy-import
  namespace: ns
spec:
  replicas: 1
  selector:
    matchLabels:
      app: yy-import
  template:
    metadata:
      labels:
        app: yy-import
    spec:
      containers:
        - name: yy-import
          image: yyy
          imagePullPolicy: Always
          ports:
            - containerPort: 8083
              protocol: TCP
          volumeDevices:
            - name: my-data
              devicePath: /dev/xvda
      volumes:
        - name: my-data
          persistentVolumeClaim:
            claimName: my-pvc
PVC
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  namespace: ns
  name: my-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: premium_disk_shares
  volumeMode: Block
  resources:
    requests:
      storage: 20Gi
StorageClass
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: premium_disk_shares
parameters:
  skuName: Premium_LRS
  maxShares: "2"
  cachingMode: None
provisioner: disk.csi.azure.com
reclaimPolicy: Retain
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
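(For reference, these are the standard kubectl checks I run after applying everything to confirm the StorageClass exists, the claim binds, and the shared disk is attached; I'm only showing the commands, not the output:)

  kubectl get sc premium_disk_shares
  kubectl get pvc my-pvc -n ns
  # the dynamically provisioned PV for the claim
  kubectl describe pv pvc-001a4e22-41fb-414e-9e03-e0f561b583b6
  # with maxShares: "2" I expect one attachment per node
  kubectl get volumeattachments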
When I apply all of this, I get an OK mount event for both pods:
SuccessfulMountVolume  Normal  Pod/xx-import-558b79994c-m28vg  12s
  MapVolume.MapPodDevice succeeded for volume "pvc-001a4e22-41fb-414e-9e03-e0f561b583b6" globalMapPath "/var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/pvc-001a4e22-41fb-414e-9e03-e0f561b583b6/dev"

SuccessfulMountVolume  Normal  Pod/xx-import-558b79994c-m28vg
  MapVolume.MapPodDevice succeeded for volume "pvc-001a4e22-41fb-414e-9e03-e0f561b583b6" volumeMapPath "/var/lib/kubelet/pods/de7bac8d-1585-4e6f-8f23-77ca2a56fb77/volumeDevices/kubernetes.io~csi"
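To see what each pod actually gets, I check for the mapped device roughly like this (assuming the images have a shell; the first pod name is the one from the events above, the second is a placeholder):

  kubectl exec -n ns xx-import-558b79994c-m28vg -- ls -l /dev/xvda
  # same check in the second deployment's pod
  kubectl exec -n ns <yy-import-pod> -- ls -l /dev/xvda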
But how can I now actually write to the disk so that files written from pod xx are visible inside pod yy and vice versa? Is this even possible with an SSD-backed managed disk?
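As far as I understand it, volumeMode: Block only hands the container the raw device, so the only direct access I have found is byte-level I/O, something like this (just a sketch, assuming dd exists in the image):

  # raw read of the first MiB of the shared device; there is no filesystem on it
  kubectl exec -n ns xx-import-558b79994c-m28vg -- dd if=/dev/xvda of=/dev/null bs=1M count=1

To get actual files I would apparently have to create and mount a filesystem on /dev/xvda myself from inside the container (mkfs + mount, which needs extra privileges), and I doubt a normal filesystem like ext4 can safely be mounted by two pods on different nodes at the same time.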
I already checked: https://github.com/kubernetes-sigs/azuredisk-csi-driver/tree/master/deploy/example/sharedisk
Thanks a lot for your help!