Mounting a volume for two nodes in minikube
I am mounting a filesystem on minikube:
minikube mount /var/files/:/usr/share/ -p multinode-demo
But I ran into two complications:
- My cluster has two nodes. The pods on the first node can access the host files at /var/files/, but the pods on the second node cannot. What could be the reason for that?
- I have to mount the directory before the pods are created. If I apply my deployment first and then do the mount, the pods never get the filesystem. Is Kubernetes not able to apply the mount later, over an existing deployment that requires it?
Solution 1:
As mentioned in the comments section, I believe your problem is related to the following GitHub issues: Storage provisioner broken for multinode mode and hostPath permissions wrong on multi node.
In my opinion, you might be interested in using NFS mounts instead, and I'll briefly describe this approach to illustrate how it works.
First we need to install the NFS Server and create the NFS export directory on our host:
NOTE: I'm using Debian 10 and your commands may be different depending on your Linux distribution.
$ sudo apt install nfs-kernel-server -y
$ sudo mkdir -p /mnt/nfs_share && sudo chown -R nobody:nogroup /mnt/nfs_share/
Then, grant permission to access the NFS server and export the NFS share directory:
$ cat /etc/exports
/mnt/nfs_share *(rw,sync,no_subtree_check,no_root_squash,insecure)
$ sudo exportfs -a && sudo systemctl restart nfs-kernel-server
We can use the exportfs -v command to display the current export list:
$ sudo exportfs -v
/mnt/nfs_share <world>(rw,wdelay,insecure,no_root_squash,no_subtree_check,sec=sys,no_all_squash)
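As a side note, the `*` wildcard exports the share to any client that can reach the host. For a local sandbox that is usually fine, but if you prefer to restrict access, you can export only to the minikube subnet instead (192.168.49.0/24 is the default subnet for the docker driver; verify it against the gateway address found with `ip r` further down, as it may differ in your setup):

```
/mnt/nfs_share 192.168.49.0/24(rw,sync,no_subtree_check,no_root_squash,insecure)
```

After editing /etc/exports, re-run `sudo exportfs -a` to apply the change.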
Now it's time to create a minikube cluster:
$ minikube start --nodes 2
$ kubectl get nodes
NAME STATUS ROLES VERSION
minikube Ready control-plane,master v1.23.1
minikube-m02 Ready <none> v1.23.1
Please note, we're going to use the standard StorageClass:
$ kubectl get storageclass
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION
standard (default) k8s.io/minikube-hostpath Delete Immediate false
Additionally, we need to find the Minikube gateway address:
$ minikube ssh
docker@minikube:~$ ip r | grep default
default via 192.168.49.1 dev eth0
Let's create a PersistentVolume and a PersistentVolumeClaim that will use the NFS share:
NOTE: The address 192.168.49.1 is the Minikube gateway.
$ cat pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-volume
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: standard
  nfs:
    server: 192.168.49.1
    path: "/mnt/nfs_share"
$ kubectl apply -f pv.yaml
persistentvolume/nfs-volume created
$ cat pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: nfs-claim
  namespace: default
spec:
  storageClassName: standard
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
$ kubectl apply -f pvc.yaml
persistentvolumeclaim/nfs-claim created
$ kubectl get pv,pvc
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
persistentvolume/nfs-volume 1Gi RWX Retain Bound default/nfs-claim standard 71s
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
persistentvolumeclaim/nfs-claim Bound nfs-volume 1Gi RWX standard 56s
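One caveat: because standard is minikube's default StorageClass and has a dynamic provisioner, a claim like the one above can in some setups be satisfied by a freshly provisioned hostPath volume rather than binding to our NFS PV. If you want to make the binding explicit, the PVC API lets you pin a claim to a specific PV via spec.volumeName — a minimal sketch, reusing the names from above:

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: nfs-claim
  namespace: default
spec:
  volumeName: nfs-volume    # bind explicitly to the NFS PV created above
  storageClassName: standard
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
```

With volumeName set, the claim binds only to that PV (or stays Pending if the PV is unavailable), so the dynamic provisioner cannot step in.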
Now we can use the NFS PersistentVolume. To test that it works properly, I've created the app-1 and app-2 Deployments:
NOTE: app-1 will be deployed on a different node than app-2 (I've specified nodeName in the PodSpec).
$ cat app-1.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: app-1
  name: app-1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app-1
  template:
    metadata:
      labels:
        app: app-1
    spec:
      containers:
        - image: nginx
          name: nginx
          volumeMounts:
            - name: app-share
              mountPath: /mnt/app-share
      nodeName: minikube
      volumes:
        - name: app-share
          persistentVolumeClaim:
            claimName: nfs-claim
$ kubectl apply -f app-1.yaml
deployment.apps/app-1 created
$ cat app-2.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: app-2
  name: app-2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app-2
  template:
    metadata:
      labels:
        app: app-2
    spec:
      containers:
        - image: nginx
          name: nginx
          volumeMounts:
            - name: app-share
              mountPath: /mnt/app-share
      nodeName: minikube-m02
      volumes:
        - name: app-share
          persistentVolumeClaim:
            claimName: nfs-claim
$ kubectl apply -f app-2.yaml
deployment.apps/app-2 created
$ kubectl get deploy,pods -o wide
NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR
deployment.apps/app-1 1/1 1 1 24s nginx nginx app=app-1
deployment.apps/app-2 1/1 1 1 21s nginx nginx app=app-2
NAME READY STATUS RESTARTS AGE NODE
pod/app-1-7874b8d7b6-p9cb6 1/1 Running 0 23s minikube
pod/app-2-fddd84869-fjkrw 1/1 Running 0 21s minikube-m02
To verify that our NFS share works as expected, we can create a file from app-1 and then check that it is visible from app-2 and on the host:
app-1:
$ kubectl exec -it app-1-7874b8d7b6-p9cb6 -- bash
root@app-1-7874b8d7b6-p9cb6:/# df -h | grep "app-share"
192.168.49.1:/mnt/nfs_share 9.7G 7.0G 2.2G 77% /mnt/app-share
root@app-1-7874b8d7b6-p9cb6:/# touch /mnt/app-share/app-1 && echo "Hello from the app-1" > /mnt/app-share/app-1
root@app-1-7874b8d7b6-p9cb6:/# exit
exit
app-2:
$ kubectl exec -it app-2-fddd84869-fjkrw -- ls /mnt/app-share
app-1
$ kubectl exec -it app-2-fddd84869-fjkrw -- cat /mnt/app-share/app-1
Hello from the app-1
host:
$ ls /mnt/nfs_share/
app-1
$ cat /mnt/nfs_share/app-1
Hello from the app-1
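As an aside, nodeName was used above only to force the two pods onto different nodes for this test; it bypasses the scheduler entirely. In everyday use you would normally let the scheduler place the pods, or steer them with a nodeSelector against a node label instead. A sketch of the relevant part of the PodSpec, assuming a hypothetical label disk=nfs that you would first add with kubectl label nodes minikube-m02 disk=nfs:

```yaml
    spec:
      nodeSelector:
        disk: nfs           # hypothetical label; add with: kubectl label nodes minikube-m02 disk=nfs
      containers:
        - image: nginx
          name: nginx
```

Unlike nodeName, a nodeSelector still goes through the scheduler, so taints, resource limits, and affinity rules are respected.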
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow
| Solution | Source |
|---|---|
| Solution 1 | matt_j |
