NFS volumes are not persistent in Kubernetes
I'm trying to mount Mongo's /data directory onto an NFS volume on my Kubernetes master machine so that the Mongo data persists. The volume mounts successfully, but I can only see the db directories, not their subdirectories, and the data is not actually persisting in the volume. When I kubectl describe <my_pv> it shows:

    NFS (an NFS mount that lasts the lifetime of a pod)

Why is that so?
I see the Kubernetes docs state that:
An nfs volume allows an existing NFS (Network File System) share to be mounted into your pod. Unlike emptyDir, which is erased when a Pod is removed, the contents of an nfs volume are preserved and the volume is merely unmounted. This means that an NFS volume can be pre-populated with data, and that data can be "handed off" between pods. NFS can be mounted by multiple writers simultaneously.
I'm using kubernetes version 1.8.3.
```yaml
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: mongo
  labels:
    name: mongo
    app: mongo
spec:
  replicas: 3
  selector:
    matchLabels:
      name: mongo
      app: mongo
  template:
    metadata:
      name: mongo
      labels:
        name: mongo
        app: mongo
    spec:
      containers:
      - name: mongo
        image: mongo:3.4.9
        ports:
        - name: mongo
          containerPort: 27017
          protocol: TCP
        volumeMounts:
        - name: mongovol
          mountPath: "/data"
      volumes:
      - name: mongovol
        persistentVolumeClaim:
          claimName: mongo-pvc
```
```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mongo-pv
  labels:
    type: NFS
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: slow
  mountOptions:
  - hard
  - nfsvers=4.1
  nfs:
    path: "/mongodata"
    server: 172.20.33.81
```
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mongo-pvc
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 3Gi
  storageClassName: slow
  selector:
    matchLabels:
      type: NFS
```
The way I mounted my NFS share on my Kubernetes master machine:

```shell
apt-get install nfs-kernel-server
mkdir /mongodata
chown -R nobody:nogroup /mongodata
# added this line to /etc/exports:
#   /mongodata *(rw,sync,all_squash,no_subtree_check)
vi /etc/exports
exportfs -ra
service nfs-kernel-server restart
showmount -e   # shows the share
```
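To rule out problems on the NFS side, the export can be test-mounted by hand from a client machine (the server IP and path below are taken from the PV manifest; /mnt is just a scratch mount point):

```shell
# Mount the export manually, the same way kubelet would
mount -t nfs -o nfsvers=4.1 172.20.33.81:/mongodata /mnt

# Try a write; with all_squash it lands on the server as nobody:nogroup
touch /mnt/write-test && ls -l /mnt/write-test

# Clean up
rm /mnt/write-test
umount /mnt
```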
I logged into the bash shell of my pod and the directory is mounted correctly, but the data is not persisting on my NFS server (the Kubernetes master machine). Please help me see what I'm doing wrong here.
It's possible that the pods don't have permission to create files and directories. You can exec into your pod and try to touch a file in the NFS share; if you get a permission error, you can ease up the permissions on the filesystem and in the exports file to allow write access.
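That check looks something like this (the pod name is a placeholder; pick one of your pods from kubectl get pods):

```shell
# Open a shell in one of the mongo pods (name is a placeholder)
kubectl exec -it <mongo-pod-name> -- /bin/bash

# Inside the pod: try to create a file on the NFS-backed mount
touch /data/db/perm-test && ls -l /data/db/perm-test
# "Permission denied" here points at the export/directory permissions
```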
It's also possible to specify a GID in the PV object to avoid permission-denied issues.
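A sketch of that, using the pv.beta.kubernetes.io/gid annotation (the GID 1234 is a placeholder; it should match the group that owns the exported directory on the NFS server):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mongo-pv
  annotations:
    # placeholder GID; pods mounting this PV get it as a supplemental group
    pv.beta.kubernetes.io/gid: "1234"
spec:
  # ... same spec as the PV above ...
```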
I see you did a chown nobody:nogroup -R /mongodata. With all_squash in your export, all client requests are squashed to nobody:nogroup, so make sure that the application in your pod runs as a user whose writes succeed against files owned by nobody:nogroup.
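One way to control the user and groups the pod process runs with is a securityContext on the pod template. This is only an illustrative sketch; the 65534 values match nobody/nogroup on Debian-based images and are not taken from the question:

```yaml
# Deployment pod template excerpt (illustrative values, see above)
spec:
  template:
    spec:
      securityContext:
        runAsUser: 65534             # run the container process as "nobody"
        supplementalGroups: [65534]  # add "nogroup" to the process's groups
      containers:
      - name: mongo
        image: mongo:3.4.9
```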
A Kubernetes administrator can specify additional mount options for when a Persistent Volume is mounted on a node. Note: not all Persistent Volume types support mount options (NFS is one of the types that does).
Add the parameter mountOptions: "vers=4.1" to your StorageClass config; this should fix your issue. See this GitHub comment for more info.
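A minimal sketch of such a StorageClass. The class name slow matches the storageClassName in the PV/PVC above; the kubernetes.io/no-provisioner value is an assumption for a setup where the NFS PVs are created statically, as in the question:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: slow                                # matches storageClassName in the PV/PVC
provisioner: kubernetes.io/no-provisioner   # assumption: PVs are created statically
mountOptions:
- vers=4.1
```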