Facing an issue with attaching EFS volume to Kubernetes pods

I am running my Docker containers on a Kubernetes cluster on AWS EKS. Two of my containers use a shared volume, and they run in two different pods. So I want a common volume on AWS that both pods can use.

I created an EFS volume and mounted it. I am following this guide to create the PersistentVolumeClaim, but I get a timeout error when the efs-provisioner pod tries to mount the EFS volume. The VolumeId and region are correct.

Detailed error message from kubectl describe pod:

timeout expired waiting for volumes to attach or mount for pod "default"/"efs-provisioner-55dcf9f58d-r547q". list of unmounted volumes=[pv-volume]. list of unattached volumes=[pv-volume default-token-lccdw]
MountVolume.SetUp failed for volume "pv-volume" : mount failed: exit status 32
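
For context, the guide I'm following feeds the file system ID and region to the efs-provisioner through a ConfigMap roughly like the one below (these are placeholder values, not my real ones; provisioner.name has to match the StorageClass the provisioner is used with):

apiVersion: v1
kind: ConfigMap
metadata:
  name: efs-provisioner
data:
  file.system.id: fs-12345678
  aws.region: us-east-1
  provisioner.name: example.com/aws-efs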


Unable to mount to EFS file system · Issue #139 · kubernetes, Failed mount: Unable to mount volumes for pod "efs-provisioner-2637432370-rc4mm_kristapstesting(... I'm experiencing this issue now as well. What happened: the deployment contains pods that use AWS EFS as a persistent volume. During deployment, the creation of new pods gets stuck while attaching volumes to some of them.


The problem for me was that I was specifying a path in my PV other than /, and the directory on the NFS server referenced by that path did not exist yet. I had to create that directory manually first.
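
A minimal sketch of that situation, assuming a subdirectory named /data (the name is only an example): the directory referenced by path must already exist on the file system, for instance created from an EC2 instance that has the EFS volume mounted.

apiVersion: v1
kind: PersistentVolume
metadata:
  name: efs-data
spec:
  capacity:
    storage: 100Mi
  accessModes:
    - ReadWriteMany
  nfs:
    server: efs_public_dns.amazonaws.com
    # /data must already exist on the EFS file system;
    # create it manually (e.g. from an EC2 instance with the volume mounted) before using this PV
    path: "/data"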

[aws-efs] efs-provisioner pod stuck at ContainerCreating state · Issue, Error from server (BadRequest): container "efs-provisioner" in pod "efs-pr... timeout expired waiting for volumes to attach or mount for pod "default"/"efs-provisioner-6bbcb6564d-np5dx". list of unmounted... I faced exactly the same issue. After a bit of investigation I found a way to reproduce it on my end: when a pod is created it mounts the EFS volume and creates a new stunnel in the range 20049-20449, but when the pod is deleted the stunnel is not closed. I confirmed that by counting the number of stunnel connections after recreating a pod on the same node.


The issue was that I had two EC2 instances running, but I had mounted the EFS volume on only one of them, and kubectl kept scheduling the pods onto the instance that didn't have the mounted volume. After mounting the same volume on both instances and using the PV and PVC below, it is working fine.

EC2 mounting: AWS EFS mounting with EC2

PV.yml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: efs
spec:
  capacity:
    storage: 100Mi
  accessModes:
    - ReadWriteMany
  nfs:
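    # server is the EFS file system DNS name, e.g. <file-system-id>.efs.<region>.amazonaws.com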
    server: efs_public_dns.amazonaws.com
    path: "/"

PVC.yml

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: efs
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 100Mi

replicaset.yml

----- only volume section -----

 volumes:
  - name: test-volume
    persistentVolumeClaim:
      claimName: efs
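
For completeness, each container that needs the shared data mounts the claim through a volumeMounts entry; a minimal sketch of the pod template (container name, image, and mount path are only examples):

 containers:
  - name: test-container          # example name
    image: nginx                  # example image
    volumeMounts:
    - name: test-volume           # must match the volume name below
      mountPath: /shared          # example path inside the container
 volumes:
  - name: test-volume
    persistentVolumeClaim:
      claimName: efs

Because the claim is ReadWriteMany, both pods can reference the same claimName and see the same files.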

Kubernetes: Shared storage volume between multiple pods, Sharing files between pods seems to be an easy task at first sight, however it's quite the opposite, hence this article on how Abyssale tackled the issue. The efs-provisioner allows mounting EFS storage as PersistentVolumes in Kubernetes.
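
If you stay with the efs-provisioner approach instead of a static NFS PV, provisioning is driven by a StorageClass whose provisioner field matches the name the provisioner pod is configured with; a rough sketch (the aws-efs class name and example.com/aws-efs provisioner name are assumptions that have to match your own deployment):

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: aws-efs
provisioner: example.com/aws-efs
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: efs-claim
spec:
  storageClassName: aws-efs
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi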


Using EBS and EFS as Persistent Volume in Kubernetes, The Kubernetes Volume abstraction solves these problems; it allows pods to mount EFS as persistent volumes.

Events:
  Type    Reason                 Age  From                                     Message
  Normal  Scheduled              8m   default-scheduler                        Successfully assigned default/efs-provisioner-6598fbc7cf-gqlz6 to ip-192-168-10-151.ec2.internal
  Normal  SuccessfulMountVolume  8m   kubelet, ip-192-168-10-151.ec2.in


Using EBS and EFS as Persistent Volume in Kubernetes, To facilitate this, we can mount folders into our pods that are backed by EBS. efs-provisioner runs as a pod in the Kubernetes cluster that has access to an AWS EFS file system. The Kubernetes Volume abstraction solves these problems. The automatic volume detach process does not kick in until Kubernetes marks the node as down (5 min 40 sec); once this happens and the original pod is deleted, the attach/detach controller waits 6 minutes to ensure the node is not coming back before the volume is detached.


Warning FailedMount 56s (x11 over 23m) kubelet, ip-192-168-124-20.ec2.internal Unable to mount volumes for pod "jenkins-6758665c4c-gg5tl_jenkins(f6440463-ca87-11e9-a31c-0a4da4f89c32)": timeout expired waiting for volumes to attach or mount for pod "jenkins"/"jenkins-6758665c4c-gg5tl". list of unmounted volumes=[jenkins-home]. list of unattached