Placing Files In A Kubernetes Persistent Volume Store On GKE

I am trying to run a Factorio game server on Kubernetes (hosted on GKE).

I have set up a StatefulSet with a Persistent Volume Claim and mounted it in the game server's save directory.

I would like to upload a save file from my local computer to this Persistent Volume Claim so I can access the save on the game server.

What would be the best way to upload a file to this Persistent Volume Claim?

I have thought of 2 ways but I'm not sure which is best or if either is a good idea:

  • Restore a disk snapshot with the files I want to the GCP disk which backs this Persistent Volume Claim
  • Mount the Persistent Volume Claim on an FTP container, FTP the files up, and then mount it on the game container

It turns out there is a much simpler way: The kubectl cp command.

This command lets you copy data from your computer to a container running on your cluster.

In my case I ran:

kubectl cp ~/.factorio/saves/ factorio/factorio-0:/factorio/saves/

This copied my local saves directory to /factorio/saves/ in a container running on my cluster.

See kubectl cp -h for more detailed usage information and examples.
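As a sketch of a full round trip (the `factorio` namespace and `factorio-0` pod come from the example above; the save file name is illustrative):

```shell
# Copy one save file from the local machine into the pod
# (the file name mysave.zip is a placeholder).
kubectl cp ~/.factorio/saves/mysave.zip factorio/factorio-0:/factorio/saves/mysave.zip

# Verify the file arrived inside the container.
kubectl exec -n factorio factorio-0 -- ls -l /factorio/saves/

# kubectl cp also works in the other direction, e.g. to back up saves locally.
kubectl cp factorio/factorio-0:/factorio/saves/ ~/factorio-saves-backup/
```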

Storing data into Persistent Volumes on Kubernetes: Kubernetes provides an API to separate storage from computation, i.e., a pod can perform computations while the files in use are stored on a separate resource. On-disk files in a container are the simplest place for an application to write data, but this approach has drawbacks. The files are lost when the container crashes or stops for any other reason. Furthermore, files within a container are inaccessible to other containers running in the same Pod. The Kubernetes Volume abstraction addresses both of these issues. Conceptually, a volume is a directory which is accessible to all of the containers in a Pod.
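As a minimal illustration of that abstraction (the names here are illustrative, not from the question), an emptyDir volume shared by two containers in one Pod, where files written by one container are visible to the other:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: shared-volume-demo
spec:
  volumes:
    - name: shared-data
      emptyDir: {}          # scratch directory, lives as long as the Pod
  containers:
    - name: writer
      image: busybox
      command: ["sh", "-c", "echo hello > /data/greeting && sleep 3600"]
      volumeMounts:
        - name: shared-data
          mountPath: /data
    - name: reader
      image: busybox
      command: ["sh", "-c", "sleep 3600"]   # can read /data/greeting
      volumeMounts:
        - name: shared-data
          mountPath: /data
```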

You can create a data folder on your Google Cloud instance:

gcloud compute ssh <your-instance> --zone <your-zone>
mkdir data

Then create PersistentVolume:

kubectl create -f hostpath-pv.yml

kind: PersistentVolume
apiVersion: v1
metadata:
  name: pv-local
  labels:
    type: local
spec:
  storageClassName: local
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/home/<user-name>/data"

Create PersistentVolumeClaim:

kubectl create -f hostpath-pvc.yml

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: hostpath-pvc
spec:
  storageClassName: local
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  selector:
    matchLabels:
      type: local

Then copy the file to the instance:

gcloud compute scp <your file> <your-instance>:~/ --zone <your-zone>

And at last mount this PersistentVolumeClaim in your pod:

      volumeMounts:
        - name: hostpath-pvc
          mountPath: <your-path>
          subPath: hostpath-pvc
  volumes:
    - name: hostpath-pvc
      persistentVolumeClaim:
        claimName: hostpath-pvc

And copy the file to the data folder in GCloud:

  gcloud compute scp <your file> <your-instance>:/home/<user-name>/data/hostpath-pvc --zone <your-zone>

Persistent volumes with persistent disks: PersistentVolume resources are used to manage durable storage in a cluster. In GKE, PersistentVolumes are typically backed by Compute Engine persistent disks. Unlike Volumes, the PersistentVolumes lifecycle is managed by Kubernetes. Persistent Volumes in GKE are supported using Persistent Disks (both SSDs and spinning disks). The problem with these disks is that they only support ReadWriteOnce (RWO): the volume can be mounted as read-write by a single node at a time.

You can just use Google Cloud Storage, since you're looking at serving a few files.
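A sketch of that route, assuming a bucket name of your choosing and that the pod's image has gsutil and suitable credentials available (none of these names come from the question):

```shell
# Bucket and file names are illustrative.
gsutil mb gs://my-factorio-saves
gsutil cp ~/.factorio/saves/mysave.zip gs://my-factorio-saves/

# From inside the pod (assumes gsutil and credentials in the image):
kubectl exec factorio-0 -- gsutil cp gs://my-factorio-saves/mysave.zip /factorio/saves/
```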

The other option is to use PersistentVolumeClaims. This will work better if you're not updating the files frequently, because you will need to detach the disk from the Pods (so you need to delete the Pods) while doing so.

You can create a GCE persistent disk, attach it to a GCE VM, put files on it, then delete the VM and bring the PD to Kubernetes as a PersistentVolumeClaim. There's documentation on how to do that.
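That workflow might look roughly like this (disk, VM, device, and zone names are placeholders):

```shell
# Create a persistent disk and a temporary VM, then attach the disk to it.
gcloud compute disks create game-data --size=10GB --zone=us-central1-a
gcloud compute instances create temp-loader --zone=us-central1-a
gcloud compute instances attach-disk temp-loader --disk=game-data --zone=us-central1-a

# SSH in, format and mount the disk, and copy your files onto it, e.g.:
#   gcloud compute ssh temp-loader --zone=us-central1-a
#   sudo mkfs.ext4 /dev/sdb && sudo mount /dev/sdb /mnt && sudo cp <files> /mnt

# Delete the VM but keep the data disk, then reference the disk from a
# PersistentVolume with a gcePersistentDisk source.
gcloud compute instances delete temp-loader --zone=us-central1-a --keep-disks=data
```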

Accessing Fileshares from Google Kubernetes Engine Clusters: you can access a Filestore fileshare from a GKE cluster by creating a persistent volume and persistent volume claim. Separately, the Local Persistent Volumes beta feature in Kubernetes 1.10 makes it possible to leverage local disks in your StatefulSets. You can specify directly-attached local disks as PersistentVolumes, and use them in StatefulSets with the same PersistentVolumeClaim objects that previously only supported remote volume types.
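A Filestore fileshare is exposed to the cluster over NFS, so a PersistentVolume for it might be sketched like this (the server IP and share path are placeholders for your own Filestore instance):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: fileserver
spec:
  capacity:
    storage: 1Ti
  accessModes:
    - ReadWriteMany     # NFS supports many readers and writers at once
  nfs:
    server: 10.0.0.2    # placeholder: your Filestore instance IP
    path: /share1       # placeholder: your fileshare name
```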

Multi-Writer File Storage on GKE: the example creates a PersistentVolume and a PersistentVolumeClaim named "nfs", backed by an NFS server, so that multiple pods can write to the same storage. As another example, WordPress uses PersistentVolumes (PV) and PersistentVolumeClaims (PVC) to store data. A PV is a representation of a storage volume in the cluster that is provisioned by an admin, or dynamically provisioned by Kubernetes, to fulfill a request made in a PVC.

Kubernetes Persistent Volumes: Kubernetes has a matching primitive for each of the traditional storage options, and by ensuring a consistent state for our data we can start building more complex applications. A related question: what's the best way to store a persistent file in Kubernetes? I have a cert (.pfx) and I want to pass the application its path. From the looks of it, it can't be stored in Secrets. I was thinking about a volume, but the question is how do I upload the file to it? And which type of volume should I choose? Or is there any other efficient way?

Secrets, consuming Secret values from volumes: inside a container that mounts a secret volume, the secret keys appear as files and the secret values are base64-decoded into those files. Persistent Volume Claim: a PVC, especially on GKE, will create a physical Persistent Disk on Google Cloud Platform and attach it to the node on which the pod is running, as a secondary disk. So the claim is more cloud-provider specific. Note: you don't need to manually create the disk.
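So on GKE a claim like the following (name and size are illustrative) is typically enough on its own; the cluster's default StorageClass dynamically provisions a Compute Engine persistent disk to satisfy it:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: game-saves
spec:
  accessModes:
    - ReadWriteOnce       # a persistent disk mounts read-write on one node
  resources:
    requests:
      storage: 10Gi
```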