{"id":381,"date":"2018-10-16T15:28:43","date_gmt":"2018-10-16T15:28:43","guid":{"rendered":"https:\/\/www.appservgrid.com\/paw93\/?p=381"},"modified":"2018-10-17T09:00:36","modified_gmt":"2018-10-17T09:00:36","slug":"kubernetes-1-8-hidden-gems-volume-snapshotting","status":"publish","type":"post","link":"https:\/\/www.appservgrid.com\/paw93\/index.php\/2018\/10\/16\/kubernetes-1-8-hidden-gems-volume-snapshotting\/","title":{"rendered":"Kubernetes 1.8: Hidden Gems &#8211; Volume Snapshotting \/"},"content":{"rendered":"<p>23\/Nov 2017<\/p>\n<p>By <a target=\"\">Luke Addison<\/a><\/p>\n<p>In this Hidden Gems blog post, Luke looks at the new volume snapshotting functionality in Kubernetes and how cluster administrators can use this feature to take and restore snapshots of their data.<\/p>\n<p>In Kubernetes 1.8, volume snapshotting has been released as a prototype. It is external to core Kubernetes whilst it is in the prototype phase, but you can find the project under the snapshot subdirectory of the <a href=\"https:\/\/github.com\/kubernetes-incubator\/external-storage\">kubernetes-incubator\/external-storage<\/a> repository. For a detailed explanation of the implementation of volume snapshotting, read the design proposal <a href=\"https:\/\/github.com\/kubernetes\/community\/blob\/master\/contributors\/design-proposals\/storage\/volume-snapshotting.md\">here<\/a>. The prototype currently <a href=\"https:\/\/github.com\/kubernetes-incubator\/external-storage\/tree\/master\/snapshot\/pkg\/volume\">supports<\/a> GCE PD, AWS EBS, OpenStack Cinder and Kubernetes <a href=\"https:\/\/kubernetes.io\/docs\/concepts\/storage\/volumes\/#hostpath\">hostPath<\/a> volumes. 
Note that aside from hostPath volumes, the logic for snapshotting a volume is implemented by cloud providers; the purpose of volume snapshotting in Kubernetes is to provide a common API for negotiating with different cloud providers in order to take and restore snapshots.<\/p>\n<p>The best way to get an overview of volume snapshotting in Kubernetes is by going through an example. In this post, we are going to spin up a Kubernetes 1.8 cluster on GKE, deploy snapshot-controller and snapshot-provisioner and take and restore a snapshot of a GCE PD.<\/p>\n<p>For reproducibility, I am using Git commit hash b1d5472a7b47777bf851cfb74bfaf860ad49ed7c of the kubernetes-incubator\/external-storage repository.<\/p>\n<p>The first thing we need to do is compile and package both snapshot-controller and snapshot-provisioner into Docker containers. Make sure you have installed Go and configured your GOPATH correctly.<\/p>\n<p>$ go get -d github.com\/kubernetes-incubator\/external-storage<br \/>\n$ cd $GOPATH\/src\/github.com\/kubernetes-incubator\/external-storage\/snapshot<br \/>\n$ # Checkout a fixed revision<br \/>\n$ git checkout b1d5472a7b47777bf851cfb74bfaf860ad49ed7c<br \/>\n$ GOOS=linux GOARCH=amd64 go build -o _output\/bin\/snapshot-controller-linux-amd64 cmd\/snapshot-controller\/snapshot-controller.go<br \/>\n$ GOOS=linux GOARCH=amd64 go build -o _output\/bin\/snapshot-provisioner-linux-amd64 cmd\/snapshot-pv-provisioner\/snapshot-pv-provisioner.go<\/p>\n<p>You can then use the following Dockerfiles. These will build both snapshot-controller and snapshot-provisioner. We run apk add --no-cache ca-certificates in order to add root certificates into the container images. 
To avoid using stale certificates, we could alternatively pass them into the containers by mounting the hostPath \/etc\/ssl\/certs to the same location in the containers.<\/p>\n<p>FROM alpine:3.6<\/p>\n<p>RUN apk add --no-cache ca-certificates<\/p>\n<p>COPY _output\/bin\/snapshot-controller-linux-amd64 \/usr\/bin\/snapshot-controller<\/p>\n<p>ENTRYPOINT [\"\/usr\/bin\/snapshot-controller\"]<\/p>\n<p>FROM alpine:3.6<\/p>\n<p>RUN apk add --no-cache ca-certificates<\/p>\n<p>COPY _output\/bin\/snapshot-provisioner-linux-amd64 \/usr\/bin\/snapshot-provisioner<\/p>\n<p>ENTRYPOINT [\"\/usr\/bin\/snapshot-provisioner\"]<\/p>\n<p>$ docker build -t dippynark\/snapshot-controller:latest . -f Dockerfile.controller<br \/>\n$ docker build -t dippynark\/snapshot-provisioner:latest . -f Dockerfile.provisioner<br \/>\n$ docker push dippynark\/snapshot-controller:latest<br \/>\n$ docker push dippynark\/snapshot-provisioner:latest<\/p>\n<p>We will now create a cluster on GKE using gcloud.<\/p>\n<p>$ gcloud container clusters create snapshot-demo --cluster-version 1.8.3-gke.0<br \/>\nCreating cluster snapshot-demo&#8230;done.<br \/>\nCreated [https:\/\/container.googleapis.com\/v1\/projects\/jetstack-sandbox\/zones\/europe-west1-b\/clusters\/snapshot-demo].<br \/>\nkubeconfig entry generated for snapshot-demo.<br \/>\nNAME ZONE MASTER_VERSION MASTER_IP MACHINE_TYPE NODE_VERSION NUM_NODES STATUS<br \/>\nsnapshot-demo europe-west1-b 1.8.3-gke.0 35.205.77.138 n1-standard-1 1.8.3-gke.0 3 RUNNING<\/p>\n<p>Snapshotting requires two extra resources, VolumeSnapshot and VolumeSnapshotData. For an overview of the lifecycle of these two resources, take a look at the <a href=\"https:\/\/github.com\/kubernetes-incubator\/external-storage\/blob\/master\/snapshot\/doc\/user-guide.md\">user guide<\/a> in the project itself. We will look at the functionality of each of these resources further down the page, but the first step is to register them with the API Server. 
This is done using <a href=\"https:\/\/kubernetes.io\/docs\/concepts\/api-extension\/custom-resources\/#customresourcedefinitions\">CustomResourceDefinitions<\/a>. snapshot-controller will create a CustomResourceDefinition for each of VolumeSnapshot and VolumeSnapshotData when it starts up, so some of the work is taken care of for us. snapshot-controller will also watch for VolumeSnapshot resources and take snapshots of the volumes they reference. To allow us to restore our snapshots we will deploy snapshot-provisioner as well.<\/p>\n<p>apiVersion: v1<br \/>\nkind: ServiceAccount<br \/>\nmetadata:<br \/>\n  name: snapshot-controller-runner<br \/>\n  namespace: kube-system<br \/>\n---<br \/>\nkind: ClusterRole<br \/>\napiVersion: rbac.authorization.k8s.io\/v1<br \/>\nmetadata:<br \/>\n  name: snapshot-controller-role<br \/>\nrules:<br \/>\n- apiGroups: [\"\"]<br \/>\n  resources: [\"persistentvolumes\"]<br \/>\n  verbs: [\"get\", \"list\", \"watch\", \"create\", \"delete\"]<br \/>\n- apiGroups: [\"\"]<br \/>\n  resources: [\"persistentvolumeclaims\"]<br \/>\n  verbs: [\"get\", \"list\", \"watch\", \"update\"]<br \/>\n- apiGroups: [\"storage.k8s.io\"]<br \/>\n  resources: [\"storageclasses\"]<br \/>\n  verbs: [\"get\", \"list\", \"watch\"]<br \/>\n- apiGroups: [\"\"]<br \/>\n  resources: [\"events\"]<br \/>\n  verbs: [\"list\", \"watch\", \"create\", \"update\", \"patch\"]<br \/>\n- apiGroups: [\"apiextensions.k8s.io\"]<br \/>\n  resources: [\"customresourcedefinitions\"]<br \/>\n  verbs: [\"create\", \"list\", \"watch\", \"delete\"]<br \/>\n- apiGroups: [\"volumesnapshot.external-storage.k8s.io\"]<br \/>\n  resources: [\"volumesnapshots\"]<br \/>\n  verbs: [\"get\", \"list\", \"watch\", \"create\", \"update\", \"patch\", \"delete\"]<br \/>\n- apiGroups: [\"volumesnapshot.external-storage.k8s.io\"]<br \/>\n  resources: [\"volumesnapshotdatas\"]<br \/>\n  verbs: [\"get\", \"list\", \"watch\", \"create\", \"update\", \"patch\", \"delete\"]<br \/>\n---<br \/>\napiVersion: rbac.authorization.k8s.io\/v1<br \/>\nkind: ClusterRoleBinding<br \/>\nmetadata:<br \/>\n  name: snapshot-controller<br \/>\nroleRef:<br \/>\n  apiGroup: rbac.authorization.k8s.io<br \/>\n  kind: ClusterRole<br \/>\n  name: snapshot-controller-role<br \/>\nsubjects:<br \/>\n- kind: ServiceAccount<br \/>\n  name: snapshot-controller-runner<br \/>\n  namespace: kube-system<br \/>\n---<br \/>\napiVersion: apps\/v1beta1<br \/>\nkind: Deployment<br \/>\nmetadata:<br \/>\n  name: snapshot-controller<br \/>\n  namespace: kube-system<br \/>\nspec:<br \/>\n  replicas: 1<br \/>\n  strategy:<br \/>\n    type: Recreate<br \/>\n  template:<br \/>\n    metadata:<br \/>\n      labels:<br \/>\n        app: snapshot-controller<br \/>\n    spec:<br \/>\n      serviceAccountName: snapshot-controller-runner<br \/>\n      containers:<br \/>\n      - name: snapshot-controller<br \/>\n        image: dippynark\/snapshot-controller<br \/>\n        imagePullPolicy: Always<br \/>\n        args:<br \/>\n        - -cloudprovider=gce<br \/>\n      - name: snapshot-provisioner<br \/>\n        image: dippynark\/snapshot-provisioner<br \/>\n        imagePullPolicy: Always<br \/>\n        args:<br \/>\n        - -cloudprovider=gce<\/p>\n<p>In this case we have specified -cloudprovider=gce, but you can also use aws or openstack depending on your environment. For these other cloud providers there may be other parameters you need to set to configure the necessary authorisation. 
Examples of how to do this can be found <a href=\"https:\/\/github.com\/kubernetes-incubator\/external-storage\/tree\/master\/snapshot\/deploy\/kubernetes\">here<\/a>. hostPath is enabled by default, but it requires snapshot-controller and snapshot-provisioner to run on the same node as the hostPath volume that you want to snapshot and restore, and it should only be used on <a href=\"https:\/\/github.com\/kubernetes-incubator\/external-storage\/blob\/master\/snapshot\/pkg\/apis\/crd\/v1\/types.go#L227-L231\">single node development clusters for testing purposes<\/a>. For an example of how to deploy snapshot-controller and snapshot-provisioner to take and restore hostPath volume snapshots for a particular directory, see <a href=\"https:\/\/github.com\/kubernetes-incubator\/external-storage\/tree\/master\/snapshot\/deploy\/kubernetes\/hostpath\">here<\/a>. For a walkthrough of taking and restoring a hostPath volume snapshot see <a href=\"https:\/\/kubernetes.io\/docs\/concepts\/workloads\/controllers\/cron-jobs\/\">here<\/a>.<\/p>\n<p>We have also defined a new ServiceAccount to which we have bound a custom <a href=\"https:\/\/kubernetes.io\/docs\/admin\/authorization\/rbac\/\">ClusterRole<\/a>. This is only needed for <a href=\"https:\/\/kubernetes.io\/docs\/admin\/authorization\/rbac\/\">RBAC<\/a> enabled clusters. If you have not enabled RBAC in your cluster, you can ignore the ServiceAccount, ClusterRole and ClusterRoleBinding and remove the serviceAccountName field from the snapshot-controller Deployment. If you have enabled RBAC in your cluster, notice that we have authorised the ServiceAccount to create, list, watch and delete CustomResourceDefinitions. This is so that snapshot-controller can set them up for our two new resources. Since snapshot-controller only needs these CustomResourceDefinition permissions temporarily on startup, it would be better to remove them and make administrators create the two CustomResourceDefinitions manually. 
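<\/p>\n<p>If you would rather create the two CustomResourceDefinitions yourself, a minimal sketch could look like the following. This manifest is reconstructed from the resource names and scopes that snapshot-controller registers; the authoritative definitions live in the external-storage repository, so treat the field values here as assumptions.<\/p>

```yaml
# Sketch only: values inferred from the CRDs snapshot-controller creates.
apiVersion: apiextensions.k8s.io\/v1beta1
kind: CustomResourceDefinition
metadata:
  name: volumesnapshots.volumesnapshot.external-storage.k8s.io
spec:
  group: volumesnapshot.external-storage.k8s.io
  version: v1
  scope: Namespaced
  names:
    plural: volumesnapshots
    kind: VolumeSnapshot
---
apiVersion: apiextensions.k8s.io\/v1beta1
kind: CustomResourceDefinition
metadata:
  name: volumesnapshotdatas.volumesnapshot.external-storage.k8s.io
spec:
  group: volumesnapshot.external-storage.k8s.io
  version: v1
  scope: Cluster
  names:
    plural: volumesnapshotdatas
    kind: VolumeSnapshotData
```

\n<p>With these applied up front, the customresourcedefinitions permissions could be dropped from the ClusterRole. <p>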
Once snapshot-controller is running, you will be able to see the created CustomResourceDefinitions.<\/p>\n<p>$ kubectl get crd<br \/>\nNAME AGE<br \/>\nvolumesnapshotdatas.volumesnapshot.external-storage.k8s.io 1m<br \/>\nvolumesnapshots.volumesnapshot.external-storage.k8s.io 1m<\/p>\n<p>To see the full definitions for these resources you can run kubectl get crd -o yaml. Note that VolumeSnapshot specifies a scope of Namespaced and VolumeSnapshotData is non-namespaced. We can now interact with our new resource types.<\/p>\n<p>$ kubectl get volumesnapshot,volumesnapshotdata<br \/>\nNo resources found.<\/p>\n<p>Looking at the logs for both snapshot containers we can see that they are working correctly.<\/p>\n<p>$ kubectl get pods -n kube-system<br \/>\nNAME READY STATUS RESTARTS AGE<br \/>\n&#8230;<br \/>\nsnapshot-controller-66f7c56c4-h7cpf 2\/2 Running 0 1m<br \/>\n$ kubectl logs snapshot-controller-66f7c56c4-h7cpf -n kube-system -c snapshot-controller<br \/>\nI1104 11:38:53.551581 1 gce.go:348] Using existing Token Source &amp;oauth2.reuseTokenSource, mu:sync.Mutex, t:(*oauth2.Token)(nil)}<br \/>\nI1104 11:38:53.553988 1 snapshot-controller.go:127] Register cloudprovider %sgce-pd<br \/>\nI1104 11:38:53.553998 1 snapshot-controller.go:93] starting snapshot controller<br \/>\nI1104 11:38:53.554050 1 snapshot-controller.go:168] Starting snapshot controller<br \/>\n$ kubectl logs snapshot-controller-66f7c56c4-h7cpf -n kube-system -c snapshot-provisioner<br \/>\nI1104 11:38:57.565797 1 gce.go:348] Using existing Token Source &amp;oauth2.reuseTokenSource, mu:sync.Mutex, t:(*oauth2.Token)(nil)}<br \/>\nI1104 11:38:57.569374 1 snapshot-pv-provisioner.go:284] Register cloudprovider %sgce-pd<br \/>\nI1104 11:38:57.585940 1 snapshot-pv-provisioner.go:267] starting PV provisioner volumesnapshot.external-storage.k8s.io\/snapshot-promoter<br \/>\nI1104 11:38:57.586017 1 controller.go:407] Starting provisioner controller be8211fa-c154-11e7-a1ac-0a580a200004!<\/p>\n<p>Let\u2019s 
now create the PersistentVolumeClaim we are going to snapshot.<\/p>\n<p>apiVersion: v1<br \/>\nkind: PersistentVolumeClaim<br \/>\nmetadata:<br \/>\n  name: gce-pvc<br \/>\nspec:<br \/>\n  accessModes:<br \/>\n  - ReadWriteOnce<br \/>\n  resources:<br \/>\n    requests:<br \/>\n      storage: 3Gi<\/p>\n<p>Note that this is using the default StorageClass on GKE which will dynamically provision a GCE PD PersistentVolume. Let\u2019s now create a Pod that will write some data to the volume. We will take a snapshot of the data and restore it later.<\/p>\n<p>apiVersion: v1<br \/>\nkind: Pod<br \/>\nmetadata:<br \/>\n  name: busybox<br \/>\nspec:<br \/>\n  restartPolicy: Never<br \/>\n  containers:<br \/>\n  - name: busybox<br \/>\n    image: busybox<br \/>\n    command:<br \/>\n    - \"\/bin\/sh\"<br \/>\n    - \"-c\"<br \/>\n    - \"while true; do date &gt;&gt; \/tmp\/pod-out.txt; sleep 1; done\"<br \/>\n    volumeMounts:<br \/>\n    - name: volume<br \/>\n      mountPath: \/tmp<br \/>\n  volumes:<br \/>\n  - name: volume<br \/>\n    persistentVolumeClaim:<br \/>\n      claimName: gce-pvc<\/p>\n<p>The Pod appends the current date and time to a file stored on our GCE PD every second. We can use cat to inspect the file.<\/p>\n<p>$ kubectl exec -it busybox cat \/tmp\/pod-out.txt<br \/>\nSat Nov 4 11:41:30 UTC 2017<br \/>\nSat Nov 4 11:41:31 UTC 2017<br \/>\nSat Nov 4 11:41:32 UTC 2017<br \/>\nSat Nov 4 11:41:33 UTC 2017<br \/>\nSat Nov 4 11:41:34 UTC 2017<br \/>\nSat Nov 4 11:41:35 UTC 2017<br \/>\n$<\/p>\n<p>We are now ready to take a snapshot. Once we create the VolumeSnapshot resource below, snapshot-controller will attempt to create the actual snapshot by interacting with the configured cloud provider (GCE in our case). If successful, the VolumeSnapshot resource is bound to a corresponding VolumeSnapshotData resource. 
The VolumeSnapshot spec references the PersistentVolumeClaim containing the data we want to snapshot.<\/p>\n<p>apiVersion: volumesnapshot.external-storage.k8s.io\/v1<br \/>\nkind: VolumeSnapshot<br \/>\nmetadata:<br \/>\n  name: snapshot-demo<br \/>\nspec:<br \/>\n  persistentVolumeClaimName: gce-pvc<\/p>\n<p>$ kubectl create -f snapshot.yaml<br \/>\nvolumesnapshot \"snapshot-demo\" created<br \/>\n$ kubectl get volumesnapshot<br \/>\nNAME AGE<br \/>\nsnapshot-demo 18s<br \/>\n$ kubectl describe volumesnapshot snapshot-demo<br \/>\nName: snapshot-demo<br \/>\nNamespace: default<br \/>\nLabels: SnapshotMetadata-PVName=pvc-048bd424-c155-11e7-8910-42010a840164<br \/>\nSnapshotMetadata-Timestamp=1509796696232920051<br \/>\nAnnotations: &lt;none&gt;<br \/>\nAPI Version: volumesnapshot.external-storage.k8s.io\/v1<br \/>\nKind: VolumeSnapshot<br \/>\nMetadata:<br \/>\nCluster Name:<br \/>\nCreation Timestamp: 2017-11-04T11:58:16Z<br \/>\nGeneration: 0<br \/>\nResource Version: 2348<br \/>\nSelf Link: \/apis\/volumesnapshot.external-storage.k8s.io\/v1\/namespaces\/default\/volumesnapshots\/snapshot-demo<br \/>\nUID: 71256cf8-c157-11e7-8910-42010a840164<br \/>\nSpec:<br \/>\nPersistent Volume Claim Name: gce-pvc<br \/>\nSnapshot Data Name: k8s-volume-snapshot-7193cceb-c157-11e7-8e59-0a580a200004<br \/>\nStatus:<br \/>\nConditions:<br \/>\nLast Transition Time: 2017-11-04T11:58:22Z<br \/>\nMessage: Snapshot is uploading<br \/>\nReason:<br \/>\nStatus: True<br \/>\nType: Pending<br \/>\nLast Transition Time: 2017-11-04T11:58:34Z<br \/>\nMessage: Snapshot created successfully and it is ready<br \/>\nReason:<br \/>\nStatus: True<br \/>\nType: Ready<br \/>\nCreation Timestamp: &lt;nil&gt;<br \/>\nEvents: &lt;none&gt;<\/p>\n<p>Notice the Snapshot Data Name field. This is a reference to the VolumeSnapshotData resource that was created by snapshot-controller when we created our VolumeSnapshot. 
The conditions towards the bottom of the output above show that our snapshot was created successfully. We can check snapshot-controller\u2019s logs to verify this.<\/p>\n<p>$ kubectl logs snapshot-controller-66f7c56c4-ptjmb -n kube-system -c snapshot-controller<br \/>\n&#8230;<br \/>\nI1104 11:58:34.245845 1 snapshotter.go:239] waitForSnapshot: Snapshot default\/snapshot-demo created successfully. Adding it to Actual State of World.<br \/>\nI1104 11:58:34.245853 1 actual_state_of_world.go:74] Adding new snapshot to actual state of world: default\/snapshot-demo<br \/>\nI1104 11:58:34.245860 1 snapshotter.go:516] createSnapshot: Snapshot default\/snapshot-demo created successfully.<\/p>\n<p>We can also view the snapshot in GCE.<\/p>\n<p><img decoding=\"async\" src=\"https:\/\/blog.jetstack.io\/blog\/kubernetes-1-8-hidden-gems-snapshotting\/gce-snapshot.png\" alt=\"gce snapshot\" \/><\/p>\n<p>We can now look at the corresponding VolumeSnapshotData resource that was created.<\/p>\n<p>$ kubectl get volumesnapshotdata<br \/>\nNAME AGE<br \/>\nk8s-volume-snapshot-7193cceb-c157-11e7-8e59-0a580a200004 3m<br \/>\n$ kubectl describe volumesnapshotdata k8s-volume-snapshot-7193cceb-c157-11e7-8e59-0a580a200004<br \/>\nName: k8s-volume-snapshot-7193cceb-c157-11e7-8e59-0a580a200004<br \/>\nNamespace:<br \/>\nLabels: &lt;none&gt;<br \/>\nAnnotations: &lt;none&gt;<br \/>\nAPI Version: volumesnapshot.external-storage.k8s.io\/v1<br \/>\nKind: VolumeSnapshotData<br \/>\nMetadata:<br \/>\nCluster Name:<br \/>\nCreation Timestamp: 2017-11-04T11:58:17Z<br \/>\nDeletion Grace Period Seconds: &lt;nil&gt;<br \/>\nDeletion Timestamp: &lt;nil&gt;<br \/>\nResource Version: 2320<br \/>\nSelf Link: \/apis\/volumesnapshot.external-storage.k8s.io\/v1\/k8s-volume-snapshot-7193cceb-c157-11e7-8e59-0a580a200004<br \/>\nUID: 71a28267-c157-11e7-8910-42010a840164<br \/>\nSpec:<br \/>\nGce Persistent Disk:<br \/>\nSnapshot Id: pvc-048bd424-c155-11e7-8910-42010a8401641509796696237472729<br \/>\nPersistent Volume Ref:<br \/>\nKind: PersistentVolume<br \/>\nName: pvc-048bd424-c155-11e7-8910-42010a840164<br \/>\nVolume Snapshot Ref:<br \/>\nKind: VolumeSnapshot<br \/>\nName: default\/snapshot-demo<br \/>\nStatus:<br \/>\nConditions:<br \/>\nLast Transition Time: &lt;nil&gt;<br \/>\nMessage: Snapshot creation is triggered<br \/>\nReason:<br \/>\nStatus: Unknown<br \/>\nType: Pending<br \/>\nCreation Timestamp: &lt;nil&gt;<br \/>\nEvents: &lt;none&gt;<\/p>\n<p>Notice the reference to the GCE PD snapshot. It also references the VolumeSnapshot resource we created above and the PersistentVolume that the snapshot has been taken from. This was the PersistentVolume that was dynamically provisioned when we created our gce-pvc PersistentVolumeClaim earlier. One thing to point out here is that snapshot-controller does not pause any applications that are interacting with the volume before the snapshot is taken, so the data may be inconsistent unless you handle this manually. This will be less of a problem for some applications than others.<\/p>\n<p>The following diagram shows how the various resources discussed above reference each other. We can see how a VolumeSnapshot binds to a VolumeSnapshotData resource. This is analogous to the binding between PersistentVolumeClaims and PersistentVolumes. We can also see that VolumeSnapshotData references the actual snapshot taken by the volume provider, in the same way that a PersistentVolume references the physical volume backing it.<\/p>\n<p><img decoding=\"async\" src=\"https:\/\/blog.jetstack.io\/blog\/kubernetes-1-8-hidden-gems-snapshotting\/relationship-diagram.png\" alt=\"relationship diagram\" \/><\/p>\n<p>Now that we have created a snapshot, we can restore it. To do this we need to create a special StorageClass implemented by snapshot-provisioner. We will then create a PersistentVolumeClaim referencing this StorageClass. 
An annotation on the PersistentVolumeClaim tells snapshot-provisioner where to find the information it needs to negotiate with the cloud provider to restore the snapshot. The StorageClass can be defined as follows.<\/p>\n<p>kind: StorageClass<br \/>\napiVersion: storage.k8s.io\/v1<br \/>\nmetadata:<br \/>\n  name: snapshot-promoter<br \/>\nprovisioner: volumesnapshot.external-storage.k8s.io\/snapshot-promoter<br \/>\nparameters:<br \/>\n  type: pd-standard<\/p>\n<p>Note the provisioner field, which marks snapshot-provisioner as responsible for implementing this StorageClass. We can now create a PersistentVolumeClaim that will use the StorageClass to dynamically provision a PersistentVolume that contains the contents of our snapshot.<\/p>\n<p>apiVersion: v1<br \/>\nkind: PersistentVolumeClaim<br \/>\nmetadata:<br \/>\n  name: busybox-snapshot<br \/>\n  annotations:<br \/>\n    snapshot.alpha.kubernetes.io\/snapshot: snapshot-demo<br \/>\nspec:<br \/>\n  accessModes:<br \/>\n  - ReadWriteOnce<br \/>\n  resources:<br \/>\n    requests:<br \/>\n      storage: 3Gi<br \/>\n  storageClassName: snapshot-promoter<\/p>\n<p>Note the snapshot.alpha.kubernetes.io\/snapshot annotation, which refers to the VolumeSnapshot we want to use. snapshot-provisioner can use this resource to get all the information it needs to perform the restore. We have also specified snapshot-promoter as the storageClassName, which tells snapshot-provisioner that it needs to act. snapshot-provisioner will provision a PersistentVolume containing the contents of the snapshot-demo snapshot. 
We can see from the STORAGECLASS columns below that the snapshot-promoter StorageClass has been used.<\/p>\n<p>$ kubectl get pvc<br \/>\nNAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE<br \/>\n&#8230;<br \/>\nbusybox-snapshot Bound pvc-8eed96e4-c157-11e7-8910-42010a840164 3Gi RWO snapshot-promoter 11s<br \/>\n&#8230;<br \/>\n$ kubectl get pv pvc-8eed96e4-c157-11e7-8910-42010a840164<br \/>\nNAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE<br \/>\npvc-8eed96e4-c157-11e7-8910-42010a840164 3Gi RWO Delete Bound default\/busybox-snapshot snapshot-promoter 21s<\/p>\n<p>Checking the snapshot-provisioner logs we can see that the snapshot was restored successfully.<\/p>\n<p>$ kubectl logs snapshot-controller-66f7c56c4-ptjmb -n kube-system -c snapshot-provisioner<br \/>\n&#8230;<br \/>\nProvisioning disk pvc-8eed96e4-c157-11e7-8910-42010a840164 from snapshot pvc-048bd424-c155-11e7-8910-42010a8401641509796696237472729, zone europe-west1-b requestGB 3 tags map[source:Created from snapshot pvc-048bd424-c155-11e7-8910-42010a8401641509796696237472729 -dynamic-pvc-8eed96e4-c157-11e7-8910-42010a840164]<br \/>\n&#8230;<br \/>\nI1104 11:59:10.563990 1 controller.go:813] volume \"pvc-8eed96e4-c157-11e7-8910-42010a840164\" for claim \"default\/busybox-snapshot\" created<br \/>\nI1104 11:59:10.987620 1 controller.go:830] volume \"pvc-8eed96e4-c157-11e7-8910-42010a840164\" for claim \"default\/busybox-snapshot\" saved<br \/>\nI1104 11:59:10.987740 1 controller.go:866] volume \"pvc-8eed96e4-c157-11e7-8910-42010a840164\" provisioned for claim \"default\/busybox-snapshot\"<\/p>\n<p>Let\u2019s finally mount the busybox-snapshot PersistentVolumeClaim into a Pod to see that the snapshot was restored properly.<\/p>\n<p>apiVersion: v1<br \/>\nkind: Pod<br \/>\nmetadata:<br \/>\n  name: busybox-snapshot<br \/>\nspec:<br \/>\n  restartPolicy: Never<br \/>\n  containers:<br \/>\n  - name: busybox<br \/>\n    image: busybox<br \/>\n    command:<br \/>\n    - \"\/bin\/sh\"<br \/>\n    - \"-c\"<br \/>\n    - \"while true; do sleep 1; done\"<br \/>\n    volumeMounts:<br \/>\n    - name: volume<br \/>\n      mountPath: \/tmp<br \/>\n  volumes:<br \/>\n  - name: volume<br \/>\n    persistentVolumeClaim:<br \/>\n      claimName: busybox-snapshot<\/p>\n<p>We can use cat to see the data written to the volume by the busybox pod.<\/p>\n<p>$ kubectl exec -it busybox-snapshot cat \/tmp\/pod-out.txt<br \/>\nSat Nov 4 11:41:30 UTC 2017<br \/>\nSat Nov 4 11:41:31 UTC 2017<br \/>\nSat Nov 4 11:41:32 UTC 2017<br \/>\nSat Nov 4 11:41:33 UTC 2017<br \/>\nSat Nov 4 11:41:34 UTC 2017<br \/>\nSat Nov 4 11:41:35 UTC 2017<br \/>\n&#8230;<br \/>\nSat Nov 4 11:58:13 UTC 2017<br \/>\nSat Nov 4 11:58:14 UTC 2017<br \/>\nSat Nov 4 11:58:15 UTC 2017<br \/>\n$<\/p>\n<p>Notice that since the data is coming from a snapshot, the final date does not change if we run cat repeatedly.<\/p>\n<p>$ kubectl exec -it busybox-snapshot cat \/tmp\/pod-out.txt<br \/>\n&#8230;<br \/>\nSat Nov 4 11:58:15 UTC 2017<br \/>\n$<\/p>\n<p>Comparing the final date to the creation time of the snapshot in GCE, we can see that the snapshot took about 2 seconds to complete.<\/p>\n<p>We can delete the VolumeSnapshot resource, which will also delete the corresponding VolumeSnapshotData resource and the snapshot in GCE. This will not affect any PersistentVolumeClaims or PersistentVolumes we have already provisioned using the snapshot. 
Conversely, deleting any PersistentVolumeClaims or PersistentVolumes that have been used to take a snapshot or have been provisioned using a snapshot will not delete the snapshot itself from GCE. However, deleting the PersistentVolumeClaim or PersistentVolume that was used to take a snapshot will prevent you from restoring any further snapshots using snapshot-provisioner.<\/p>\n<p>$ kubectl delete volumesnapshot snapshot-demo<br \/>\nvolumesnapshot \"snapshot-demo\" deleted<\/p>\n<p>We should also delete the busybox Pods so they do not keep checking the date forever.<\/p>\n<p>$ kubectl delete pods busybox busybox-snapshot<br \/>\npod \"busybox\" deleted<br \/>\npod \"busybox-snapshot\" deleted<\/p>\n<p>For good measure we will also clean up the PersistentVolumeClaims and the cluster itself.<\/p>\n<p>$ kubectl delete pvc busybox-snapshot gce-pvc<br \/>\npersistentvolumeclaim \"busybox-snapshot\" deleted<br \/>\npersistentvolumeclaim \"gce-pvc\" deleted<br \/>\n$ yes | gcloud container clusters delete snapshot-demo --async<br \/>\nThe following clusters will be deleted.<br \/>\n- [snapshot-demo] in [europe-west1-b]<\/p>\n<p>Do you want to continue (Y\/n)?<br \/>\n$<\/p>\n<p>As usual, any GCE PDs you provisioned will not be deleted by deleting the cluster, so make sure to clear those up too if you do not want to be charged.<\/p>\n<p>Although this project is in the early stages, you can instantly see its potential from this simple example, and we will hopefully see support for other volume providers very soon as it matures. Together with <a href=\"https:\/\/kubernetes.io\/docs\/concepts\/workloads\/controllers\/cron-jobs\/\">CronJobs<\/a>, we now have the primitives we need within Kubernetes to perform automated backups of our data. 
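<\/p>\n<p>For example, a CronJob could create a timestamped VolumeSnapshot of the gce-pvc claim every night. The sketch below is untested; the image and ServiceAccount names are hypothetical, and the ServiceAccount would need RBAC permission to create volumesnapshots resources:<\/p>

```yaml
apiVersion: batch\/v1beta1
kind: CronJob
metadata:
  name: gce-pvc-backup
spec:
  schedule: \"0 2 * * *\"  # every night at 02:00
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: snapshot-backup  # hypothetical; must be allowed to create volumesnapshots
          restartPolicy: OnFailure
          containers:
          - name: backup
            image: example\/kubectl  # hypothetical; any image that contains kubectl
            command:
            - \/bin\/sh
            - -c
            - |
              cat &lt;&lt;EOF | kubectl create -f -
              apiVersion: volumesnapshot.external-storage.k8s.io\/v1
              kind: VolumeSnapshot
              metadata:
                name: gce-pvc-$(date +%s)
              spec:
                persistentVolumeClaimName: gce-pvc
              EOF
```

\n<p>Each run then leaves behind a VolumeSnapshot (and bound VolumeSnapshotData) that can be restored through the snapshot-promoter StorageClass as shown above. <p>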
For submitting any issues or project contributions, the best place to start is the external-storage <a href=\"https:\/\/github.com\/kubernetes-incubator\/external-storage\/issues\">issues tab<\/a>.<\/p>\n<p><a href=\"https:\/\/blog.jetstack.io\/blog\/volume-snapshotting\/\" target=\"_blank\" rel=\"noopener\">Source<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>23\/Nov 2017 By Luke Addison In this Hidden Gems blog post, Luke looks at the new volume snapshotting functionality in Kubernetes and how cluster administrators can use this feature to take and restore snapshots of their data. In Kubernetes 1.8, volume snapshotting has been released as a prototype. It is external to core Kubernetes whilst &hellip; <\/p>\n<p class=\"link-more\"><a href=\"https:\/\/www.appservgrid.com\/paw93\/index.php\/2018\/10\/16\/kubernetes-1-8-hidden-gems-volume-snapshotting\/\" class=\"more-link\">Continue reading<span class=\"screen-reader-text\"> &#8220;Kubernetes 1.8: Hidden Gems &#8211; Volume Snapshotting 
\/&#8221;<\/span><\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[3],"tags":[],"class_list":["post-381","post","type-post","status-publish","format-standard","hentry","category-kubernetes"],"_links":{"self":[{"href":"https:\/\/www.appservgrid.com\/paw93\/index.php\/wp-json\/wp\/v2\/posts\/381","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.appservgrid.com\/paw93\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.appservgrid.com\/paw93\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.appservgrid.com\/paw93\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.appservgrid.com\/paw93\/index.php\/wp-json\/wp\/v2\/comments?post=381"}],"version-history":[{"count":1,"href":"https:\/\/www.appservgrid.com\/paw93\/index.php\/wp-json\/wp\/v2\/posts\/381\/revisions"}],"predecessor-version":[{"id":528,"href":"https:\/\/www.appservgrid.com\/paw93\/index.php\/wp-json\/wp\/v2\/posts\/381\/revisions\/528"}],"wp:attachment":[{"href":"https:\/\/www.appservgrid.com\/paw93\/index.php\/wp-json\/wp\/v2\/media?parent=381"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.appservgrid.com\/paw93\/index.php\/wp-json\/wp\/v2\/categories?post=381"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.appservgrid.com\/paw93\/index.php\/wp-json\/wp\/v2\/tags?post=381"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}