Verifying CSI configuration
Summary
Creation of CSI snapshots can fail for various reasons, such as the external snapshotter not being installed or the CSI driver running into issues. For example, you may see these error messages under the “PV Details” tab:
Awaiting VolumeSnapshot reconciliation
VSC <snapcontent-*> lacks SnapshotHandle
This generally points either to a problem with the CSI driver installation/configuration or to the fact that a volume snapshot class is not explicitly selected in CloudCasa. Before proceeding further, note that some CSI drivers, such as Ceph, HPE, and Nutanix, will not work with the volume snapshot classes automatically created by CloudCasa because they require certain parameters to be set. For such drivers, you need to explicitly select a volume snapshot class as described at Volume Snapshot Classes. If you still see snapshot errors after proper volume snapshot configuration, follow the instructions below to verify that CSI is properly configured on the cluster.
Note
This article assumes that the user has already set up a PVC of CSI type.
Step-by-Step
Verify that the “v1” version of the snapshot CRDs is available on the cluster. You can check this by running the following command:
$ kubectl api-resources | grep snapshot.storage.k8s.io
You should see output similar to the following. Note the “v1” API version:
volumesnapshotclasses    vsclass,vsclasses   snapshot.storage.k8s.io/v1   false   VolumeSnapshotClass
volumesnapshotcontents   vsc,vscs            snapshot.storage.k8s.io/v1   false   VolumeSnapshotContent
volumesnapshots          vs                  snapshot.storage.k8s.io/v1   true    VolumeSnapshot
If you don’t find the CRDs listed, follow the documentation of your CSI driver and install all necessary components.
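As a reference, the snapshot CRDs and the common snapshot controller can usually be installed from the kubernetes-csi/external-snapshotter repository. The release tag and paths below are examples and may change between releases, so check that repository’s README for current instructions:

```shell
# Fetch the external-snapshotter repository (the release tag is an example).
git clone --depth 1 --branch v6.2.1 https://github.com/kubernetes-csi/external-snapshotter.git
cd external-snapshotter

# Install the v1 volume snapshot CRDs.
kubectl kustomize client/config/crd | kubectl create -f -

# Install the common snapshot controller (deployed in kube-system by default).
kubectl -n kube-system kustomize deploy/kubernetes/snapshot-controller | kubectl create -f -
```

Re-run the `kubectl api-resources` check above afterwards to confirm the CRDs are registered.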
Identify a PVC you want to snapshot. Ensure that it is using CSI and identify the CSI driver. This can be done by checking the PV associated with the PVC. Here is an example (only partial output is shown for brevity).
Get PVC:
$ kubectl get pvc -A
NAMESPACE     NAME              STATUS   VOLUME
testapp-csi   testapp-storage   Bound    pvc-f2e4f9ec-13b7-4251-8eb0-8e3f28a8703d
Get details of the corresponding PV:
$ kubectl describe pv pvc-f2e4f9ec-13b7-4251-8eb0-8e3f28a8703d
Name:            pvc-f2e4f9ec-13b7-4251-8eb0-8e3f28a8703d
StorageClass:    csi-hostpath-sc
Status:          Bound
Claim:           testapp-csi/testapp-storage
...
Source:
    Type:    CSI (a Container Storage Interface (CSI) volume source)
    Driver:  hostpath.csi.k8s.io
...
From PV details, you can see that the type is CSI. Also note the CSI driver.
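Alternatively, the driver name can be read directly from the PV with a jsonpath query; an empty result means the volume is not CSI-backed. The PV name below is the one from the example above:

```shell
# Print the CSI driver backing the PV (prints nothing if it is not a CSI volume).
kubectl get pv pvc-f2e4f9ec-13b7-4251-8eb0-8e3f28a8703d \
  -o jsonpath='{.spec.csi.driver}'
```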
Make sure that there is a VolumeSnapshotClass for the CSI driver. If not, create one. Here is a sample:

apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: my-snapshotclass
driver: <CSI_DRIVER>
deletionPolicy: Delete
Note that some CSI drivers require custom properties to be set in a volume snapshot class. Please refer to the documentation of your CSI driver. If such configuration is required, pass the values in the parameters field. Here is an example:

parameters:
  param1: param1-value
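As a concrete illustration, a Ceph RBD (ceph-csi) snapshot class typically needs the cluster ID and a snapshotter secret. All values below are placeholders; consult the ceph-csi documentation for the exact parameters required by your deployment:

```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: ceph-rbd-snapclass
driver: rbd.csi.ceph.com
deletionPolicy: Delete
parameters:
  # Placeholder values -- replace with your Ceph cluster details.
  clusterID: <CEPH_CLUSTER_ID>
  csi.storage.k8s.io/snapshotter-secret-name: <SECRET_NAME>
  csi.storage.k8s.io/snapshotter-secret-namespace: <SECRET_NAMESPACE>
```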
Create a VolumeSnapshot resource in the same namespace as the original PVC. This acts as the request to create a snapshot of the PVC. Here is a sample spec:

apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: my-snapshot
spec:
  volumeSnapshotClassName: my-snapshotclass
  source:
    persistentVolumeClaimName: <PVC_NAME>
If CSI is properly configured, you should see the “readyToUse” field set to true in the VolumeSnapshot resource:
$ kubectl -n <PVC_NAMESPACE> get volumesnapshot # Showing only relevant output
NAME READYTOUSE SOURCEPVC SNAPSHOTCONTENT
my-snapshot true testapp-storage snapcontent-3dddf8c5-c018-45a4-999c-b4f001f30555
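If you are scripting this check, recent kubectl versions (v1.23 and later support jsonpath conditions in `kubectl wait`) can block until the snapshot becomes ready; the timeout value here is arbitrary:

```shell
# Wait until the snapshot's readyToUse status turns true; fails after 2 minutes.
kubectl -n <PVC_NAMESPACE> wait volumesnapshot/my-snapshot \
  --for=jsonpath='{.status.readyToUse}'=true --timeout=120s
```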
If you don’t see a VolumeSnapshotContent in the output, the external snapshotter is either not installed or not working as expected. If the VolumeSnapshotContent is created but “readyToUse” is not set, there may be an issue with the CSI driver. Check the logs of the CSI driver and consult the driver’s documentation.
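The deployment and container names below are common defaults and may differ in your installation; adjust them to match your cluster:

```shell
# Logs of the common snapshot controller (often deployed in kube-system).
kubectl -n kube-system logs deploy/snapshot-controller

# Logs of the csi-snapshotter sidecar in the CSI driver's controller pod.
# Replace the namespace and pod name with those of your driver.
kubectl -n <CSI_DRIVER_NAMESPACE> logs <CSI_CONTROLLER_POD> -c csi-snapshotter
```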
Some possible reasons for snapshots not working:
External snapshotter and/or CSI driver are not installed.
External snapshotter and/or CSI driver cannot handle the version of volume snapshot CRDs.
CSI driver requires some parameters in Volume snapshot class resource but they are missing.
Finally, some CSI drivers don’t implement snapshot functionality at all, so confirm from the driver’s documentation that this is not the case.
If the snapshot is successfully created, verify that a PVC can be created from it. Here is a sample spec:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: pvc-from-snapshot
spec:
storageClassName: <STORAGE-CLASS>
dataSource:
name: my-snapshot
kind: VolumeSnapshot
apiGroup: snapshot.storage.k8s.io
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
Make sure that the storage request size is the same as the “restoreSize” field of the volume snapshot created above.
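The restore size can be read directly from the snapshot’s status, for example:

```shell
# Print the minimum size a PVC restored from this snapshot must request.
kubectl -n <PVC_NAMESPACE> get volumesnapshot my-snapshot \
  -o jsonpath='{.status.restoreSize}'
```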
The PVC should now move to “Bound” state unless “volumeBindingMode” in the storage class is set to “WaitForFirstConsumer”. In that case, the PVC will not move to “Bound” until it is mounted by a pod, so start a dummy pod that mounts the PVC. Here is one way of doing it:
apiVersion: v1
kind: Pod
metadata:
name: mypod
spec:
containers:
- name: nginx
image: nginx
volumeMounts:
- mountPath: "/var/www/html"
name: testvol
volumes:
- name: testvol
persistentVolumeClaim:
claimName: pvc-from-snapshot
Note that the cluster must be able to pull the “nginx” image. This spec is only a sample; you can use any pod definition that works in your environment, as long as it mounts the PVC created from the snapshot. After starting the pod, confirm that the PVC moves to “Bound” state and that the pod is running.
Finally, don’t forget to delete all the resources created for testing (the snapshot, the PVC created from it, and the pod).
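Assuming the example names used throughout this article, cleanup could look like this:

```shell
# Delete the test pod first so the PVC is released, then the PVC and the snapshot.
kubectl -n <PVC_NAMESPACE> delete pod mypod
kubectl -n <PVC_NAMESPACE> delete pvc pvc-from-snapshot
kubectl -n <PVC_NAMESPACE> delete volumesnapshot my-snapshot
```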