Backups

The h8lio subscription includes two backup solutions.

Hot Backups

The hot backups rely on Kubernetes Volume Snapshots.

The Volume Snapshots are created by the CSI Snapshotter within the Ceph Cluster (managed by Rook), which also manages your Persistent Volume Claims.

To provision a Persistent Volume Claim from a Volume Snapshot, you have to set the Volume Snapshot as the claim's dataSource (see the manifests below).

🟢 Pros:

  • Volume Snapshot creation and restoration are almost instantaneous
  • the storage used by a Volume Snapshot is included in your subscription (no extra fee)
  • a Volume Snapshot can be used to clone a Persistent Volume Claim (the clone's persistent size is charged)

🔴 Cons:

  • the data of a Volume Snapshot stays in the Ceph Cluster, in the same zone as the original data (see Cold Backups)
  • a Volume Snapshot cannot be restored in a different cluster

You are limited to 10 snapshots per cluster.

You can also directly clone a Persistent Volume Claim using your dashboard or a CSI Volume Cloning manifest.
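
A CSI Volume Cloning manifest is simply a Persistent Volume Claim whose dataSource points at an existing claim instead of a Volume Snapshot. A minimal sketch, using the same placeholder convention as the manifests below (the storage class, access mode and size have to match the source claim, and the clone must live in the same namespace):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: [nameOfTheClone]
  namespace: [namespaceOfTheSourcePersistentVolumeClaim]
spec:
  storageClassName: [sameStorageClassAsTheSourceVolume]
  dataSource:
    name: [nameOfThePersistentVolumeClaimToClone]
    kind: PersistentVolumeClaim
  accessModes:
    - [sameAccessModeAsTheSourceVolume]
  resources:
    requests:
      storage: [sameSizeAsTheSourceVolume]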

Use Cases

  • Before an application upgrade, create Volume Snapshots of the Persistent Volume Claims used by this application so you can quickly restore them in case of trouble
  • Keep a history of data changes
  • Clone Persistent Volume Claims for use by another application environment, for example: create a test database identical to the production one

Manage the Hot Backups

Using the dashboard

  1. Go to your h8lio domain
  2. Select the cluster where the Persistent Volume Claims to snapshot are located
  3. Enter the menu [Cluster Name] > Volumes

hot#1

  4. Click the actions menu located at the end of the line, then the “Snapshot” icon button

hot#2

  5. Click on the line of the snapshotted volume or go to the “Snapshots” tab to see the newly created snapshot

hot#3

  6. To restore the snapshot, click on the actions menu at the end of its line, then click “Restore”. The name of the volume to restore must not already exist.

hot#4

Using the command line

Manifest to create a Volume Snapshot:
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: [nameOfTheSnapshot]
  namespace: [namespaceOfThePersistentVolumeClaim]
spec:
  source:
    persistentVolumeClaimName: [nameOfThePersistentVolumeClaimToSnapshot]
  volumeSnapshotClassName: [block-snapshot|files-snapshot]
  • volumeSnapshotClassName depends on the storage class of the Persistent Volume Claim:
    • *-block-* storage classes should use block-snapshot
    • *-files-* storage classes should use files-snapshot
Manifest to provision a Persistent Volume Claim from a Volume Snapshot:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: restore-pvc
spec:
  storageClassName: [sameStorageClassAsTheSnapshottedVolume]
  dataSource:
    name: [nameOfTheVolumeSnapshot]
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
    - [sameAccessModeAsTheSnapshottedVolume]
  resources:
    requests:
      storage: [sameSizeAsTheSnapshottedVolume]
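
As a worked example (all names here are hypothetical), snapshotting a claim my-db-data that uses a *-block-* storage class, then restoring it into a new claim:

# snapshot of the claim my-db-data
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: my-db-snapshot
  namespace: my-cluster
spec:
  source:
    persistentVolumeClaimName: my-db-data
  volumeSnapshotClassName: block-snapshot
---
# restore into a new claim: my-db-restore must not already exist, and the
# class, access mode and size match those of the snapshotted claim
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-db-restore
  namespace: my-cluster
spec:
  storageClassName: my-block-storage-class   # assumed *-block-* class
  dataSource:
    name: my-db-snapshot
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi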

Cold Backups

The Cold Backup is the safest way to back up and restore your clusters. It’s based on Velero and Restic.

🟢 Pros:

  • allows backing up and restoring an entire cluster: the Kubernetes manifests and the mounted Persistent Volume Claims data. Some restrictions apply:
    • Velero is configured to be non-destructive during a restore (it does not overwrite existing resources)
    • restoring resources in a different cluster can lead to conflicts, for example ingress routes with the same rules (you can use resource exclusion/inclusion during the backup/restore process to avoid this issue)
  • the backups are stored in a different data center in the same geographical location (outside of the Kubernetes and Ceph Cluster)
  • the cold backups are included in your subscription (no extra cost)

🔴 Cons:

  • as the backups are stored outside of the cluster, the backup time can grow with the amount of data to transfer over the network
  • some applications may hold file locks, making those files impossible to back up
  • the pods’ mounted volumes are compressed and sent to the backup location by Restic: very large volumes may not be a good fit for the cold backup system

The native backup system of your applications can be a good (or even the only) alternative given the Cold Backup limitations and performance, for example: S3 mirroring, database replication…

Use Cases

  • Safely backup your clusters and their applications data outside of the Kubernetes Cluster
  • Clone all or part of a cluster
  • Download the backup’s resources to be used in a different Kubernetes environment

Manage the Cold Backups

  1. Go to your h8lio domain
  2. Select the cluster to backup
  3. Enter the menu [Cluster Name] > Backups

cold#1

Instant Backup

Click on the plus button to create an instant backup

cold#2

  • Retention: the number of days you want to keep this backup
  • Labels: filter the resources included in this backup using their Kubernetes labels
  • Excluded/Included resources: Kubernetes resources definition to exclude/include in this backup
  • The volumes included in this backup; click on the size to get the details (see Backup Volume Selection)

Scheduled Backup

Go to the “Schedules” tab and click the plus button

cold#3

  • Frequency: how often this scheduled backup has to be fired: Daily, Weekly or Monthly
  • Hour, Day of the Week or Day of the Month: set the time or date anchor of this schedule (depends on the Frequency field value)
  • the other fields are the same as for an Instant Backup

You are limited to 3 scheduled backups per cluster.

Backup Volume Selection

By default a Cold Backup doesn’t include the mounted Persistent Volume Claims (PVC) data. To select the volumes to back up, you have to add a Velero annotation to the pods which mount the PVCs:

annotations:
  backup.velero.io/backup-volumes: volumeNameA,volumeNameB...

⚠️ if your pod is created by another Kubernetes resource (Deployment, StatefulSet…) you have to set the annotation within the pod template

If you provide a volume list to the annotation, pay attention to separate the volumes’ names only by a comma character (without whitespace)

There is no need to include the ConfigMap, Secret or EmptyDir mounted volumes in this annotation: they are either volatile or backed up by Velero as part of the Kubernetes manifests.

Example in a Kubernetes Deployment:

kind: Deployment
apiVersion: apps/v1
metadata:
  name: my-service
  namespace: my-cluster
  labels:
    app: my-service
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-service
  template:
    metadata:
      labels:
        app: my-service
      annotations:
        # here is the velero annotation (within the pod template section)
        backup.velero.io/backup-volumes: my-volume-data
    spec:
      volumes:
        - name: my-volume-data
          persistentVolumeClaim:
            claimName: my-volume
      containers:
        - name: my-service
          image: registry.h8l.io/my-domain/my-service:latest
          volumeMounts:
            - name: my-volume-data
              mountPath: /data/
      imagePullSecrets:
        - name: harbor-registry
  strategy:
    type: Recreate

Restore a Cold Backup

To restore a Cold Backup, click on the actions button at the end of the backup line and configure the restore:

restore#1

restore#2

  • Cluster: the restore target cluster. Before running the restore, ensure the cluster has enough resources allocated (in Standard mode) to create and run the backup’s resources and their data
  • Labels: the labels used to filter the resources included in the restore
  • Excluded/Included resources: Kubernetes resource definitions to include or exclude during the restore process