Storage

The Storage page allows you to define, view, and edit storage targets for backups and migration/replication jobs. From the menu bar, go to Configuration/Storage.

For CloudCasa Pro backups, it is only necessary to define storage endpoints here if you wish to use user-supplied storage. If you wish to use CloudCasa storage, you do not need to define anything on this page.

For CloudCasa Velero Management, at least one separate storage location must be defined for each cluster running Velero.

Migration and replication jobs can use storage defined here as transient storage.

Two tabs are available on the Storage page: Backup storage and Snapshot storage. The Backup storage tab lists user-defined object storage for both CloudCasa Pro and Velero backups. The Snapshot storage tab is used only for Velero backups, and lists Velero VolumeSnapshotLocations (VSLs).

Storage for CloudCasa Pro backups

You have a choice of where to send backup data with CloudCasa Pro backups. You can use CloudCasa’s secure cloud storage, 100 GB/month of which is included for free in all service plans, or you can use your own object storage. If you wish to use your own object storage, you must define the endpoints for it here. Note that configuring storage here is not required to use CloudCasa Pro backups.

User-defined storage has all of the same capabilities as CloudCasa cloud storage, with the exception of SafeLock support. It can be set as a default at both the organization and cluster level, and used as target storage for all purposes.

CloudCasa supports most types of object storage that are compatible with the Amazon S3 API or Microsoft Azure Blob storage as user-supplied storage. Cloud object storage reached through AWS PrivateLink, Azure Private Link, or GCP Private Service Connect is fully supported as a backup destination. So is storage located on private networks, as long as it is reachable from your cluster(s).

For CloudCasa Pro backups, storage locations are global and can be shared across multiple clusters. Many clusters can back up to the same storage and CloudCasa automatically handles separation of data among these clusters.

A global “Default storage” location for your organization can be set at the top of the page. The default is inherited by all clusters and backup jobs unless overridden. Note that changing the storage target for a backup will trigger a full backup the next time the backup runs.

Storage for Velero backups

For Velero backups, two types of storage locations can be defined. BackupStorageLocations (BSLs) are object storage endpoints used for storing backup data. VolumeSnapshotLocations (VSLs) are locations used for storing snapshot data. All BSLs defined in your environment are displayed under the Backup Storage tab. All VSLs are likewise displayed under the Snapshot Storage tab.

Velero requires that a separate BSL be created for each cluster. Sharing the same BSL across multiple clusters can lead to data loss. You can use the same S3 bucket for different BSLs provided you set up a unique prefix for each cluster during configuration. CloudCasa for Velero will automatically detect duplicate BSLs and alert users about potential data loss.

Note that Velero continually verifies and validates BSLs and sets the “Available” state to “No” if it detects access or connectivity issues. This availability status is reflected in the Status column. Validation Frequency is 1 minute by default.

CloudCasa for Velero supports most types of object storage that are compatible with the Amazon S3 API or Microsoft Azure Blob storage as BSLs. Object storage from AWS, Azure, and GCP has been explicitly tested and certified, but it is reasonably safe to assume that if your BSL works with Velero, it will work with CloudCasa for Velero.
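
For reference, each Velero BSL is a BackupStorageLocation resource in the velero.io/v1 API group on the cluster. The sketch below is illustrative only (the bucket, prefix, and region values are placeholders); it shows how two clusters can share one bucket by giving each BSL a unique prefix, and how the validation frequency can be set explicitly:

  # BSL on cluster-a (placeholder values)
  apiVersion: velero.io/v1
  kind: BackupStorageLocation
  metadata:
    name: default
    namespace: velero
  spec:
    provider: aws
    objectStorage:
      bucket: shared-velero-bucket
      prefix: cluster-a              # each cluster gets its own prefix
    config:
      region: us-east-1
    validationFrequency: 1m          # Velero re-validates the location at this interval (1 minute is the default)
  ---
  # BSL on cluster-b: same bucket, different prefix
  apiVersion: velero.io/v1
  kind: BackupStorageLocation
  metadata:
    name: default
    namespace: velero
  spec:
    provider: aws
    objectStorage:
      bucket: shared-velero-bucket
      prefix: cluster-b
    config:
      region: us-east-1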

Adding storage for CloudCasa Pro backups

  1. On the Storage page, select the Backup storage tab and click Add storage +.

  2. Select CloudCasa (the default) as the Storage Type. You must specify the following fields:

    Is your storage isolated/restricted from Internet access?

    Choose Yes if your storage is not reachable via the public Internet.

    Proxy cluster

    If the storage is marked as isolated, you must choose a cluster with an active CloudCasa agent which will be responsible for performing management operations on the storage. The storage must be reachable from this cluster.

    You can also optionally add a proxy cluster for non-isolated storage if you wish for maintenance operations to be run from a local cluster, but this should not normally be necessary. By default, service operations will be run from CloudCasa’s cloud infrastructure.

  3. Click Next.

  4. In the Provider step, choose the Provider Type and fill in the provider-specific options:

    Amazon / S3 compatible

    Select this option for Amazon Simple Storage Service (S3) or any S3 compatible object storage. The following options are available:

    Bucket name

    Enter the bucket name of the S3 storage.

    Endpoint

    Enter the regional endpoint URL of the S3 storage. It is usually of the form “https://service-code.region-code.provider.com”, e.g. https://s3.ap-south-1.amazonaws.com, https://s3.eu-west-2.wasabisys.com, or https://nyc3.digitaloceanspaces.com.

    Region

    Enter the region in which the Amazon S3 bucket was created (not applicable to non-AWS buckets).

    Disable TLS certificate validation

    It is generally not recommended to enable this setting. Enabling it allows connections to the storage endpoint even if the server certificate is self-signed or expired, or if the certificate domain name does not match the host name.

    Access key

    Enter the access key of the S3 storage.

    Secret key

    Enter the secret key of the S3 storage.

    Azure

    Select this option for Azure Blob Storage. The following options are available:

    Azure Cloud

    Select Public (the default) for storage in the public Azure cloud, or Government for storage in the Azure Government Cloud. Azure Government Cloud is managed through portal.azure.us, while the Azure Public Cloud is managed through portal.azure.com. If you don’t know which Azure cloud you are using, chances are you should select Public.

    Resource Group Name

    Enter the resource group name that will be used to hold all resources for the storage.

    Storage Account Name

    Enter the name of the storage account created under the same resource group.

    Subscription ID

    Enter the ID of your Microsoft Azure subscription.

    Region

    Enter the region name associated with the resource group.

    Tenant ID

    Enter the tenant ID associated with the Microsoft Azure subscription.

    Client ID

    Enter the unique Client (Application) ID assigned by Microsoft Azure when the application was registered.

    Client Secret

    Enter a valid client secret of the Service Principal.

  5. Click Next.

  6. In the Summary step, you will see a summary of the above, and will need to define:

    Name

    Enter a display name for the storage location.

  7. Click Save.

Using Immutable Storage

CloudCasa supports enabling immutability on user-supplied object stores. This works with both S3-compatible storage (via Object Lock) and Azure Blob Storage (via immutable storage policies). CloudCasa automatically detects if Object Lock/immutability is enabled. When properly configured, a lock icon will appear next to the storage in the Configuration/Storage page. When a backup runs to the storage, the actual retention period will be set to the maximum of the bucket’s retention period and the backup job’s policy retention period. Note that only specific storage settings are supported, as specified below. If the storage configuration is incorrect, CloudCasa will not allow you to add the storage.

Prerequisites for S3 storage buckets with Object Lock (an example sketch follows this list):

  • Object Lock enabled

  • Default retention enabled

  • Default retention mode = Compliance

  • Retention period set to a valid period
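
As an illustration, the following CloudFormation sketch creates an S3 bucket that satisfies these prerequisites (the bucket name and retention period are placeholders; equivalent settings can be applied from the S3 console):

  # Hypothetical CloudFormation template snippet for an Object Lock bucket
  Resources:
    ImmutableBackupBucket:
      Type: AWS::S3::Bucket
      Properties:
        BucketName: my-immutable-backups      # placeholder bucket name
        VersioningConfiguration:
          Status: Enabled                     # Object Lock requires versioning
        ObjectLockEnabled: true               # Object Lock must be enabled at bucket creation
        ObjectLockConfiguration:
          ObjectLockEnabled: Enabled
          Rule:
            DefaultRetention:                 # default retention enabled
              Mode: COMPLIANCE                # Compliance mode, as required
              Days: 30                        # example retention period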

Prerequisites for Azure blob storage with immutability:

  • Version-level immutability enabled at storage account level

  • Version-level immutability policy defined and set as locked

Note that support for immutable storage is provided only with paid service plans. Also, immutability is not supported for Velero backup storage locations (BSLs). This is a limitation of Velero.

Adding Velero Backup Storage Locations

  1. On the Configuration/Storage page, select the Backup storage tab and click Add Storage +.

  2. Select “Velero” as the Storage Type. You must specify the following fields:

    Cluster

    Name of the cluster the BSL will be configured in.

    AccessMode

    Read/Write means the BSL is configured for backup and recovery on a cluster. ReadOnly means the BSL is mapped for recovery purposes only. If CloudCasa for Velero identifies identical BSLs mapped as Read/Write in more than one cluster, it will alert users about potential data loss.

    IsDefault

    Velero allows you to set a BSL as the default storage for backups. If multiple BSLs are defined as default in the same cluster, Velero rotates randomly among these default storage options. The default BSL can be overridden in the backup specification.

    Bucket Name

    Name of the S3 bucket.

    Prefix

    A prefix is set when the same bucket is shared across multiple clusters. The Bucket Name + Prefix combination for BSLs must be unique, unless a BSL is being mapped in ReadOnly AccessMode.

  3. In the Provider step, choose the Provider Type from the following options:

    Amazon S3 or S3 Compatible Storage

    You can add an S3 storage endpoint using one of two authentication methods: AWS IAM Roles for Service Accounts (IRSA), or AWS credentials based on an access key/secret key pair. The IRSA method is recommended for better auditability and role management, but the access key/secret key method is supported by most object storage vendors. See the Spec for AWS BSLs.

    Azure Blob Storage

    You can add Azure Blob Storage as a backup target using one of two authentication methods: Azure Service Principal or Azure Storage Account Access Key. The Service Principal method is recommended for better auditability and role management. If using an access key, it is recommended to rotate and regenerate keys regularly, either manually or through a key vault. See the Detailed Spec for Azure BSLs.

    • For both methods, you will need: Resource Group Name and Storage Account Name

    • For Service Principal authentication, you will also need: Subscription ID, Tenant ID, Client ID, Client Secret

    Google Cloud Storage

    You can add Google Cloud Storage (GCS) or GCS-compatible storage using one of two authentication methods: Google Service Account Key or Google Workload Identity. Google Workload Identity is the recommended method. See the Detailed Spec for Google BSLs.

    • For both authentication methods, you will need: Google Access ID

    • For the Google Service Account Key method, you will need: Type, Project ID, Private Key, Client Email, Client ID, Auth URI, Token URI, Auth Provider X509 Certificate URL, Client X509 Cert URL. You can import a JSON file with these values if performing this action repeatedly for each cluster.

    • For the Google Workload Identity method, you will need: Kubernetes Service Account Namespace, Kubernetes Service Account Name, Service Account.

  4. In the Summary step, you will see the selected options, and you can add additional properties by editing the YAML in the upper right corner (an example of the resulting resource appears after this procedure).

    Name

    You must assign a name for the storage.
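
The saved BSL corresponds to a BackupStorageLocation resource on the selected cluster. A minimal sketch for an AWS/S3-compatible location using access key/secret key credentials is shown below; all names and values are placeholders, and the config keys vary by provider:

  apiVersion: velero.io/v1
  kind: BackupStorageLocation
  metadata:
    name: my-bsl                     # the Name assigned in the Summary step
    namespace: velero
  spec:
    provider: aws
    default: true                    # IsDefault
    accessMode: ReadWrite            # AccessMode (ReadWrite or ReadOnly)
    objectStorage:
      bucket: my-velero-bucket       # Bucket Name
      prefix: cluster-a              # Prefix (unique when the bucket is shared)
    config:
      region: us-east-1
      s3Url: https://s3.us-east-1.amazonaws.com   # mainly needed for S3-compatible endpoints
    credential:                      # secret holding the access key/secret key pair
      name: cloud-credentials
      key: cloud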

Adding Velero Volume Snapshot Locations

A volume snapshot location is a location in which volume snapshots created by Velero backups can be stored. Velero can be configured to take snapshots of volumes from multiple providers. Velero also allows you to configure multiple VolumeSnapshotLocations per provider, although you can only select one location per provider at backup time.

On the Storage page, select the Snapshot Storage tab and click Add storage +. You must specify the following:

Cluster

Name of the Cluster the VSL should be configured in.

Provider Type

Available options are AWS/S3 Storage, Azure Blob Storage and GCP/GCS Storage. For each storage type, different options are presented to configure the VSL.

AWS VSL:

To configure a VSL in AWS, you must select the Region in which to store the snapshots, the VSL Profile Name (e.g. Default), and credentials via Access/Secret Key or IRSA. See the Spec for AWS VSLs.
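
A minimal sketch of the resulting VolumeSnapshotLocation resource (placeholder values) might look like the following:

  apiVersion: velero.io/v1
  kind: VolumeSnapshotLocation
  metadata:
    name: aws-vsl                    # placeholder name
    namespace: velero
  spec:
    provider: aws
    config:
      region: us-west-2              # Region where snapshots are stored
      profile: "default"             # VSL Profile Name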

Azure VSL:

To configure a VSL in Azure, you must provide the Resource Group Name, Subscription ID, and authentication details, either through an Azure Service Principal or an Azure Storage Account Access Key. Azure supports selecting the snapshot type (Full or Incremental); Full is the default. See the Spec for Azure VSLs.
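
A minimal sketch (placeholder values), assuming the configuration keys used by the Velero plugin for Microsoft Azure, might look like the following:

  apiVersion: velero.io/v1
  kind: VolumeSnapshotLocation
  metadata:
    name: azure-vsl                  # placeholder name
    namespace: velero
  spec:
    provider: azure
    config:
      resourceGroup: my-snapshot-rg                          # Resource Group Name
      subscriptionId: 00000000-0000-0000-0000-000000000000   # Subscription ID
      incremental: "true"            # omit or set to "false" for Full snapshots (the default)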

GCP VSL:

To configure a VSL in GCP, you must provide the Region and Project Name. See the Spec for GCP VSLs.
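
A minimal sketch (placeholder values), assuming the configuration keys used by the Velero plugin for GCP, might look like the following:

  apiVersion: velero.io/v1
  kind: VolumeSnapshotLocation
  metadata:
    name: gcp-vsl                    # placeholder name
    namespace: velero
  spec:
    provider: gcp
    config:
      snapshotLocation: us-central1  # Region
      project: my-gcp-project        # Project Name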