Feature Update October 2022

Autumn is officially here again in New Jersey, bringing with it fresh apples, cider, Halloween candy by the ton, and pumpkin spice everything. It’s been more than four months since our last major update to CloudCasa, which is a bit longer than usual. Don’t think that we spent the summer lounging on a beach, though! Our development team has been working as hard as ever, and we have an impressive list of new CloudCasa features to announce.

Google GCP account support

With this update we’ve introduced Google Cloud Platform (GCP) integration for CloudCasa. This works similarly to our existing AWS and Azure account integration. It supports direct integration with Google Cloud Platform on a per-project basis using a custom role. The integration enables GKE cluster auto-discovery, GKE cluster configuration backup, and GKE cluster auto-creation on restore.

To link CloudCasa to your GCP account, simply go to the Cloud Accounts page under the Configuration tab and click the “Add cloud account” button. Select “Google Cloud”, then click the “Open in Google Cloud Shell” button to open a Cloud Shell instance and complete the setup. You will be prompted to confirm that you trust the “cloudcasa-gcp-deployment” repo, and that you authorize Cloud Shell to make the necessary GCP API calls. A tutorial will then walk you through the simple process of completing the integration. This will grant CloudCasa the permissions it needs, and only the permissions it needs, to discover, back up, and restore GKE clusters.
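
For those curious what the Cloud Shell tutorial automates, below is a minimal sketch, using the Google API client for Python, of how a custom role of this kind could be created. The project ID, role ID, and permission list are illustrative only; the real definitions come from the cloudcasa-gcp-deployment repo.

```python
# Illustrative sketch only: the Cloud Shell tutorial creates the real role for you.
# Requires: pip install google-api-python-client google-auth
from googleapiclient import discovery

PROJECT_ID = "my-gcp-project"  # hypothetical project ID

iam = discovery.build("iam", "v1")  # uses application default credentials

role_body = {
    "roleId": "cloudcasaBackup",  # hypothetical role ID
    "role": {
        "title": "CloudCasa backup role (example)",
        "description": "Example custom role for GKE discovery, backup, and restore",
        # A small, illustrative subset of GKE permissions; not the actual list.
        "includedPermissions": [
            "container.clusters.list",
            "container.clusters.get",
        ],
        "stage": "GA",
    },
}

role = iam.projects().roles().create(
    parent=f"projects/{PROJECT_ID}", body=role_body
).execute()
print("Created role:", role["name"])
```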

Cross-cloud BMR support

Previously, cluster auto-creation in the cloud (a.k.a. Cloud BMR) was only available in like-to-like restore situations. So you could restore an EKS cluster to an EKS cluster, or an AKS cluster to an AKS cluster, but not an EKS cluster to an AKS cluster. Now you always have the option of creating a cluster on restore if you have cloud accounts configured, regardless of what the source cluster was. On cross-cloud restores, if you choose to create a new cluster, you will be prompted for all cloud-specific Kubernetes parameters. Note that cross-cloud restores can be complex, and many aspects of your configuration must be considered. We’ll be publishing more information about cross-cloud restores in the coming weeks.

Google & Azure native (non-CSI) volume support

We have added support for backing up PVs using additional non-CSI or “in-tree” drivers on AKS and GKE. These include Google Compute Disks on GKE (provisioner “kubernetes.io/gce-pd”) and Azure Managed Disks on AKS (provisioner “kubernetes.io/azure-disk”). These volumes require some additional manual configuration on your clusters, mainly to provide required credentials to the agent. See our documentation or contact support for details.
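
If you’re not sure whether any of your volumes use these in-tree drivers, a quick check with the Kubernetes Python client might look like the following sketch (assuming a local kubeconfig with access to the cluster):

```python
# Sketch: flag PVs and StorageClasses that use the in-tree GCE PD or Azure Disk drivers.
# Requires: pip install kubernetes
from kubernetes import client, config

config.load_kube_config()  # assumes a local kubeconfig

for pv in client.CoreV1Api().list_persistent_volume().items:
    if pv.spec.gce_persistent_disk:
        print(f"{pv.metadata.name}: in-tree Google Compute Disk volume")
    elif pv.spec.azure_disk:
        print(f"{pv.metadata.name}: in-tree Azure Managed Disk volume")

for sc in client.StorageV1Api().list_storage_class().items:
    if sc.provisioner in ("kubernetes.io/gce-pd", "kubernetes.io/azure-disk"):
        print(f"StorageClass {sc.metadata.name} uses {sc.provisioner}")
```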

Azure Files PV support (Paid service plans only)

We have added support for backup and restore of Azure Files PVs on AKS clusters. This is something of a unique feature, since normally Azure Files only supports restores from snapshots by manually copying files. With CloudCasa, you now get fully automated restores, whether your backups are snapshot-only or snapshot and copy. Automatic restore of Azure Files PVs is a paid-only option. Free service users can use CloudCasa to create snapshots of Azure Files PVs, but must perform restores manually in the way that Microsoft documents for Azure Files.

Automatic agent installation

We have added the ability to automatically install the CloudCasa agent on clusters that support it. This currently means auto-discovered AKS and GKE clusters with public control plane endpoints. You’ll be given the automatic install option, if it’s available, when adding a “Discovered” cluster in the UI. Just click “Install” and you’ll be done!

Agent auto-upgrade disable and manual upgrade options

We have added a per-cluster option to disable automatic agent upgrades. We’ve also added a manual option to trigger the upgrade of an agent that has automatic upgrades disabled. In most cases we recommend leaving automatic agent updates on, which is the default.

Private container registry support

We have added support for storing our agent container images in private container registries. This allows the agent to be easily installed on clusters which don’t have or don’t allow access to the public container registry where our agent containers are normally stored (Docker Hub). You’ll need to copy the necessary container images to a private registry, and then set the registry name in the new “Private container registry for agent” field under Advanced Options in the Add Cluster dialog. This will also disable automatic agent upgrades. A list of the required container images is displayed in the UI when enabling this option.
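
As a rough illustration, copying the images with the Docker SDK for Python might look like the sketch below. The registry and image names are placeholders; use the actual image list shown in the UI when you enable the option.

```python
# Rough sketch: mirror agent images into a private registry with the Docker SDK.
# Requires: pip install docker, a running Docker daemon, and a login to both registries
import docker

PRIVATE_REGISTRY = "registry.example.com/cloudcasa"   # hypothetical private registry
IMAGES = [
    ("docker.io/example/cloudcasa-agent", "1.0.0"),   # placeholder; use the list from the UI
]

client = docker.from_env()
for repo, tag in IMAGES:
    image = client.images.pull(repo, tag=tag)
    target = f"{PRIVATE_REGISTRY}/{repo.split('/')[-1]}"
    image.tag(target, tag=tag)
    for line in client.images.push(target, tag=tag, stream=True, decode=True):
        if "error" in line:
            raise RuntimeError(line["error"])
```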

User object store improvements

Most remaining limitations on the use of user-defined storage and isolated user-defined storage have been removed. User-defined storage now has all of the same capabilities as CloudCasa cloud storage, with the exception of SafeLock support. It can now be set as a default at both the organization and cluster level, and used as target storage for all purposes.

With these improvements, cloud object storage reached through AWS PrivateLink, Azure Private Link, or GCP Private Service Connect is fully supported as a backup destination.

Object Lock support for user object stores – (Paid service plans only)

We now support enabling Object Lock on user object stores. This works for both S3-compatible and Azure object stores. CloudCasa automatically detects if Object Lock is enabled. When properly configured, a lock icon will appear next to the storage in the Configuration/My Storage page. When a backup runs to the storage, the actual retention period will be set to the maximum of the bucket’s retention period and the backup job’s policy retention period. Note that only the specific storage settings listed below are supported. If the storage configuration is incorrect, CloudCasa will not allow you to add the storage.

Prerequisites for S3 storage buckets with Object Lock:

  • Object Lock enabled

  • Default retention enabled

  • Default retention mode = Compliance

  • Retention period set to a valid period
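
For S3-compatible storage, a bucket that satisfies these prerequisites could be set up with boto3 roughly as follows (the bucket name and 30-day retention period are examples only):

```python
# Sketch: create an S3 bucket meeting the Object Lock prerequisites listed above.
# Requires: pip install boto3, plus AWS credentials with the necessary S3 permissions
import boto3

s3 = boto3.client("s3")
bucket = "my-cloudcasa-backups"  # example bucket name

# Object Lock can only be enabled when the bucket is created.
s3.create_bucket(Bucket=bucket, ObjectLockEnabledForBucket=True)

# Default retention in Compliance mode with an example 30-day period.
s3.put_object_lock_configuration(
    Bucket=bucket,
    ObjectLockConfiguration={
        "ObjectLockEnabled": "Enabled",
        "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Days": 30}},
    },
)
```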

Prerequisites for Azure object storage with immutability:

  • Version-level immutability enabled at storage account level

  • Version-level immutability policy defined and set as locked

NFS PV support

We’re happy to announce that with this release we’ve added another long-awaited feature: backup support for NFS PVs! By their nature, NFS volumes don’t support snapshots. Thus, NFS PVs will only be backed up when the “snapshot and copy” backup type is selected. Note that in the case of NFS, “snapshot and copy” really means “copy without snapshot”. Since data is backed up directly from the active NFS volumes rather than from snapshots, you should consider using application hooks to quiesce your application if locked or potentially inconsistent files are a concern. Also note that application hooks for NFS volumes may be executed at a different time than hooks for other PVs in the same application. This is only a potential issue for applications that use both NFS and non-NFS PVs and require cross-PV consistency. This will be addressed in a future release.
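
As an illustration of the kind of quiesce step a pre-backup hook might perform, the sketch below uses the Kubernetes Python client to run a flush command inside an application pod. The pod name, namespace, and command are placeholders; actual hooks are configured in CloudCasa as described in our documentation.

```python
# Sketch: run a quiesce command in an application pod before an NFS copy starts.
# Requires: pip install kubernetes
from kubernetes import client, config
from kubernetes.stream import stream

config.load_kube_config()
core_v1 = client.CoreV1Api()

output = stream(
    core_v1.connect_get_namespaced_pod_exec,
    "my-database-0",                      # hypothetical pod name
    "production",                         # hypothetical namespace
    command=["/bin/sh", "-c", "sync"],    # replace with an app-specific flush/lock step
    stderr=True, stdin=False, stdout=True, tty=False,
)
print(output)
```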

Linking of discovered clusters to manually added clusters

Previously, if you added some of your cloud-based clusters manually and then later configured your cloud account(s), you could get into an annoying situation where CloudCasa would discover anew the clusters that were already configured, forcing you to re-add them in order to take advantage of the additional capabilities provided by our cloud integration. No longer! Now you can simply click on an auto-discovered cluster and choose to link it to an existing cluster if you know they are the same.

Discovery support for EKS Anywhere Clusters

EKS Anywhere clusters will now be discovered and indicated in the UI with a special icon. Since EKS Anywhere clusters don’t have the same management capabilities as native EKS clusters, not all EKS cluster functionality is available for them. In particular, backup of cloud configuration parameters for the clusters is not supported.

Overwrite option for AKS, EKS, GKE clusters

We’ve added an option that allows you to completely overwrite (i.e. delete and re-create) a target cluster in a linked cloud account. This is useful if you want to periodically re-create a target cluster from a source cluster for testing or development. Obviously, this could be a dangerous option, so as a precaution we allow it only for clusters that were previously auto-created by CloudCasa via a restore operation.

Added more options for cluster creation on restore (AKS, EKS, GKE)

We have added several additional options to the restore dialog when auto-creating a cluster on restore in a linked cloud account. Among other things, these allow you to modify the minimum and maximum number of nodes for auto-scaling and modify node group settings. These settings are especially useful when re-creating a production cluster for dev/test, but can be handy in many other situations as well.

Added a “Preserve NodePorts” option for restore

Enabling this option will preserve auto-assigned nodePort numbers on restore. This may be useful in DR scenarios where restored port numbers need to match those separately configured in load balancers. By default, this option is off and only explicitly assigned nodePort numbers are preserved. Auto-assigned nodePort numbers are discarded so that new numbers will be automatically re-assigned. To prevent port number conflicts, enabling this option is only recommended when restoring to a new cluster.
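
To make the distinction concrete, consider the sketch below: the 80/30080 mapping is explicitly assigned and is always preserved on restore, while the 443 mapping is auto-assigned by Kubernetes and is only preserved when “Preserve NodePorts” is enabled. The Service name and ports are examples only.

```python
# Sketch: a NodePort Service with one explicit and one auto-assigned nodePort.
# Requires: pip install kubernetes
from kubernetes import client, config

config.load_kube_config()
svc = client.V1Service(
    metadata=client.V1ObjectMeta(name="web"),  # example Service name
    spec=client.V1ServiceSpec(
        type="NodePort",
        selector={"app": "web"},
        ports=[
            # Explicitly assigned nodePort: preserved on restore by default.
            client.V1ServicePort(name="http", port=80, node_port=30080),
            # No nodePort given: Kubernetes auto-assigns one, which is discarded on
            # restore unless the "Preserve NodePorts" option is enabled.
            client.V1ServicePort(name="https", port=443),
        ],
    ),
)
client.CoreV1Api().create_namespaced_service(namespace="default", body=svc)
```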

Quick Start Tour

We’ve added a quick start tour guided by Captain Cat, the unofficial mascot of the Catalogic engineering team. The Captain can quickly walk you through what you need to do to get up and running. He’ll greet you when you first log in. If you don’t require his help, you can minimize him or toggle him off completely. He won’t mind, as long as we keep feeding him. And you can always turn him back on from the User Settings menu if you miss him! We’ll be adding additional tour topics in future updates.

Object-level access control for clusters

As we said we would do when we added RBAC, we’ve started adding object-level access controls to the UI beginning with clusters. Now access to individual clusters can be granted to individuals or user groups. Previously this fine-grained control was only available through the API. The settings are available by clicking the Permissions button in the Edit Cluster dialog. Similar access controls will be added for other objects in future updates.

Miscellaneous UI improvements

  • A new sidebar in the restore dialog helps guide users through the multi-step dialog and allows them to jump between steps.

  • The cluster add/edit dialog has been improved to support new features and provide more information about alternate agent install methods.

  • The change password process has been improved. It is reachable via an icon on the User Settings page.

  • Real-time PV snapshot progress and PV restore from copy progress are now provided on the job details page.

  • Warnings about unsupported PV types in a cluster have been added to the cluster protection dashboard (Protection/Cluster Overview/cluster name).

  • New metrics have been added to the cluster security dashboard.

  • More detailed activity messages are now provided for restores.

Snapshot V1 CRDs are now required

The CSI snapshot interface became GA in Kubernetes 1.20, which introduced v1 snapshot CRDs. Kubernetes 1.23 deprecated the older v1beta1 snapshot CRDs. The v1 snapshot CRDs should be available in Kubernetes 1.20 and above. Kubernetes 1.17 to 1.19 can still be supported by installing a newer version of the external snapshotter that uses v1 CRDs.
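
You can quickly check which versions of the snapshot CRDs a cluster serves with a query like the sketch below (assuming a local kubeconfig):

```python
# Sketch: check which API versions of the VolumeSnapshot CRD a cluster serves.
# Requires: pip install kubernetes
from kubernetes import client, config

config.load_kube_config()
crd = client.ApiextensionsV1Api().read_custom_resource_definition(
    "volumesnapshots.snapshot.storage.k8s.io"
)
served = [v.name for v in crd.spec.versions if v.served]
print("Served VolumeSnapshot versions:", served)
if "v1" not in served:
    print("v1 snapshot CRDs are not available; install a newer external snapshotter.")
```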

CloudFormation Stack update

Our AWS CloudFormation stack has been updated, mainly to add a new permission that allows CloudCasa to add tags to EKS clusters. Users with existing AWS cloud account integrations should go to the Configuration/Cloud Accounts page and update the CloudFormation stack(s) where indicated.

Kubernetes agent updates

In this release we’ve again made several changes to our Kubernetes agent to add features, improve performance, and fix bugs. However, manual updates shouldn’t normally be necessary anymore because of the automatic agent update feature. If you have automatic updates disabled for any of your agents, you should update them manually as soon as possible.

Miscellaneous

User API docs – We have released the first edition of our API docs for users. Previously we only supplied these to customers and partners on request. We will be expanding the API documentation and adding more example code as time goes on.

Tested with Kubernetes 1.24 and 1.25 – We have tested CloudCasa with Kubernetes 1.24 and 1.25 and confirmed compatibility. No changes or user actions were necessary.

Tested with Ondat – As part of our partnership with Ondat, we’ve tested with v1.3.8 of their software-defined storage solution. See our blog about it for more details.

Notes

With some browsers, you may need to restart the browser, hit Control-F5, and/or clear the cache to make sure you have the latest version of the CloudCasa web app when first logging in after the update.

As always, we want to hear your feedback on new features! You can contact us via the user forum, using the support chat feature, or by sending email to support@cloudcasa.io.