NFS Configuration
CloudCasa supports backups to NFS (and SMB) targets, in addition to S3-compatible and Azure Blob storage. While NFS has been around for a very long time, configuring it correctly is non-trivial; if it is misconfigured, users see “Permission denied” errors when CloudCasa tries to access the storage. This document explains how CloudCasa reads from and writes to NFS storage, and suggests a recommended way to configure it.
NFS Gateway pod
Whenever a backup, restore, or maintenance operation needs to access NFS storage, CloudCasa creates a PV and a PVC and starts a pod with this PVC attached. On most Kubernetes distributions, the pod starts with a random user ID unless the mount options “uid” and “gid” are configured when adding the NFS storage in CloudCasa. In particular, if “uid: 0” and “gid: 0” are set, the pod starts as the root user, which may not be allowed by security policies.
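For illustration only, here is a sketch of the gateway pod that CloudCasa might create. It assumes (this is an assumption, not confirmed by CloudCasa documentation) that the “uid”/“gid” storage options end up as the pod’s `runAsUser`/`runAsGroup`; all names and the image are placeholders.

```yaml
# Hypothetical sketch of the NFS gateway pod. Names, image, and the
# uid/gid -> securityContext mapping are assumptions for illustration.
apiVersion: v1
kind: Pod
metadata:
  name: cloudcasa-nfs-gateway        # hypothetical name
spec:
  securityContext:
    runAsUser: 1000                  # from the "uid" option
    runAsGroup: 1000                 # from the "gid" option
  containers:
    - name: gateway
      image: example/nfs-gateway:latest   # placeholder image
      volumeMounts:
        - name: nfs-target
          mountPath: /mnt/target
  volumes:
    - name: nfs-target
      persistentVolumeClaim:
        claimName: cloudcasa-nfs-pvc      # hypothetical PVC name
```

With “uid: 0” / “gid: 0”, `runAsUser`/`runAsGroup` would be 0, which pod security policies commonly forbid.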
Permissions
A read or write to a file in an NFS-mounted directory is processed in three steps:
1. The client sends the read or write request to the server along with the user ID and group ID of the process performing the operation. In our case, these are the UID/GID of the user and group the NFS gateway pod runs as.
2. The NFS server either passes the incoming UID/GID to the filesystem as is or maps them to other IDs. This behavior is controlled by the export options “root_squash”, “no_root_squash”, “all_squash”, and “no_all_squash” (explained below). By default, the server maps incoming user ID 0 (root) to an anonymous user (typically called “nobody”) but passes non-root user IDs through unchanged. The same applies to group IDs.
3. Finally, the ownership and permission bits of the exported directory determine whether the operation succeeds, depending on whether the (possibly mapped) user and group IDs match them.
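The steps above can be sketched in code. This is a minimal model for illustration (not CloudCasa or NFS server code): one function mimics the server’s squash mapping, the other the classic Unix permission check on the export directory.

```python
# Minimal sketch (illustration only) of how an NFS server decides whether a
# read/write succeeds: the client sends its UID/GID, the server optionally
# "squashes" them, and the filesystem then checks ownership and mode bits.

NOBODY = 65534  # conventional "nobody" UID/GID on Linux


def squash(uid, gid, *, all_squash=False, root_squash=True,
           anon_uid=NOBODY, anon_gid=NOBODY):
    """Map the client's incoming IDs the way the export options would."""
    if all_squash or (root_squash and uid == 0):
        uid = anon_uid
    if all_squash or (root_squash and gid == 0):
        gid = anon_gid
    return uid, gid


def can_write(uid, gid, *, owner, group, mode):
    """Classic Unix write-permission check on the export directory."""
    if uid == owner:
        return bool(mode & 0o200)   # owner write bit
    if gid == group:
        return bool(mode & 0o020)   # group write bit
    return bool(mode & 0o002)       # other write bit


# Recommended setup: all_squash with anon_uid/anon_gid matching the export
# owner (1001 here is a placeholder). Any client UID ends up as 1001:1001.
uid, gid = squash(4242, 4242, all_squash=True, anon_uid=1001, anon_gid=1001)
print(can_write(uid, gid, owner=1001, group=1001, mode=0o750))  # True
```

With the defaults (`root_squash` only), `squash(0, 0)` returns the “nobody” IDs while `squash(4242, 4242)` passes through unchanged, matching step 2 above.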
The most reliable and secure configuration on the server is to set the export options “all_squash”, “anon_uid” (set to the owner of the export path), and “anon_gid” (set to the group owner of the export path). With these settings, the pods can run as any non-root user.
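As a sketch, the recommended server-side configuration could look like the following `/etc/exports` entry. The path, client range, and the IDs 1001 are placeholders; substitute your own values.

```
# /etc/exports -- example only; adjust path, client CIDR, and IDs.
# The export directory should be owned by the anonymous user, e.g.:
#   chown 1001:1001 /srv/cloudcasa && chmod 770 /srv/cloudcasa
/srv/cloudcasa 10.0.0.0/16(rw,sync,all_squash,anon_uid=1001,anon_gid=1001)
```

After editing `/etc/exports`, re-export with `exportfs -ra` for the change to take effect.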
Here is an explanation of the relevant export options:
- no_root_squash
The NFS server trusts the client’s root user. It passes UID 0 to the filesystem “as is,” giving the client’s root user full control over the exported files.
- root_squash (Default)
The NFS server does not trust the client’s root user. It “squashes” or maps the incoming UID 0 to an unprivileged anonymous user (typically “nobody”). This is an important security feature.
- all_squash
The NFS server does not trust any user from the client. It squashes every incoming UID (both root and non-root) to the anonymous user. This creates a uniform and predictable security context for all clients.
- no_all_squash (Default for non-root)
The NFS server trusts all non-root users. It passes their original UID to the filesystem “as is,” leaving the final permission decision to the directory’s ownership and mode (e.g., 755).
- anon_uid
When “root_squash” or “all_squash” is in effect, squashed user IDs are mapped to this ID.
- anon_gid
When “root_squash” or “all_squash” is in effect, squashed group IDs are mapped to this ID.
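To make the options concrete, here is a hedged `/etc/exports` sketch showing each behavior side by side (paths and client names are placeholders):

```
# Example only -- paths and clients are placeholders.
/srv/a clientA(rw,no_root_squash)   # client root stays UID 0 (risky)
/srv/b clientB(rw)                  # defaults: root_squash + no_all_squash
/srv/c clientC(rw,all_squash,anon_uid=1001,anon_gid=1001)  # all IDs -> 1001:1001
```

The third form matches the recommended configuration described above.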