
4.5.2 NFS

Introduction

Network File System, or NFS, is a protocol that allows a computer to access files over a network almost as if they were on a local disk. It is heavily used in Linux and other Unix-like systems to share directories between servers and clients in a transparent way. In this chapter, the focus is on how NFS works conceptually, how to configure a basic NFS server and client on Linux, and what to watch out for in terms of permissions, performance, and security.

NFS Concepts and Versions

NFS works by exporting directories from a server and mounting them on one or more clients. The clients then see the exported directory as part of their own filesystem tree. Applications generally do not need to know that the files are remote.

NFS has multiple protocol versions, primarily NFSv3 and NFSv4 in modern use. NFSv3 uses separate daemons for various operations and relies on external mechanisms for locking and authentication. NFSv4 integrates many of these functions into a single protocol that runs on a single TCP port, usually 2049, and introduces features such as stateful operations and improved security integration.

Most contemporary Linux systems support both NFSv3 and NFSv4. Typically, NFSv4 is preferred for new deployments due to better firewall friendliness and improved security support. NFSv3 may still be used for compatibility or in simple, trusted environments.
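On a client that already has an NFS mount, the negotiated protocol version can be checked directly; assuming the nfs-utils tools are installed, one way is:

```shell
# Show all NFS mounts with their negotiated version and effective options
nfsstat -m

# Alternatively, read the kernel's mount table; the vers= field in the
# options column shows the protocol version actually in use
grep nfs /proc/mounts
```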

NFS Server Components

To serve NFS exports, a Linux system runs a set of user space daemons together with kernel NFS support. The exact set depends on the distribution and NFS version in use, but some components are common.

The core is the NFS server kernel module, which handles the actual file operations for remote clients. On top of that, there is a user space NFS daemon, usually nfsd, which coordinates requests. Export definitions are managed by the exportfs mechanism and stored in configuration files. For NFSv3, a portmapper or RPC binding service is used to map procedure numbers to ports, and a mount daemon, often rpc.mountd, helps clients mount exports. For NFSv4, many of these functions are consolidated, and a single TCP port is used.

On most Linux distributions, the overall service is controlled through a unit like nfs-server.service or similar. When it starts, it reads the export configuration, announces the exports, and listens for incoming NFS requests.
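On systemd-based distributions, the server is managed like any other unit; for example (package and unit names vary by distribution):

```shell
# Install the server tools (Debian/Ubuntu: nfs-kernel-server; RHEL/Fedora: nfs-utils)
apt install nfs-kernel-server

# Start the NFS server now and on every boot
systemctl enable --now nfs-server.service

# Confirm it is running and review recent log messages
systemctl status nfs-server.service
```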

The /etc/exports Configuration

The main configuration file for NFS exports on Linux is usually /etc/exports. Each line in this file defines a directory to share and the clients that can access it, along with access options.

A typical entry looks like:

/srv/share 192.168.1.0/24(rw,sync,no_subtree_check)

This example exports the directory /srv/share to all clients in the 192.168.1.0/24 network. The options in parentheses control access. Here rw allows read and write, sync tells the server to only consider operations complete once data is safely written, and no_subtree_check disables subtree checking for performance and stability reasons.

Other common options include ro for read only, no_root_squash to disable root identity mapping for the client root user, and root_squash which is usually the default and maps client root to an unprivileged user on the server.

Each export line follows the form directory client-spec(options). Be very cautious with no_root_squash and with wide client specifications such as *(rw): these can effectively give remote systems root control over files on the server.

Client specifications can be hostnames, single IP addresses, CIDR networks, or even wildcard host patterns, depending on the implementation. After editing /etc/exports, the export configuration must be refreshed, usually through tools like exportfs or by restarting the NFS server service.
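A slightly larger /etc/exports might combine several of these client forms; the paths and hosts below are illustrative:

```shell
# /etc/exports -- one export per line: directory  client-spec(options)

# Read-write for a single trusted host, with safe write semantics
/srv/projects   build01.example.com(rw,sync,no_subtree_check)

# Read-only for an entire subnet
/srv/software   192.168.1.0/24(ro,sync,no_subtree_check)

# Same directory with different options per client group
/srv/share      192.168.1.0/24(rw,sync,no_subtree_check) *.example.com(ro,sync,no_subtree_check)
```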

Exporting and Managing Shared Directories

Once /etc/exports is configured, the server needs to apply the configuration to begin or update sharing. On many systems, a command such as:

exportfs -ra

re-reads /etc/exports and applies any changes without a full service restart. The -r option re-exports all directories, synchronizing the kernel's export table with /etc/exports, and -a applies the operation to every entry. You can list currently exported filesystems with:

exportfs -v

which normally shows each exported path along with its client lists and effective options.
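exportfs can also export or unexport individual entries ad hoc, without editing /etc/exports; the client address and path here are examples:

```shell
# Re-read /etc/exports and apply any changes
exportfs -ra

# Temporarily export a directory to one client with explicit options
exportfs -o rw,sync,no_subtree_check 192.168.1.50:/srv/scratch

# Stop exporting that entry again
exportfs -u 192.168.1.50:/srv/scratch

# Show what is currently exported, with effective options
exportfs -v
```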

When defining exports, directory ownership and permissions on the server remain important. NFS does not bypass the underlying filesystem permissions. Clients see file ownership via numeric user IDs and group IDs. For consistent access, the same user and group IDs should typically be used across the server and clients, or ID mapping services should be configured.
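A quick way to compare identities across machines is to query the numeric IDs directly and check that they match on server and client; the account name is whatever user needs access:

```shell
# Print the numeric UID and GID for a given account; run this on
# both server and client and compare the results
id -u root   # root is always UID 0
id -g root

# Full identity line, including supplementary groups
id root
```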

If you change permissions or ownership on the exported directory or its contents, the NFS clients will observe the changes, although some caching may delay visibility slightly. It is generally better to use standard Linux ownership and permission tuning on the server side rather than trying to grant overly broad NFS export options.

NFS Client Configuration and Mounting

On the client side, NFS mounts a remote exported directory into the local filesystem tree. To do this, the client needs the NFS utilities installed and the kernel NFS client support enabled. In most Linux distributions this is present by default or via a package such as nfs-common or similar.
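Installing the client utilities and asking a server what it offers might look like this; package names vary by distribution, and the server name is an example:

```shell
# Debian/Ubuntu client utilities (RHEL/Fedora: nfs-utils)
apt install nfs-common

# List the directories a server exports (queries rpc.mountd, NFSv3-style)
showmount -e server.example.com
```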

To mount an NFS export temporarily, you can use the mount command. For example:

mount -t nfs4 server.example.com:/srv/share /mnt/share

This mounts the NFSv4 export /srv/share from server.example.com onto the local directory /mnt/share. The target mount point must exist as an empty directory before mounting. For NFSv3, you would use -t nfs instead; the path syntax looks the same, but NFSv3 uses the server's actual exported path, whereas with NFSv4 the path is interpreted relative to the server's pseudo-root namespace, which does not always match the on-disk path.

To make an NFS mount persistent across reboots, entries are added to /etc/fstab on the client. A simple entry might look like:

server.example.com:/srv/share  /mnt/share  nfs4  defaults  0  0

Here the filesystem type nfs4 selects NFSv4, and the defaults keyword uses standard mount options. Additional options can be provided to tune performance, reliability, and behavior.
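A more defensive fstab entry spells out its options, including systemd's automount support so the boot process does not hang if the server is unreachable; the combination shown is one reasonable starting point, not the only choice:

```shell
# /etc/fstab
# remote export                 mount point  type  options                                      dump pass
server.example.com:/srv/share   /mnt/share   nfs4  rw,hard,_netdev,x-systemd.automount,noatime  0    0
```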

If a mount is no longer needed, it can be unmounted with:

umount /mnt/share

The directory remains, but the remote content is detached.

Permissions, UID Mapping, and Root Squash

One of the most distinctive aspects of NFS is how it handles user and group identities. NFS primarily deals with numeric user IDs (UIDs) and group IDs (GIDs), not names. When a client user with UID 1000 accesses a file over NFS, the server checks the permissions of the file against UID 1000 on its own filesystem.

For a straightforward and predictable setup, it is common to ensure that the same user accounts have the same UIDs and GIDs on both server and client. This can be maintained manually for small environments or centrally using directory services for larger ones. If UIDs differ between systems, a user might unexpectedly gain or lose access to files when they access them via NFS.

NFS has special handling for root on the client. By default, the server treats operations from client root as if they come from an unprivileged user, usually mapping it to nobody, nfsnobody, or a similar account. This behavior is known as root squashing. It is a safety feature that avoids giving client root the same power over server files as local root on the server has.

If you specify no_root_squash in an export, then root on the client is treated as root on the server. This might be useful for specific administrative scenarios, but it is a significant security risk in general. In environments where multiple clients are not fully trusted, root squashing should remain enabled.
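The difference shows up directly in the export options; anonuid and anongid additionally let you choose which account squashed requests map to. The paths, hosts, and IDs below are illustrative:

```shell
# /etc/exports

# Default behavior made explicit: client root is mapped to an
# unprivileged account on the server
/srv/share    192.168.1.0/24(rw,sync,root_squash,no_subtree_check)

# Map squashed requests to a specific UID/GID; all_squash maps every
# client user, not just root
/srv/upload   192.168.1.0/24(rw,sync,all_squash,anonuid=1500,anongid=1500,no_subtree_check)

# Dangerous: client root acts as root on the server -- only for
# tightly controlled administrative hosts
/srv/admin    admin01.example.com(rw,sync,no_root_squash,no_subtree_check)
```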

Common Mount Options and Performance Tuning

NFS performance and behavior are influenced by mount options on both server and client. Some options are related to data safety, others to caching and network resilience.

On the server side, sync and async are important. sync requests that the server confirm writes only when the data has actually been written to disk, which is safer but can be slower. async allows the server to acknowledge writes before they are fully flushed, which can improve performance but increases the risk of data loss on sudden power failure.

On the client side, there are options like rsize and wsize to control the size of read and write requests, and options that manage attribute and directory entry caching. There are also options controlling how the client handles server unresponsiveness. For instance, some NFS mounts may be hard, where operations will retry indefinitely if the server goes away, potentially blocking client processes. Others may be soft, where operations will eventually time out and return an error instead of blocking forever.
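These options are passed at mount time; the values below are illustrative starting points rather than tuned recommendations:

```shell
# Large read/write request sizes with indefinite retries (hard):
# safest for important data, but processes block while the server is away
mount -t nfs4 -o rsize=1048576,wsize=1048576,hard \
    server.example.com:/srv/share /mnt/share

# Soft mount: give up after retrans retries of timeo tenths of a second
# each, returning an I/O error instead of blocking forever
mount -t nfs4 -o soft,timeo=150,retrans=3 \
    server.example.com:/srv/scratch /mnt/scratch
```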

In addition, if you use NFSv4, you can specify version specific options such as vers=4.1 to force a particular minor version, and options related to advanced features, which will be discussed briefly in the next section.

When tuning NFS, there is often a trade-off among performance, safety, and failure behavior. For critical data, conservative settings like sync and hard mounts make failures more visible and safer at the cost of speed. For temporary or less important data, more aggressive caching and asynchronous policies can improve throughput.

NFSv4 Features and Layout

NFSv4 introduced several changes that simplify deployment and improve interoperability compared to earlier versions. One major difference is that NFSv4 usually uses a single port, TCP 2049, for all operations. This makes it easier to traverse firewalls, since multiple RPC services and dynamic ports do not have to be managed separately.

NFSv4 also introduced a logical filesystem namespace associated with an NFS server. Instead of exporting each directory separately, the server can place multiple exports under a common root. From the client perspective, NFSv4 paths may start with / at this NFS root, which does not necessarily correspond directly to the server root filesystem path. Many Linux deployments still export individual directories, but this namespace concept is important in larger or more structured environments.
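On Linux, this NFSv4 namespace is commonly built with the fsid=0 export option plus bind mounts; a sketch of the idea, with example paths:

```shell
# /etc/exports
# Declare /export as the root of the NFSv4 namespace
/export         192.168.1.0/24(ro,sync,fsid=0,no_subtree_check)
/export/share   192.168.1.0/24(rw,sync,no_subtree_check)
```

```shell
# Place the real directory under the pseudo-root via a bind mount
mount --bind /srv/share /export/share

# Clients then mount paths relative to the pseudo-root, not /srv:
mount -t nfs4 server.example.com:/share /mnt/share
```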

NFSv4 is stateful, meaning it maintains information about open files and locks. This can improve semantics and reduce some race conditions, but it also means that both server and client need to track and recover state after failures or reboots. NFSv4 includes a concept of sessions and lease times for this purpose.

NFSv4 also improves integration with security mechanisms, including support for Kerberos based authentication, integrity, and privacy services. These allow NFS traffic to be authenticated and even encrypted at the protocol level, which helps in less trusted networks.

Security Considerations

NFS was historically developed for use inside trusted networks. Without additional protection, NFS traffic is usually unencrypted and can be intercepted or altered by an attacker with access to the network. For this reason, traditional NFS setups are often limited to well defended internal networks or private segments that are not accessible to the public internet.

To improve security, there are several layers that can be used. Firewalls can restrict access to the NFS server to specific client IP ranges or networks. NFSv4 simplifies firewall configuration by concentrating traffic on port 2049. The export definitions in /etc/exports should be restricted to known and necessary clients, and options like ro should be used for read-only data where possible.
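With NFSv4, a firewall only needs to admit TCP 2049 from the client network. Using firewalld, for example, that restriction might look like this (the subnet and zone are examples):

```shell
# Allow NFS (TCP 2049) only from the internal client subnet
firewall-cmd --permanent --zone=internal --add-source=192.168.1.0/24
firewall-cmd --permanent --zone=internal --add-service=nfs
firewall-cmd --reload
```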

At the protocol level, NFS can use stronger authentication and security flavors. The most common is Kerberos integration, where NFS uses GSS-API mechanisms to ensure that both server and client authenticate each other using trusted credentials, and that the identity of the user performing operations is validated cryptographically. When combined with integrity or privacy flags, Kerberos-secured NFS can ensure that data is not tampered with or read in transit.
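With a working Kerberos realm, the security flavor is selected per export and per mount: krb5 authenticates only, krb5i adds integrity checking, and krb5p adds encryption. A sketch, assuming keytabs and rpc.gssd are already in place:

```shell
# /etc/exports on the server: require Kerberos with privacy (encryption)
/srv/secure   *.example.com(rw,sync,sec=krb5p,no_subtree_check)
```

```shell
# On the client, request the matching security flavor at mount time
mount -t nfs4 -o sec=krb5p server.example.com:/srv/secure /mnt/secure
```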

Another approach is to protect NFS within a VPN. In this case, NFS itself still uses its usual security model, but network layer encryption and authentication provided by the VPN keep the traffic confidential and authenticated between sites.

Finally, the traditional filesystem security model on the server still matters. The files and directories that are exported retain their local permissions, SELinux or AppArmor labels if in use, and any other local access controls. NFS provides a remote access path, but it does not automatically override these local protections. Careful design of file ownership, group membership, and permission bits on the server is a crucial part of a secure NFS deployment.

Typical Use Cases and Limitations

NFS is commonly used to provide shared storage for multiple Linux systems. Examples include centralized home directories for users in an organization, shared project directories for application servers, or a common repository of software and configuration files. For read mostly content like software repositories or media libraries, NFS can be a simple and efficient solution.

However, there are limitations. NFS is sensitive to network latency and interruptions. If the network between client and server is unreliable or slow, applications may experience delays or I/O errors when accessing NFS mounted files. There can also be subtle consistency issues because of caching, since clients may cache file attributes or data locally for performance.

NFS is generally not a clustered filesystem in itself. If multiple clients write to the same files concurrently without coordination, you can see race conditions and conflicting updates. Some applications handle this correctly through file locking, but others do not. For workloads that demand strong shared write semantics and distributed locking across many nodes, specialized clustered filesystems or higher level application protocols may be more appropriate.

NFS works best for workloads where there is a clear separation of responsibilities. The server reliably hosts the files and enforces permissions, while clients access and modify them in predictable ways. With correct configuration and an understanding of its characteristics, NFS remains a central building block in many Linux based storage and server environments.
