Table of contents on this page
  • Supported NFS protocol versions
  • Why other NFS versions are not supported
  • Recommended operating system versions
  • Unsupported features
  • How to resolve the “file does not exist” error after creating a file in an NFS protocol file system?
  • How to resolve data writing delay in an NFS protocol file system?
  • During mounting, “mount: wrong fs type, bad option, bad superblock”
  • Missing, duplicate, or delayed updates in folder traversal lists
  • Interference between client requests with the same hostname and Intranet IP
  • Client fails to connect to the mount target, kernel log shows “nfs: server XXX not responding, still trying”
  • Files named like “.nfsXXXX” found in a directory
  • Low client throughput or QPS
  • Data overwriting when multiple clients perform concurrent append writes to the same file

NFS protocol questions

Updated at: 2025-11-11

Supported NFS protocol versions

CFS currently supports the NFS 4.1 protocol.

Why other NFS versions are not supported

Starting with the 2.6 kernel series, the Linux kernel has supported the NFS 4.0 protocol. Hence, the only potential reason to support NFS 3 or earlier versions might be compatibility with the Windows operating system. However, Windows OS has limited support for the NFS protocol. Instead, Windows typically prefers shared file systems using the SMB/CIFS protocol, which is also supported by newer Linux systems (kernel version > 3.4). As a result, CFS has opted not to support NFS 3 or earlier versions of the NFS protocol.

Among the NFS 4.X series, versions 4.0 and 4.1 are the most commonly used. NFS 4.1 addresses design flaws in version 4.0 and introduces features such as session, pNFS, and directory delegation. While the minor version number change might seem small, the actual differences are significant. For this reason, CFS prioritizes supporting the NFS 4.1 protocol, which delivers superior performance.

Recommended operating system versions

The official Linux kernel development for NFS 4.1 began with version 2.6.31, experimental support was released in version 2.6.36, and finally, official support was provided in version 3.7. For specific information, refer to the link. Therefore, users should use a kernel version of 3.7 or higher whenever possible. In general, newer stable version kernels can provide better performance and user experience.

The kernel versions shown by some operating systems might not exactly match the official Linux kernel versions. For example, CentOS/RHEL distributions apply their own bug fixes, modifications, and porting efforts, often resulting in code that is ahead of the official kernel for the same version. Therefore, the actual conditions of each operating system should be verified through official documentation and source code.

For common operating systems, our recommended versions are as follows:

  • CentOS/RHEL: Version 6.6 or higher is recommended. Refer to the link for more detailed NFS protocol bug information on this OS;
  • Ubuntu: Version 14.04 or higher is recommended;
  • Debian: Version 8 or higher is recommended;
  • OpenSUSE: It is recommended to use version 42.3 or higher.

Unsupported features

The attributes unsupported by CFS include: FATTR4_ACL, FATTR4_ACLSUPPORT, FATTR4_ARCHIVE, FATTR4_HIDDEN, FATTR4_MIMETYPE, FATTR4_QUOTA_AVAIL_HARD, FATTR4_QUOTA_AVAIL_SOFT, FATTR4_QUOTA_USED, FATTR4_SYSTEM, FATTR4_TIME_BACKUP, FATTR4_TIME_CREATE, FATTR4_DIR_NOTIF_DELAY, FATTR4_DIRENT_NOTIF_DELAY, FATTR4_DACL, FATTR4_SACL, FATTR4_CHANGE_POLICY, FATTR4_FS_STATUS, FATTR4_LAYOUT_HINT, FATTR4_LAYOUT_ALIGNMENT, FATTR4_FS_LOCATIONS_INFO, FATTR4_MDSTHRESHOLD, FATTR4_RETENTION_GET, FATTR4_RETENTION_SET, FATTR4_RETENTEVT_GET, FATTR4_RETENTEVT_SET, FATTR4_RETENTION_HOLD, FATTR4_MODE_SET_MASKED, and FATTR4_FS_CHARSET_CAP.

File locks supported by CFS are non-blocking and advisory. Locking an entire file or a partial byte range is supported; other types of locks are not currently supported. This means attention must be paid when using the three common locking APIs (see the sketch after this list):

  • For flock, the LOCK_NB flag must be specified together with the lock type;
  • For lockf, the F_LOCK operation is not supported;
  • For fcntl, the F_SETLKW command is not supported.
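
For illustration, a minimal sketch of non-blocking locking with fcntl, using F_SETLK (rather than the unsupported F_SETLKW) and retrying on contention; the retry interval is an arbitrary placeholder and should be tuned to your workload.

C++
#include <errno.h>
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

// Try to take an exclusive lock on the whole file without blocking.
// Returns 0 on success, -1 on an unexpected error.
int lock_whole_file(int fd) {
    struct flock fl;
    memset(&fl, 0, sizeof(fl));
    fl.l_type = F_WRLCK;     // exclusive (write) lock
    fl.l_whence = SEEK_SET;
    fl.l_start = 0;
    fl.l_len = 0;            // 0 means "until end of file", i.e. the whole file
    for (;;) {
        if (fcntl(fd, F_SETLK, &fl) == 0) {   // F_SETLK never blocks
            return 0;
        }
        if (errno == EACCES || errno == EAGAIN) {
            usleep(1000);    // lock held by another client/process, retry shortly
            continue;
        }
        if (errno == EINTR) {
            continue;        // interrupted by a signal, retry
        }
        return -1;           // unexpected error
    }
}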

CFS does not support delegation, pNFS, Access Control Lists (ACL), or Kerberos security features.

For performance reasons, CFS does not update the time_access attribute of files during read-only operations.

CFS does not persist information such as sessions or replay caches. In the event of a failover, the client will re-establish the session. This approach complies with the provisions of RFC 5661 and is generally handled automatically by the client.

How to resolve the “file does not exist” error after creating a file in an NFS protocol file system?

Problem phenomenon:

Phenomenon I: BCC-1 creates a file named file1, but BCC-2 only observes file1 after a delay—sometimes 1 second, and other times up to 1 minute.

Phenomenon II: BCC-1 creates file1, and after some time, BCC-2 opens file1. Then, BCC-1 deletes file1 and recreates a new file with the same name (file1). When BCC-2 tries to access the file again, it encounters a “file does not exist” error.

Root cause: Both issues arise due to the lookup cache.

For phenomenon I: BCC-2 attempted to access file1 before BCC-1 created it, resulting in a "file does not exist" error. BCC-2 then cached this outcome, marking file1 as non-existent. Since the FileAttr cache had not expired, BCC-2 continued to rely on this outdated cached record (negative lookup cache) when it tried to access file1 again.

For phenomenon II: BCC-2 accessed file1 and cached its inode record (positive lookup cache). Later, when BCC-1 deleted and re-created file1 with the same name, BCC-2 continued to rely on the cached inode record associated with that path when communicating with the server, so its requests referenced the old, already-deleted file and returned a “file does not exist” error.

Solution:

Solution I: To address phenomenon I, disable the negative lookup cache on BCC-2 to prevent caching of non-existent files. When mounting, specify the field lookupcache=positive (default value is lookupcache=all). The mount command is as follows:

Shell
mount -t nfs4 -o minorversion=1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport,lookupcache=positive <Domain name of a mount target>:/ <client local path>

Description:

  • <Domain name of a mount target>: can be found in the file system’s mount target list (see Obtain Domain Name of a Mount Target);
  • Path of the CFS file system: by default, it is the root directory of CFS (i.e., /). You can modify it to an existing subdirectory (e.g., /dir0);
  • <client local path>: the client’s local path used for mounting. It must be an absolute path beginning with / (e.g., /mnt/cfs) and must already exist prior to mounting.

Solution II: Applicable to all lookup cache-related issues.

Disable the lookup cache entirely on BCC-2—note that this approach can significantly degrade performance. Choose the solution based on your actual business requirements. When mounting, specify the field lookupcache=none. The mount command is as follows:

Shell
mount -t nfs4 -o minorversion=1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport,lookupcache=none <Domain name of a mount target>:/ <client local path>

You also have the option to disable all caches, but this approach will considerably worsen performance. To do so, include the actimeo=0 parameter when mounting. The mount command is as follows:

Shell
mount -t nfs4 -o minorversion=1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport,actimeo=0 <Domain name of a mount target>:/ <client local path>

Description

  • The lookup cache has three options: all, none, and positive, with the following meanings:

    • all: Both positive and negative caching are enabled (caches records of existing and non-existent files/directories). This is the default option
    • none: The lookup cache is completely disabled
    • positive: Only caches records of existing files/directories

The lookup cache clears cached data based on the modification time (mtime) of files or directories. Since these files or directories also maintain attribute caches, the cache expiry time depends on parameters such as actimeo (including acregmin, acregmax, acdirmin, acdirmax) and noac.

How to resolve data writing delay in an NFS protocol file system?

Problem phenomenon: BCC-1 updates the file file1, but when BCC-2 reads it immediately, it still gets the old content. Why?

Root cause: Two factors are responsible.

Reason I: After BCC-1 writes to file1, it does not flush the data immediately. Instead, the data is first stored in the PageCache, and flushing depends on the application layer calling fsync or close.

Reason II: BCC-2 has a file cache and may not retrieve the latest content from the server immediately. For example, if BCC-2 cached the data of file1 while BCC-1 was updating it, BCC-2 will still use the cached content when reading again.

Solutions: To ensure that BCC-2 reads the latest content immediately after BCC-1 writes it, the following solutions can be used:

Solution I: Use CTO (close-to-open) consistency. Ensure that read/write operations on BCC-1 and BCC-2 follow the CTO pattern, which guarantees that BCC-2 can access the most recent data. Specifically, after BCC-1 modifies a file, it must call close or fsync; before BCC-2 reads the file, it must re-open it and then perform the read operation.
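
A minimal sketch of the CTO pattern (error handling simplified; the path is any file under your CFS mount point): the writer flushes via close, and the reader re-opens the file before each read so the client revalidates its cache against the server.

C++
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

// Writer side (e.g. BCC-1): write, then close (or fsync), so dirty data
// is pushed to the server before any reader opens the file.
void writer_update(const char* path, const char* data) {
    int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0) return;
    write(fd, data, strlen(data));
    close(fd);   // close-to-open: close flushes the data to the server
}

// Reader side (e.g. BCC-2): re-open before reading, so cached attributes
// are revalidated and the latest content is fetched.
ssize_t reader_read(const char* path, char* buf, size_t len) {
    int fd = open(path, O_RDONLY);   // open triggers cache revalidation
    if (fd < 0) return -1;
    ssize_t n = read(fd, buf, len);
    close(fd);
    return n;
}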

Solution II: Disable all caches on BCC-1 and BCC-2. Note that this approach will significantly impact performance, so choose the most suitable solution based on your business requirements.

  • Disable the cache on BCC-1. To do this, include the noac parameter when mounting, which ensures that all writes are immediately flushed to the server. The mount command is as follows:
Shell
mount -t nfs4 -o minorversion=1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport,noac <Domain name of a mount target>:/ <client local path>

Description

  • If BCC-1 executes fsync after completing write operations or uses synchronous writes (sync), you can replace noac in the above command with actimeo=0. This will slightly improve performance.
  • noac is equivalent to actimeo=0 combined with sync, meaning it forces all writes to be synchronous.

Here, actimeo is the cache validity period (in seconds) for files and directories, including:

  • acregmin: Minimum validity period for file attributes (default: 3 seconds)
  • acregmax: Maximum validity period for file attributes (default: 60 seconds)
  • acdirmin: Minimum validity period for directory attributes (default: 30 seconds)
  • acdirmax: Maximum validity period for directory attributes (default: 60 seconds)

Parameters such as acregmin, acregmax, acdirmin, and acdirmax can be configured individually. However, if actimeo is set, the values of these four parameters will match that of actimeo.

  • Disable the cache on BCC-2. To achieve this, include the actimeo=0 parameter during mounting, which effectively bypasses all caching. The mount command is as follows:
Shell
mount -t nfs4 -o minorversion=1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport,actimeo=0 <Domain name of a mount target>:/ <client local path>

During mounting, “mount: wrong fs type, bad option, bad superblock”

In most operating systems, the NFS client is not installed by default, requiring manual installation.

  • If you are using CentOS/RHEL, run the following command:
Shell
yum install nfs-utils
  • If you are using the Ubuntu/Debian OS, run the following command:
Shell
apt-get install nfs-common

The above commands all need to be executed with root permissions.

Missing, duplicate, or delayed updates in folder traversal lists

When one client traverses a folder using programming APIs like readdir/getdents/getdents64 or the ls command, concurrent create, delete, or rename operations by another client in the same folder may result in the displayed list containing missing or duplicate items. This occurs due to interference from concurrent operations. The root cause is that a complete traversal comprises multiple consecutive fetch operations, during which the directory might change.

Typically, clients use directory entry and attribute caches to improve performance. However, caching also means that modifications made by one client may not be visible to other clients immediately, which leads to delayed updates. Users can disable caching with the noac mount option, but this significantly increases the number of OP_GETATTR remote calls and causes a substantial performance penalty, so it is not recommended.
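
For reference, a minimal traversal sketch: the loop below is served by multiple underlying READDIR/getdents fetches, so entries created, deleted, or renamed by another client between fetches may appear missing or duplicated in the final listing.

C++
#include <dirent.h>
#include <stdio.h>

// List a directory with readdir. The listing is not an atomic snapshot:
// each readdir() call may trigger a new batch fetch from the server, and
// concurrent modifications between batches can cause missing or duplicate
// entries in the output.
void list_directory(const char* path) {
    DIR* dir = opendir(path);
    if (dir == NULL) return;
    struct dirent* entry;
    while ((entry = readdir(dir)) != NULL) {
        printf("%s\n", entry->d_name);
    }
    closedir(dir);
}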

Interference between client requests with the same hostname and Intranet IP

If two clients access the same file system and share the same hostname and Intranet IP address, CFS cannot differentiate between them, causing their requests to interfere with each other. This issue is common in container environments. When containers on the same machine share the same Intranet IP and hostname, they will be treated as the same client.

Certain older Linux kernels truncate hostnames when encoding the NFS protocol client identifier, limiting it to the initial portion of the hostname. To mitigate this, ensure the distinguishing characters of the hostname appear as early as possible. Based on our experience, it is safer to keep these characters within the first 32 positions.

In summary, avoid situations where clients share the same hostname and Intranet IP.

Client fails to connect to the mount target, kernel log shows “nfs: server XXX not responding, still trying”

In the kernel log, XXX represents the mount target address, which is referred to as <mount target address> below. You can troubleshoot the issue with the following steps:

  • Use the command ping <mount target address> to check if the address is reachable. If it’s not, verify whether the corresponding mount target still exists in the console.
  • If the ping test indicates no issues, use the command telnet <mount target address> 2049 to check connectivity with the NFS protocol port of the backend service. If it’s unreachable, examine the security group policy to ensure the port is not blocked.
  • If both previous steps show no problems, verify if the noresvport parameter is included in the mount command. If it is missing, try remounting with this parameter added.

If the issue continues, you can submit a ticket detailing the problem to allow engineers to troubleshoot further.

Files named like “.nfsXXXX” found in a directory

These files are created through the so-called “silly rename” process.

On UNIX-like operating systems, when a file is deleted while still open, it becomes invisible, but processes can still access it using the file descriptor. The file is only permanently deleted once no processes are using it. In earlier NFS versions (NFS 3 and before), there was no guarantee the server would retain the file after deletion. To address this, the kernel employed a workaround: when a file was deleted, it was renamed to a hidden file like “.nfsXXXX” that would later be removed when no longer in use. This mechanism, known as “silly rename,” could result in leftover “.nfsXXXX” files if a client crashed or there was a network issue, leading to accumulation.
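
A minimal sketch of the pattern that triggers silly rename (the path is any file under your CFS mount point): the file is unlinked while a descriptor is still open, so the NFS client renames it to a “.nfsXXXX” entry instead of removing it, and the entry disappears once the last descriptor is closed.

C++
#include <fcntl.h>
#include <unistd.h>

// Open a file, delete it while the descriptor is still open, and keep
// reading through the descriptor. On an NFS mount the client performs a
// "silly rename": the directory entry becomes a ".nfsXXXX" file until the
// last open descriptor is closed, after which it is removed.
void demonstrate_silly_rename(const char* path) {
    int fd = open(path, O_RDONLY);
    if (fd < 0) return;
    unlink(path);              // ".nfsXXXX" appears in the directory here
    char buf[4096];
    while (read(fd, buf, sizeof(buf)) > 0) {
        // the open descriptor still sees the original file contents
    }
    close(fd);                 // the leftover ".nfsXXXX" file is cleaned up
}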

The NFS 4.X protocol already resolves this issue, but the kernel still uses the “silly rename” method to maintain compatibility with older versions.

You can filter out these files when viewing folder contents. Once you confirm no processes are using them, you can safely delete them.

Low client throughput or QPS

Low actual client throughput may be caused by the current session’s slot setting. After mounting a CFS instance, you can confirm this by following these steps:

  1. Install the systool tool. For CentOS/RHEL, install it using the following command:
Shell
yum install sysfsutils

For Ubuntu/Debian, install it using the following command:

Shell
apt-get install sysfsutils
  2. Check the current slot value. Ensure that the CFS instance is mounted successfully before execution:
Shell
systool -v -m nfs

The max_session_slots value in the output corresponds to the current session’s concurrency level. If this value is low, it can be increased up to 128.

  3. Modify the NFS client module parameter:
Shell
echo "options nfs max_session_slots=128" > /etc/modprobe.d/nfsclient.conf
  4. Reboot the system.
  5. After restarting, use systool again to verify whether the change has been applied:
Shell
systool -v -m nfs

Data overwriting when multiple clients perform concurrent append writes to the same file

The NFS protocol does not support atomic append writes. To mimic the append behavior of a local file system, you must first lock the file, seek to the end of the file, and then write the data. Here is an example code snippet:

C++
#include <errno.h>
#include <stdlib.h>
#include <sys/file.h>
#include <unistd.h>

// Acquire an exclusive, non-blocking advisory lock on the file, retrying
// if another client or process currently holds the lock.
int lock_file(int fd) {
    for (;;) {
        int rc = flock(fd, LOCK_EX | LOCK_NB);
        if (rc == 0) {
            return 0;
        }
        if (errno == EINTR) {
            continue; // interrupted by a signal, retry
        } else if (errno == EWOULDBLOCK) {
            usleep((rand() % 10) * 100); // lock held elsewhere, sleep a while and retry
            continue;
        } else {
            break;
        }
    }
    return -1; // lock failed
}

int unlock_file(int fd) {
    for (;;) {
        int rc = flock(fd, LOCK_UN);
        if (rc == 0) {
            return 0;
        }
        if (errno == EINTR) {
            continue; // interrupted by a signal, retry
        } else {
            break;
        }
    }
    return -1; // unlock failed
}

// Append "size" bytes from "data" to the end of the file under the lock,
// so that concurrent appends from multiple clients do not overwrite each other.
int append_write(int fd, const char* data, ssize_t size) {
    int rc = lock_file(fd);
    if (rc != 0) {
        return rc;
    }
    off_t offset = lseek(fd, 0, SEEK_END); // locate the current end of file while holding the lock
    if (offset < 0) {
        unlock_file(fd);
        return -1;
    }
    while (size > 0) {
        ssize_t nwritten = pwrite(fd, data, size, offset);
        if (nwritten >= 0) {
            data += nwritten;
            size -= nwritten;
            offset += nwritten;
        } else if (errno == EINTR) {
            continue;
        } else {
            unlock_file(fd);
            return -1;
        }
    }
    return unlock_file(fd);
}
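
For illustration, a usage sketch building on the functions above; the file path “/mnt/cfs/app.log” is a hypothetical location under the CFS mount point. Each client opens the shared file and appends through append_write, so records from different clients are not overwritten.

C++
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

int main() {
    // "/mnt/cfs/app.log" is a hypothetical path under the CFS mount point.
    int fd = open("/mnt/cfs/app.log", O_WRONLY | O_CREAT, 0644);
    if (fd < 0) {
        return 1;
    }
    const char* line = "one record appended under the file lock\n";
    int rc = append_write(fd, line, (ssize_t)strlen(line));
    close(fd);
    return rc == 0 ? 0 : 1;
}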
