Exporting a GlusterFS volume as an NFS share provides broad client compatibility for systems that already speak NFS, without requiring native GlusterFS mounts on every client. This approach is useful for mixed environments where distributed storage is needed but workloads or platforms expect an NFS endpoint.

In current GlusterFS deployments, the legacy built-in Gluster-NFS service is deprecated, and NFS-Ganesha is the preferred NFS server. NFS-Ganesha runs in userspace and can export a GlusterFS volume directly using the GLUSTER FSAL (File System Abstraction Layer), mapping the volume to an NFSv4 pseudo path that clients mount.

Commands target a RHEL-based Linux host using systemd and firewalld, with an NFSv4-only export to keep firewall requirements predictable. Production exports should restrict client networks, keep Root_Squash enabled unless a workload explicitly requires otherwise, and apply the same export configuration consistently on every node intended to serve NFS traffic.
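
The steps below assume an existing, started GlusterFS volume named volume1 on a trusted pool that includes the host node1 (both names are reused as placeholders throughout the examples). A quick sanity check before exporting might look like this:

    $ sudo gluster volume status volume1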

Steps to export a GlusterFS volume as an NFS share:

  1. Install NFS-Ganesha and the GlusterFS FSAL module on the export node.
    $ sudo dnf install --assumeyes nfs-ganesha nfs-ganesha-gluster
    Last metadata expiration check: 0:12:34 ago on Thu 25 Dec 2025 10:00:00 UTC.
    Dependencies resolved.
    ================================================================================
     Package                 Arch     Version                      Repository   Size
    ================================================================================
    Installing:
     nfs-ganesha             x86_64   5.9-1.el9                    appstream   1.2 M
     nfs-ganesha-gluster     x86_64   5.9-1.el9                    appstream    90 k
    ##### snipped #####
    Complete!

    Debian and Ubuntu use apt for package installation, and the GLUSTER FSAL may be packaged separately depending on the distribution.
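
    On those systems a roughly equivalent installation would be the following (package names are assumed to match the upstream naming and should be verified against the distribution's repositories):
    $ sudo apt install --assume-yes nfs-ganesha nfs-ganesha-gluster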

  2. Enable the nfs-ganesha service with immediate start.
    $ sudo systemctl enable --now nfs-ganesha
    Created symlink /etc/systemd/system/multi-user.target.wants/nfs-ganesha.service → /usr/lib/systemd/system/nfs-ganesha.service.
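
    To confirm the unit is also set to start at boot, systemd can report its enablement state directly:
    $ sudo systemctl is-enabled nfs-ganesha
    enabled
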
  3. Add an EXPORT block for the GlusterFS volume to /etc/ganesha/ganesha.conf.
    $ sudo vi /etc/ganesha/ganesha.conf
    
    EXPORT {
        Export_Id = 1;
        Path = "/";
        Pseudo = "/volume1";
        Protocols = 4;
        Transports = TCP;
        Access_Type = RW;
        Squash = Root_Squash;
    
        FSAL {
            Name = GLUSTER;
            hostname = "node1";
            Volume = "volume1";
        }
    
        CLIENT {
            Clients = 192.0.2.0/24;
            Access_Type = RW;
        }
    }

    Export_Id must be unique per NFS-Ganesha instance, and Pseudo is the NFSv4 export path mounted by clients.

    Changing Squash to No_Root_Squash lets the client's root user retain root privileges on the export, including control over server-side file ownership, and should be avoided unless explicitly required.
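
    As a sketch of the uniqueness requirement, exporting a second volume from the same server would need its own block with a different Export_Id and Pseudo path (volume2 is purely illustrative and not part of this setup):

    EXPORT {
        Export_Id = 2;
        Path = "/";
        Pseudo = "/volume2";
        Protocols = 4;
        Transports = TCP;
        Access_Type = RW;
        Squash = Root_Squash;

        FSAL {
            Name = GLUSTER;
            hostname = "node1";
            Volume = "volume2";
        }
    }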

  4. Restart the nfs-ganesha service to load the updated configuration.
    $ sudo systemctl restart nfs-ganesha
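
    If the service fails to come back up after a configuration change, parse errors are usually visible in the unit's journal (assuming systemd-journald, as on most RHEL-based hosts):
    $ sudo journalctl --unit nfs-ganesha --no-pager --lines 50
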
  5. Confirm the nfs-ganesha service is active after the restart.
    $ sudo systemctl status nfs-ganesha --no-pager
    ● nfs-ganesha.service - NFS-Ganesha file server
         Loaded: loaded (/usr/lib/systemd/system/nfs-ganesha.service; enabled; vendor preset: disabled)
         Active: active (running) since Thu 25 Dec 2025 10:05:12 UTC; 3s ago
    ##### snipped #####
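
    As an additional check, the server should now be listening on the NFSv4 port; the listing from ss (part of iproute2) should include a LISTEN socket on 2049 owned by the ganesha.nfsd process:
    $ sudo ss -tlnp | grep 2049
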
  6. Allow inbound NFSv4 traffic on TCP port 2049 in firewalld.
    $ sudo firewall-cmd --zone=public --permanent --add-port=2049/tcp
    success
    $ sudo firewall-cmd --reload
    success

    If exporting NFSv3 (Protocols = 3), additional RPC services such as rpcbind and mountd must also be reachable, so opening only 2049/tcp is not sufficient.
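
    In that case a rough firewalld allowance would also open the predefined rpc-bind and mountd services; further ports may be needed depending on how lock-manager and status-monitor ports are configured:
    $ sudo firewall-cmd --zone=public --permanent --add-service=rpc-bind
    $ sudo firewall-cmd --zone=public --permanent --add-service=mountd
    $ sudo firewall-cmd --reload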

  7. Create a mount point directory on the client.
    $ sudo mkdir -p /mnt/volume1
  8. Mount the NFSv4 export from the server on the client.
    $ sudo mount -t nfs4 node1:/volume1 /mnt/volume1
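
    To make the mount persistent across reboots, an equivalent /etc/fstab entry might look like the line below; _netdev defers mounting until networking is available:
    node1:/volume1  /mnt/volume1  nfs4  defaults,_netdev  0  0
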
  9. Verify the mounted filesystem reports the expected server and export path.
    $ df -h /mnt/volume1
    Filesystem       Size  Used Avail Use% Mounted on
    node1:/volume1   2.0T  120G  1.9T   6% /mnt/volume1
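
    As a final check from the client, creating a scratch file (write-test is an arbitrary name) confirms the export is writable; with Root_Squash in effect, a file created by the client's root user should appear owned by the anonymous UID (typically nobody) on the server:
    $ sudo touch /mnt/volume1/write-test
    $ ls -l /mnt/volume1/write-test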