Network File System (NFS) is a distributed file system protocol that allows you to share remote directories over a network. With NFS, you can mount remote directories on your system and work with the files on the remote machine as if they were local files.
The NFS protocol is not encrypted by default, and unlike Samba, it does not provide user authentication. Access to the server is restricted by the clients’ IP addresses or hostnames.
In this tutorial, you’ll go through the steps necessary to set up an NFSv4 Server on CentOS 8. We’ll also show you how to mount an NFS file system on the client.
Prerequisites #
We’re assuming that you have a server running CentOS 8 on which we will set up the NFS server, and other machines that will act as NFS clients. The server and the clients should be able to communicate with each other over a private network. If your hosting provider doesn’t offer private IP addresses, you can use the public IP addresses and configure the server firewall to allow traffic on port 2049 only from trusted sources.
The machines in this example have the following IPs:
NFS Server IP: 192.168.33.148
NFS Clients IPs: From the 192.168.33.0/24 range
Set Up the NFS Server #
This section explains how to install the necessary packages, create and export the NFS directories, and configure the firewall.
Installing the NFS server #
The “nfs-utils” package provides the NFS utilities and daemons for the NFS server. To install it, run the following command:
sudo dnf install nfs-utils
Once the installation is complete, enable and start the NFS service by typing:
sudo systemctl enable --now nfs-server
By default, on CentOS 8 NFS versions 3 and 4.x are enabled, and version 2 is disabled. NFSv2 is pretty old now, and there is no reason to enable it. To verify this, run the following cat command:
sudo cat /proc/fs/nfsd/versions
-2 +3 +4 +4.1 +4.2
NFS server configuration options are set in the /etc/nfsmount.conf and /etc/nfs.conf files. The default settings are sufficient for our tutorial.
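If, for example, you wanted the server to accept only NFSv4 clients, you could disable version 3 in the [nfsd] section of /etc/nfs.conf and restart the service. This is only a sketch of the option; we’ll keep the defaults in this tutorial:
/etc/nfs.conf
[nfsd]
vers3=n
sudo systemctl restart nfs-server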
Creating the file systems #
When configuring an NFSv4 server, it is good practice to use a global NFS root directory and bind mount the actual directories to the share mount points. In this example, we will use the /srv/nfs4 directory as the NFS root.
To better explain how NFS mounts can be configured, we’re going to share two directories (/var/www and /opt/backups) with different configuration settings. The /var/www directory is owned by the user and group apache, and /opt/backups is owned by root.
Create the export filesystem using the mkdir command:
sudo mkdir -p /srv/nfs4/{backups,www}
Mount the actual directories:
sudo mount --bind /opt/backups /srv/nfs4/backups
sudo mount --bind /var/www /srv/nfs4/www
To make the bind mounts permanent, add the following entries to the /etc/fstab file:
sudo nano /etc/fstab
/etc/fstab
/opt/backups /srv/nfs4/backups none bind 0 0
/var/www /srv/nfs4/www none bind 0 0
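You can confirm that the bind mounts are in place using the findmnt command, which should show /var/www and /opt/backups as the sources of the share mount points:
findmnt /srv/nfs4/www
findmnt /srv/nfs4/backups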
Exporting the file systems #
The next step is to define the file systems that will be exported by the NFS server, the share options, and the clients that are allowed to access those file systems. To do so, open the /etc/exports file:
sudo nano /etc/exports
Export the www and backups directories and allow access only from clients on the 192.168.33.0/24 network:
/etc/exports
/srv/nfs4 192.168.33.0/24(rw,sync,no_subtree_check,crossmnt,fsid=0)
/srv/nfs4/backups 192.168.33.0/24(ro,sync,no_subtree_check) 192.168.33.3(rw,sync,no_subtree_check)
/srv/nfs4/www 192.168.33.110(rw,sync,no_subtree_check)
The first line contains fsid=0, which defines the NFS root directory /srv/nfs4. Access to this NFS volume is allowed only to clients from the 192.168.33.0/24 subnet. The crossmnt option is required to share directories that are sub-directories of an exported directory.
The second line shows how to specify multiple export rules for one filesystem. It exports the /srv/nfs4/backups directory and allows read-only access to the whole 192.168.33.0/24 range, and both read and write access to 192.168.33.3. The sync option tells NFS to write changes to disk before replying.
The last line should be self-explanatory. For more information about all the available options, type man exports in your terminal.
Save the file and export the shares:
sudo exportfs -ra
You need to run the command above each time you modify the /etc/exports file. If there are any errors or warnings, they will be shown on the terminal.
To view the current active exports and their state, use:
sudo exportfs -v
The output will include all shares with their options. As you can see, there are also options that we haven’t defined in the /etc/exports file. Those are default options, and if you want to change them, you’ll need to set them explicitly.
/srv/nfs4/backups
192.168.33.3(sync,wdelay,hide,no_subtree_check,sec=sys,rw,secure,root_squash,no_all_squash)
/srv/nfs4/www 192.168.33.110(sync,wdelay,hide,no_subtree_check,sec=sys,rw,secure,root_squash,no_all_squash)
/srv/nfs4 192.168.33.0/24(sync,wdelay,hide,crossmnt,no_subtree_check,fsid=0,sec=sys,rw,secure,root_squash,no_all_squash)
/srv/nfs4/backups
192.168.33.0/24(sync,wdelay,hide,no_subtree_check,sec=sys,ro,secure,root_squash,no_all_squash)
root_squash is one of the most important options concerning NFS security. It prevents root users connected from the clients from having root privileges on the mounted shares. It maps the root UID and GID to the nobody/nogroup UID/GID.
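If a trusted client genuinely needs root access on a share, this behavior can be overridden with the no_root_squash export option. The line below is only a sketch; use it with caution, as it weakens security:
/etc/exports
/srv/nfs4/backups 192.168.33.3(rw,sync,no_subtree_check,no_root_squash)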
For the users on the client machines to have access, NFS expects the client’s user and group IDs to match those on the server. Another option is to use the NFSv4 idmapping feature, which translates user and group IDs to names and the other way around.
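If you rely on idmapping, the NFS domain in /etc/idmapd.conf must match on the server and the clients. A minimal sketch, assuming a hypothetical domain name example.local:
/etc/idmapd.conf
[General]
Domain = example.local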
That’s it. At this point, you have set up an NFS server on your CentOS server. You can now move to the next step and configure the clients and connect to the NFS server.
Firewall configuration #
FirewallD is the default firewall solution on CentOS 8.
FirewallD includes a predefined nfs service with the rules needed to allow access to the NFS server.
The following commands will permanently allow access from the 192.168.33.0/24 subnet:
sudo firewall-cmd --new-zone=nfs --permanent
sudo firewall-cmd --zone=nfs --add-service=nfs --permanent
sudo firewall-cmd --zone=nfs --add-source=192.168.33.0/24 --permanent
sudo firewall-cmd --reload
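To verify that the zone is configured as expected, list its settings:
sudo firewall-cmd --zone=nfs --list-all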
Set Up the NFS Clients #
Now that the NFS server is set up and the shares are exported, the next step is to configure the clients and mount the remote file systems.
You can also mount the NFS share on macOS and Windows machines, but we will focus on Linux systems.
Installing the NFS client #
On the client machines, install the tools required to mount remote NFS file systems.
- Install NFS client on Debian and Ubuntu
  The name of the package that includes programs for mounting NFS file systems on Debian-based distributions is nfs-common. To install it, run:
  sudo apt update
  sudo apt install nfs-common
- Install NFS client on CentOS and Fedora
  On Red Hat and its derivatives, install the nfs-utils package:
  sudo yum install nfs-utils
Mounting file systems #
We’ll work on the client machine with IP 192.168.33.110, which has read and write access to the /srv/nfs4/www file system and read-only access to the /srv/nfs4/backups file system.
Create two new directories for the mount points. You can create these directories at any location you want.
sudo mkdir -p /backups
sudo mkdir -p /srv/www
Mount the exported file systems with the mount command:
sudo mount -t nfs -o vers=4 192.168.33.148:/backups /backups
sudo mount -t nfs -o vers=4 192.168.33.148:/www /srv/www
Where 192.168.33.148 is the IP of the NFS server. You can also use the hostname instead of the IP address, but it needs to be resolvable by the client machine. This is usually done by mapping the hostname to the IP in the /etc/hosts file.
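For example, assuming a hypothetical hostname nfs-server, the /etc/hosts entry on the client would look like this:
/etc/hosts
192.168.33.148 nfs-server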
When mounting an NFSv4 filesystem, you need to omit the NFS root directory, so instead of /srv/nfs4/backups you need to use /backups.
Verify that the remote file systems are mounted successfully using either the mount or the df command:
df -h
The command will print all mounted file systems. The last two lines are the mounted shares:
...
192.168.33.148:/backups 9.7G 1.2G 8.5G 13% /backups
192.168.33.148:/www 9.7G 1.2G 8.5G 13% /srv/www
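You can also filter the output of the mount command by file system type:
mount -t nfs4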
To make the mounts permanent on reboot, open the /etc/fstab file:
sudo nano /etc/fstab
and add the following lines:
/etc/fstab
192.168.33.148:/backups /backups nfs defaults,timeo=900,retrans=5,_netdev 0 0
192.168.33.148:/www /srv/www nfs defaults,timeo=900,retrans=5,_netdev 0 0
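To check the new entries without rebooting, you can unmount the shares and let mount read them back from /etc/fstab:
sudo umount /backups /srv/www
sudo mount -a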
To find more information about the available options when mounting an NFS file system, type man nfs in your terminal.
Another option to mount the remote file systems is to use either the autofs tool or to create a systemd unit.
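As a rough sketch of the systemd approach, a mount unit named after the mount point (for example /etc/systemd/system/srv-www.mount for /srv/www) could look like this:
[Unit]
Description=Mount the NFS www share

[Mount]
What=192.168.33.148:/www
Where=/srv/www
Type=nfs
Options=vers=4,_netdev

[Install]
WantedBy=multi-user.target
The unit is then enabled and started with systemctl enable --now srv-www.mount.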
Testing NFS Access #
Let’s test the access to the shares by creating a new file in each of them.
First, try to create a test file in the /backups directory using the touch command:
sudo touch /backups/test.txt
The /backups file system is exported as read-only, and as expected you will see a Permission denied error message:
touch: cannot touch ‘/backups/test.txt’: Permission denied
Next, try to create a test file in the /srv/www directory as root using the sudo command:
sudo touch /srv/www/test.txt
Again, you will see a Permission denied message:
touch: cannot touch ‘/srv/www/test.txt’: Permission denied
The /var/www directory is owned by the apache user, and this share has the root_squash option set, which maps the root user to the nobody user and nogroup group, which do not have write permissions to the remote share.
Assuming that a user apache exists on the client machine with the same UID and GID as on the remote server (which should be the case if, for example, you installed apache on both machines), you can test creating a file as the apache user with:
sudo -u apache touch /srv/www/test.txt
The command will show no output, which means the file was successfully created.
To verify it, list the files in the /srv/www directory:
ls -la /srv/www
The output should show the newly created file:
drwxr-xr-x 3 apache apache 4096 Jun 23 22:18 .
drwxr-xr-x 3 root root 4096 Jun 23 22:29 ..
-rw-r--r-- 1 apache apache 0 Jun 23 21:58 index.html
-rw-r--r-- 1 apache apache 0 Jun 23 22:18 test.txt
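If the write fails instead, a likely cause is that the apache UID and GID differ between the two machines. You can compare them by running the id command on both the client and the server:
id apache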
Unmounting NFS File System #
If you no longer need the remote NFS share, you can unmount it as you would any other mounted file system, using the umount command. For example, to unmount the /backups share you would run:
sudo umount /backups
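If umount complains that the target is busy, you can see which processes are still using the mount point with the fuser command:
sudo fuser -vm /backups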
If the mount point is defined in the /etc/fstab file, make sure you remove the line or comment it out by adding # at the beginning of the line.
Conclusion #
In this tutorial, we have shown you how to set up an NFS server and how to mount the remote file systems on the client machines. If you’re implementing NFS in production and sharing sensitive data, it is a good idea to enable Kerberos authentication.
As an alternative to NFS, you can use SSHFS to mount remote directories over an SSH connection. SSHFS is encrypted by default and much easier to configure and use.
Feel free to leave a comment if you have any questions.