
NFS : Server & Client Set Up

Disclaimer

There are many options available when exporting and mounting NFS partitions, as well as when custom-configuring the firewall with IPTABLES. Please read through the man pages to see what fits best. The instructions below are what I used to set up NFS within my internal network, and they may very well work for you. However, please note that you are using these instructions at your very own risk, and this website, sgowtham.com, is not responsible for any/all damage caused to your property, intellectual or otherwise.



It is not uncommon to find people (or organizations) who have multiple computers at their disposal, and more often than not, these people (or organizations) find themselves in the following situation:

One of these machines, often pretty powerful, contains data that need to be accessed from one or more of the other machines.

As is the case with most problems, there exists more than one way to solve this issue. This article discusses, in step-by-step fashion, one such possible approach – Network File System (abbreviated as NFS) – as applicable to Red Hat Enterprise Linux distributions.


What is NFS?

According to Wikipedia, NFS is a network file system protocol originally developed by Sun Microsystems in 1984, allowing a user on a client computer to access files over a network much as if they were on its local disks. NFS, like many other protocols, builds on the Open Network Computing Remote Procedure Call (ONC RPC) system. The protocol is specified in RFC 1094, RFC 1813, and RFC 3530.


The Server Part

  1. Let us assume that the IP address of the server is 192.168.1.2
  2. Login as root
  3. Decide on two things:
    1. What file systems should be made available to clients? Let us assume that /usr/local (as read only) and /home (as read/write) partitions need to be exported.
    2. Which machines/clients (IP range or specific hostnames) should be allowed to access the exported file systems? Let us assume that all machines in the internal network – identified by IP addresses 192.168.1.xxx/255.255.255.0 – should have access to the exported partitions.
  4. Once the above is determined, this information needs to be put in a file that NFS will look up and do the needful. Add the following to /etc/exports:
    # /etc/exports
    /usr/local      192.168.1.0/24(ro,sync)
    /home          192.168.1.0/24(rw,sync)
  5. Save and close the file, then run the following command:
    exportfs -rva
  6. Assuming that a full/complete/maximum installation of the Linux distribution was done, start the NFS service:
    /etc/init.d/nfs start
  7. If you plan on keeping this service active over reboots, then:
    chkconfig --level 345 nfs on
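Before running `exportfs -rva`, it can help to confirm that the entries actually made it into the file. Below is a minimal sketch – it builds and checks a throwaway copy of /etc/exports so it can be tried anywhere; on the real server, point exports_file at /etc/exports itself:

```shell
# Sanity-check the export entries before running exportfs.
# This uses a temporary copy of /etc/exports for illustration;
# substitute the real file on the server.
exports_file=$(mktemp)
cat > "$exports_file" <<'EOF'
# /etc/exports
/usr/local      192.168.1.0/24(ro,sync)
/home           192.168.1.0/24(rw,sync)
EOF

found=0
for path in /usr/local /home; do
  if grep -q "^$path[[:space:]]" "$exports_file"; then
    echo "$path: export entry found"
    found=$((found + 1))
  else
    echo "$path: missing from $exports_file"
  fi
done
rm -f "$exports_file"
```

On the server itself, `exportfs -v` (after step 5) and `showmount -e` report what is actually being exported.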


The Client Part

  1. Login as root
  2. Decide on the following:
    1. Where will the exported file systems/partitions (from the Server) be mounted? Let us assume that /usr/local will be mounted at /mnt/192_168_1_2/usr_local (as read only) and /home will be mounted at /mnt/192_168_1_2/home (as read/write).
    2. To that effect, create those mount points:
      mkdir -p /mnt/192_168_1_2/usr_local
      mkdir -p /mnt/192_168_1_2/home
  3. Once the above is done, mount the exported file systems:
    mount -t nfs 192.168.1.2:/usr/local /mnt/192_168_1_2/usr_local
    mount -t nfs 192.168.1.2:/home /mnt/192_168_1_2/home
  4. If you plan on keeping this set up active over reboots, then add the following lines to /etc/fstab:
    192.168.1.2:/usr/local  /mnt/192_168_1_2/usr_local  nfs  ro,sync,timeo=14  0 0
    192.168.1.2:/home       /mnt/192_168_1_2/home       nfs  rw,sync,timeo=14  0 0
  5. Save and close the file.
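The mkdir/mount preparation above can be sketched as a small idempotent helper that creates the mount points and reports whether each is already an active mount (`mountpoint` ships with util-linux). The script uses a temporary base directory so it can be tried anywhere; on a real client, substitute /mnt/192_168_1_2:

```shell
# Create the NFS mount points (idempotent) and report their status.
# base is a temporary directory for illustration; on a real client,
# use base=/mnt/192_168_1_2
base=$(mktemp -d)
status=""
for d in "$base/usr_local" "$base/home"; do
  mkdir -p "$d"
  if mountpoint -q "$d"; then
    echo "$d is already mounted"
    status="$status mounted"
  else
    echo "$d exists and is ready to mount"
    status="$status ready"
  fi
done
```

Running the helper twice is harmless – mkdir -p does nothing if the directories already exist, and already-mounted paths are simply reported as such.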


Troubleshooting

One of the most common problems that bugged me for a while was the following: When

mount -t nfs 192.168.1.2:/usr/local /mnt/192_168_1_2/usr_local

is executed on the client, it results in the following error:

mount: mount to NFS server '192.168.1.2' failed: System Error: No route to host.

The first thing I checked was that I was using the proper syntax and appropriate arguments/options for each command. Since the error was easily reproducible, I suspected that SELinux (Security Enhanced Linux) might be obstructing the proper functioning of NFS, so I disabled it. As root, I edited /etc/sysconfig/selinux (on the server as well as the client) and made it look like:

# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#       enforcing - SELinux security policy is enforced.
#       permissive - SELinux prints warnings instead of enforcing.
#       disabled - SELinux is fully disabled.
SELINUX=disabled
# SELINUXTYPE= type of policy in use. Possible values are:
#       targeted - Only targeted network daemons are protected.
#       strict - Full SELinux protection.
# SELINUXTYPE=targeted
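After editing the file, the currently active mode can be checked with a quick sketch like the one below (getenforce ships with the SELinux userland and may be absent on systems without SELinux, hence the fallback):

```shell
# Report the currently active SELinux mode, if the tools are present.
if command -v getenforce >/dev/null 2>&1; then
  mode=$(getenforce)    # prints Enforcing, Permissive, or Disabled
else
  mode="unavailable (getenforce not installed)"
fi
echo "SELinux mode: $mode"
```

Note that the SELINUX=disabled setting only takes effect after a reboot; `setenforce 0` switches the running system to permissive mode immediately, which is handy for testing whether SELinux is the culprit.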

After rebooting (both server and client) and re-attempting the NFS set up, I still got the same error message – meaning, something else was obstructing the process. A little digging around and Googling led me to believe that the default firewall rules on the server were the culprit. The following steps resolved the issue:

  1. Login as root on the server (192.168.1.2)
  2. cd /etc/sysconfig/
  3. cp iptables iptables.default
  4. cd
  5. Based on firewall rules implemented in a Beowulf Linux cluster, I created a file called custom_firewall.sh with the following contents:
    #!/bin/bash
    #
    # Point a variable at the iptables binary
    export IPTABLES=/sbin/iptables
     
    # Flush out all existing rules in the INPUT chain
    $IPTABLES -F INPUT
     
    # Set default Policy for Input, Output and Forward chains
    # If nothing else matches, these are followed
    $IPTABLES -P INPUT   ACCEPT
    $IPTABLES -P OUTPUT  ACCEPT
    $IPTABLES -P FORWARD DROP
     
    # Allow self-access by loopback interface
    $IPTABLES -A INPUT  -i lo -p all -j ACCEPT
    $IPTABLES -A OUTPUT -o lo -p all -j ACCEPT
     
    # Accept established connections
    $IPTABLES -A INPUT -i eth0 -p tcp  -m state --state ESTABLISHED -j ACCEPT
    $IPTABLES -A INPUT -i eth0 -p udp  -m state --state ESTABLISHED -j ACCEPT
    $IPTABLES -A INPUT -i eth0 -p icmp -m state --state ESTABLISHED -j ACCEPT
     
    # Ping requests
    $IPTABLES -A INPUT -p icmp -j ACCEPT
     
    # FTP requests - not secure enough
    $IPTABLES -A INPUT -p tcp --dport 20 -j DROP
    $IPTABLES -A INPUT -p tcp --dport 21 -j DROP
     
    # TelNet requests - not secure enough
    $IPTABLES -A INPUT -p tcp --dport 23 -j DROP
     
    # HTTP requests
    $IPTABLES -A INPUT -p tcp --dport 80 -j ACCEPT
    $IPTABLES -A INPUT -p tcp --dport 443 -j ACCEPT
     
    # SSH requests - allows ssh, scp and sftp requests
    $IPTABLES -A INPUT -p tcp --dport 22 -s 192.168.1.0/255.255.255.0  -j ACCEPT
     
    # Log (rate-limited to 3 packets/second, burst 5) traffic arriving
    # on interfaces other than loopback - helps spot a DOS attack
    $IPTABLES -A INPUT -m limit --limit 3/second --limit-burst 5 ! -i lo -j LOG
     
    # NFS
    $IPTABLES -A INPUT -p tcp --dport nfs -s 192.168.1.0/255.255.255.0 -j ACCEPT
    $IPTABLES -A INPUT -p udp --dport nfs -s 192.168.1.0/255.255.255.0 -j ACCEPT
    $IPTABLES -A INPUT -p tcp --dport 111 -s 192.168.1.0/255.255.255.0 -j ACCEPT
    $IPTABLES -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
     
    # Log remaining packets - they show up in /var/log/messages
    $IPTABLES -A INPUT  -j LOG --log-prefix "INPUT_DROP: "
    $IPTABLES -A OUTPUT -j LOG --log-prefix "OUTPUT_DROP: "
  6. chmod 700 custom_firewall.sh
  7. ./custom_firewall.sh
  8. To keep these rules intact over reboots:
    /sbin/service iptables save
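Once the rules are loaded, a quick way to confirm from a client that the firewall now lets NFS traffic through is to probe the relevant ports – 111 (portmapper) and 2049 (nfs). Below is a sketch using bash's /dev/tcp; NFS_SERVER is an illustrative override and defaults to the server IP used throughout this article:

```shell
# Probe the NFS-related ports on the server from a client.
# NFS_SERVER is a hypothetical override for testing; it defaults
# to the server IP assumed in this article.
server=${NFS_SERVER:-192.168.1.2}
probes=0
for port in 111 2049; do
  if timeout 2 bash -c "exec 3<>/dev/tcp/$server/$port" 2>/dev/null; then
    echo "port $port open on $server"
  else
    echo "port $port closed or filtered on $server"
  fi
  probes=$((probes + 1))
done
```

If port 2049 still shows as closed after loading the rules, `rpcinfo -p 192.168.1.2` run from the client is another way to see which RPC services the server is actually advertising.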

After these steps, my attempt to set up NFS (both server and client) worked just fine. I understand my options for NFS as well as the firewall rules are neither comprehensive nor complete. As such, I (as well as others) would very much appreciate any thoughts to improve them.

5 responses to “NFS : Server & Client Set Up”

  1. chong says:

    I’d recommend setting up autofs on the client to mount NFS shares for you. That way they get unmounted automatically when you stop using them. Also, you might consider not exporting the shares with sync, as it may lead to poor performance.

  2. Gowtham says:

    @Chong,
    Thanks for the tip about the ‘sync’ thing – appreciate it. The cluster (of our group) actually has the AUTOFS set up between the main node and compute nodes. Setting up NFS via AUTOFS was/is my next entry’s topic – I have been testing and documenting to make sure I can reproduce the set up. Now that I think about it, my previous AUTOFS set up attempt must have also failed because of the IPTABLES thing.

  3. Yu says:

    Your post is really helpful. I tried to set up the NFS service, and when I connected from the client I got a “no route to host” error. The solution you provided solved my problem! Thanks a lot!



Comments are closed.