A recurring request at UCLA-Mathnet is that we allow a user to mount his UNIX home directory on a rogue laptop, i.e. a machine not under our administrative control, either on our own net or from a remote site such as a visitor's institution or home. At present all our filesystems are exported via NFSv3, described below, which cannot handle such service. What alternative network filesystem could we use for this?
NFSv2 is the traditional Network File System deployed in SunOS. Its access control is essentially an honor system: the server decides whether to allow a mount by looking at the client's IP address, and then trusts whatever user IDs the client presents. Because of this honor-system access control, it is generally assumed that NFSv2 cannot stand up to hacking from the global Internet. In particular, when users ask to mount their UNIX home directories from a remote location, we have to refuse.
The Andrew File System (AFS) from Carnegie-Mellon is a widely deployed alternative to NFS, and institutions that have deployed it like it very much. For example, both CMU and Stanford export all student home directories globally, specifically to the dormitories and all campus departmental nets, as well as off campus, with little hassle for the students and little or no exposure to hacking. UCLA-Mathnet considered deploying AFS, but it has a downside as well, and we did not pursue this filesystem.
NFSv3 appeared with newer versions of Sun Solaris. It adds TCP transport, making it more responsive on a heavily loaded net, but it seems to retain most or all of the disadvantages of NFSv2. Somewhere along the way, possibly with the advent of NFSv3, it became possible (but optional) to authenticate the client host using Kerberos, plugging that security hole.
NFSv4 is an evolutionary advance that keeps the same flavor as traditional NFS but improves quite a number of aspects. Specifically:
The NFSv4 client syncs and gives up its filehandle if it is unused for some time (the default is 10 minutes); if the file is used again, the filehandle is reacquired transparently. This reduces stale NFS filehandles if the server crashes, even if a client has a file open, which would prevent autofs from unmounting the filesystem. (The client can possibly survive the reboot.)
There is only one mount per (client x server), not one per filesystem. For Mathnet the number of mounts is not a problem, but it is with big storage appliances, since the kernel has an upper bound on the number of mounts.
NFSv4 does host-based authentication through Kerberos rather than by interpreting the client's IP address. This makes it impossible for a rogue laptop to impersonate an authorized host and mount filesystems.
NFSv4 does not give access to individual users using Kerberos, as AFS does. It gives access to hosts, and relies on the client to honestly identify the users. Individual authentication is something that we want and aren't going to get.
Netgroups are no longer used for export control: if a host has a Kerberos principal (nfs/fqdn@REALM), then it is authorized to mount. This could be seen as an advantage or a disadvantage; maintaining the export netgroup is really a nightmare for us.

Clearly we should make NFSv4 available to our users. In the rest of this document I will discuss issues in how to do that.
This all applies to SuSE 10.0, kernel 2.6.13, nfs-utils-1.0.7. Updated for SuSE 10.2, kernel 2.6.18, nfs-utils-1.0.10. Much of this info comes from http://wiki.linux-nfs.org.
To use Kerberos authentication, both the server and the client need to have a Kerberos principal nfs/fqdn@REALM, e.g. nfs/simba.math.ucla.edu@MATH.UCLA.EDU, and its key must be in the host's default keytab (mode 600, not encrypted). An alternate default keytab specified in /etc/krb5.conf is not honored; it has to be /etc/krb5.keytab. A user credential does not get you anything either.
Here are the commands for kadmin (MIT Kerberos) to create the principal and copy the key:

addprinc -randkey nfs/simba.math.ucla.edu
ktadd -k /etc/krb5.keytab nfs/simba.math.ucla.edu

The page from which I copied the addprinc command also included -e des-cbc-crc:normal, but now better keys are usable; we generate AES (and others), and it works without the single-DES key. If you omit -e, all available keys are copied.
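To double-check that the key actually landed in the default keytab, klist from the MIT Kerberos tools can list the keytab's contents:

klist -k /etc/krb5.keytab

The listing should show one entry for nfs/simba.math.ucla.edu@MATH.UCLA.EDU per key type.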
Make sure that the RPC and NFSD filesystems are mounted. In SuSE the startup scripts for NFS take care of this, but on another distro you may need to add these lines to fstab:
rpc_pipefs  /var/lib/nfs/rpc_pipefs  rpc_pipefs  defaults  0 0
nfsd        /proc/fs/nfsd            nfsd        defaults  0 0
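To check whether these pseudo-filesystems are already mounted (for example on a distro whose NFS scripts are supposed to handle it), something like this will do:

mount | grep -E 'rpc_pipefs|nfsd'

If nothing shows up, mount them by hand (e.g. mount /var/lib/nfs/rpc_pipefs once the fstab lines are in place) or restart the NFS scripts.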
On the server, modify the fstab lines for the filesystems you will be exporting to include the acl option. NFSv4 does Access Control Lists similar (but not identical) to POSIX and Windows ACLs, and it expects that the underlying filesystem also supports them. I don't know what happens if that's not true, e.g. for a VFAT filesystem, but the effects probably aren't too traumatic. SuSE 10.0 enables ACLs unless you remove the option from fstab.
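For example, a server fstab line for an exported filesystem might look like the following; /dev/sda5 is just a placeholder device, /m1 matches the exports example below, and ext3 stands in for whatever ACL-capable filesystem is actually in use:

/dev/sda5  /m1  ext3  defaults,acl  1 2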
Pre-create an export root, here called /nfsx, and a mount point for each filesystem to be exported. NFSv4 exports only this one directory tree. You now need to make a design decision: should the filesystems be mounted on these mount points, or on other mount points which are then brought into the export area with a bind mount? Testing so far has been with the bind mounts.
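As a sketch of the bind-mount variant (using /m1 from the exports example below as the filesystem's real mount point), the export area would be populated like this:

mkdir -p /nfsx/m1
mount --bind /m1 /nfsx/m1

or, to make it survive a reboot, with an fstab line of the form:

/m1  /nfsx/m1  none  bind  0 0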
/etc/exports needs to export /nfsx and, separately, each filesystem mounted within it. This is standard NFS behavior: exportability is not inherited just by being mounted within an exported directory tree. The format of each line is:
/dirname  type(options,options,...)

where the type may be a host list or netgroup as for Sun's NFSv3, or gss/krb5, gss/krb5i, or gss/krb5p. The latter two turn on integrity checking (so a maliciously altered packet will be detected) or privacy (encryption on the wire, assumed to include integrity). With the gss types, access is granted to any host that has an nfs/fqdn@REALM principal in the server's realm's Kerberos server, possibly extended through cross-realm trust. The client must also have the secret key that goes with this principal; a rogue laptop is no longer able to impersonate an authorized host, and a root exploit on the client would be required to steal its secret key.
The options recommended in the wiki and which have been tested include those shown in the sample exports file below; in particular root_squash, which maps the client's root to nobody and is recommended to limit damage in case of a root exploit on the client.
Here is a sample /etc/exports file. All three GSS types have to be exported if you want them used. (Since there isn't client support yet, you could cut corners and omit integrity and privacy, but it's better to do it right in the beginning.) The first line is for NFSv3 exporting and can be omitted if you only use NFSv4.
/m1       @nfsc(rw,root_squash,async,no_subtree_check)
/nfsx     gss/krb5(rw,root_squash,async,no_subtree_check,insecure,fsid=0)
/nfsx/m1  gss/krb5(rw,root_squash,async,no_subtree_check,insecure,nohide)
/nfsx/m1  gss/krb5i(rw,root_squash,async,no_subtree_check,insecure,nohide)
/nfsx/m1  gss/krb5p(rw,root_squash,async,no_subtree_check,insecure,nohide)
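After editing /etc/exports, the kernel's export table has to be refreshed before the changes take effect; exportfs -r re-reads /etc/exports, and restarting the NFS server scripts accomplishes the same thing:

exportfs -r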
The client must mount the server's export root like this (a subtree of the export root could also be specified):

mount -t nfs4 -o sec=krb5 server:/ /mountpoint

All exported subdirectories in the export root are available with no further mount actions.
This automounter map entry will do the mounting automatically:
*  -fstype=nfs4,sec=krb5  &:/

For example, if that map were assigned to /net4, then a reference to /net4/server/m1 would get the /nfsx/m1 directory (see the above exports example) on host server.
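The assignment of the map to /net4 happens in the automounter's master map; assuming the map line above is stored in a file named /etc/auto.net4 (the file name is only a placeholder), the auto.master entry would be:

/net4  /etc/auto.net4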