Configuring a NFS-GANESHA server as a NFS proxy

NFS-GANESHA has several backend modules, each dedicated to addressing a specific namespace. These backends are called FSALs (which stands for "File System Abstraction Layer"). One of these FSALs, FSAL_PROXY, is in fact an NFSv4 client whose API complies with the FSAL API. Used with NFS-GANESHA, it turns the NFSv4 server into an NFSv4 proxy.

Step 1: Compiling NFS-GANESHA with the FSAL_PROXY

This is very simple, just proceed as follows:

  # ./configure --with-fsal=PROXY
  # make
  # make install

This will produce the binaries proxy.ganesha.nfsd and proxy.ganeshell.

Step 2: Writing the configuration file

Suppose you have an NFSv4 server running on a host named alpha, which exports /home via NFSv4. You want to run the NFS-GANESHA server on a secondary machine named beta.

First of all, make sure that you have regular NFSv4 access from beta to /home on the alpha machine:

  beta# mount -t nfs4 alpha:/home /mnt

Make sure that you have root access from beta to alpha through the mount point.
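
A quick way to check this is to create and remove a file as root through the mount point (the file name here is arbitrary):

  beta# touch /mnt/root_access_test && rm /mnt/root_access_test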

Then you can write the configuration file. As for any other NFS-GANESHA configuration step, there is a dedicated block, named NFSv4_Proxy, to be written.

Here is a simple example:

NFSv4_Proxy
{
        Srv_Addr = alpha.mydomain ;
        NFS_Port =    2049 ;
        NFS_Service = 100003 ;

#WARNING /!\  Small NFS_SendSize and NFS_RecvSize may lead to problems
        NFS_SendSize = 32768 ;
        NFS_RecvSize = 32768 ;
        Retry_SleepTime = 60 ;
        NFS_Proto = "tcp" ;
}

The fields have the following meanings:

      • Srv_Addr: the name or the IP address (in dotted notation) of the server to be accessed by the proxy
      • NFS_Port and NFS_Service: can be used to specify alternate values to the classical port=2049, service=100003. These fields are mostly used for debugging purposes; you can omit them
      • NFS_SendSize, NFS_RecvSize: the sizes to be used for RPC packets. I strongly suggest not using small packets; 32 KB is commonly a very good value.
      • NFS_Proto: determines whether UDP or TCP is to be used for accessing the server to be proxied. Because the proxy uses an NFSv4 connection, TCP is a far better choice than UDP
The field Path in the Export block is important too: it tells the path you want to access via the proxy. It is equivalent to the path after the name of the server in the classical mount command. So if you use this command to mount server alpha on host beta: mount -t nfs4 alpha:/home /mnt, you should have this in your configuration file:
Export
{
      # Exported path (mandatory)
  Path = "/home" ;
  (...)
}

Let's look at an actual example

I want to access /home on host alpha through the proxy. I'll be using this basic configuration file:

EXPORT
{
  Export_Id = 1 ;
 
  Path = "/home" ;
 
  Root_Access = "*" ;
 
  Pseudo = "/proxy/alpha/home_alpha";
 
  Access_Type = RW;
 
  Anonymous_root_uid = -2 ;
  
  Cache_Data = FALSE ;
 
}

###################################################
#
# Configuration of the NFSv4 proxy
#
###################################################
NFSv4_Proxy
{
    Srv_Addr = alpha;

    # WARNING /!\  Small NFS_SendSize and NFS_RecvSize may lead to problems
    NFS_SendSize = 32768 ;
    NFS_RecvSize = 32768 ;
    Retry_SleepTime = 60 ;
    NFS_Proto = "tcp" ;
}
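
Once the configuration file is written, start the proxy daemon and point it at that file. A minimal sketch, assuming the daemon accepts -f for the configuration file and -L for the log file (the exact options and the configuration file path may vary between versions; check the binary's help output):

  beta# proxy.ganesha.nfsd -f /etc/ganesha/ganesha.conf -L /var/log/nfs-ganesha.log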

Now, from an NFSv4 client machine, I can do this (note the impact of the NFSv4 pseudofs path):

mount -t nfs4 beta:/proxy/alpha/home_alpha /mnt

You'll have access to alpha:/home through this mount point.

Re-exporting in NFSv3 from the proxy

The proxy server can re-export in NFSv3. This can be pretty useful: imagine a cluster of local machines wanting to access a remote server on another site (through a WAN, for example). The proxy server will be the only one making NFSv4 accesses to this remote server (which simplifies firewall configuration a lot), and it will re-export the namespace it accesses through NFSv3 on the local cluster, serving lots of clients in a stateless way.

To enable this feature, you will need SQLite 3.0 to be installed on your machine, and you must configure NFS-GANESHA this way:

# ./configure --with-fsal=PROXY --enable-handle-mapping

This will add a small SQLite engine to the proxy. This DB will store the associations between remote files and NFSv3 file handles. A few additional tags are required in the NFSv4_Proxy block in order to configure this DB engine:

NFSv4_Proxy
{
    Srv_Addr = alpha;

    NFS_SendSize = 32768 ;
    NFS_RecvSize = 32768 ;
    Retry_SleepTime = 60 ;
    NFS_Proto = "tcp" ;

    Enable_Handle_Mapping = TRUE;
    HandleMap_DB_Dir      = "/var/nfs-ganesha/handledbdir/";
    HandleMap_Tmp_Dir     = "/tmp";
    HandleMap_DB_Count    = 8;
}

These parameters have the following meanings:

      • Enable_Handle_Mapping: activates the handle mapping feature described here
      • HandleMap_DB_Dir: the path where SQLite will store its data files (see the note after this list)
      • HandleMap_Tmp_Dir: temporary directory required by SQLite
      • HandleMap_DB_Count: the number of different maps to be used. Each map is managed by a dedicated thread, so this number also indicates the number of threads spawned as the service starts up.
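
Before starting the daemon, make sure the DB directory exists and is writable by the daemon; with the paths from the example above:

  beta# mkdir -p /var/nfs-ganesha/handledbdir
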
With the previous example, on a non-NFSv4 client, you can now do this:
mount -o vers=3,udp beta:/home  /mnt

Through mount point /mnt, you now have access to /home on alpha, via the proxy on beta.
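
You can also check from any client that the proxy advertises the NFSv3 export with showmount (this assumes the portmapper and mount services on beta are reachable):

  # showmount -e beta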

Using RPCSEC_GSS with FSAL_PROXY

You need to activate the use of libgssrpc (the RPCSEC_GSS implementation provided with MIT krb5):

#  ./configure --enable-gssrpc   --with-fsal=PROXY

Then, you have to use the following additional parameters:

      • Active_krb5: must be set to TRUE
      • Local_PrincipalName: the local nfs principal name; should be something like nfs@beta
      • Remote_PrincipalName: the nfs principal for the remote server; should be something like nfs@alpha
      • KeytabPath: path to the keytab to be used for acquiring credentials for Local_PrincipalName
      • Credential_LifeTime: the time before credentials are refreshed
      • Sec_Type: type of security; possible values are krb5, krb5i and krb5p. The meaning is the same as for the "-o sec=" option of mount.
Example of configuration file
NFSv4_Proxy
{
        Srv_Addr = alpha ;
        
        # RPCSEC_GSS/krb5 specific items
        Active_krb5 = TRUE ;

        # Principal used by FSAL_PROXY
        Local_PrincipalName = nfs@beta.mydomain ;

        # NFS Principal on the remote NFS Server
        Remote_PrincipalName = nfs@alpha.mydomain ;

        # Keytab where the key for the local principal resides
        KeytabPath = /etc/krb5.keytab ;

        # Lifetime for acquired credentials
        Credential_LifeTime = 86400 ;

        # Security Type: should be krb5, krb5i or krb5p
        Sec_Type = krb5p ;
}
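
Before starting the proxy with this configuration, you can check that the keytab actually contains a key for the local principal (assuming the keytab path from the example above):

  beta# klist -k /etc/krb5.keytab

The output should list an entry matching Local_PrincipalName; if it does not, the proxy will not be able to acquire credentials.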
