Configuring NFS on Ubuntu


How NFS works:
Typically, NFS allows a client machine (quark) to gain transparent access to data stored on a server machine (dirac).
For this to take place successfully:
  1. The server (dirak) runs NFS daemon processes (nfsd and mountd) in order to make its data available to clients.
  2. The sysadmin determines what to make available, and exports names and parameters of directories to be shared, normally using the /etc/exports configuration file and the exportfs command.
  3. The sysadmin configures the server (using hosts.deny, hosts.allow) so that it can recognize and approve validated clients.
  4. The client machine requests access to exported data, typically by issuing a mount command.

Client quark mounts the /usr/home directory from host dirac on the local directory /home
# mount -t nfs dirac:/usr/home/ /home
To mount the remote directory:
  1. mount connects to mountd daemon, running on dirac.
  2. mountd checks whether quark has permission to mount /usr/home. If so, it returns a file handle.
  3. When someone tries to access the file /home/jdoe/login.sh on quark, the kernel places an RPC call to nfsd on the NFS server (dirac):
    • rpc_call(file handle, file name, UID, GID) - User and Group IDs must be the same on both hosts.
  4. If all goes well, users on the client machine can then view and interact with the mounted filesystem on the server, within the parameters permitted.

  • Client and server NFS functionality is implemented as kernel-level daemons that are started from user space at system boot.
  • These NFS daemons are normally started at boot time and register themselves with the portmapper, a service that maps RPC program numbers to the network ports on which those programs listen (see the rpcinfo example below).
    • mountd - Runs on the NFS server. Processes clients' mount requests.
    • nfsd (NFS daemon) - Runs on the NFS server. Services clients' file access requests.
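
A quick way to confirm that these daemons have registered is to query the portmapper with rpcinfo; a minimal example (the program numbers are standard, but the mountd port varies from system to system):
$ rpcinfo -p
   program vers proto   port
    100000    2   tcp    111  portmapper
    100005    3   tcp  38465  mountd
    100003    3   tcp   2049  nfs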


Installing and Configuring NFS Server:
(Step 1): Check whether your kernel has NFS support compiled in. One way to do this is to query the kernel interface on the proc filesystem.
$ cat /proc/filesystems | grep nfs
nodev   nfs
nodev   nfs4
nodev   nfsd

-- If kernel support for NFS is present, you should see the lines above.
-- If no results are displayed, you need to install NFS server support:

$ sudo apt-get install portmap nfs-kernel-server

(Step 2): Configure NFS Server: define shared directories
  • Now you need to tell the NFS server which directories should be available for mounting, and which parameters should control client access to them.
  • You do this by exporting the files, that is, listing filesystems and access controls in the /etc/exports file.
# exports file for dirac. 
# Each line defines a directory and the hosts allowed to mount it

/home      quark.math.usm.edu(rw,sync)  proton.math.usm.edu(rw,sync)
/usr/TeX   *.math.usm.edu
/home/ftp  *(ro)
In the exports file above:
  • *.math.usm.edu -- matches all hosts in the domain math.usm.edu
  • Security options (a combined example with other common options is sketched below):
    • rw - allows read/write access to the exported directory (exports are read-only by default).
    • sync - reply to requests only after changes have been committed to stable storage.
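
For reference, here is a sketch of an exports entry combining these with other common options (the path /srv/shared is hypothetical; root_squash maps the remote root user to an unprivileged account, and no_subtree_check disables subtree checking):
/srv/shared  quark.math.usm.edu(rw,sync,root_squash,no_subtree_check)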

(Step 3): export the shares.
After modifying /etc/exports, run the command
$ sudo exportfs -ra 
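
To verify what is currently exported and with which effective options, you can run (the output shown is illustrative):
$ sudo exportfs -v
/home         quark.math.usm.edu(rw,wdelay,root_squash,sync)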

(Step 4): Edit /etc/default/portmap to enable access to portmap from remote machines.
By default, portmap listens only for RPC calls coming from the loopback interface (127.0.0.1). To change this:
(a) comment out the "-i 127.0.0.1" entry in the file;
(b) restart portmap; and
(c) restart the NFS kernel server:
edit /etc/default/portmap (with your preferred editor)
$ sudo /etc/init.d/portmap restart
$ sudo /etc/init.d/nfs-kernel-server restart
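
You can also confirm on the server that the exports are visible through the restarted services (host names below follow the example exports file above):
$ showmount -e localhost
Export list for localhost:
/home      quark.math.usm.edu,proton.math.usm.edu
/usr/TeX   *.math.usm.edu
/home/ftp  *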


Configuring NFS Clients

(Step 1): Install NFS client
$ sudo apt-get install portmap nfs-common

(Step 2 - optional): Configure portmap to allow connections to the NFS server.

/etc/hosts.deny - list of hosts that are not allowed to access this machine. Edit the file to block all clients; this way, only those that you explicitly authorize (in /etc/hosts.allow) will be able to connect.
portmap: ALL
/etc/hosts.allow - list of hosts authorized to access this machine. Add the NFS server (you can verify connectivity with the commands shown below):
portmap: <nfs Server IP address>
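
Before mounting, you can check from the client that the server's portmapper and mountd are reachable and see what it exports (replace the host name with your NFS server):
$ rpcinfo -p dirac.math.usm.edu
$ showmount -e dirac.math.usm.edu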

Mounting a remote filesystem manually:
From the client:
$ sudo mount dirac.math.usm.edu:/users/home /home
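
If the mount succeeds, the remote filesystem behaves like any local one; a quick way to confirm:
$ df -h /home
$ mount | grep nfs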

Configure auto mounting during startup:
  • You can set up automatic nfs mounting by including entries in /etc/fstab.
  • The /etc/fstab file is used to statically define the file systems that will be automatically mounted at boot time.
  • It contains a list of all available disks and disk partitions, and indicates how they are to be mounted into the overall file system hierarchy.
  • During machine startup, the mount program reads /etc/fstab file to determine which options should be used when mounting the specified device.
# device name   mount point     fs-type      options       dump-freq pass-num                                          
# servername:dir /mntpoint        nfs          rw,hard,intr   0         0

dirac:/users/home  /home  nfs  rw,hard,intr  0  0
Just like other /etc/fstab mounts, NFS mounts in /etc/fstab have 6 columns, listed in order as follows:
  • The filesystem to be mounted (dirac.math.usm.edu:/users/home/)
  • The mountpoint (/home)
  • The filesystem type (nfs)
  • The mount options (rw,hard,intr)
  • Dump frequency, used by the dump backup utility (0)
  • Order in which filesystems are checked by fsck at boot time (0 = don't run fsck on this filesystem)

Options:
  • rw - read/write access.
  • hard - if the server becomes unavailable, the process accessing the share blocks and waits until the server is available again.
  • intr - allows NFS requests to be interrupted (e.g., by a signal) if the server becomes unreachable.
See man mount and man nfs for more details. (A quick way to test the new entry is shown below.)
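
To test the new /etc/fstab entry without rebooting, you can ask mount to process the file directly (unmount /home first if it is already mounted manually):
$ sudo mount -a
$ df -h /home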

Network File System (NFS) - Concepts




What is NFS
  • NFS is a platform-independent remote file system technology created by Sun in the 1980s.
  • It is a client/server application that provides shared file storage for clients across a network.
  • It was designed to simplify the sharing of filesystem resources in a network of non-homogeneous machines.
  • It is implemented using the RPC protocol, and files are made available through the network via a Virtual File System (VFS) interface that runs on top of the TCP/IP layer.
  • It allows an application to access files on remote hosts in the same way it accesses local files.

NFS Servers: Computers that share files
  • During the late 1980s and 1990s, a common setup was a powerful workstation with lots of local disks, often without a graphical display, acting as an NFS server.
  • "Thin," diskless workstations would then mount the remote file systems provided by the NFS servers and transparently use them as if they were local files.

NFS Simplifies management:
  • Instead of duplicating common directories such as /usr/local on every system, NFS provides a single copy of the directory that is shared by all systems on the network.
  • It simplifies backup procedures - instead of setting up backups for the local contents of each workstation (of /home, for example), with NFS a sysadmin needs to back up only the server's disks.

NFS Clients: Computers that access shared files
  • NFS uses a mixture of kernel support and user-space daemons on the client side.
  • Multiple clients can mount the same remote file system so that users can share files.
  • Mounting can be done at boot time (e.g., /home could be a shared directory mounted by each client when a user logs in).
  • An NFS client
    • (a) mounts a remote file system onto the client's local file system name space and
    • (b) provides an interface so that access to the files in the remote file system is done as if they were local files.

----
Goals of NFS design:
  1. Compatibility: NFS should provide the same semantics as a local Unix file system. Programs should not need to (or be able to) tell whether a file is remote or local. For example, a user program calling OPEN("/users/jdoe/.profile", READONLY) cannot tell whether "users" or "jdoe" are local path names.
  2. Easy deployment: the implementation should be easily incorporated into existing systems, and remote files should be made available to local programs without these having to be modified or relinked.
  3. Machine and OS independence: NFS clients should run on non-Unix platforms, so the protocols should be simple enough to be implemented on other platforms.
  4. Efficiency: NFS should be good enough to satisfy users, but it does not have to be as fast as a local FS. Clients and servers should be able to easily recover from machine crashes and network problems.


NFS Versions
  • Version 1: used only inside Sun Microsystems.
  • Version 2: released in 1987 (RFC 1094).
  • Version 3: released in 1995 (RFC 1813).
  • Version 4: released in 2000 (RFC 3010).

NFS design: NFS Protocol, Server, Client

NFS Protocol
  • Uses Remote Procedure Call (RPC) mechanisms.
  • RPCs are synchronous (the client application blocks while it waits for the server's response).
  • NFS uses a stateless protocol (the server does not keep track of past requests) - this simplifies crash recovery: all a client needs to do is resubmit the last request.
  • As a result, the client cannot differentiate between a server that crashed and recovered and one that is merely slow.

New File system interface
  • The original Unix file system interface was modified in order to implement NFS as an extension of the Unix file system.
  • NFS was built into the Unix kernel by separating generic file system operations from specific implementations. With this separation the kernel can treat all filesystems and nodes in the same way, and new file systems can be added to the kernel easily:
    • A Virtual File System (VFS) interface: defines the operations that can be done on a filesystem.
    • A Virtual node (vnode) interface: defines the operations that can be done on a file within a filesystem.
  • A vnode is a logical structure that abstracts whether a file or directory is implemented by a local or a remote file system. Applications "see" only the vnode interface; the actual location of the file (local or remote file system) is irrelevant to the application.
  • In addition, this interface allows a computer to transparently access different types of local file systems (e.g., ext2, ext3, ReiserFS, msdos, proc).



NFS Client
Uses a mounter program. The mounter:
  1. takes a remote file system identification host:path;
  2. sends an RPC to host asking for (1) a file handle for path and (2) the server's network address;
  3. marks the mount point in the local file system as a remote file system associated with that address:path pair (see the /proc/mounts example below).
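
On a Linux client, the result of such a mount is visible in /proc/mounts, where the remote file system appears as a host:path device; a hedged example (the options and address shown are illustrative and vary with NFS version and distribution):
$ grep nfs /proc/mounts
dirac:/usr/home /home nfs rw,vers=3,addr=192.168.1.20 0 0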



Diagram of NFS architecture

NFS Remote Procedure Calls
The NFS client uses RPCs to implement each file system operation.
Consider the user program code below:
fd <- OPEN ("f", READONLY)
READ (fd, buf, n)
CLOSE (fd)
  • The application opens file "f", sends a read request, and closes the file.
  • The file "f" is a remote file, but this is irrelevant to the application.
  • The virtual file system holds a map with the host address and file handle (dirfh) of each mounted remote file system.
  • The sequence of steps to obtain the file is listed below:

  1. The Virtual File System finds that file "f" is on a remote file system, and passes the request to the NFS client.
  2. The NFS client sends a lookup request (LOOKUP(dirfh, "f")) to the NFS server, passing the file handle (dirfh) of the remote file system and the name of the file to be read.
  3. The NFS server receives the LOOKUP request, extracts the file system identifier and inode number from dirfh, and asks the identified file system to look up the inode number and find the directory's local inode information.
  4. The NFS server searches the directory identified by the inode number for file "f".
    If the file is found, the server creates a handle for "f" and sends it back to the client.
  5. The NFS client allocates the first unused entry in the program's file descriptor table, stores a reference to f's file handle in that entry, and returns the index of the entry (fd) to the user program.
  6. Next, the user program calls READ(fd, buf, n).
  7. The NFS client sends the RPC READ(fh, 0, n).
  8. The NFS server looks up the inode for fh, reads the data, and sends it back in a reply message.
  9. When the user program calls CLOSE(fd), the NFS client does not issue an RPC, since the program did not modify the file (see the nfsstat note below).
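
These per-operation RPCs (LOOKUP, READ, and so on) can be observed on real systems with the nfsstat utility, which reports how many calls of each type the client has issued and the server has served:
$ nfsstat -c     # client-side RPC and NFS operation counters
$ nfsstat -s     # server-side counters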




References:
Russel Sandberg, David Goldberg, Steve Kleiman, Dan Walsh, and Bob Lyon. "Design and Implementation of the Sun Network Filesystem." Proceedings of the Summer 1985 USENIX Conference, Portland, OR, June 1985, pp. 119-130.
Saltzer, Jerome H., and M. Frans Kaashoek. Principles of Computer System Design. 2009.

Oracle ASM on 11g R2: Installing Grid Infrastructure



Note: For steps on how to configure Oracle ASM before installing Grid infrastructure, check here.



This Grid Infrastructure installation on a standalone server will perform the following steps:
  1. Install Oracle ASM software
  2. Install Oracle Restart software
  3. Install and configure the Listener
  4. Create an ASM Disk group
  5. Create and configure an ASM Instance on the machine

Before proceeding, make sure that you set the path to the Oracle base directory.
On bash shell:
$ ORACLE_BASE=/u01/app/oracle; export ORACLE_BASE
$ echo $ORACLE_BASE
/u01/app/oracle
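
To make this setting persistent for the oracle user across sessions, you can append it to the user's profile (a minimal sketch for bash; adjust if you use another shell):
$ echo 'export ORACLE_BASE=/u01/app/oracle' >> ~/.bash_profile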

  • Logged in as the Grid Infrastructure owner, change to the grid infrastructure media directory, run the installation program, and follow the installation steps below.
  • In our case, we will set up a single-owner environment, so make sure you are logged in as the user oracle.
$ ./runInstaller

Starting Oracle Universal Installer...

Checking Temp space: must be greater than 80 MB.   Actual 5902 MB    Passed
Checking swap space: must be greater than 150 MB.   Actual 2047 MB    Passed
Checking monitor: must be configured to display at least 256 colors.    Actual 16777216    Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2010-09-18_08-01-12PM. Please wait ...
$ 

Select installation type
  • Select option to install and configure Grid Infrastructure for a standalone server.
Select language
  • In the next screen, Select language
Select disks to form the disk group
  • The next screen should list all the disks previously configured for ASM use.
  • These candidate disks should have been discovered at boot time by ASMLib.
  • If no disks are listed:
(a) Check whether the ownership of the disk devices is configured appropriately.
The disk devices must be owned by the user performing the grid installation.
Check user and group ownership with the command:
# ls -l /dev/oracleasm/disks/
total 0
brw-rw---- 1 oracle dba 8, 17 Set 18 22:33 DISK1
brw-rw---- 1 oracle dba 8, 33 Set 18 22:52 DISK2
brw-rw---- 1 oracle dba 8, 49 Set 18 22:52 DISK3
brw-rw---- 1 oracle dba 8, 65 Set 18 22:53 DISK4
(b) check whether ASMLib driver is loaded:
# oracleasm status
Checking if ASM is loaded: yes
Checking if /dev/oracleasm is mounted: yes

# oracleasm listdisks
DISK1
DISK2
DISK3
DISK4
(c) Check the default discovery string in the installer.
In Linux, the default discovery string is '/dev/raw*'.
Click on the Change Discovery Path button and type '/dev/oracleasm/disks/*' (without quotes!).
This should list all the disks you have previously configured. (If the list is still empty, see the re-scan step below.)
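
If the disks were configured after the last boot and still do not appear, you can ask ASMLib to re-scan for them (run as root) and then list what it found:
# oracleasm scandisks
# oracleasm listdisks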

Configure ASM Disk Group
  • Select name for the disk group being created and select the disks that will compose this group.
  • Here we choose normal redundancy and create the oradata_dskgrp disk group with DISK1 (/dev/sdb1, 3 GB) and DISK3 (/dev/sdd1, 3 GB).
  • Each Oracle ASM disk is divided into allocation units (AUs).
  • An allocation unit is the fundamental unit of allocation within a disk group; by default it is 1 MB. (A way to confirm this after the installation is sketched below.)
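
Once the installation below completes and the ASM instance is running, you can confirm the disk group's redundancy and AU size; a hedged example querying V$ASM_DISKGROUP as the grid owner:
$ sqlplus / as sysasm
SQL> select name, type, allocation_unit_size from v$asm_diskgroup;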


Specify the passwords for SYS and ASMSNMP users.
  • These users are created in the ASM Instance.
  • To manage an ASM instance, a user needs the SYSASM role, which grants full access to all ASM disks (including the authority to create and delete ASM disks).
  • The user ASMSNMP, with only the SYSDBA role, can monitor the instance but does not have full access to the ASM disks.
Select the name of the OS groups to be used for OS authentication to ASM:
Select installation location.
In the next two screens, accept or change the location for the Oracle grid home directory, and accept the location for the inventory directory (if this is the first Oracle installation on the machine).
Check whether all installation prerequisites were met. If so, proceed.
Review contents and click Install.
Run the Post-installation scripts (as root)
# ./root.sh 
Running Oracle 11g root.sh script...

The following environment variables are set as:
ORACLE_OWNER= oracle
ORACLE_HOME=  /u01/app/oracle/product/11.2.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]: 
Copying dbhome to /usr/local/bin ...
Copying oraenv to /usr/local/bin ...
Copying coraenv to /usr/local/bin ...

Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.
2010-09-18 00:16:18: Checking for super user privileges
2010-09-18 00:16:18: User has super user privileges
2010-09-18 00:16:18: Parsing the host name
Using configuration parameter file: /u01/app/oracle/product/11.2.0/grid/crs/install/crsconfig_params
Creating trace directory
LOCAL ADD MODE 
Creating OCR keys for user 'oracle', privgrp 'oinstall'..
Operation successful.
CRS-4664: Node quark successfully pinned.
Adding daemon to inittab
CRS-4123: Oracle High Availability Services has been started.
ohasd is starting
ADVM/ACFS is not supported on oraclelinux-release-5-6.0.1

quark     2010-09-18 00:16:54     /u01/app/oracle/product/11.2.0/grid/cdata/quark/backup_20100918_001654.olr
Successfully configured Oracle Grid Infrastructure for a Standalone Server
Updating inventory properties for clusterware
...

  • When the installation completes, you should have an ASM instance up and running.
  • Some of the processes running include:
$ ps -ef |grep ora
...
oracle   17900     1  0 00:16 ?        00:00:03 /u01/app/oracle/product/11.2.0/grid/bin/ohasd.bin reboot
          --> This is the Oracle Restart (Oracle High Availability Service) daemon.
oracle   18356     1  0 00:18 ?        00:00:01 /u01/app/oracle/product/11.2.0/grid/bin/oraagent.bin
          --> Extends clusterware to support Oracle-specific requirements and complex resources.
          --> Runs server callout scripts when FAN events occur. 
          --> This process was known as RACG in Oracle Clusterware 11g release 1 (11.1).
oracle   18375     1  0 00:18 ?        00:00:00 /u01/app/oracle/product/11.2.0/grid/bin/tnslsnr LISTENER -inherit
oracle   18563     1  0 00:18 ?        00:00:00 /u01/app/oracle/product/11.2.0/grid/bin/cssdagent
          --> Starts, stops and monitors Oracle Clusterware
oracle   18565     1  0 00:18 ?        00:00:00 /u01/app/oracle/product/11.2.0/grid/bin/orarootagent.bin
          --> specialized oraagent process that helps crsd manage resources 
          (con't) owned by root, such as the network, and the Grid virtual IP address.
oracle   18599     1  0 00:18 ?        00:00:00 /u01/app/oracle/product/11.2.0/grid/bin/diskmon.bin -d -f
          --> I/O Fencing and SKGXP HA monitoring daemon
oracle   18600     1  0 00:18 ?        00:00:00 /u01/app/oracle/product/11.2.0/grid/bin/ocssd.bin 
          --> Oracle Cluster Synchronization Service Daemon (OCSSD). 
          --> performs some of the clusterware functions on UNIX-based systems
          --> ocssd.bin is required for ASM Instance. 

oracle   18884     1  0 00:19 ?        00:00:00 asm_pmon_+ASM  ----
oracle   18888     1  0 00:19 ?        00:00:00 asm_vktm_+ASM      |
oracle   18894     1  0 00:19 ?        00:00:00 asm_gen0_+ASM      |
oracle   18898     1  0 00:19 ?        00:00:00 asm_diag_+ASM      |
oracle   18902     1  0 00:19 ?        00:00:00 asm_psp0_+ASM      |
oracle   18906     1  0 00:19 ?        00:00:00 asm_dia0_+ASM      |
oracle   18910     1  0 00:19 ?        00:00:00 asm_mman_+ASM      |==================
oracle   18914     1  0 00:19 ?        00:00:00 asm_dbw0_+ASM      |=>     +ASM Instance 
oracle   18918     1  0 00:19 ?        00:00:00 asm_lgwr_+ASM      |=> background processes
oracle   18922     1  0 00:19 ?        00:00:00 asm_ckpt_+ASM      |==================
oracle   18926     1  0 00:19 ?        00:00:00 asm_smon_+ASM      |
oracle   18930     1  0 00:19 ?        00:00:00 asm_rbal_+ASM      |
oracle   18934     1  0 00:19 ?        00:00:00 asm_gmon_+ASM      |
oracle   18938     1  0 00:19 ?        00:00:00 asm_mmon_+ASM      |
oracle   18942     1  0 00:19 ?        00:00:00 asm_mmnl_+ASM  ----
oracle   19119 13210  0 00:23 pts/2    00:00:00 ps -ef
oracle   19120 13210  0 00:23 pts/2    00:00:00 grep ora
$


Using Oracle Restart
  • When created, a new database instance automatically registers with Oracle Restart.
  • Once added to the Oracle Restart configuration, if the database accesses data in an Oracle ASM disk group, a dependency between the database and that disk group is created.
  • Oracle Restart then ensures that the disk group is mounted before attempting to start the database (see the SRVCTL example below).
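
For example, starting a registered database with the SRVCTL utility (described next) will first bring up the ASM disk group it depends on; the database name orcl here is just illustrative:
$ srvctl start database -d orcl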

About SRVCTL
  • You can use SRVCTL commands to add, remove, start, stop, modify, enable, and disable a number of entities, such as databases, instances, listeners, SCAN listeners, services, grid naming service (GNS), and Oracle ASM.
  • The SRVCTL utility can be used to start and stop the Oracle Restart components manually.
  • When you start or stop a component with SRVCTL, any components it depends on are automatically started or stopped first, in the proper order (see the examples in (c) below).
  • Important Note:
    • To manage Oracle ASM on Oracle Database 11g R2 installations, use the SRVCTL binary in the Oracle Grid Infrastructure home for a cluster (Grid home).
    • If you have Oracle RAC or Oracle Database installed, then you cannot use the SRVCTL binary in the database home to manage Oracle ASM.

Usage: srvctl <command> <object> [<options>]
    commands: enable|disable|start|stop|status|add|remove|modify|getenv|setenv|unsetenv|config
    objects: database|service|asm|diskgroup|listener|home|ons|eons

(a) check status of grid services and objects
jdoe@quark $ srvctl status asm
ASM is running on quark

jdoe@quark $ srvctl status diskgroup -g  oradata_dskgrp
Disk Group oradata_dskgrp is running on quark

jdoe@quark $ srvctl status listener
Listener LISTENER is enabled
Listener LISTENER is running on node(s): quark

-- Displaying the running status of all of the components that are managed by Oracle Restart in the specified Oracle home. 
-- The Oracle home can be an Oracle Database home or an Oracle Grid Infrastructure home.

jdoe@quark $ ./srvctl status home -o /u01/app/oracle/product/11.2.0/grid -s /home/oracle/statefile
Disk Group ora.ORADATA_DSKGRP.dg is running on quark
ASM is running on quark
Listener LISTENER is running on node quark


(b) The srvctl config command displays the Oracle Restart configuration of the specified component or set of components
jdoe@quark $ srvctl config asm -a
ASM home: /u01/app/oracle/product/11.2.0/grid
ASM listener: LISTENER
Spfile: +ORADATA_DSKGRP/asm/asmparameterfile/registry.253.768442773
ASM diskgroup discovery string: /dev/oracleasm/disks
ASM is enabled.

jdoe@quark $ srvctl config listener
Name: LISTENER
Home: /u01/app/oracle/product/11.2.0/grid
End points: TCP:1521

-- Display configuration and enabled/disabled status for the database with the DB_UNIQUE_NAME orcl:
jdoe@quark $ srvctl config database -d orcl -a

Database unique name: orcl
Database name: orcl
Oracle home: /u01/app/oracle/product/11.2.0/dbhome_1
Oracle user: oracle
Spfile: +DATA/orcl/spfileorcl.ora
Domain: us.example.com
Start options: open
Stop options: immediate
Database role:
Management policy: automatic
Disk Groups: DATA
Services: mfg,sales
Database is enabled
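

(c) start and stop components manually. Any components that the named component depends on are started or stopped automatically, in the proper order; a few hedged examples using the objects configured above:
jdoe@quark $ srvctl stop database -d orcl
jdoe@quark $ srvctl start database -d orcl
jdoe@quark $ srvctl stop listener
jdoe@quark $ srvctl start listener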