Network File System (NFS) - Concepts




What is NFS
  • NFS is a platform-independent remote file system technology created by Sun in the 1980s.
  • It is a client/server application that provides shared file storage for clients across a network.
  • It was designed to simplify the sharing of file system resources in a network of non-homogeneous machines.
  • It is implemented using the RPC protocol; files are made available over the network through a Virtual File System (VFS) interface, with the RPC messages carried over the TCP/IP stack.
  • It allows an application to access files on remote hosts in the same way it accesses local files.

NFS Servers: Computers that share files
  • During the late 1980s and 1990s, a common setup was to configure a powerful workstation with many local disks, often without a graphical display, as an NFS server.
  • "Thin," diskless workstations would then mount the remote file systems provided by the NFS Servers and transparently use them as if they were local files.

NFS Simplifies management:
  • Instead of duplicating common directories such as /usr/local on every system, NFS provides a single copy of the directory that is shared by all systems on the network (see the sketch below).
  • It also simplifies backup procedures: instead of setting up backups of the local contents of each workstation (of /home, for example), with NFS a sysadmin needs to back up only the server's disks.
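
For illustration, a minimal setup for sharing /usr/local and /home from a Linux NFS server might look like the sketch below; the server name nfsserver and the client subnet are assumptions, not taken from the text above.

-- On the server: /etc/exports entries for the shared directories
/usr/local   192.168.1.0/24(ro,sync)
/home        192.168.1.0/24(rw,sync)

-- Reload the export table and verify
# exportfs -ra
# exportfs -v

-- On a client: mount a shared directory over the network
# mount -t nfs nfsserver:/usr/local /usr/local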

NFS Clients: Computers that access shared files
  • NFS uses a mixture of kernel support and user-space daemons on the client side.
  • Multiple clients can mount the same remote file system so that users can share files.
  • Mounting can be done at boot time (e.g., /home could be a shared directory mounted by each client, so a user's home directory is available wherever the user logs in); see the fstab sketch below.
  • An NFS client
    • (a) mounts a remote file system onto the client's local file system name space and
    • (b) provides an interface so that access to the files in the remote file system is done as if they were local files.
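
A common way to perform the mount at boot time is an /etc/fstab entry on each client. A minimal sketch (the server name nfsserver is an assumption):

-- /etc/fstab (client): mount the server's /home at boot
nfsserver:/home   /home   nfs   defaults   0 0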

----
Goals of NFS design:
  1. Compatibility: NFS should provide the same semantics as a local Unix file system. Programs should not need to (nor be able to) tell whether a file is remote or local. For example, a user program calling OPEN("/users/jdoe/.profile", READONLY) cannot tell whether "users" or "jdoe" are local path names.
  2. Easy deployment: the implementation should be easily incorporated into existing systems, and remote files should be made available to local programs without those programs having to be modified or relinked.
  3. Machine and OS independence: NFS clients should run on non-Unix platforms, so the protocols had to be simple enough to be implemented easily on other platforms.
  4. Efficiency: NFS should be good enough to satisfy users, but did not have to be as fast as a local file system. Clients and servers should be able to recover easily from machine crashes and network problems.


NFS Versions
  • Version 1: used only inside Sun Microsystems.
  • Version 2: released in the late 1980s; documented in RFC 1094 (1989).
  • Version 3: released in 1995 (RFC 1813).
  • Version 4: released in 2000 (RFC 3010, later revised as RFC 3530).

NFS design: NFS Protocol, Server, Client

NFS Protocol
  • Uses Remote Procedure Call (RPC) mechanisms.
  • RPCs are synchronous (the client application blocks while it waits for the server's response).
  • NFS uses a stateless protocol (the server does not keep track of past requests). This simplifies crash recovery: after a server crash, all the client has to do is resubmit its last request.
  • In this way, the client cannot differentiate between a server that crashed and recovered and one that is merely slow.
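
Because NFS is layered on ONC RPC, the RPC services it relies on can be listed with rpcinfo. The output below is an abbreviated, illustrative sketch (the server name and the mountd port are assumptions; 100000, 100003 and 100005 are the well-known program numbers for the portmapper, nfs and mountd):

$ rpcinfo -p nfsserver
   program vers proto   port  service
    100000    2   tcp    111  portmapper
    100005    3   udp    892  mountd
    100003    3   udp   2049  nfs
    100003    3   tcp   2049  nfs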

New File system interface
  • The original Unix file system interface was modified in order to implement NFS as an extension of the Unix file system.
  • NFS was built into the Unix kernel by separating generic file system operations from specific implementations. With this separation, the kernel can treat all file systems and nodes uniformly, and new file systems can be added to the kernel easily:
    • A Virtual File System (VFS) interface: defines the operations that can be done on a filesystem.
    • A Virtual node (vnode) interface: defines the operations that can be done on a file within a filesystem.
  • A vnode is a logical structure that abstracts whether a file or directory is implemented by a local or a remote file system. In this sense, applications had to "see" only the vnode interface and the actual location of the file (local or remote file system) is irrelevant for the application.
  • In addition, this interface allows a computer to transparently access different types of local file systems (e.g. ext2, ext3, ReiserFS, msdos, proc), as the mount listing below illustrates.
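
An abbreviated, illustrative listing on a Linux client, where local and remote file systems of different types sit side by side under the same VFS (device names and the nfsserver host are assumptions):

$ mount
/dev/sda1 on / type ext3 (rw)
proc on /proc type proc (rw)
nfsserver:/home on /home type nfs (rw,addr=192.168.1.10)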



NFS Client
Uses a mounter program. The mounter:
  1. takes a remote file system identification host:path;
  2. sends an RPC to host asking for (1) a file handle for path and (2) the server's network address;
  3. marks the mount point in the local file system as a remote file system associated with that host address:path pair (see the example below).
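
Steps 1 and 2 correspond to an RPC exchange with the server's mount daemon. On Linux, the exports a server is willing to hand out file handles for can be listed with showmount, and the mount(8) command drives the whole sequence; a sketch with placeholder host and path names:

$ showmount -e nfsserver
Export list for nfsserver:
/export/users 192.168.1.0/24

# mount -t nfs nfsserver:/export/users /users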



Diagram of NFS architecture

NFS Remote Procedure Calls
The NFS client uses RPCs to implement each file system operation.
Consider the user program code below:
fd <- OPEN ("f", READONLY)
READ (fd, buf, n)
CLOSE (fd)
  • An application opens file "f", issues a read request, and closes the file.
  • The file "f" is a remote file, but this is irrelevant to the application.
  • The virtual file system keeps a map from mount points to the host addresses and file handles (dirfh) of all mounted remote file systems.
  • The sequence of steps to obtain the file are listed below:

  1. The Virtual File System finds that file "f" is on a remote file system, and passes the request to the NFS client.
  2. The NFS client sends a lookup request, LOOKUP(dirfh, "f"), to the NFS server, passing the file handle (dirfh) of the remote file system and the name of the file to be read.
  3. The NFS server receives the LOOKUP request, extracts the file system identifier and inode number from dirfh, and asks the identified local file system to look up that inode number and return the directory's inode information.
  4. The NFS server searches the directory identified by the inode number for file "f".
    If file is found, the server creates a handle for "f" and sends it back to the client.
  5. The NFS client allocates the first unused entry in the program's file descriptor table, stores a reference to f's file handle in that entry, and returns the index for the entry (fd) to the user program.
  6. Next, the user program calls READ(fd, buf, n).
  7. The NFS client sends the RPC READ(fh,0,n).
  8. The NFS server looks up the inode for fh, reads the data, and sends it back in a reply message.
  9. When the user program calls to close the file (CLOSE(fd)), the NFS client does not issue an RPC, since the program did not modify the file.
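
The RPC traffic a client generates for these operations can be observed with nfsstat. The listing below is an abbreviated, illustrative sketch of the client-side counters (the column set and the values vary by NFS version and implementation):

$ nfsstat -c
Client rpc stats:
calls      retrans    authrefrsh
1432       0          0
Client nfs v3:
getattr    lookup     read       write
412        388        560        0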




References:
Russel Sandberg, David Goldberg, Steve Kleiman, Dan Walsh, and Bob Lyon. "Design and Implementation of the Sun Network Filesystem." Proceedings of the Summer 1985 USENIX Conference, Portland, OR, June 1985, pp. 119-130.
Jerome H. Saltzer and M. Frans Kaashoek. Principles of Computer System Design: An Introduction. Morgan Kaufmann, 2009.

Oracle ASM on 11g R2: Installing Grid Infrastructure



Note: For the steps to configure Oracle ASM disks before installing Grid Infrastructure, see the section "Configuring Oracle ASM on Oracle 11gR2" below.



This Grid Infrastructure installation on a standalone server will perform the following steps:
  1. Install Oracle ASM software
  2. Install Oracle Restart software
  3. Install and configure the Listener
  4. Create an ASM Disk group
  5. Create and configure an ASM Instance on the machine

Before proceeding, make sure that you set the path to the Oracle base directory.
On bash shell:
$ ORACLE_BASE=/u01/app/oracle; export ORACLE_BASE
$ echo $ORACLE_BASE
/u01/app/oracle

  • Logged in as the Grid Infrastructure owner, change to the grid infrastructure media directory, run the installation program, and follow the installation steps below.
  • In our case we are setting up a single-owner environment, so make sure you are logged in as the user oracle.
$ ./runInstaller

Starting Oracle Universal Installer...

Checking Temp space: must be greater than 80 MB.   Actual 5902 MB    Passed
Checking swap space: must be greater than 150 MB.   Actual 2047 MB    Passed
Checking monitor: must be configured to display at least 256 colors.    Actual 16777216    Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2010-09-18_08-01-12PM. Please wait ...
$ 

Select installation type
  • Select option to install and configure Grid Infrastructure for a standalone server.
Select language
  • In the next screen, select the installation language.
Select disks to form the disk group
  • The next screen should list all the disks previously configured for ASM use.
  • These candidate disks should have been discovered at boot time by ASMLib.
  • If no disks are listed:
(a) Check whether the ownership of the disk devices is configured appropriately.
The disk devices must be owned by the user performing the grid installation.
Check user and group ownership with the command:
# ls -l /dev/oracleasm/disks/
total 0
brw-rw---- 1 oracle dba 8, 17 Sep 18 22:33 DISK1
brw-rw---- 1 oracle dba 8, 33 Sep 18 22:52 DISK2
brw-rw---- 1 oracle dba 8, 49 Sep 18 22:52 DISK3
brw-rw---- 1 oracle dba 8, 65 Sep 18 22:53 DISK4
(b) check whether ASMLib driver is loaded:
# oracleasm status
Checking if ASM is loaded: yes
Checking if /dev/oracleasm is mounted: yes

# oracleasm listdisks
DISK1
DISK2
DISK3
DISK4
(c) Check the default discovery string in the installer.
In Linux, the default discovery string is '/dev/raw*'.
Click the Change Discovery Path button and type '/dev/oracleasm/disks/*' (without quotes!).
This should list all the disks you have previously configured.

Configure ASM Disk Group
  • Select a name for the disk group being created and select the disks that will compose the group.
  • Here we choose normal redundancy and create oradata_dskgrp with disk1 (/dev/sdb1, 3 GB) and disk3 (/dev/sdd1, 3 GB).
  • Each Oracle ASM disk is divided into allocation units (AU).
  • An allocation unit is the fundamental unit of allocation within a disk group; by default it is 1 MB. (The resulting disk group can be inspected later with asmcmd, as sketched below.)
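
Once the ASM instance is up, at the end of this installation, asmcmd lsdg shows the disk group's redundancy type and AU size. The listing below is an abbreviated, illustrative sketch (some columns omitted; the sizes correspond to the two 3 GB disks chosen above):

$ asmcmd lsdg
State    Type    Rebal  Sector  Block       AU  Total_MB  Free_MB  Usable_file_MB  Name
MOUNTED  NORMAL  N         512   4096  1048576      6144     6038            2969  ORADATA_DSKGRP/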


Specify the passwords for the SYS and ASMSNMP users.
  • These users are created in the ASM instance.
  • To manage an ASM instance, a user needs the SYSASM role, which grants full access to all ASM disks (including the authority to create and delete ASM disks).
  • The ASMSNMP user, which has only the SYSDBA role, can monitor the instance but does not have full access to the ASM disks. (A connection example follows below.)
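
Once the ASM instance exists, an administrative connection is made with the SYSASM privilege (OS authentication through the OSASM group). A minimal sketch, assuming the grid home path used in this installation:

$ export ORACLE_SID=+ASM
$ export ORACLE_HOME=/u01/app/oracle/product/11.2.0/grid
$ $ORACLE_HOME/bin/sqlplus / as sysasm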
Select the name of the OS groups to be used for OS authentication to ASM:
Select installation location.
In the next two screens, accept or change the location for the Oracle grid home directory, and accept the location for the inventory directory (if this is the first Oracle installation on the machine).
Check whether all installation prerequisites were met. If so, proceed.
Review contents and click Install.
Run the Post-installation scripts (as root)
# ./root.sh 
Running Oracle 11g root.sh script...

The following environment variables are set as:
ORACLE_OWNER= oracle
ORACLE_HOME=  /u01/app/oracle/product/11.2.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]: 
Copying dbhome to /usr/local/bin ...
Copying oraenv to /usr/local/bin ...
Copying coraenv to /usr/local/bin ...

Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.
2010-09-18 00:16:18: Checking for super user privileges
2010-09-18 00:16:18: User has super user privileges
2010-09-18 00:16:18: Parsing the host name
Using configuration parameter file: /u01/app/oracle/product/11.2.0/grid/crs/install/crsconfig_params
Creating trace directory
LOCAL ADD MODE 
Creating OCR keys for user 'oracle', privgrp 'oinstall'..
Operation successful.
CRS-4664: Node quark successfully pinned.
Adding daemon to inittab
CRS-4123: Oracle High Availability Services has been started.
ohasd is starting
ADVM/ACFS is not supported on oraclelinux-release-5-6.0.1

quark     2010-09-18 00:16:54     /u01/app/oracle/product/11.2.0/grid/cdata/quark/backup_20100918_001654.olr
Successfully configured Oracle Grid Infrastructure for a Standalone Server
Updating inventory properties for clusterware
...

  • When the installation completes, you should have an ASM instance up and running.
  • Some of the processes running include:
$ ps -ef |grep ora
...
oracle   17900     1  0 00:16 ?        00:00:03 /u01/app/oracle/product/11.2.0/grid/bin/ohasd.bin reboot
          --> This is the Oracle Restart (Oracle High Availability Service) daemon.
oracle   18356     1  0 00:18 ?        00:00:01 /u01/app/oracle/product/11.2.0/grid/bin/oraagent.bin
          --> Extends clusterware to support Oracle-specific requirements and complex resources.
          --> Runs server callout scripts when FAN events occur. 
          --> This process was known as RACG in Oracle Clusterware 11g release 1 (11.1).
oracle   18375     1  0 00:18 ?        00:00:00 /u01/app/oracle/product/11.2.0/grid/bin/tnslsnr LISTENER -inherit
oracle   18563     1  0 00:18 ?        00:00:00 /u01/app/oracle/product/11.2.0/grid/bin/cssdagent
          --> Starts, stops and monitors Oracle Clusterware
oracle   18565     1  0 00:18 ?        00:00:00 /u01/app/oracle/product/11.2.0/grid/bin/orarootagent.bin
          --> A specialized oraagent process that helps crsd manage resources
              owned by root, such as the network and the Grid virtual IP address.
oracle   18599     1  0 00:18 ?        00:00:00 /u01/app/oracle/product/11.2.0/grid/bin/diskmon.bin -d -f
          --> I/O Fencing and SKGXP HA monitoring daemon
oracle   18600     1  0 00:18 ?        00:00:00 /u01/app/oracle/product/11.2.0/grid/bin/ocssd.bin 
          --> Oracle Cluster Synchronization Service Daemon (OCSSD). 
          --> performs some of the clusterware functions on UNIX-based systems
          --> ocssd.bin is required for ASM Instance. 

oracle   18884     1  0 00:19 ?        00:00:00 asm_pmon_+ASM  ----
oracle   18888     1  0 00:19 ?        00:00:00 asm_vktm_+ASM      |
oracle   18894     1  0 00:19 ?        00:00:00 asm_gen0_+ASM      |
oracle   18898     1  0 00:19 ?        00:00:00 asm_diag_+ASM      |
oracle   18902     1  0 00:19 ?        00:00:00 asm_psp0_+ASM      |
oracle   18906     1  0 00:19 ?        00:00:00 asm_dia0_+ASM      |
oracle   18910     1  0 00:19 ?        00:00:00 asm_mman_+ASM      |==================
oracle   18914     1  0 00:19 ?        00:00:00 asm_dbw0_+ASM      |=>     +ASM Instance 
oracle   18918     1  0 00:19 ?        00:00:00 asm_lgwr_+ASM      |=> background processes
oracle   18922     1  0 00:19 ?        00:00:00 asm_ckpt_+ASM      |==================
oracle   18926     1  0 00:19 ?        00:00:00 asm_smon_+ASM      |
oracle   18930     1  0 00:19 ?        00:00:00 asm_rbal_+ASM      |
oracle   18934     1  0 00:19 ?        00:00:00 asm_gmon_+ASM      |
oracle   18938     1  0 00:19 ?        00:00:00 asm_mmon_+ASM      |
oracle   18942     1  0 00:19 ?        00:00:00 asm_mmnl_+ASM  ----
oracle   19119 13210  0 00:23 pts/2    00:00:00 ps -ef
oracle   19120 13210  0 00:23 pts/2    00:00:00 grep ora
$


Using Oracle Restart
  • When created, a new database instance automatically registers itself with Oracle Restart.
  • Once added to the Oracle Restart configuration, if the database accesses data in an Oracle ASM disk group, a dependency between the database and that disk group is created.
  • Oracle Restart then ensures that the disk group is mounted before attempting to start the database. (A manual registration sketch is shown below.)
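
Databases created with DBCA are registered automatically; registering one manually together with its disk group dependency would look roughly like the sketch below (the database name orcl, its home and the disk group list are illustrative assumptions):

$ srvctl add database -d orcl -o /u01/app/oracle/product/11.2.0/dbhome_1 -a "ORADATA_DSKGRP"
$ srvctl config database -d orcl -a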

About SRVCTL
  • You can use SRVCTL commands to add, remove, start, stop, modify, enable, and disable a number of entities, such as databases, instances, listeners, SCAN listeners, services, grid naming service (GNS), and Oracle ASM.
  • SRVCTL utility can be used to start and stop the Oracle Restart components manually.
  • When you start/stop a component with SRVCTL, any components on which this component depends are automatically started/stopped first, and in the proper order.
  • Important Note:
    • To manage Oracle ASM on Oracle Database 11g R2 installations, use the SRVCTL binary in the Oracle Grid Infrastructure home for a cluster (Grid home).
    • If you have Oracle RAC or Oracle Database installed, then you cannot use the SRVCTL binary in the database home to manage Oracle ASM.

Usage: srvctl command object [options]
    commands: enable|disable|start|stop|status|add|remove|modify|getenv|setenv|unsetenv|config
    objects: database|service|asm|diskgroup|listener|home|ons|eons

(a) check status of grid services and objects
jdoe@quark $ srvctl status asm
ASM is running on quark

jdoe@quark $ srvctl status diskgroup -g  oradata_dskgrp
Disk Group oradata_dskgrp is running on quark

jdoe@quark $ srvctl status listener
Listener LISTENER is enabled
Listener LISTENER is running on node(s): quark

-- Displaying the running status of all of the components that are managed by Oracle Restart in the specified Oracle home. 
-- The Oracle home can be an Oracle Database home or an Oracle Grid Infrastructure home.

jdoe@quark $ ./srvctl status home -o /u01/app/oracle/product/11.2.0/grid -s /home/oracle/statefile
Disk Group ora.ORADATA_DSKGRP.dg is running on quark
ASM is running on quark
Listener LISTENER is running on node quark


(b) The srvctl config command displays the Oracle Restart configuration of the specified component or set of components
jdoe@quark $ srvctl config asm -a
ASM home: /u01/app/oracle/product/11.2.0/grid
ASM listener: LISTENER
Spfile: +ORADATA_DSKGRP/asm/asmparameterfile/registry.253.768442773
ASM diskgroup discovery string: /dev/oracleasm/disks
ASM is enabled.

jdoe@quark $ srvctl config listener
Name: LISTENER
Home: /u01/app/oracle/product/11.2.0/grid
End points: TCP:1521

-- Display configuration and enabled/disabled status for the database with the DB_UNIQUE_ID orcl:
jdoe@quark $ srvctl config database -d orcl -a

Database unique name: orcl
Database name: orcl
Oracle home: /u01/app/oracle/product/11.2.0/dbhome_1
Oracle user: oracle
Spfile: +DATA/orcl/spfileorcl.ora
Domain: us.example.com
Start options: open
Stop options: immediate
Database role:
Management policy: automatic
Disk Groups: DATA
Services: mfg,sales
Database is enabled
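
(c) Start and stop Oracle Restart components manually. As noted above, components that the stopped or started component depends on are handled automatically and in the proper order. A brief sketch using the disk group and listener configured earlier:

jdoe@quark $ srvctl stop listener
jdoe@quark $ srvctl stop asm -f
jdoe@quark $ srvctl start asm
jdoe@quark $ srvctl start diskgroup -g oradata_dskgrp
jdoe@quark $ srvctl start listener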

Configuring Oracle ASM on Oracle 11gR2



General Comments:
  • Oracle Grid Infrastructure (OGI) provides system support for an Oracle database:
    • - Volume Management: Oracle ASM
    • - File System: Oracle ASM, Oracle ACFS
    • - Automatic Restart capabilities: Oracle Restart
  • To use Oracle ASM, OGI MUST be installed BEFORE installing database software.

Steps:
  1. Check requirements for OGI installation
  2. Configure Oracle Grid user's environment
  3. Design and configure storage schema for ASM
  4. Configure disks for ASM - Install ASMLib
  5. Configure ASM Disks - oracleasm
  6. Install Grid Infrastructure
  7. Configure Diskgroups

Check requirements for OGI installation
Memory Requirements:
  • Minimum 1.5 GB of RAM for OGI; minimum 1 GB for the database software.
  • Swap space: 1.5 times the RAM.
Disk Space requirements:
  • Minimum 5.5 GB on disk; minimum 1.5 GB on /tmp.

$ grep MemTotal /proc/meminfo
MemTotal:      1932856 kB
$ grep SwapTotal /proc/meminfo
SwapTotal:     2097144 kB

$ free
             total       used       free     shared    buffers     cached
Mem:       1932856     473832    1459024          0      24196     334204
-/+ buffers/cache:     115432    1817424
Swap:      2097144          0    2097144

Configure Oracle Grid user's environment
Create the OS groups and the oracle user (this will be a single-owner installation); a sketch of the user's profile settings follows the directory setup below.
# /usr/sbin/groupadd oinstall
# /usr/sbin/groupadd -g 502 dba
# /usr/sbin/groupadd -g 503 oper
# /usr/sbin/groupadd -g 504 asmadmin
# /usr/sbin/groupadd -g 505 asmoper
# /usr/sbin/groupadd -g 506 asmdba


-- Single owner installation
# /usr/sbin/useradd -u 502 -g oinstall -G dba,oper,asmadmin,asmdba  oracle
# passwd oracle

Create $ORACLE_BASE directory:
# mkdir -p /u01/app/oracle
# chown -R oracle:oinstall /u01
# chmod -R 775 /u01/
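
A minimal sketch of the environment settings that can then be added to the oracle user's shell profile (the ORACLE_BASE path matches the directory created above):

# ~oracle/.bash_profile (excerpt)
export ORACLE_BASE=/u01/app/oracle
umask 022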

Design and configure storage schema for ASM
  • In this example, we will configure ASM with Normal Redundancy level.
  • With Normal redundancy, Oracle ASM uses two-way mirroring for datafiles and three-way mirroring for control files, by default.
  • A minimum of two failure groups (or two disk devices) is required.
  • Effective disk space is half the sum of the disk space of all devices in the disk group.

  • If two disk devices in a normal redundancy disk group are attached to the same SCSI controller, then the controller represents a single point of failure for the entire disk group.
  • To avoid this type of failure, you can define a failure group for the disks attached in different controllers.
  • Note that all devices in a disk group must be the same size and should have similar performance characteristics.
  • Also, do not specify multiple partitions of a single physical disk as components of the same disk group.
  • Each disk in a disk group should be on a separate physical disk.
  • In this example:
    • Disk group 1: /dev/sdb (3Gb) and /dev/sdd (3Gb). Effective Space: 3Gb
    • Disk group 2: /dev/sdc (4Gb) and /dev/sde (4Gb). Effective Space: 4Gb
(a) Check the existing disks with the command below.
Note that the devices /dev/sdb, /dev/sdc, /dev/sdd and /dev/sde are not yet partitioned.
# cat /proc/partitions
major minor  #blocks  name
 ...
   8    16    3145728 sdb
   8    32    4194304 sdc
   8    48    3145728 sdd
   8    64    4194304 sde
...

(b) Create a single partition on each device:
# fdisk /dev/sdb
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel. Changes will remain in memory only,
until you decide to write them. After that, of course, the previous
content won't be recoverable.

Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-391, default 1): 
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-391, default 391): 
Using default value 391

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.

(c) Repeat for /dev/sdc, /dev/sdd, /dev/sde and check the resulting partitions. Partitions named /dev/sdb1, /dev/sdc1, /dev/sdd1, and /dev/sde1 should be listed.
# cat /proc/partitions
major minor  #blocks  name
...
   8    16    3145728 sdb
   8    17    3140676 sdb1
   8    32    4194304 sdc
   8    33    4192933 sdc1
   8    48    3145728 sdd
   8    49    3140676 sdd1
   8    64    4194304 sde
   8    65    4192933 sde1
...

  • At this point the four disks are partitioned and empty.
  • To be used by Oracle ASM, they need to be configured so that they can be mounted and managed by Oracle ASM.
  • Once configured for Oracle ASM, a disk is known as a candidate disk.
  • To configure the disks for Oracle ASM, you need to:
    1. Install the Oracle ASM library driver (ASMLib)
    2. Use the oracleasm utility to configure the disks
Install and configure ASMLib
  • ASMLib is a support library for Oracle ASM. Oracle provides a Linux-specific implementation of this library.
  • All ASMLib installations require the oracleasmlib and oracleasm-support packages appropriate for their machine.
  • The driver packages are named after the kernel they support.

(a) If you are running Oracle Enterprise Linux, you can install oracleasm directly from the installation media. After this, just run oracleasm update-driver to install the correct Oracle ASMLib driver.
# oracleasm update-driver
Kernel:         2.6.18-238.el5PAE i686
Driver name:    oracleasm-2.6.18-238.el5PAE
Latest version: oracleasm-2.6.18-238.el5PAE-2.0.5-1.el5.i686.rpm
Installing driver... 
Preparing...                ########################################### [100%]
        package oracleasm-2.6.18-238.el5PAE-2.0.5-1.el5.i686 installed
Alternatively, if you are installing on a Linux distribution other than Oracle Enterprise Linux, you need to:
(a.1) Determine the kernel version that your system is running:
# uname -rm
2.6.18-238.el5PAE i686
(a.2) Download Oracle ASM library driver packages from the Oracle Technology Network website:
  • You must install the following packages, where version is the version of the Oracle ASM library driver, arch is the system architecture, and kernel is the version of the kernel that you are using (see the installation sketch in (a.3) below):
    • oracleasm-support-version.arch.rpm
    • oracleasm-kernel-version.arch.rpm
    • oracleasmlib-version.arch.rpm
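
(a.3) Install the downloaded packages with rpm; a sketch in which version, kernel and arch stand for the values determined above (the exact file names depend on your download):
# rpm -Uvh oracleasm-support-version.arch.rpm \
           oracleasm-kernel-version.arch.rpm \
           oracleasmlib-version.arch.rpm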

Configure disks for ASM
(b) Configure Oracle ASM using the oracleasm script.
Here you need to provide the username of the Grid Infrastructure owner; in this case it is the user oracle.
(If the Grid Infrastructure and the Oracle database software were to be managed separately, you would instead use the name of the grid owner user, which is often called grid.)
# /etc/init.d/oracleasm configure
Configuring the Oracle ASM library driver.

This will configure the on-boot properties of the Oracle ASM library
driver.  The following questions will determine whether the driver is
loaded on boot and what permissions it will have.  The current values
will be shown in brackets ('[]').  Hitting <ENTER> without typing an
answer will keep that current value.  Ctrl-C will abort.

Default user to own the driver interface []: oracle
Default group to own the driver interface []: asmadmin
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration: done
Initializing the Oracle ASMLib driver:                     [  OK  ]
Scanning the system for Oracle ASMLib disks:               [  OK  ]

(c) Configure the disk devices to use the ASMLib driver.
  • Here you identify ("mark") the disks that ASM will use to form the disk groups later on.
  • During boot time, Oracle ASMLib will identify these disks and make them available for Oracle ASM.
# /etc/init.d/oracleasm createdisk DISK1 /dev/sdb1
Marking disk "DISK1" as an ASM disk:                       [  OK  ]
# /etc/init.d/oracleasm createdisk DISK2 /dev/sdc1
Marking disk "DISK2" as an ASM disk:                       [  OK  ]
# /etc/init.d/oracleasm createdisk DISK3 /dev/sdd1
Marking disk "DISK3" as an ASM disk:                       [  OK  ]
# /etc/init.d/oracleasm createdisk DISK4 /dev/sde1
Marking disk "DISK4" as an ASM disk:                       [  OK  ]

Other oracleasm options include:
# /etc/init.d/oracleasm
Usage: /etc/init.d/oracleasm {start|stop|restart|enable|disable|configure|createdisk|deletedisk|querydisk|listdisks|scandisks|status}

# /etc/init.d/oracleasm listdisks
DISK1
DISK2
DISK3
DISK4

# /etc/init.d/oracleasm querydisk disk1
Disk "DISK1" is a valid ASM disk

# oracleasm status
Checking if ASM is loaded: yes
Checking if /dev/oracleasm is mounted: yes

-- You can check that the disks are mounted in the oracleasm filesystem with the command:
# ls -l /dev/oracleasm/disks/
total 0
brw-rw---- 1 oracle asmadmin 8, 17 Nov 28 18:00 DISK1
brw-rw---- 1 oracle asmadmin 8, 33 Nov 28 18:00 DISK2
brw-rw---- 1 oracle asmadmin 8, 49 Nov 28 18:00 DISK3
brw-rw---- 1 oracle asmadmin 8, 65 Nov 28 18:00 DISK4