Configuring Oracle ASM on Oracle 11gR2



General Comments:
  • Oracle Grid Infrastructure (OGI) provides system support for an Oracle database:
    • Volume Management: Oracle ASM
    • File System: Oracle ASM, Oracle ACFS
    • Automatic Restart capabilities: Oracle Restart
  • To use Oracle ASM, OGI MUST be installed BEFORE installing database software.

Steps:
  1. Check requirements for OGI installation
  2. Configure Oracle Grid user's environment
  3. Design and configure storage schema for ASM
  4. Configure disks for ASM - Install ASMLib
  5. Configure ASM Disks - oracleasm
  6. Install Grid Infrastructure
  7. Configure Diskgroups

Check requirements for OGI installation
Memory Requirements:
  • At least 1.5 GB of RAM for OGI; at least 1 GB for the database software.
  • Swap space: 1.5 times the amount of RAM
Disk Space Requirements:
  • At least 5.5 GB of disk space for the software; at least 1.5 GB free on /tmp

$ grep MemTotal /proc/meminfo
MemTotal:      1932856 kB
$ grep SwapTotal /proc/meminfo
SwapTotal:     2097144 kB

$ free
             total       used       free     shared    buffers     cached
Mem:       1932856     473832    1459024          0      24196     334204
-/+ buffers/cache:     115432    1817424
Swap:      2097144          0    2097144
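As a quick sanity check, the 1.5x swap rule can be scripted. The `check_swap` helper below is a hypothetical sketch (not an Oracle-supplied tool); it takes RAM and swap sizes in kB, the unit reported by /proc/meminfo:

```shell
# Hypothetical helper (not an Oracle tool): check the 1.5x swap rule.
# Both arguments are in kB, as in /proc/meminfo.
check_swap() {
  mem_kb=$1
  swap_kb=$2
  required=$(( mem_kb * 3 / 2 ))   # 1.5x RAM, using integer arithmetic
  if [ "$swap_kb" -ge "$required" ]; then
    echo "swap OK ($swap_kb kB >= $required kB)"
  else
    echo "swap TOO SMALL ($swap_kb kB < $required kB)"
  fi
}

# Example with made-up values: 1 GB of RAM, 1.6 GB of swap
check_swap 1048576 1677722   # prints "swap OK (1677722 kB >= 1572864 kB)"
```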

Configure Oracle Grid user's environment
Create the OS groups and the oracle user (this will be a single-owner installation):
# /usr/sbin/groupadd oinstall
# /usr/sbin/groupadd -g 502 dba
# /usr/sbin/groupadd -g 503 oper
# /usr/sbin/groupadd -g 504 asmadmin
# /usr/sbin/groupadd -g 505 asmoper
# /usr/sbin/groupadd -g 506 asmdba


-- Single owner installation
# /usr/sbin/useradd -u 502 -g oinstall -G dba,oper,asmadmin,asmdba  oracle
# passwd oracle

Create $ORACLE_BASE directory:
# mkdir -p /u01/app/oracle
# chown -R oracle:oinstall /u01
# chmod -R 775 /u01/

Design and configure storage schema for ASM
  • In this example, we will configure ASM with Normal Redundancy level.
  • With Normal redundancy, Oracle ASM uses two-way mirroring for datafiles and three-way mirroring for control files, by default.
  • A minimum of two failure groups (or two disk devices) is required.
  • Effective disk space is half the sum of the disk space of all devices in the disk group.

  • If two disk devices in a normal redundancy disk group are attached to the same SCSI controller, then the controller represents a single point of failure for the entire disk group.
  • To avoid this type of failure, you can place disks attached to different controllers in different failure groups.
  • Note that all devices in a disk group must be the same size and should have similar performance characteristics.
  • Also, do not specify multiple partitions in the same single physical disk as components of the same disk group.
  • Each disk in a disk group should be on a separate physical disk.
  • In this example:
    • Disk group 1: /dev/sdb (3 GB) and /dev/sdd (3 GB). Effective space: 3 GB
    • Disk group 2: /dev/sdc (4 GB) and /dev/sde (4 GB). Effective space: 4 GB
(a) Check the existing disks with the command below.
Note that the devices /dev/sdb, /dev/sdc, /dev/sdd, and /dev/sde are not yet partitioned.
# cat /proc/partitions
major minor  #blocks  name
 ...
   8    16    3145728 sdb
   8    32    4194304 sdc
   8    48    3145728 sdd
   8    64    4194304 sde
...

(b) Create a single partition on each device:
# fdisk /dev/sdb
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel. Changes will remain in memory only,
until you decide to write them. After that, of course, the previous
content won't be recoverable.

Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-391, default 1): 
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-391, default 391): 
Using default value 391

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.

(c) Repeat for /dev/sdc, /dev/sdd, /dev/sde and check the resulting partitions. Partitions named /dev/sdb1, /dev/sdc1, /dev/sdd1, and /dev/sde1 should be listed.
# cat /proc/partitions
major minor  #blocks  name
...
   8    16    3145728 sdb
   8    17    3140676 sdb1
   8    32    4194304 sdc
   8    33    4192933 sdc1
   8    48    3145728 sdd
   8    49    3140676 sdd1
   8    64    4194304 sde
   8    65    4192933 sde1
...

  • At this point the four disks are empty and partitioned.
  • To be used by Oracle ASM, they must be configured so that Oracle ASM can mount and manage them.
  • Once configured to be used by Oracle ASM, a disk is known as a candidate disk.
  • To configure the disks for Oracle ASM, you need to:
    1. Install the Oracle ASM library driver (ASMLib)
    2. Use the oracleasm utility to configure the disks
Install and configure ASMLib
  • ASMLib is a support library for Oracle ASM. Oracle provides a Linux-specific implementation of this library.
  • All ASMLib installations require the oracleasmlib and oracleasm-support packages appropriate for their machine.
  • The driver packages are named after the kernel they support.

(a) If you are running Oracle Enterprise Linux, you can install oracleasm directly from the installation media. After this, just run oracleasm update-driver to install the correct Oracle ASMLib driver.
# oracleasm update-driver
Kernel:         2.6.18-238.el5PAE i686
Driver name:    oracleasm-2.6.18-238.el5PAE
Latest version: oracleasm-2.6.18-238.el5PAE-2.0.5-1.el5.i686.rpm
Installing driver... 
Preparing...                ########################################### [100%]
        package oracleasm-2.6.18-238.el5PAE-2.0.5-1.el5.i686 installed
Alternatively, if you are installing on a Linux distribution other than Oracle Enterprise Linux, you need to:
(a.1) Determine the kernel version that your system is running:
# uname -rm
2.6.18-238.el5PAE i686
(a.2) Download Oracle ASM library driver packages from the Oracle Technology Network website:
  • You must install the following packages, where version is the version of the Oracle ASM library driver, arch is the system architecture, and kernel is the version of the kernel that you are using:
    • oracleasm-support-version.arch.rpm
    • oracleasm-kernel-version.arch.rpm
    • oracleasmlib-version.arch.rpm

Configure disks for ASM
(b) Configure Oracle ASM using the oracleasm script.
Here you need to provide the username of the Grid Infrastructure owner. In this case, it is the user oracle.
(If Grid Infrastructure and Oracle database software were to be managed separately, you would use here the name of the grid owner user, which is often called grid.)
# /etc/init.d/oracleasm configure
Configuring the Oracle ASM library driver.

This will configure the on-boot properties of the Oracle ASM library
driver.  The following questions will determine whether the driver is
loaded on boot and what permissions it will have.  The current values
will be shown in brackets ('[]').  Hitting <ENTER> without typing an
answer will keep that current value.  Ctrl-C will abort.

Default user to own the driver interface []: oracle
Default group to own the driver interface []: asmadmin
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration: done
Initializing the Oracle ASMLib driver:                     [  OK  ]
Scanning the system for Oracle ASMLib disks:               [  OK  ]

(c) Configure the disk devices to use the ASMLib driver.
  • Here you identify ("mark") the disks that will be used by ASM in the disk groups later on.
  • During boot time, Oracle ASMLib will identify these disks and make them available for Oracle ASM.
# /etc/init.d/oracleasm createdisk DISK1 /dev/sdb1
Marking disk "DISK1" as an ASM disk:                       [  OK  ]
# /etc/init.d/oracleasm createdisk DISK2 /dev/sdc1
Marking disk "DISK2" as an ASM disk:                       [  OK  ]
# /etc/init.d/oracleasm createdisk DISK3 /dev/sdd1
Marking disk "DISK3" as an ASM disk:                       [  OK  ]
# /etc/init.d/oracleasm createdisk DISK4 /dev/sde1
Marking disk "DISK4" as an ASM disk:                       [  OK  ]

Other oracleasm options include:
# /etc/init.d/oracleasm
Usage: /etc/init.d/oracleasm {start|stop|restart|enable|disable|configure|createdisk|deletedisk|querydisk|listdisks|scandisks|status}

# /etc/init.d/oracleasm listdisks
DISK1
DISK2
DISK3
DISK4

# /etc/init.d/oracleasm querydisk disk1
Disk "DISK1" is a valid ASM disk

# oracleasm status
Checking if ASM is loaded: yes
Checking if /dev/oracleasm is mounted: yes

-- You can check that the disks are mounted in the oracleasm filesystem with the command:
# ls -l /dev/oracleasm/disks/
total 0
brw-rw---- 1 oracle asmadmin 8, 17 Nov 28 18:00 DISK1
brw-rw---- 1 oracle asmadmin 8, 33 Nov 28 18:00 DISK2
brw-rw---- 1 oracle asmadmin 8, 49 Nov 28 18:00 DISK3
brw-rw---- 1 oracle asmadmin 8, 65 Nov 28 18:00 DISK4
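The major,minor numbers in this listing (8,17 and so on) are the same ones shown by /proc/partitions, which lets you confirm that DISK1 really maps to /dev/sdb1. The `find_dev` helper below is an illustrative sketch that matches a major/minor pair against /proc/partitions-style lines (sample input is used here instead of reading the real file):

```shell
# Illustrative sketch: find which partition a given major,minor pair refers
# to, using /proc/partitions-style input. On a real system you would pipe
# in /proc/partitions itself: find_dev 8 17 < /proc/partitions
find_dev() {
  major=$1; minor=$2
  awk -v M="$major" -v m="$minor" '$1 == M && $2 == m { print $4 }'
}

# Sample lines copied from the /proc/partitions output shown earlier:
printf '%s\n' \
  "   8    16    3145728 sdb" \
  "   8    17    3140676 sdb1" \
  "   8    33    4192933 sdc1" | find_dev 8 17   # prints "sdb1"
```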

Oracle Automatic Storage Management - Concepts


  • Oracle ASM is a volume manager and a file system for Oracle database files.
  • It supports single-instance and Oracle RAC configurations.
  • Oracle ASM also supports a general-purpose file system that can store application files and Oracle database binaries.
  • It provides an alternative to conventional volume managers, file systems and raw devices.

  • Oracle ASM distributes I/O load across all available resources to optimize performance.
  • In this way, it removes the need for manual I/O tuning (spreading out the database files avoids hotspots).
  • Oracle ASM allows the DBA to define a pool of storage (disk groups).
  • The Oracle kernel manages the file naming and placement of the database files on the storage pool.
Disk groups
  • Oracle ASM stores data files in disk groups.
  • A disk group is a collection of disks managed as a unit by Oracle ASM.
  • Oracle ASM disks can be defined on:
    • A disk partition: an entire disk, or a section of a disk that does not include the partition table (otherwise the partition table would be overwritten).
    • A disk from a storage array: storage arrays present disks as logical unit numbers (LUNs).
    • A logical volume.
    • A network-attached file (NFS): including files provided through Oracle Direct NFS (dNFS). Whole disks, partitions, and LUNs can also be mounted by ASM through NFS.
  • Load balancing: Oracle ASM spreads files proportionally across all of the disks in a disk group, so the disks within a disk group should be on different physical drives.

Disks can be added or removed "on the fly" to and from disk groups.
After you add a disk, Oracle ASM performs rebalancing.
Data is redistributed to ensure that every file is evenly spread across all of the disks.

  • Disks can be added or removed from a disk group while the database is accessing files on that disk group (without downtime).
  • Oracle ASM redistributes contents automatically
  • Oracle ASM uses Oracle Managed Files (OMF).




  • Any Oracle ASM file is completely contained within a single disk group.
  • However, a disk group might contain files belonging to several databases.
  • A single database can use files from multiple disk groups.
Mirroring and Failure groups
  • Disk groups can be configured with varying redundancy levels.
  • For each disk in a disk group, you need to specify a failure group to which the disk will belong.
  • A failure group is a subset of the disks in a disk group that could fail at the same time because they share hardware.
  • Failure groups are used to store mirror copies of data.
  • For a file in a normal redundancy disk group, Oracle ASM allocates a primary copy and a secondary copy on disks belonging to different failure groups.
  • Each copy is on a disk in a different failure group, so that the simultaneous failure of all disks in one failure group does not result in data loss.
  • A normal redundancy disk group must contain at least two failure groups.
  • Splitting the disks in a disk group across failure groups allows Oracle ASM to implement file mirroring.
  • Oracle ASM implements mirroring by allocating file and file copies to different failure groups.
  • If you do not explicitly identify failure groups, Oracle allocates each disk in a disk group to its own failure group.

Oracle ASM implements one of three redundancy levels:
  • External redundancy:
    • No ASM mirroring. Useful when the disk group contains RAID devices.
  • Normal redundancy
    • Oracle ASM implements 2-way mirroring by default.
    • At least 2 failure groups are needed. Minimum of two disks in group.
  • High redundancy
    • Oracle ASM implements 3-way mirroring: Minimum of 3 disks in group
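The effective-capacity rule for each level reduces to simple arithmetic. `effective_gb` below is a hypothetical helper, not an Oracle tool; it takes a redundancy level and a list of disk sizes in GB:

```shell
# Hypothetical helper: usable capacity of a disk group given its redundancy
# level and the sizes of its disks (in GB).
effective_gb() {
  level=$1; shift
  total=0
  for size in "$@"; do total=$(( total + size )); done
  case "$level" in
    external) echo "$total" ;;           # no ASM mirroring
    normal)   echo $(( total / 2 )) ;;   # 2-way mirroring
    high)     echo $(( total / 3 )) ;;   # 3-way mirroring
  esac
}

effective_gb normal 3 3   # two 3 GB disks, mirrored: prints 3
effective_gb normal 4 4   # two 4 GB disks, mirrored: prints 4
```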




(a) Diskgr1 below implements 2-way mirroring.
Each disk (dasm_d1, dasm_d2) is assigned to its own failure group.
SQL> create diskgroup diskgr1 NORMAL REDUNDANCY
  2  FAILGROUP controller1 DISK
  3     '/devices/diska1' NAME dasm_d1
  4  FAILGROUP controller2 DISK
  5     '/devices/diskb1' NAME dasm_d2
  6  ATTRIBUTE 'au_size'='4M';



  • An Oracle ASM disk is divided into allocation units (AUs).
  • Files stored in a disk group consist of one or more allocation units.
  • Each ASM file is made up of one or more extents.
  • Extent size is not fixed: starting at one allocation unit, extent size grows as the total file size grows.
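As a rough sketch of the growth rule: in 11gR2, variable extent sizing is commonly documented as 1 AU per extent for the first 20000 extents of a file, 4 AUs for the next 20000, and 16 AUs beyond that (treat the exact thresholds here as an assumption). The hypothetical `extent_aus` helper encodes that:

```shell
# Hypothetical helper encoding the assumed thresholds: extent N of a file
# is 1 AU for N < 20000, 4 AUs for 20000 <= N < 40000, and 16 AUs above.
extent_aus() {
  n=$1
  if   [ "$n" -lt 20000 ]; then echo 1
  elif [ "$n" -lt 40000 ]; then echo 4
  else                          echo 16
  fi
}

extent_aus 0       # prints 1
extent_aus 25000   # prints 4
extent_aus 50000   # prints 16
```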



Oracle ASM Instance

Oracle ASM metadata:
  • disks belonging to a disk group
  • space available in a disk group
  • names of files in a disk group
  • location of disk group data extents
  • redo log for changes in metadata blocks
  • Oracle ADVM (ASM Dynamic Volume Manager) volume information

  • To use Oracle ASM, an ASM instance must be configured on the server in addition to the database instance.
  • An Oracle ASM instance has an SGA and background processes, but is usually much smaller than a database instance.
  • It has minimal performance impact on a server.
  • Oracle ASM Instances are responsible for mounting the disk groups so that ASM files are available for DB instances.
  • Oracle ASM instances DO NOT mount databases.
  • They only manage the metadata of the disk group and provide file layout information to the database instances.


ASM Instances on Clustered configurations:
  • One Oracle ASM instance runs on each cluster node.
  • All database instances on a node share the same ASM instance.
  • In an Oracle RAC environment, the ASM and database instances on the surviving nodes automatically recover from an ASM instance failure on a node.

TCP/IP Networking (I)



TCP/IP Architecture
  • TCP/IP protocol has a four-layer structure linking an application to the physical network.
  • Each layer has its own independent data structures.
  • Conceptually, each layer speaks directly to its counterpart on the other machine. In this sense, it is ignorant of what goes on after the data is sent.
  • For example, in the Application layer, an NFS client talks to an NFS server and knows only the details of the NFS protocol they both use.
  • As data packets are transported from the application to the physical network, each layer adds some control information in the form of a header.
  • Once the packet reaches its destination on the physical network, each layer reads and removes its corresponding header before passing the packet up the stack, until the data is received by the application.
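The add-a-header-per-layer idea can be modeled with plain string concatenation. This is a toy sketch only (the header names and the "|" separator are invented, and no real framing is performed):

```shell
# Toy model of encapsulation: each layer prepends its own header on the way
# down and strips it on the way up.
send_path() {
  data=$1
  tcp_seg="TCPHDR|$data"        # Transport layer adds the TCP header
  ip_dgram="IPHDR|$tcp_seg"     # Internet layer adds the IP header
  frame="ETHHDR|$ip_dgram"      # Network Access layer adds the frame header
  echo "$frame"
}

recv_path() {
  frame=$1
  ip_dgram=${frame#"ETHHDR|"}   # each layer removes only its own header
  tcp_seg=${ip_dgram#"IPHDR|"}
  data=${tcp_seg#"TCPHDR|"}
  echo "$data"
}

send_path "GET /index.html"                  # prints the fully wrapped frame
recv_path "$(send_path "GET /index.html")"   # prints "GET /index.html"
```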


Application Layer
  • This layer contains all application protocols (often providing user services) that use the Transport layer.
  • Examples of application protocols include FTP, HTTP, DNS, NFS, SMTP, and Telnet.
  • To send data, the application calls a Transport layer protocol, such as TCP.
  • Application layer protocols usually treat transport and lower-layer protocols as "black boxes." In this sense, they assume a stable network connection exists across which to communicate.







Transport Layer
  • TCP and UDP are the most important protocols in this layer, delivering data between the Application and Internet layers.
  • TCP provides a reliable data delivery service with error detection and error correction. It delivers data received from IP to the correct application (identified by a port number).
  • UDP provides a connectionless delivery service.
  • When called by an application, TCP wraps the data into a TCP packet.
  • A TCP packet (also called a TCP segment) contains a TCP header followed by the application data (which includes any application-layer headers).
  • TCP then hands the packet to IP.
  • TCP keeps track of what data belongs to what process.
  • It is also responsible for ensuring that the packets are delivered with the correct contents and put in the right order before handing them off to the receiving application.
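The reordering step can be sketched by tagging each segment with a sequence number and sorting before delivery. `reassemble` below is a toy model (real TCP uses byte-oriented sequence numbers, acknowledgments, and receive buffers):

```shell
# Toy model of in-order delivery: each input line is "seqnum payload".
# Sorting on the sequence number restores the original order before the
# payloads are concatenated for the application.
reassemble() {
  sort -n | awk '{ $1 = ""; sub(/^ /, ""); printf "%s", $0 } END { print "" }'
}

# Segments arriving out of order:
printf '%s\n' "3 wor" "1 he" "4 ld" "2 llo" | reassemble   # prints "helloworld"
```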

Internet Layer
  • This layer sits above the Network Access layer and provides the packet delivery service on which TCP/IP networks are built.
  • It provides a routing mechanism allowing for packets to be transmitted across one or more different networks.
  • The Internet Protocol (IP) runs in this layer and provides a way to transport datagrams across the network.
  • It is a connectionless protocol and does not provide error control, relying on protocols in the other layers to provide error detection and recovery.
  • Source and destination may be in the same or different networks.
  • The IP protocol performs the functions of (a) host addressing and identification, and (b) packet routing (transporting packets from source to destination).
  • After receiving a TCP packet, IP wraps it up and prepends an IP header, creating an IP datagram.
  • Moving the data down the stack, IP hands the datagram off to the hardware driver, which runs in the Network Access layer.

  • The IP layer has to figure out how to send the packet.
  • Destination on a different physical network ?
    • Then IP needs to find and send it to the appropriate gateway.
  • Destination on the local ethernet network ?
    • IP uses the Address Resolution Protocol (ARP) to determine which Ethernet card's MAC address is associated with the datagram's IP address.
  • How does it work?
    • ARP broadcasts an ARP packet across the entire network asking which MAC address belongs to a particular IP address.
    • Although every machine gets this broadcast, only the one whose IP address matches will respond. The reply is then stored by the IP layer in its internal ARP table.

You can look at the ARP table at any time by running the command:
jdoe@quark:~$ arp -a
home (194.113.47.147) at 98:0:bd:bd:8c:d2 [ether] on eth0
jdoe@quark:~$ 
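The fields of an `arp -a` line can be pulled apart with awk; the sketch below assumes the field positions seen in the sample output above:

```shell
# Field positions assumed from the sample "arp -a" output above.
arp_line='home (194.113.47.147) at 98:0:bd:bd:8c:d2 [ether] on eth0'

ip=$(echo "$arp_line"  | awk '{ gsub(/[()]/, "", $2); print $2 }')  # strip parentheses
mac=$(echo "$arp_line" | awk '{ print $4 }')
echo "$ip -> $mac"   # prints "194.113.47.147 -> 98:0:bd:bd:8c:d2"
```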

Network Access Layer
  • Protocols in this layer are designed to move packets (IP datagrams) between the Internet layer interfaces of two different hosts on the same physical link.
  • The actual process of moving packets at this level is usually controlled by device drivers of the network cards, which must know the details of the underlying network in order to format the data appropriately.
  • At this level IP addresses are translated to physical addresses used by the network cards (i.e. Media Access Control (MAC) addresses)
  • The network access layer (also called link layer) can be represented by different kinds of physical connections: Ethernet, token-ring, fiber-optics, ISDN, RS-232, etc.

Network Interfaces
  • TCP/IP defines an abstract interface for hardware access.
  • The interface offers a set of operations used to access all types of hardware, hiding the implementation details of each particular device. Each vendor is responsible for providing a driver that translates the commands of the TCP/IP interface into those of the particular piece of hardware.
  • Each networking device has a corresponding interface in the kernel.
  • When configured, each physical device is assigned an interface name.
  • Each interface must also be assigned an IP address. Some interface names include:
    • Ethernet interfaces: eth0, eth1
    • PPP interfaces: ppp0, ppp1
    • FDDI interfaces: fddi0, fddi1
  • A computer with more than one logical or physical network interface is usually called a multihomed host.

  • An Ethernet network works like a bus system, where a host may send packets (or frames) of up to 1,500 bytes to another host on the same Ethernet.
  • Hosts are identified by a six-byte address hardcoded into the firmware of their Ethernet network interface card (NIC).
  • Ethernet addresses are usually written as a sequence of two-digit hex numbers separated
    by colons, as in aa:bb:cc:dd:ee:ff.
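The colon-separated two-digit hex convention is easy to reproduce with printf; `format_mac` below is a small illustrative helper that takes the six bytes as decimal numbers:

```shell
# Small illustrative helper: print six bytes (given as decimal numbers)
# in the usual colon-separated two-digit hex notation.
format_mac() {
  printf '%02x:%02x:%02x:%02x:%02x:%02x\n' "$@"
}

format_mac 170 187 204 221 238 255   # prints "aa:bb:cc:dd:ee:ff"
```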
