"Since he sometimes cannot sleep, instead of counting little sheep he mentally answers his overdue correspondence, because his bad conscience has as much insomnia as he does." Un tal Lucas (Cortázar) |
More on Oracle 11g ASM and Grid Configuration
Oracle Grid Infrastructure and Oracle ASM: Configuration changes in 11g R2
- Oracle 11g Release 2 introduced the Oracle Grid Infrastructure installation.
- Prior to 11g R2, Oracle Automatic Storage Management (ASM) software was automatically installed along with the Oracle database software.
- Since 11g R2, if you need to use Oracle ASM, you must first install the Oracle Grid Infrastructure software.
In a single instance database environment |
- A new method of installing Automatic Storage Management (ASM) with Oracle 11g R2:
- In a cluster configuration: Oracle ASM shares an Oracle home with Oracle Clusterware.
- In a single instance database configuration: Oracle ASM shares an Oracle home with Oracle Restart.
- To upgrade an existing Oracle ASM installation:
- Upgrade Oracle ASM by running an Oracle Grid Infrastructure (OGI) upgrade.
For a clustered environment |
- An Oracle Grid Infrastructure installation includes:
- Oracle Clusterware
- Oracle Automatic Storage Management (ASM), and
- the Listener
About Oracle ASM: |
- Oracle ASM is a volume manager and file system.
- Like other volume managers, Oracle ASM groups disks into one or more disk groups.
- While the administrator manages the disk groups, these operate like black boxes: the placement of datafiles within each disk group is automatically managed by Oracle.
- Separation between database and ASM administration:
- Oracle ASM administration requires SYSASM privilege.
- Besides creating a division of responsibilities between ASM and database administration, this also helps prevent different databases that share the same storage from accidentally overwriting each other's files.
- Starting in Oracle 11g R2, Oracle ASM can also store Oracle Cluster Registry and voting disks files.
- Oracle ASM Cluster File System (ACFS) is a new file system and storage management design that extends Oracle ASM technology to support data that cannot be stored in Oracle ASM (in both single instance and cluster configurations).
- ACFS is installed with Oracle Grid Infrastructure.
Oracle ASM on 11g R2: Installing Grid Infrastructure
Note: For steps on how to configure Oracle ASM before installing Grid infrastructure, check here.
This Grid Infrastructure installation on a standalone server will perform the steps below.
Before proceeding, make sure that you set the path to the Oracle base directory.
On bash shell:
$ ORACLE_BASE=/u01/app/oracle; export ORACLE_BASE
$ echo $ORACLE_BASE
/u01/app/oracle
- Logged in as the Grid Infrastructure owner, change to the grid infrastructure media directory, run the installation program, and follow the installation steps below.
- In our case, we set up a single owner environment, so make sure you are logged in as the user oracle.
$ ./runInstaller
Starting Oracle Universal Installer...

Checking Temp space: must be greater than 80 MB.   Actual 5902 MB    Passed
Checking swap space: must be greater than 150 MB.  Actual 2047 MB    Passed
Checking monitor: must be configured to display at least 256 colors.    Actual 16777216    Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2010-09-18_08-01-12PM. Please wait ...
$
Select installation type |
- Select option to install and configure Grid Infrastructure for a standalone server.
Select language |
- In the next screen, select the language.
Select disks to form the disk group |
- The next screen should list all the disks previously configured for ASM use.
- These candidate disks should have been discovered at boot time by ASMLib.
- If no disks are listed:
(a) Check whether disk device ownership is appropriately configured.
The disk devices must be owned by the user performing the grid installation.
Check user and group ownership with the command:
# ls -l /dev/oracleasm/disks/
total 0
brw-rw---- 1 oracle dba 8, 17 Set 18 22:33 DISK1
brw-rw---- 1 oracle dba 8, 33 Set 18 22:52 DISK2
brw-rw---- 1 oracle dba 8, 49 Set 18 22:52 DISK3
brw-rw---- 1 oracle dba 8, 65 Set 18 22:53 DISK4

(b) Check whether the ASMLib driver is loaded:
# oracleasm status
Checking if ASM is loaded: yes
Checking if /dev/oracleasm is mounted: yes
# oracleasm listdisks
DISK1
DISK2
DISK3
DISK4

(c) Check the default discovery string on the installer.
In Linux, the default discovery string is '/dev/raw*'.
Click on the Change Discovery Path button and type '/dev/oracleasm/disks/*' (without quotes).
This should list all the disks you have previously configured.
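The discovery path is really just a shell-style glob over a device directory. The sketch below mimics the /dev/oracleasm/disks layout in a temporary directory (a stand-in for the real device files, which only exist after ASMLib is configured) to show what the '…/*' pattern matches:

```shell
#!/bin/bash
# Sketch: a discovery string such as '/dev/oracleasm/disks/*' is a glob.
# We recreate the directory layout under a temp dir, since the real
# /dev/oracleasm tree only exists on a host with ASMLib configured.

tmp=$(mktemp -d)
mkdir -p "$tmp/disks"
touch "$tmp/disks/DISK1" "$tmp/disks/DISK2" "$tmp/disks/DISK3" "$tmp/disks/DISK4"

matches=( "$tmp"/disks/* )                 # what the installer would discover
echo "Discovery string matched ${#matches[@]} candidate disks"

rm -rf "$tmp"
```

If the glob matches nothing, the installer shows no candidate disks, which is exactly the symptom described above.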
Configure ASM Disk Group |
- Select a name for the disk group being created and select the disks that will compose the group.
- Here we choose normal redundancy and create the oradata_dskgrp disk group with disk1 (/dev/sdb1, 3 Gb) and disk3 (/dev/sdd1, 3 Gb).
- Each Oracle ASM disk is divided into allocation units (AUs).
- An allocation unit is the fundamental unit of allocation within a disk group; by default it is 1 Mb.
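As a back-of-the-envelope check of what that default means, the number of AUs on a disk is simply its size divided by the AU size. The disk size below is one of the 3 Gb devices used in this walkthrough; the arithmetic itself is just illustration, not an Oracle tool:

```shell
#!/bin/bash
# Sketch: how many allocation units (AUs) fit on an ASM disk.
# Assumes the 11g R2 default AU size of 1 MB; disk size is an example.

au_size_mb=1                       # default allocation unit size, in MB
disk_size_gb=3                     # e.g. /dev/sdb1 from this walkthrough

disk_size_mb=$(( disk_size_gb * 1024 ))
num_aus=$(( disk_size_mb / au_size_mb ))

echo "A ${disk_size_gb} GB disk holds ${num_aus} AUs of ${au_size_mb} MB each"
```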
Specify the passwords for SYS and ASMSNMP users. |
- These users are created in the ASM Instance.
- To manage an ASM instance, a user needs the SYSASM privilege, which grants full access to all ASM disks (including the authority to create and delete ASM disks).
- The ASMSNMP user, with only the SYSDBA privilege, can monitor the instance but does not have full access to the ASM disks.
Select the name of the OS groups to be used for OS authentication to ASM: |
Select installation location. In the next two screens, accept or change the location for the Oracle grid home directory, and accept the location for the inventory directory (if this is the first Oracle install on the machine) |
Check whether all installation prerequisites were met. If so, proceed. |
Review contents and click Install. |
Run the Post-installation scripts (as root) |
# ./root.sh
Running Oracle 11g root.sh script...

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/oracle/product/11.2.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
   Copying dbhome to /usr/local/bin ...
   Copying oraenv to /usr/local/bin ...
   Copying coraenv to /usr/local/bin ...

Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.
2010-09-18 00:16:18: Checking for super user privileges
2010-09-18 00:16:18: User has super user privileges
2010-09-18 00:16:18: Parsing the host name
Using configuration parameter file: /u01/app/oracle/product/11.2.0/grid/crs/install/crsconfig_params
Creating trace directory
LOCAL ADD MODE
Creating OCR keys for user 'oracle', privgrp 'oinstall'..
Operation successful.
CRS-4664: Node quark successfully pinned.
Adding daemon to inittab
CRS-4123: Oracle High Availability Services has been started.
ohasd is starting
ADVM/ACFS is not supported on oraclelinux-release-5-6.0.1

quark     2010-09-18 00:16:54     /u01/app/oracle/product/11.2.0/grid/cdata/quark/backup_20100918_001654.olr
Successfully configured Oracle Grid Infrastructure for a Standalone Server
Updating inventory properties for clusterware
...
- When the installation completes, you should have an ASM instance up and running.
- Some of the processes running include:
$ ps -ef | grep ora
...
oracle 17900     1  0 00:16 ?     00:00:03 /u01/app/oracle/product/11.2.0/grid/bin/ohasd.bin reboot
   --> This is the Oracle Restart (Oracle High Availability Service) daemon.
oracle 18356     1  0 00:18 ?     00:00:01 /u01/app/oracle/product/11.2.0/grid/bin/oraagent.bin
   --> Extends clusterware to support Oracle-specific requirements and complex resources.
   --> Runs server callout scripts when FAN events occur.
   --> Process was known as RACG in Oracle Clusterware 11g release 1 (11.1).
oracle 18375     1  0 00:18 ?     00:00:00 /u01/app/oracle/product/11.2.0/grid/bin/tnslsnr LISTENER -inherit
oracle 18563     1  0 00:18 ?     00:00:00 /u01/app/oracle/product/11.2.0/grid/bin/cssdagent
   --> Starts, stops and monitors Oracle Clusterware
oracle 18565     1  0 00:18 ?     00:00:00 /u01/app/oracle/product/11.2.0/grid/bin/orarootagent.bin
   --> Specialized oraagent process that helps crsd manage resources owned by root,
       such as the network and the Grid virtual IP address.
oracle 18599     1  0 00:18 ?     00:00:00 /u01/app/oracle/product/11.2.0/grid/bin/diskmon.bin -d -f
   --> I/O Fencing and SKGXP HA monitoring daemon
oracle 18600     1  0 00:18 ?     00:00:00 /u01/app/oracle/product/11.2.0/grid/bin/ocssd.bin
   --> Oracle Cluster Synchronization Service Daemon (OCSSD).
   --> Performs some of the clusterware functions on UNIX-based systems.
   --> ocssd.bin is required for the ASM instance.
oracle 18884     1  0 00:19 ?     00:00:00 asm_pmon_+ASM  ----
oracle 18888     1  0 00:19 ?     00:00:00 asm_vktm_+ASM     |
oracle 18894     1  0 00:19 ?     00:00:00 asm_gen0_+ASM     |
oracle 18898     1  0 00:19 ?     00:00:00 asm_diag_+ASM     |
oracle 18902     1  0 00:19 ?     00:00:00 asm_psp0_+ASM     |
oracle 18906     1  0 00:19 ?     00:00:00 asm_dia0_+ASM     |
oracle 18910     1  0 00:19 ?     00:00:00 asm_mman_+ASM     |==================
oracle 18914     1  0 00:19 ?     00:00:00 asm_dbw0_+ASM     |=> +ASM instance
oracle 18918     1  0 00:19 ?     00:00:00 asm_lgwr_+ASM     |=> background processes
oracle 18922     1  0 00:19 ?     00:00:00 asm_ckpt_+ASM     |==================
oracle 18926     1  0 00:19 ?     00:00:00 asm_smon_+ASM     |
oracle 18930     1  0 00:19 ?     00:00:00 asm_rbal_+ASM     |
oracle 18934     1  0 00:19 ?     00:00:00 asm_gmon_+ASM     |
oracle 18938     1  0 00:19 ?     00:00:00 asm_mmon_+ASM     |
oracle 18942     1  0 00:19 ?     00:00:00 asm_mmnl_+ASM  ----
oracle 19119 13210  0 00:23 pts/2 00:00:00 ps -ef
oracle 19120 13210  0 00:23 pts/2 00:00:00 grep ora
$
Using Oracle Restart |
- When created, a new database instance will automatically register with Oracle Restart.
- Once added to the Oracle Restart configuration, if the database then accesses data in an Oracle ASM disk group, a dependency between the database and that disk group is created.
- Oracle Restart then ensures that the disk group is mounted before attempting to start the database.
About SRVCTL
- You can use SRVCTL commands to add, remove, start, stop, modify, enable, and disable a number of entities, such as databases, instances, listeners, SCAN listeners, services, grid naming service (GNS), and Oracle ASM.
- SRVCTL utility can be used to start and stop the Oracle Restart components manually.
- When you start/stop a component with SRVCTL, any components on which this component depends are automatically started/stopped first, and in the proper order.
- Important Note:
- To manage Oracle ASM on Oracle Database 11g R2 installations, use the SRVCTL binary in the Oracle Grid Infrastructure home for a cluster (Grid home).
- If you have Oracle RAC or Oracle Database installed, then you cannot use the SRVCTL binary in the database home to manage Oracle ASM.
Usage: srvctl <command> <object> [<options>]
  commands: enable|disable|start|stop|status|add|remove|modify|getenv|setenv|unsetenv|config
  objects:  database|service|asm|diskgroup|listener|home|ons|eons

(a) Check the status of grid services and objects:

jdoe@quark $ srvctl status asm
ASM is running on quark
jdoe@quark $ srvctl status diskgroup -g oradata_dskgrp
Disk Group oradata_dskgrp is running on quark
jdoe@quark $ srvctl status listener
Listener LISTENER is enabled
Listener LISTENER is running on node(s): quark

-- Displaying the running status of all of the components that are managed by Oracle Restart in the specified Oracle home.
-- The Oracle home can be an Oracle Database home or an Oracle Grid Infrastructure home.
jdoe@quark $ ./srvctl status home -o /u01/app/oracle/product/11.2.0/grid -s /home/oracle/statefile
Disk Group ora.ORADATA_DSKGRP.dg is running on quark
ASM is running on quark
Listener LISTENER is running on node quark
(b) The srvctl config command displays the Oracle Restart configuration of the specified component or set of components:

jdoe@quark $ srvctl config asm -a
ASM home: /u01/app/oracle/product/11.2.0/grid
ASM listener: LISTENER
Spfile: +ORADATA_DSKGRP/asm/asmparameterfile/registry.253.768442773
ASM diskgroup discovery string: /dev/oracleasm/disks
ASM is enabled.
jdoe@quark $ srvctl config listener
Name: LISTENER
Home: /u01/app/oracle/product/11.2.0/grid
End points: TCP:1521
-- Display the configuration and enabled/disabled status for the database with DB_UNIQUE_NAME orcl:
jdoe@quark $ srvctl config database -d orcl -a
Database unique name: orcl
Database name: orcl
Oracle home: /u01/app/oracle/product/11.2.0/dbhome_1
Oracle user: oracle
Spfile: +DATA/orcl/spfileorcl.ora
Domain: us.example.com
Start options: open
Stop options: immediate
Database role:
Management policy: automatic
Disk Groups: DATA
Services: mfg,sales
Database is enabled
Configuring Oracle ASM on Oracle 11gR2
General Comments:
- Oracle Grid Infrastructure (OGI) provides system support for an Oracle database:
  - Volume Management: Oracle ASM
  - File System: Oracle ASM, Oracle ACFS
  - Automatic Restart capabilities: Oracle Restart
- To use Oracle ASM, OGI MUST be installed BEFORE installing database software.
Steps:
- Check requirements for OGI installation
- Configure Oracle Grid user's environment
- Design and configure storage schema for ASM
- Configure disks for ASM - Install ASMLib
- Configure ASM Disks - oracleasm
- Install Grid Infrastructure
- Configure Diskgroups
Check requirements for OGI installation |
- Min 1.5Gb of RAM for OGI. Min 1 Gb for Database software.
- Swap space: 1.5 times RAM
- Min 5.5 Gb on disk. Min 1.5 Gb on /tmp.

$ grep MemTotal /proc/meminfo
MemTotal:      1932856 kB
$ grep SwapTotal /proc/meminfo
SwapTotal:     2097144 kB
$ free
             total       used       free     shared    buffers     cached
Mem:       1932856     473832    1459024          0      24196     334204
-/+ buffers/cache:     115432    1817424
Swap:      2097144          0    2097144
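The checks above can be wrapped in a small helper for repeatability. check_min below is a hypothetical helper (not part of the Oracle installer), and the swap threshold used here is only illustrative:

```shell
#!/bin/bash
# Sketch: compare reported resources against the OGI minimums above.
# check_min is a hypothetical helper, not an Oracle-provided tool.

check_min() {                      # check_min <label> <actual_kb> <required_kb>
  local label=$1 actual=$2 required=$3
  if [ "$actual" -ge "$required" ]; then
    echo "$label: OK ($actual kB >= $required kB)"
  else
    echo "$label: FAILED ($actual kB < $required kB)"
  fi
}

# Values as reported by /proc/meminfo on the machine used in this post:
check_min "RAM"  1932856 1572864   # 1.5 Gb minimum for OGI
check_min "Swap" 2097144 1932856   # illustrative: requiring swap >= RAM here
```

On a live system you would feed the helper the live numbers, e.g. `check_min "RAM" "$(awk '/MemTotal/ {print $2}' /proc/meminfo)" 1572864`.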
Configure Oracle Grid user's environment |
# /usr/sbin/groupadd oinstall
# /usr/sbin/groupadd -g 502 dba
# /usr/sbin/groupadd -g 503 oper
# /usr/sbin/groupadd -g 504 asmadmin
# /usr/sbin/groupadd -g 505 asmoper
# /usr/sbin/groupadd -g 506 asmdba

-- Single owner installation
# /usr/sbin/useradd -u 502 -g oinstall -G dba,oper,asmadmin,asmdba oracle
# passwd oracle
Create the $ORACLE_BASE directory:

# mkdir -p /u01/app/oracle
# chown -R oracle:oinstall /u01
# chmod -R 775 /u01/
Design and configure storage schema for ASM |
- In this example, we will configure ASM with Normal Redundancy level.
- With Normal redundancy, Oracle ASM uses two-way mirroring for datafiles and three-way mirroring for control files, by default.
- A minimum of two failure groups (or two disk devices) is required.
- Effective disk space is half the sum of the disk space of all devices in the disk group.
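With the four devices used later in this post (3 + 4 + 3 + 4 Gb), the normal-redundancy arithmetic works out as follows. This is a plain calculation for illustration, not an Oracle utility:

```shell
#!/bin/bash
# Sketch: effective (usable) capacity of a normal-redundancy disk group.
# Disk sizes match the sdb/sdc/sdd/sde devices partitioned below.

disks_gb=(3 4 3 4)

raw=0
for d in "${disks_gb[@]}"; do
  raw=$(( raw + d ))
done

# Normal redundancy keeps two copies of each extent, so usable space
# is roughly half of the raw total.
usable=$(( raw / 2 ))

echo "Raw: ${raw} GB, usable under normal redundancy: ~${usable} GB"
```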
(a) Note that the devices /dev/sdb, /dev/sdc, /dev/sdd and /dev/sde are not yet partitioned:

# cat /proc/partitions
major minor  #blocks  name
...
   8    16   3145728  sdb
   8    32   4194304  sdc
   8    48   3145728  sdd
   8    64   4194304  sde
...
(b) Create a single partition on each device:
# fdisk /dev/sdb
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel. Changes will remain in memory only,
until you decide to write them. After that, of course, the previous
content won't be recoverable.

Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-391, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-391, default 391):
Using default value 391

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
(c) Repeat for /dev/sdc, /dev/sdd and /dev/sde, and check the resulting partitions. Partitions named /dev/sdb1, /dev/sdc1, /dev/sdd1 and /dev/sde1 should be listed:

# cat /proc/partitions
major minor  #blocks  name
...
   8    16   3145728  sdb
   8    17   3140676  sdb1
   8    32   4194304  sdc
   8    33   4192933  sdc1
   8    48   3145728  sdd
   8    49   3140676  sdd1
   8    64   4194304  sde
   8    65   4192933  sde1
...
- At this point the four disks are partitioned and empty.
- Before Oracle ASM can use them, they must be configured so that Oracle ASM can mount and manage them.
- Once configured for Oracle ASM, a disk is known as a candidate disk.
- To configure the disks for Oracle ASM, you need to:
  - Install the Oracle ASM library driver (ASMLib)
  - Use the oracleasm utility to configure the disks
Install and configure ASMLib |
- ASMLib is a support library for Oracle ASM. Oracle provides a Linux-specific implementation of this library. (Click here for more on why you need ASMLib)
- All ASMLib installations require the oracleasmlib and oracleasm-support packages appropriate for the machine.
- The driver packages are named after the kernel they support.

(a) If you are running Oracle Enterprise Linux, you can install oracleasm directly from the installation media. After this, just run oracleasm update-driver to install the correct Oracle ASMLib driver:

# oracleasm update-driver
Kernel:             2.6.18-238.el5PAE i686
Driver name:        oracleasm-2.6.18-238.el5PAE
Latest version:     oracleasm-2.6.18-238.el5PAE-2.0.5-1.el5.i686.rpm
Installing driver...
Preparing...                ########################################### [100%]
package oracleasm-2.6.18-238.el5PAE-2.0.5-1.el5.i686 installed
Alternatively, if you are installing on a Linux distribution other than Oracle Enterprise Linux, you need to:

(a.1) Determine the kernel version that your system is running:

# uname -rm
2.6.18-238.el5PAE i686

(a.2) Download the Oracle ASM library driver packages from the Oracle Technology Network website:
- You must install the following packages, where version is the version of the Oracle ASM library driver, arch is the system architecture, and kernel is the version of the kernel that you are using:
- oracleasm-support-version.arch.rpm
- oracleasm-kernel-version.arch.rpm
- oracleasmlib-version.arch.rpm
Configure disks for ASM |
(c) Configure ASMLib with the oracleasm script. Here you need to provide the username of the Grid Infrastructure owner. In this case, it is the user oracle. (If Grid Infrastructure and Oracle software were to be managed separately, you would use here the name of the grid owner user, which is often called grid.)

# /etc/init.d/oracleasm configure
Configuring the Oracle ASM library driver.

This will configure the on-boot properties of the Oracle ASM library
driver.  The following questions will determine whether the driver is
loaded on boot and what permissions it will have.  The current values
will be shown in brackets ('[]').  Hitting <ENTER> without typing an
answer will keep that current value.  Ctrl-C will abort.

Default user to own the driver interface []: oracle
Default group to own the driver interface []: asmadmin
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration: done
Initializing the Oracle ASMLib driver: [  OK  ]
Scanning the system for Oracle ASMLib disks: [  OK  ]
(d) Configure the disk devices to use the ASMLib driver.
- Here you identify ("mark") the disks that will be used by ASM in disk groups later on.
- At boot time, Oracle ASMLib will identify these disks and make them available for Oracle ASM.

# /etc/init.d/oracleasm createdisk DISK1 /dev/sdb1
Marking disk "DISK1" as an ASM disk: [  OK  ]
# /etc/init.d/oracleasm createdisk DISK2 /dev/sdc1
Marking disk "DISK2" as an ASM disk: [  OK  ]
# /etc/init.d/oracleasm createdisk DISK3 /dev/sdd1
Marking disk "DISK3" as an ASM disk: [  OK  ]
# /etc/init.d/oracleasm createdisk DISK4 /dev/sde1
Marking disk "DISK4" as an ASM disk: [  OK  ]
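Marking the four partitions can likewise be expressed as a loop over a label-to-device map. Shown here as a dry run that only echoes the commands, since createdisk must run as root against real devices:

```shell
#!/bin/bash
# Dry-run sketch: map ASM disk labels to the partitions created earlier
# and echo the corresponding createdisk commands (root required to run
# them for real).

declare -A asm_disks=(
  [DISK1]=/dev/sdb1
  [DISK2]=/dev/sdc1
  [DISK3]=/dev/sdd1
  [DISK4]=/dev/sde1
)

for label in DISK1 DISK2 DISK3 DISK4; do
  echo "/etc/init.d/oracleasm createdisk $label ${asm_disks[$label]}"
done
```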
Other oracleasm options include:

# /etc/init.d/oracleasm
Usage: /etc/init.d/oracleasm {start|stop|restart|enable|disable|configure|createdisk|deletedisk|querydisk|listdisks|scandisks|status}

# /etc/init.d/oracleasm listdisks
DISK1
DISK2
DISK3
DISK4

# /etc/init.d/oracleasm querydisk disk1
Disk "DISK1" is a valid ASM disk

# oracleasm status
Checking if ASM is loaded: yes
Checking if /dev/oracleasm is mounted: yes

-- You can check that the disks are mounted in the oracleasm filesystem with the command:
# ls -l /dev/oracleasm/disks/
total 0
brw-rw---- 1 oracle asmadmin 8, 17 Nov 28 18:00 DISK1
brw-rw---- 1 oracle asmadmin 8, 33 Nov 28 18:00 DISK2
brw-rw---- 1 oracle asmadmin 8, 49 Nov 28 18:00 DISK3
brw-rw---- 1 oracle asmadmin 8, 65 Nov 28 18:00 DISK4
Oracle Automatic Storage Management - Concepts
- Oracle ASM is a volume manager and a file system for Oracle database files.
- It supports single-instance and Oracle RAC configurations.
- Oracle ASM also supports a general purpose file system that can store application files and Oracle database binaries.
- It provides an alternative to conventional volume managers, file systems and raw devices.
- Oracle ASM distributes I/O load across all available resources to optimize performance.
- In this way, it removes the need for manual I/O tuning (spreading out the database files avoids hotspots).
- Oracle ASM allows the DBA to define a pool of storage (disk groups).
- The Oracle kernel manages the file naming and placement of the database files on the storage pool.
Disk groups |
- Oracle ASM stores data files in disk groups.
- A disk group is a collection of disks managed as a unit by Oracle ASM.
- Oracle ASM disks can be defined on:
- A disk partition: an entire disk or a section of a disk that does not include the partition table (otherwise the partition table would be overwritten).
- A disk from a storage array (RAID): storage arrays present disks as logical unit numbers (LUNs).
- A logical volume.
- A network-attached file (NFS): including files provided through Oracle Direct NFS (dNFS). Whole disks, partitions and LUNs can also be mounted by ASM through NFS.
- Load balance: Oracle ASM spreads the files proportionally across all of the disks in the disk group, so the disks within a disk group should be on different physical drives.
- Disks can be added or removed "on the fly" to and from disk groups.
- After you add a disk, Oracle ASM performs rebalancing.
- Data is redistributed to ensure that every file is evenly spread across all of the disks.
- Disks can be added or removed from a disk group while the database is accessing files on that disk group (without downtime).
- Oracle ASM redistributes contents automatically
- Oracle ASM uses Oracle Managed Files (OMF).
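The "evenly spread" behaviour can be illustrated with a toy round-robin allocation. This is a deliberate simplification of ASM's actual extent placement policy, purely to show why every disk ends up with a similar share of extents:

```shell
#!/bin/bash
# Toy sketch: spread 12 file extents round-robin across 3 disks.
# Illustration only; not ASM's real allocation algorithm.

num_disks=3
declare -a extents_on=(0 0 0)

for (( extent=0; extent<12; extent++ )); do
  disk=$(( extent % num_disks ))            # round-robin placement
  extents_on[$disk]=$(( extents_on[$disk] + 1 ))
done

echo "Extents per disk: ${extents_on[*]}"
```

When a disk is added or removed, rebalancing amounts to recomputing such a placement over the new set of disks and moving only the extents that change home.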
Mirroring and Failure groups |
- Disk groups can be configured with varying redundancy levels.
- For each disk in a disk group, you need to specify a failure group to which the disk will belong.
- A failure group is a subset of the disks in a disk group, which could fail at the same time because they share hardware
- Failure groups are used to store mirror copies of data.
- For a file in a normal redundancy disk group, Oracle ASM allocates a primary copy and a secondary copy on disks belonging to different failure groups.
- Each copy is on a disk in a different failure group so that the simultaneous failure of all disks in a failure group does not result in data loss.
- A normal redundancy disk group must contain at least two failure groups.
- Splitting the various disks in a disk group across failure groups allows Oracle ASM to implement file mirroring.
- Oracle ASM implements mirroring by allocating file and file copies to different failure groups.
- If you do not explicitly identify failure groups, Oracle allocates each disk in a disk group to its own failure group.
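The placement rule above can be sketched as a toy check: each extent's two copies must land in different failure groups. Again a simplification for illustration, not Oracle's real placement logic:

```shell
#!/bin/bash
# Toy sketch: place a primary and a mirror copy of each extent so the
# two copies never share a failure group (normal redundancy).
# Simplified illustration, not ASM's real placement algorithm.

failgroups=(controller1 controller2)
violations=0

for (( extent=0; extent<8; extent++ )); do
  primary=${failgroups[$(( extent % 2 ))]}
  mirror=${failgroups[$(( (extent + 1) % 2 ))]}   # always the other group
  [ "$primary" = "$mirror" ] && violations=$(( violations + 1 ))
done

echo "Copies sharing a failure group: $violations"
```

Because no extent ever has both copies in one failure group, losing every disk in a single failure group leaves one intact copy of everything.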
Oracle ASM implements one of three redundancy levels:
- External redundancy: no ASM mirroring; relies on the storage array (e.g. hardware RAID) for protection.
- Normal redundancy: two-way mirroring (at least two failure groups).
- High redundancy: three-way mirroring (at least three failure groups).
(a) diskgr1 below implements 2-way mirroring. Each disk (dasm-d1, dasm-d2) is assigned to its own failure group.

SQL> create diskgroup diskgr1 NORMAL redundancy
  2  FAILGROUP controller1 DISK
  3    '/devices/diska1' NAME dasm-d1,
  4  FAILGROUP controller2 DISK
  5    '/devices/diskb1' NAME dasm-d2
  6  ATTRIBUTE 'au_size'='4M';
Oracle ASM Instance |
Oracle ASM metadata includes: the disks that belong to each disk group, the amount of space available in each disk group, the filenames of the files in each disk group, and the locations of the file extents.
- With Oracle ASM, an ASM instance must be configured on the server in addition to the database instance.
- An Oracle ASM instance has an SGA and background processes, but it is usually much smaller than a database instance and has a minimal performance impact on the server.
- Oracle ASM instances are responsible for mounting the disk groups so that ASM files are available for database instances.
- Oracle ASM instances DO NOT mount databases.
- They only manage the metadata of the disk group and provide file layout information to the database instances.
ASM instances in clustered configurations:
- There is one Oracle ASM instance on each cluster node.
- All database instances on a node share the same ASM instance.
- In an Oracle RAC environment, the ASM and database instances on the surviving nodes automatically recover from an ASM instance failure on a node.