Oracle Flashback Technology (I)




Oracle Flashback technology:
  • Logical Flashback options (do not depend on RMAN; rely on undo data)
  • Physical Flashback (Flashback Database; relies on flashback logs)

  • Oracle Flashback features use Automatic Undo Management to obtain metadata and historical transaction data.
  • Undo data is persistent and survives database shutdown.
  • You can use the Flashback options to:
    • recover data from user errors,
    • compare table data at two points in time,
    • view transaction actions (the set of actions performed in a given transaction),
    • undo table drops, and
    • revert the entire database to a previous point in time.

Configuring the database for Flashback options
Database must be configured for Automatic Undo Management (default on 11g).
For this you need to:
  • (1) size the undo tablespace appropriately; and
  • (2) enable automatic undo management.

(Step 1) Create or resize the undo tablespace appropriately
The undo tablespace must have enough space to keep the data required for flashback operations.
(a) Check the size of the existing undo tablespaces:

SQL> select t.tablespace_name, t.status, t.contents, t.extent_management, 
            t.allocation_type,   t.retention, s.sizemb 
     from dba_tablespaces t, 
           (select tablespace_name, sum(bytes)/1024/1024 sizemb  
              from dba_data_files
              group by tablespace_name) s
     where t.tablespace_name = s.tablespace_name
     and t.contents = 'UNDO';

TABLESPACE_NAME                STATUS    CONTENTS  EXTENT_MANAGEMENT ALLOCATION_TYPE RETENTION   SIZEMB                 
------------------------------ --------- --------- ----------------- --------------- ---------   ---------------------- 
UNDOTBS1                       ONLINE    UNDO      LOCAL             SYSTEM          NOGUARANTEE 60      

(b) Check the current retention period: 
Query v$undostat to check the amount of time (seconds) for which undo will not be recycled.
    
SQL> select to_char(begin_time, 'dd-MON  hh:mi') begin_time, 
       to_char(end_time, 'dd-MON  hh:mi') end_time, 
       tuned_undoretention undoretention_sec
from v$undostat;

BEGIN_TIME    END_TIME      UNDORETENTION_SEC      
------------- ------------- ---------------------- 
19-OCT  09:51 19-OCT  09:58 1911     --  At current workload, undo is preserved for about 30 minutes
19-OCT  09:41 19-OCT  09:51 1608                   
...
19-OCT  03:11 19-OCT  03:21 1787                   
19-OCT  03:01 19-OCT  03:11 1183       
(Step 2) Enable automatic undo management (undo parameters)
  • UNDO_MANAGEMENT = [ AUTO (default) | MANUAL ]
  • UNDO_TABLESPACE - specifies the name of the undo tablespace
  • UNDO_RETENTION - specifies the minimum amount of time that Oracle attempts to retain old undo information before overwriting it.
    • The value for UNDO_RETENTION is ignored for a fixed size undo tablespace
    • If AUTOEXTEND is enabled, database attempts to honor UNDO_RETENTION.
    • 'snapshot too old' errors indicate that undo retention is not enough for flashback operations.
    • To accommodate Oracle Flashback features, you can set the UNDO_RETENTION parameter to a value equal to the longest expected Oracle Flashback operation.
    • If retention is very important, use ALTER TABLESPACE...RETENTION GUARANTEE. Here the database never overwrites unexpired undo data even if it means that transactions fail due to lack of undo space.
    • To set: ALTER SYSTEM SET UNDO_RETENTION = 2400;
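A minimal sketch of these settings (the datafile path and tablespace name are illustrative; adjust to your environment):

SQL> alter database datafile '/u01/app/oracle/oradata/db1/undotbs01.dbf'
     autoextend on maxsize 2G;
SQL> alter tablespace undotbs1 retention guarantee;
SQL> alter system set undo_retention = 2400;
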
(a) check instance parameters

SQL> select name, value, isdefault, description
     from v$parameter
     where name like '%undo%';

NAME            VALUE    ISDEFAULT DESCRIPTION
--------------- ------   --------- ---------------------------------------------------
undo_management AUTO     TRUE      instance runs in SMU mode if TRUE, else RBU mode -- default: AUTO (11g), Manual (before 11g).
undo_tablespace UNDOTBS1 FALSE     use/switch undo tablespace
undo_retention  900      TRUE      undo retention in seconds


Notes on Flashback Query
  • Flashback query can be used with remote tables (table@dblink)
  • Queries on V$ views ALWAYS return current data
  • Flashback query can be used with static dictionary views
  • To query past data at a precise time, use an SCN.
  • With timestamp, the actual time queried might be up to 3 seconds earlier than the time you specify.
  • Oracle Flashback Drop reverses the effects of DROP TABLE (see Using Flashback Table below).

Using FLASHBACK QUERY
  • Use SELECT ... AS OF [ TIMESTAMP timestamp | SCN scn ]
  • The query returns committed data as it existed at that point.
  • To use flashback query on a table, the user needs to have FLASHBACK and SELECT privileges on that object.
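For example, a minimal sketch to let a (hypothetical) user scott run flashback queries on hr.emp:

SQL> grant flashback on hr.emp to scott;
SQL> grant select on hr.emp to scott;
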
Case:
  • Some rows from the emp table are deleted by mistake at 11:00 AM.
  • The error is discovered at 11:20 AM.
  • Use Flashback Query to see the data as it was before the 11:00 AM delete.
(1) Check table rows at 11:00 AM
SQL> select current_scn, to_char(systimestamp, 'mm-dd-yy  HH24:mi:ss') time
     from v$database;

CURRENT_SCN            TIME               
---------------------- ------------------ 
1385809                12-08-10  11:00:22

SQL> select employee_id, last_name,  salary
     from hr.emp
     where department_id =30;

EMPLOYEE_ID            LAST_NAME                 SALARY                 
---------------------- ------------------------- ---------------------- 
114                    Raphaely                  11000                  
...
119                    Colmenares                2500    


(2) At 11:20, rows are incorrectly deleted from emp.

SQL> delete from hr.emp where department_id=30;
SQL> commit;
SQL> select current_scn, to_char(systimestamp, 'mm-dd-yy  HH24:mi:ss') time
     from v$database;
CURRENT_SCN            TIME               
---------------------- ------------------ 
1386301                12-08-10  11:20:03

SQL> select employee_id, last_name,  salary
     from hr.emp
     where department_id =30;

no rows selected

(3) Use flashback query to see emp as it was at 11 AM

SQL> select employee_id, last_name, salary
     from hr.emp AS OF TIMESTAMP to_timestamp('12-08-10 11:00:00', 'mm-dd-yy HH24:MI:SS')
     where department_id = 30;

EMPLOYEE_ID            LAST_NAME                 SALARY                 
---------------------- ------------------------- ---------------------- 
114                    Raphaely                  11000                  
...
119                    Colmenares                2500    

(4) Alternatively, you can use the SCN:

SQL> select employee_id, last_name, salary
     from hr.emp AS OF SCN 1385809
     where department_id = 30;

EMPLOYEE_ID            LAST_NAME                 SALARY                 
---------------------- ------------------------- ---------------------- 
114                    Raphaely                  11000                  
...
119                    Colmenares                2500 

(5) Restore the data by copying it from the earlier version of the table:

SQL> insert into hr.emp 
       (select * 
        from hr.emp AS OF TIMESTAMP to_timestamp('12-08-10 11:00:00', 'mm-dd-yy HH24:MI:SS')
        where department_id = 30 );

6 rows inserted.

Using Flashback Version Query
  • You can retrieve the different versions of a given row that existed during a given time interval.
  • Use VERSIONS { BETWEEN {SCN | TIMESTAMP} start AND end}
  • Returns one row for each version of the row that existed at any time in the specified interval
(a) check the initial values of the row data, current scn and timestamp
SQL> select employee_id, first_name, last_name, salary, commission_pct
     from hr.emp
     where employee_id = 116;

EMPLOYEE_ID            FIRST_NAME           LAST_NAME                 SALARY                 COMMISSION_PCT         
---------------------- -------------------- ------------------------- ---------------------- ---------------------- 
116                    Shelli               Baida                     2900                                          
                     
SQL> select current_scn, to_char(systimestamp, 'mm-dd-yy  HH24:mi:ss') time
     from v$database;

CURRENT_SCN            TIME               
---------------------- ------------------ 
1400261                12-08-10  16:15:20

(b) Update salary and commission
SQL> update hr.emp
     set salary = salary*1.5
     where employee_id = 116;
SQL> commit;

SQL> update hr.emp
     set commission_pct = .12
     where employee_id = 116;
SQL> commit;

(c) Query the emp table VERSIONS pseudocolumns (derived from undo data):
SQL> select versions_startscn as ver_startscn, 
            versions_starttime as ver_startime, 
            versions_endscn as ver_endscn, 
            versions_endtime as ver_endtime,
            versions_xid as ver_xid, 
            versions_operation as oper,
            employee_id as emp_id, salary, commission_pct as comm_pct
      from hr.emp
        versions between timestamp
             to_timestamp('12-08-10 16:14:00', 'mm-dd-yy HH24:MI:SS')
        and  to_timestamp('12-08-10 16:16:30', 'mm-dd-yy HH24:MI:SS')
     where employee_id = 116;

VER_STARTSCN VER_STARTIME          VER_ENDSCN  VER_ENDTIME            VER_XID          OPER EMP_ID  SALARY   COMM_PCT 
------------ --------------------- ----------- ---------------------- ---------------- ---- ------- -------- -------- 
1400294      08-DEC-10 04.16.13 PM  1400301    08-DEC-10 04.16.25 PM  0A00100044030000 U    116     4350     0.12 
1400273      08-DEC-10 04.15.49 PM  1400294    08-DEC-10 04.16.13 PM  0A00030043030000 U    116     4350
                                    1400273    08-DEC-10 04.15.49 PM                        116     2900
The pseudocolumns are:
VERSIONS_[ STARTSCN | STARTTIME ] - the SCN or TIMESTAMP when the row took on this value.
If the row version was created before the lower bound SCN or TIMESTAMP, NULL is returned.
VERSIONS_[ ENDSCN | ENDTIME ] - the last SCN or TIMESTAMP when the row had this value.
If the value is still current at the upper bound SCN or TIMESTAMP, NULL is returned.
VERSIONS_XID - ID of the transaction that created the row version.
VERSIONS_OPERATION - operation performed by the transaction: (I)nsert, (U)pdate, or (D)elete.
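
The query can equally be bounded by SCNs; a sketch reusing the SCN captured in step (a):

SQL> select versions_startscn, versions_endscn, versions_operation as oper,
            employee_id, salary, commission_pct
     from hr.emp
       versions between scn 1400261 and maxvalue
     where employee_id = 116;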


Using Flashback Transaction Query
  • An extension to Flashback Version Query
  • Used to retrieve information for a given transaction or for all transactions in a given time interval.
  • The transaction IDs (listed in the VERSIONS_XID column) are used to query the FLASHBACK_TRANSACTION_QUERY view
  • The database must have at least minimal supplemental logging enabled
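If minimal supplemental logging is not yet enabled, a quick check and fix (standard commands):

SQL> select supplemental_log_data_min from v$database;
SQL> alter database add supplemental log data;
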
(a) Query the transaction that updated salary of employee_id 116.
col logon_user format a10
col table_name format a10
col table_owner format a10
col operation format a10
col undo_sql format a50

SQL> select logon_user, operation, table_name, 
       to_char(start_timestamp, 'mm-dd-yy hh24:MI:ss') time, row_id, undo_sql
     from flashback_transaction_query
     where xid = hextoraw('0A00030043030000');

LOGON_USER OPERATION  TABLE_NAME TIME               ROW_ID              UNDO_SQL
---------- ---------- ---------- ------------------ ------------------- --------------------------------------------------
SYSTEM     UPDATE     EMP       12-08-10 16:15:46  AAASNxAAEAAAAJHAAB  update "HR"."EMP" set "SALARY" = '2900' where
                                                                         ROWID = 'AAASNxAAEAAAAJHAAB';
SYSTEM     BEGIN

Example: check the various actions performed by a given transaction
SQL> select current_scn from v$database;

CURRENT_SCN            
---------------------- 
1424010

SQL> create table emp
     (empno number, 
      empname varchar2(10),
      salary number);

SQL> select current_scn from v$database;
SQL> insert into emp values (1, 'John', 20000);
SQL> commit;

SQL> update emp set salary = salary*1.2 where empno=1;
SQL> commit;

SQL> delete from emp where empno = 1;
SQL> commit;

SQL> insert into emp values (1, 'Jack', 45000);   -- transaction performs multiple actions 
SQL> update emp set salary = salary*1.05;
SQL> commit;

SQL> update emp set salary = salary + 3320;
SQL> commit;

SQL> select current_scn from v$database;

CURRENT_SCN            
---------------------- 
1424312

(a) To retrieve the various versions of the row: 
SQL> select versions_xid xid, versions_startscn start_scn,
        versions_endscn end_scn, versions_operation operation,
        empname, salary
     from emp
       versions between scn minvalue and maxvalue
     where empno = 1;
XID              START_SCN              END_SCN                OPERATION EMPNAME    SALARY                 
---------------- ---------------------- ---------------------- --------- ---------- ---------------------- 
0200060044040000 1424201                                       U         Jack       50570                  
0600080064040000 1424197                1424201                I         Jack       47250                  
05000A006F040000 1424192                                       D         John       24000                  
                                        1424192                          John       24000


(b) The transaction '0600080064040000' inserted the new record with a wrong name. 
To audit all the changes made by this transaction: 

SQL> select xid, start_scn, commit_scn, operation, logon_user, undo_sql
     from flashback_transaction_query
     where xid = hextoraw('0600080064040000');

XID              START_SCN  COMMIT_SCN OPERATION  LOGON_USER UNDO_SQL
---------------- ---------- ---------- ---------- ---------- --------------------------------------------------
0600080064040000 1424195    1424197    UPDATE     SYSTEM     update "SYSTEM"."EMP" set "SALARY" = '45000' where
                                                             ROWID = 'AAASXdAABAAAVBJAAB';

0600080064040000 1424195    1424197    INSERT     SYSTEM    delete from "SYSTEM"."EMP" where ROWID = 'AAASXdAA
                                                            BAAAVBJAAB';

0600080064040000    1424195    1424197 BEGIN   SYSTEM


Using Flashback Table

Flashback Table vs. Flashback Drop
  • FLASHBACK TABLE ... TO SCN uses undo data to revert DML:
    • deletes rows inserted since scn_num,
    • updates back rows modified since scn_num,
    • re-inserts rows deleted since scn_num.
  • FLASHBACK TABLE ... TO BEFORE DROP uses the recycle bin to restore a dropped table (see the sketch after this list).
  • Neither of them, however, recovers TRUNCATED rows.
  • For that you need to use FLASHBACK DATABASE.
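
A minimal Flashback Drop sketch (reusing the emp table from the examples above):

SQL> drop table emp;
SQL> flashback table emp to before drop;
-- or, to restore under a different name:
SQL> flashback table emp to before drop rename to emp_restored;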

Allows point-in-time recovery for a table:
  • restore point specified by timestamp or SCN
  • table can stay online while operation executes
  • maintains all table attributes (indexes, triggers, constraints, etc)
  • maintains any remote state in a distributed environment
  • maintains data integrity specified by constraints

Prerequisites:
  • User must have the FLASHBACK ANY TABLE system privilege or the FLASHBACK object privilege on the table
  • Must have SELECT, INSERT, DELETE, and ALTER privileges on the table
  • Must have the SELECT ANY DICTIONARY or FLASHBACK ANY TABLE system privilege, or the SELECT_CATALOG_ROLE role

For an object to be eligible to be flashed back:
  • The object must NOT be part of a cluster, a materialized view, an AQ table, a static data dictionary table, a system table, a remote table, an object table, a nested table, or an individual partition or subpartition.
  • The table structure must not have been changed between now and the point of restore.
  • Row movement must be enabled (rowids will change).
  • Enough undo data to perform the flashback must exist in the undo tablespace.

(a) enable row movement

SQL> alter table emp enable row movement;

SQL> select * from emp;
EMPNO                  EMPNAME    SALARY                 
---------------------- ---------- ---------------------- 
1                      Jack       50570        

(b) check current scn (before flashback)

SQL> select current_scn from v$database;
CURRENT_SCN            
---------------------- 
1426179

(c) Insert and update row data

SQL> Insert into emp values (2, 'Mary', 2000);
SQL> commit;
SQL> select current_scn from v$database;

CURRENT_SCN            
---------------------- 
1426185 

SQL> insert into emp values(100, 'Joseph', 450);
SQL> commit;
SQL> select current_scn from v$database;

CURRENT_SCN            
---------------------- 
1426200   

SQL> update emp set salary = salary*2;
SQL> commit;
SQL> select current_scn from v$database;

CURRENT_SCN            
---------------------- 
1426208 

(d) Ensure that enough undo data exists to flash back the table

SQL>  select name, value/60 minutes_retained
      from v$parameter
      where name = 'undo_retention';

NAME              MINUTES_RETAINED       
----------------- ---------------------- 
undo_retention    80                     
 
(e) Determine whether the table has dependencies on other tables.
If dependencies exist, these tables may need to be flashed back as well.

SQL> select other.owner, other.table_name
     from sys.all_constraints this,
          sys.all_constraints other
     where this.owner = 'SYSTEM'
     and   this.table_name = 'EMP'
     and   this.r_owner = other.owner
     and   this.r_constraint_name = other.constraint_name
     and   this.constraint_type = 'R';


(f) Check new data

SQL>  select * from emp;

 EMPNO                  EMPNAME    SALARY                 
---------------------- ---------- ---------------------- 
2                      Mary       4000                   
1                      Jack       101140                 
100                    Joseph     900  


(g) Flash back the table to the point in time (SCN) before the data modification

SQL> flashback table emp to scn 1426179;

SQL> select * from emp;

EMPNO                  EMPNAME    SALARY                 
---------------------- ---------- ---------------------- 
1                      Jack       50570                  


(h) Flash the table forward, to the SCN after the data modification (possible because the flashback operation itself generates undo)

SQL> flashback table emp to scn 1426208;

SQL>  select * from emp;
 EMPNO                  EMPNAME    SALARY                 
---------------------- ---------- ---------------------- 
2                      Mary       4000                   
1                      Jack       101140                 
100                    Joseph     900  
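
Instead of noting SCNs by hand, you can also flash back to a named restore point; a minimal sketch (the restore point name is illustrative):

SQL> create restore point before_batch;
SQL> -- ... data modifications ...
SQL> flashback table emp to restore point before_batch;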


Data Movement: Transporting Partitions for Data Warehousing



            Transporting a partition


  • In this example, we see how to use the transportable tablespaces feature to periodically load data into a large fact table in a data warehousing database.
  • For this, suppose that an enterprise data warehouse has a large table sales.
  • The table is partitioned by date, with data for each month stored in its own partition.
  • In addition, local indexes are built so that you don't have to rebuild global indexes when dropping the oldest partition from the table.
  • Every month you need to load one month of data into the sales table, stored in its own partition.

Set up the fact table sales in the data warehouse database (dwr-db).
  • For simplicity, all the initial partitions are kept in the same tablespace.
  • As the oldest partitions are deleted and new ones are added, eventually each partition will be in its own tablespace.
  • A local index is also created (one index partition per table partition).

SQL> create table dwr.sales (invoice_no number,
  2    sale_year int not null,
  3    sale_month int not null,
  4    sale_day int not null)
  5  partition by range (sale_year, sale_month, sale_day)
  6    (partition jan05 values less than (2005, 2, 1),
  7     partition feb05 values less than (2005, 3, 1),
  8     partition mar05 values less than (2005, 4, 1),
  9     partition apr05 values less than (2005, 5, 1),
 10     partition may05 values less than (2005, 6, 1),
 11     partition jun05 values less than (2005, 7, 1))
 12     tablespace users;

-- create local index.
SQL> create index dwr.sales_index on dwr.sales(invoice_no) local;

-- Now let's insert some sample data into sales.
-- Run the PL/SQL block below:
declare
 month number :=1;
 day   number :=1;
begin
 for i in 1..30 loop
   insert into dwr.sales values (i, 2005, month, day);
   if mod(i,5) = 0 then
     month := month + 1;
     day := day + 1;
   end if;
 end loop;
end;
/

-- Check that all partitions (table and index) are in the same tablespace (users):
SQL> select tablespace_name, segment_name, segment_type, partition_name
  2  from dba_segments
  3  where owner = 'DWR';

TABLESPACE_NAME    SEGMENT_NAME    SEGMENT_TYPE       PARTITION_NAME 
------------------ --------------- ------------------ ------------------
USERS              SALES           TABLE PARTITION    JAN05  
USERS              SALES           TABLE PARTITION    FEB05  
USERS              SALES           TABLE PARTITION    MAR05  
USERS              SALES           TABLE PARTITION    APR05  
USERS              SALES           TABLE PARTITION    MAY05  
USERS              SALES           TABLE PARTITION    JUN05  
USERS              SALES_INDEX     INDEX PARTITION    JAN05  
USERS              SALES_INDEX     INDEX PARTITION    FEB05  
USERS              SALES_INDEX     INDEX PARTITION    MAR05  
USERS              SALES_INDEX     INDEX PARTITION    APR05  
USERS              SALES_INDEX     INDEX PARTITION    MAY05  
USERS              SALES_INDEX     INDEX PARTITION    JUN05  

12 rows selected.

Prepare the new data for loading into the warehouse

The month of July has ended and you need to load the July data into the sales table.
In the staging database, perform the steps below:
  1. Create a new tablespace (ts_jul). It will hold the table with the July data and will be transported to the data warehouse database.
  2. Create a table jul_sales in the new tablespace to hold the July data. jul_sales must have the same structure as the sales table.
  3. Load the July data into the jul_sales table.
  4. Create an index jul_sale_idx on the same columns as the local index on the sales table.
SQL> Create smallfile tablespace ts_jul 
  datafile '/u01/app/oracle/oradata/test112/ts_jul_1.dbf' 
  size 5M logging 
  extent management local 
  segment space management auto; 

SQL> create table hr.jul_sales (invoice_no number,
      sale_year int not null,
      sale_month int not null,
      sale_day int not null)
      tablespace ts_jul;

-- Load the July data (run the PL/SQL block):
declare
 day   number :=1;
begin
 for i in 40..70 loop
   insert into hr.jul_sales values (i, 2005, 7, day);
   day := day + 1;
 end loop;
end;
/

-- Create index on jul_sales table: 
SQL> create index hr.jul_sale_idx on hr.jul_sales(invoice_no) tablespace ts_jul;

-- Check table and index location
SQL> select tablespace_name, segment_name, segment_type, bytes
   from dba_segments
   where owner = 'HR'
   order by segment_name, segment_type;
TABLESPACE_NAME   SEGMENT_NAME   SEGMENT_TYPE       BYTES
----------------- --------------- ------------------ ----------
...
TS_JUL            JUL_SALES       TABLE              65536
TS_JUL            JUL_SALE_IDX    INDEX              65536
...


Transport tablespace to the data warehouse database
Perform the steps below:
  1. Make tablespace ts_jul read only.
  2. Export the metadata of the objects in the tablespace being transported.
  3. Copy the datafiles to the data warehouse server. Make sure to copy the files to their final location on the server: when you import the tablespace, the paths to the datafiles will be stored in the database.
  4. Copy the export file to the directory pointed to by the data_pump_dir directory object (or to another directory pointed to by some directory object in the database).
  5. Import the transported tablespace.
(1) Make tablespace read only
SQL> alter tablespace ts_jul read only;

Tablespace altered.

(2) export metadata
oracle@quark:~$ expdp system dumpfile=expts_jul.dmp directory=data_pump_dir transport_tablespaces=ts_jul logfile=tts_exptsjul.log

Export: Release 11.2.0.1.0 - Production on Fri Oct 08 15:34:45 2010

Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.
Password: 

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01":  system/******** dumpfile=expts_jul.dmp directory=data_pump_dir transport_tablespaces=ts_jul logfile=tts_exptsjul.log 
Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
Processing object type TRANSPORTABLE_EXPORT/TABLE
Processing object type TRANSPORTABLE_EXPORT/INDEX
Processing object type TRANSPORTABLE_EXPORT/INDEX_STATISTICS
Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
Master table "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.SYS_EXPORT_TRANSPORTABLE_01 is:
  /u01/app/oracle/admin/tempdb/dpdump/expts_jul.dmp
******************************************************************************
Datafiles required for transportable tablespace TS_JUL:
  /u01/app/oracle/oradata/tempdb/ts_jul_1.dbf
Job "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01" successfully completed at 15:35:35

(3)-(4) Copy export and tablespace data files to the dwr server

(5) Import the tablespace ts_jul into the dwr database
oracle@dwr-srv:$ impdp system dumpfile=expts_jul.dmp directory=data_pump_dir transport_datafiles=/u01/app/oracle/oradata/dwr/ts_jul_1.dbf remap_schema=hr:dwr logfile=tts_import.log

Import: Release 11.2.0.1.0 - Production on Fri Oct 08 18:10:20 2010

Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.
Password: 

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table "SYSTEM"."SYS_IMPORT_TRANSPORTABLE_01" successfully loaded/unloaded
Starting "SYSTEM"."SYS_IMPORT_TRANSPORTABLE_01":  system/******** dumpfile=expts_jul.dmp directory=data_pump_dir transport_datafiles=/u01/app/oracle/oradata/dwr/ts_jul_1.dbf remap_schema=hr:crm1 logfile=tts_import.log 
Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
Processing object type TRANSPORTABLE_EXPORT/TABLE
Processing object type TRANSPORTABLE_EXPORT/INDEX
Processing object type TRANSPORTABLE_EXPORT/INDEX_STATISTICS
Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
Job "SYSTEM"."SYS_IMPORT_TRANSPORTABLE_01" successfully completed at 18:11:10

-- Check the import
SQL> select count(*) from dwr.jul_sales;

  COUNT(*)
----------
  31


Attach the July data to the Sales table
At this point the tablespace ts_jul has been transported and the table jul_sales is available in the dwr database.
It is not yet, however, available as a partition in the large sales table.
For this we will need to:
  1. Add a partition to the sales table for the July data. This will also create another partition for the local index.
  2. Attach the transported table jul_sales to the table sales by exchanging it with the new partition.
(1) In the dwr server, add a partition to the sales table for the July sales data.
SQL> alter table dwr.sales add partition jul05 values less than (2005, 8,1);

Table altered.

(2) Attach the transported table jul_sales to the table sales by exchanging it with the new partition:
SQL> alter table dwr.sales exchange partition jul05 with table dwr.jul_sales
  2  including indexes 
  3  without validation;

Table altered.
 

  • The exchange partition statement assigns the jul_sales data to the jul05 partition on the sales table.
  • The data is not physically moved to the new partition. Instead, database pointers and metadata are updated.
  • In fact, after the statement executes successfully, you can check that the data is now assigned to the sales table, and the jul_sales table is empty.
SQL> select count(*) from dwr.jul_sales;

  COUNT(*)
----------
  0

SQL> select sale_month, count(*) 
  2  from dwr.sales 
  3  group by sale_month;

SALE_MONTH   COUNT(*)
---------- ----------
  1     5
  6     5
  2     5
  4     5
  5     5
  3     5
  7    31

7 rows selected.

SQL>   select tablespace_name, segment_name, segment_type, partition_name
  2    from dba_segments
  3    where owner = 'DWR';

TABLESPACE_NAME   SEGMENT_NAME   SEGMENT_TYPE    PARTITION_NAME
----------------- -------------- --------------- ------------------------------
USERS             SALES          TABLE PARTITION  JAN05
USERS             SALES          TABLE PARTITION  FEB05
USERS             SALES          TABLE PARTITION  MAR05
USERS             SALES          TABLE PARTITION  APR05
USERS             SALES          TABLE PARTITION  MAY05
USERS             SALES          TABLE PARTITION  JUN05
TS_JUL            SALES          TABLE PARTITION  JUL05
USERS             SALES_INDEX    INDEX PARTITION  JAN05
USERS             SALES_INDEX    INDEX PARTITION  FEB05
USERS             SALES_INDEX    INDEX PARTITION  MAR05
USERS             SALES_INDEX    INDEX PARTITION  APR05
USERS             SALES_INDEX    INDEX PARTITION  MAY05
USERS             SALES_INDEX    INDEX PARTITION  JUN05
TS_JUL            SALES_INDEX    INDEX PARTITION  JUL05

14 rows selected.
  • You can now drop the jul_sales table.
  • You can also, if needed, make the tablespace ts_jul read write:
SQL> alter tablespace ts_jul read write;

Tablespace altered.
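
Later, when the January data ages out, the oldest partition (and its local index partition, automatically) can be dropped in a single statement; a minimal sketch:

SQL> alter table dwr.sales drop partition jan05;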

Data Movement: Transporting Tablespaces


            Transporting tablespaces

Requirements:
  • Oracle 8i or later: Enterprise Edition is required to generate a transportable tablespace set
  • Since 9i: the block size may differ across databases
  • Since 10g: tablespaces can be transported across platforms (both databases need the COMPATIBLE parameter set to at least 10.0.0)
  • Since 11gR1: you must use the Data Pump utilities (expdp, impdp) to transport tablespaces

When to use:
  • To export/import partitions of large tables (see the example above)
  • To publish structured data on CDs
  • To copy multiple read-only versions of a tablespace to multiple databases
  • To archive historical data
  • To perform tablespace point-in-time recovery

Procedure for Transporting a tablespace set:
  1. Determine the Transportable Set. A transportable set is formed by the datafiles of all tablespaces being transported plus the export file containing metadata for the entire set.
  2. Check that the transportable set is self-contained (there should be no references from objects within the tablespace set to objects outside the set).
  3. If you will be migrating across platforms, check whether you will need to convert the datafiles to the same endianness.
  4. Make the tablespaces that will be transported READ ONLY
  5. Export the metadata for the transportable set
  6. Copy the datafiles and export file to the destination host.
  7. Import the metadata and datafiles
  8. Make tablespaces READ WRITE


Transporting tablespaces. Example:

(1) Check the endian format of both platforms (if they are different):

Query the V$TRANSPORTABLE_PLATFORM view to see the platforms that are supported, and to determine each platform's endian format (byte ordering).
You can check the endian_format for your specific platform:
SQL> select d.platform_name, endian_format
  2  from v$transportable_platform tp, v$database d 
  3  where tp.platform_name = d.platform_name;

PLATFORM_NAME                            ENDIAN_FORMAT
---------------------------------------- --------------
Linux IA (32-bit)                        Little

(2) Set up the tablespaces to be transported.
This example creates two tablespaces on database DB1, plus a table and an index in those tablespaces, to be transported to database DB2.
SQL> Create smallfile tablespace HR1 
  datafile '/u01/app/oracle/oradata/db1/hr1_1.dff' 
  size 5M logging 
  extent management local 
  segment space management auto; 

SQL> Create smallfile tablespace HR2 
  datafile '/u01/app/oracle/oradata/db1/hr2_1.dbf' 
  size 5M reuse autoextend on next 1m maxsize 10m logging 
  extent management local 
  segment space management auto; 

SQL> create table hr.hr_emp
  2  tablespace hr1
  3  as select * from hr.employees;

SQL> create index hr.idx_hr_emp 
     on hr.hr_emp(first_name, last_name)
     tablespace hr2;

SQL> alter table hr.hr_emp
  2  add constraint fk_hr_emp_dept
  3  foreign key (department_id) references hr.departments(department_id);

-- check the location of the created objects.
-- We'll transport HR1 and HR2 between two Oracle 11gr2 running on Linux 32-bit.
SQL>  select owner, segment_name, segment_type, tablespace_name
  2  from dba_segments
  3  where tablespace_name in ('HR1', 'HR2');

OWNER   SEGMENT_NAME    SEGMENT_TYPE       TABLESPACE_NAME
------- --------------- ------------------ ------------------------------
HR      HR_EMP          TABLE              HR1
HR      IDX_HR_EMP      INDEX              HR2

(3) Select the self-contained tablespace set.
  • You can only transport a self-contained tablespace set: there may be no references from an object inside the set of tablespaces to an object in a tablespace outside the set.
  • In our example, because table HR_EMP in tablespace HR1 has an index in tablespace HR2, both tablespaces need to be transported.
  • In addition, since HR.HR_EMP has a foreign key reference to HR.DEPARTMENTS (in another tablespace), that third tablespace would also have to be included in the set. Otherwise a violation will occur.

Violations of self-containment can be caused by:
  • An index inside the set that belongs to a table outside the set
  • A partitioned table with a partition outside the set
  • A referential integrity constraint that points to a table outside the set
  • A table with a LOB column that points to LOBs outside the set






(4) Check if the set is self-contained.
  • Use DBMS_TTS.TRANSPORT_SET_CHECK to validate the set of tablespaces.
  • If any violation is found, it will be listed in the TRANSPORT_SET_VIOLATIONS view.
  • In this case, HR.HR_EMP has a foreign key referencing HR.DEPARTMENTS (in tablespace EXAMPLE). A violation will be found, since we are not including that third tablespace in the transportable set.
SQL> execute dbms_tts.transport_set_check('hr1, hr2', true);
        
PL/SQL procedure successfully completed.

SQL> select * from transport_set_violations;

VIOLATIONS
----------------------------------------------------------------------------------------------------
ORA-39906: Constraint FK_HR_EMP_DEPT between table HR.DEPARTMENTS in tablespace EXAMPLE and table HR.HR_EMP in tablespace HR1.

So let's drop the foreign key constraint: 

SQL> alter table hr.hr_emp drop constraint fk_hr_emp_dept;
Table altered.
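
Re-running the check should now report no violations:

SQL> execute dbms_tts.transport_set_check('hr1, hr2', true);
SQL> select * from transport_set_violations;

no rows selected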

(5) Make tablespaces read only
SQL> alter tablespace hr1 read only;
SQL> alter tablespace hr2 read only;

(6) Invoke Data Pump as system and export the tablespaces in the transportable set.
  • directory: the directory object pointing to the directory where you want the export file to be generated.
  • transport_tablespaces: parameter indicating a transportable tablespace export. Only the metadata for the tables and dependent objects within the transportable set is exported.
oracle@quark:~$ expdp system dumpfile=exphr1_2.dmp directory=data_pump_dir transport_tablespaces=hr1,hr2 logfile=tts_exphr1_2.log

Export: Release 11.2.0.1.0 - Production on Thu Oct 07 15:22:51 2010

Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.
Password: 

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01":  system/******** dumpfile=exphr1_2.dmp directory=data_pump_dir transport_tablespaces=hr1, hr2 logfile=tts_exphr1_2.log 
Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
Processing object type TRANSPORTABLE_EXPORT/TABLE
Processing object type TRANSPORTABLE_EXPORT/INDEX
Processing object type TRANSPORTABLE_EXPORT/INDEX_STATISTICS
Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
Master table "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.SYS_EXPORT_TRANSPORTABLE_01 is:
  /u01/app/oracle/admin/db1/dpdump/exphr1_2.dmp
******************************************************************************
Datafiles required for transportable tablespace HR1:
  /u01/app/oracle/oradata/db1/hr1_1.dff
Datafiles required for transportable tablespace HR2:
  /u01/app/oracle/oradata/db1/hr2_1.dbf
Job "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01" successfully completed at 15:24:01 

(7) If the tablespaces are being transported to a different platform, you need to convert the datafiles before copying them.
For example, if the destination database is on a Windows machine, then:
oracle@quark:/$ rman target /

Recovery Manager: Release 11.2.0.1.0 - Production on Thu Oct 07 15:31:55 2010

Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

connected to target database: DB1 (DBID=64992127)

RMAN> convert tablespace hr1, hr2
2> to platform 'Microsoft Windows IA (32-bit)' 
3> format '/tmp/%U'; 

Starting conversion at source at 07-Oct-10
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=18 device type=DISK
channel ORA_DISK_1: starting datafile conversion
input datafile file number=00007 name=/u01/app/oracle/oradata/db1/hr1_1.dff
converted datafile=/tmp/data_D-DB1_I-54881118_TS-HR1_FNO-6_01mrt1cn
channel ORA_DISK_1: datafile conversion complete, elapsed time: 00:00:01
channel ORA_DISK_1: starting datafile conversion
input datafile file number=00008 name=/u01/app/oracle/oradata/db1/hr2_1.dbf
converted datafile=/tmp/data_D-DB1_I-54881118_TS-HR2_FNO-7_02mrt1co
channel ORA_DISK_1: datafile conversion complete, elapsed time: 00:00:01
Finished conversion at source at 07-OCT-10

(8) Transport the tablespace set
  • Step 1: Copy the export file to the directory pointed to by the DATA_PUMP_DIR directory object.
  • Step 2: Copy the data files to the location of the existing datafiles of the destination database.

(9) Import the tablespaces
oracle@proton:$ impdp system dumpfile=exphr1_2.dmp directory=data_pump_dir transport_datafiles=/u01/app/oracle/oradata/DB2/hr1_1.dff,/u01/app/oracle/oradata/DB2/tts/hr2_1.dbf remap_schema=hr:crm1 logfile=tts_import.log

Import: Release 11.2.0.1.0 - Production on Thu Oct 07 17:18:56 2010

Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.
Password: 

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table "SYSTEM"."SYS_IMPORT_TRANSPORTABLE_01" successfully loaded/unloaded
Starting "SYSTEM"."SYS_IMPORT_TRANSPORTABLE_01":  system/******** dumpfile=exphr1_2.dmp directory=data_pump_dir transport_datafiles=/u01/app/oracle/oradata/DB2/hr1_1.dff, /u01/app/oracle/oradata/DB2/hr2_1.dbf remap_schema=hr:crm1 logfile=tts_import.log 
Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
Processing object type TRANSPORTABLE_EXPORT/TABLE
Processing object type TRANSPORTABLE_EXPORT/INDEX
Processing object type TRANSPORTABLE_EXPORT/INDEX_STATISTICS
Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
Job "SYSTEM"."SYS_IMPORT_TRANSPORTABLE_01" successfully completed at 17:19:16

-- Check the imported tablespaces
SQL> select owner, segment_name, segment_type, tablespace_name
  2  from dba_segments
  3* where tablespace_name in ('HR1' , 'HR2');

OWNER    SEGMENT_NAME         SEGMENT_TYPE       TABLESPACE_NAME
-------- -------------------- ------------------ ------------------------------
CRM1     HR_EMP               TABLE              HR1
CRM1     IDX_HR_EMP           INDEX              HR2
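
To complete the procedure, make the transported tablespaces read write on the destination database (and, if desired, back on the source):

SQL> alter tablespace hr1 read write;
SQL> alter tablespace hr2 read write;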

Configuring NFS on Ubuntu


How NFS works:
Typically, NFS allows a client machine (quark) to gain transparent access to data stored on a server machine (dirac).
For this to take place successfully:
  1. The server (dirac) runs NFS daemon processes (nfsd and mountd) in order to make its data available to clients.
  2. The sysadmin determines what to make available, and exports the names and parameters of the directories to be shared, normally using the /etc/exports configuration file and the exportfs command.
  3. The sysadmin configures the server (using hosts.deny, hosts.allow) so that it can recognize and approve validated clients.
  4. The client machine requests access to exported data, typically by issuing a mount command.

Client quark mounts the /usr/home directory from host dirac on the local directory /home
# mount -t nfs dirac:/usr/home/ /home
To mount the remote directory:
  1. mount connects to mountd daemon, running on dirac.
  2. mountd checks whether quark has permission to mount /usr/home. If so, it returns a file handle.
  3. When someone tries to access the file /home/jdoe/login.sh in quark, the kernel places an RPC call to nfsd on the NFS server (dirac):
    • rpc_call(file handle, file name, UID, GID) - user and group IDs must be the same on both hosts.
  4. If all goes well, users on the client machine can then view and interact with mounted filesystems on the server within the parameters permitted.

  • Client and server NFS functionality is implemented as kernel-level daemons that are started from user space at system boot.
  • These NFS daemons are normally started at boot time and register themselves with the portmapper, a service that manages the access to TCP ports of programs involved in remote procedure calls.
    • mountd - runs on the NFS server; processes clients' mount requests.
    • nfsd (NFS daemon) - runs on the NFS server; services the clients' file access requests.


Installing and Configuring NFS Server:
(Step 1): Check whether your kernel has NFS support compiled in. One way to do this is to query the kernel interface on the proc filesystem.
$ cat /proc/filesystems | grep nfs
nodev   nfs
nodev   nfs4
nodev   nfsd

-- If Kernel support for NFS is installed, you should see the lines above. 
-- If no results are displayed, you need to install NFS Server support:

$ sudo apt-get install portmap nfs-kernel-server

(Step 2): Configure NFS Server: define shared directories
  • Now you need to tell the NFS server which directories should be available for mounting, and which parameters should control client access to them.
  • You do this by exporting the files, that is, listing filesystems and access controls in the /etc/exports file.
# exports file for dirac. 
# Each line defines a directory and the hosts allowed to mount it

/home      quark.math.usm.edu(rw,sync)  proton.math.usm.edu(rw,sync)
/usr/TeX   *.math.usm.edu
/home/ftp  *(ro)
In the exports file above:
  • *.math.usm.edu -- matches all hosts in the domain math.usm.edu
  • Security options:
    • rw - allow read/write on the exported filesystem (read-only is the default).
    • sync - reply to requests only after changes have been committed to stable storage.

(Step 3): export the shares.
After modifying /etc/exports, run the command
$ sudo exportfs -ra 

(Step 4): Edit /etc/default/portmap to enable access to portmap from remote machines.
By default, portmap listens only for RPC calls coming from the loopback interface (127.0.0.1). For this,
(a) comment the "-i 127.0.0.1" entry in the file;
(b) restart portmap; and
(c) restart the NFS kernel server:
edit /etc/default/portmap
$ sudo /etc/init.d/portmap restart
$ sudo /etc/init.d/nfs-kernel-server restart


Configuring NFS Clients

(Step 1): Install NFS client
$ sudo apt-get install portmap nfs-common

(Step 2 - optional): Configure portmap to allow connections to the NFS server.

/etc/hosts.deny - list of hosts that are not allowed to access the system. Edit the file to block all clients; then only those that you explicitly authorize (in /etc/hosts.allow) will be able to connect to the server.
portmap: ALL
/etc/hosts.allow - list of hosts authorized to access the server
portmap: <nfs Server IP address>

Mounting a remote filesystem manually:
From the client:
$ sudo mount dirac.math.usm.edu:/users/home /home

Configure auto mounting during startup:
  • You can set up automatic nfs mounting by including entries in /etc/fstab.
  • The /etc/fstab file is used to statically define the file systems that will be automatically mounted at boot time.
  • It contains a list of all available disks and disk partitions, and indicates how they are to be initialized into the overall system's file system
  • During machine startup, the mount program reads /etc/fstab file to determine which options should be used when mounting the specified device.
# device name   mount point     fs-type      options       dump-freq pass-num                                          
# servername:dir /mntpoint        nfs          rw,hard,intr   0         0

dirac:/users/home  /home  nfs  rw,hard,intr  0  0 
Just like other /etc/fstab mounts, NFS mounts in /etc/fstab have 6 columns, listed in order as follows:
  • The filesystem to be mounted (dirac.math.usm.edu:/users/home/)
  • The mountpoint (/home)
  • The filesystem type (nfs)
  • The options (rw, hard, intr)
  • Frequency to be dumped (a backup method) (0)
  • Order in which to be fsck'ed at boot time. (0 - don't perform fsck)

Options:
  • rw - read/write
  • hard - the share is mounted so that if the server becomes unavailable, the program accessing it will wait until the server is available again.
See man mount for more details.

Network File System (NFS) - Concepts




What is NFS
  • NFS is a platform-independent remote file system technology created by Sun Microsystems in the 1980s.
  • It is a client/server application that provides shared file storage for clients across a network.
  • It was designed to simplify the sharing of filesystem resources in a network of non-homogeneous machines.
  • It is implemented using the RPC protocol, and files are made available over the network via a Virtual File System (VFS), an interface that runs on top of the TCP/IP layer.
  • It allows an application to access files on remote hosts in the same way it accesses local files.

NFS Servers: Computers that share files
  • During the late 1980s and 1990s, a common configuration was a powerful workstation with lots of local disks, often without a graphical display, acting as an NFS server.
  • "Thin," diskless workstations would then mount the remote file systems provided by the NFS servers and transparently use them as if they were local files.

NFS Simplifies management:
  • Instead of duplicating common directories such as /usr/local on every system, NFS provides a single copy of the directory that is shared by all systems on the network.
  • Simplified backup procedures - instead of setting up backups for the local contents of each workstation (of /home, for example), with NFS a sysadmin needs to back up only the server's disks.

NFS Clients: Computers that access shared files
  • NFS uses a mixture of kernel support and user-space daemons on the client side.
  • Multiple clients can mount the same remote file system so that users can share files.
  • Mounting can be done at boot time. (i.e. /home could be a shared directory mounted by each client when user logs in).
  • An NFS client
    • (a) mounts a remote file system onto the client's local file system name space and
    • (b) provides an interface so that access to the files in the remote file system is done as if they were local files.

----
Goals of NFS design:
  1. Compatibility: NFS should provide the same semantics as a local Unix file system. Programs should not need, or be able, to tell whether a file is remote or local. Example: OPEN("/users/jdoe/.profile", READONLY) -- the program cannot tell whether "users" or "jdoe" are local path names.
  2. Easy deployment: the implementation should be easily incorporated into existing systems; remote files should be made available to local programs without these having to be modified or relinked.
  3. Machine and OS independence: NFS clients should run on non-Unix platforms; the protocols should be simple enough to be easily implemented on other platforms.
  4. Efficiency: NFS should be good enough to satisfy users, but does not have to be as fast as a local FS. Clients and servers should be able to easily recover from machine crashes and network problems.


NFS Versions
  • Version 1: used only inside Sun Microsystems.
  • Version 2: released in 1987 (later specified in RFC 1094)
  • Version 3: released in 1995 (RFC 1813)
  • Version 4: released in 2000 (RFC 3010)

NFS design: NFS Protocol, Server, Client

NFS Protocol
  • Uses Remote Procedure Call (RPC) mechanisms
  • RPCs are synchronous (the client application blocks while it waits for the server response)
  • NFS uses a stateless protocol (the server does not keep track of past requests). This simplifies crash recovery: all that is needed is to resubmit the last request.
  • As a consequence, the client cannot differentiate between a server that crashed and recovered and one that is merely slow.

New File system interface
  • The original Unix file system interface was modified in order to implement NFS as an extension of the Unix file system.
  • NFS was built into the Unix kernel by separating generic file system operations from specific implementations. With this, the kernel can treat all filesystems and nodes in the same way, and new file systems can be added to the kernel easily:
    • A Virtual File System (VFS) interface: defines the operations that can be done on a filesystem.
    • A Virtual node (vnode) interface: defines the operations that can be done on a file within a filesystem.
  • A vnode is a logical structure that abstracts whether a file or directory is implemented by a local or a remote file system. Applications "see" only the vnode interface, so the actual location of the file (local or remote) is irrelevant to the application.
  • In addition, this interface allows a computer to transparently access different types of local file systems (e.g., ext2, ext3, ReiserFS, msdos, proc).



NFS Client
Uses a mounter program. The mounter:
  1. takes a remote file system identification host:path;
  2. sends an RPC to host asking for (1) a file handle for path and (2) the server network address;
  3. marks the mount point in the local file system as a remote file system associated with the host address:path pair.



Diagram of NFS architecture

NFS Remote Procedure Calls
The NFS client uses RPCs to implement each file system operation.
Consider the user program code below:
fd <- OPEN ("f", READONLY)
READ (fd, buf, n)
CLOSE (fd)
  • An application opens file "f", sends a read request, and closes the file.
  • The file "f" is a remote file, but this information is irrelevant to the application.
  • The virtual file system holds a map with the host addresses and file handles (dirfh) of all the mounted remote file systems.
  • The sequence of steps to obtain the file is listed below:

  1. The Virtual File System finds that file "f" is on a remote file system, and passes the request to the NFS client.
  2. The NFS client sends a lookup request LOOKUP(dirfh, "f") to the NFS server, passing the file handle (dirfh) for the remote file system and the name of the file to be read.
  3. The NFS server receives the LOOKUP request, extracts the file system identifier and inode number from dirfh, and asks the identified file system to look up the inode number and find the local directory inode information.
  4. The NFS server searches the directory identified by the inode number for file "f".
    If file is found, the server creates a handle for "f" and sends it back to the client.
  5. The NFS client allocates the first unused entry in the program's file descriptor table, stores a reference to f's file handle in that entry, and returns the index for the entry (fd) to the user program.
  6. Next, the user program calls READ(fd, buf, n).
  7. The NFS client sends the RPC READ(fh,0,n).
  8. The NFS server looks up the inode for fh, reads the data and send it in a reply message.
  9. When the user program calls to close the file (CLOSE(fd)), the NFS client does not issue an RPC, since the program did not modify the file.




References:
Russel Sandberg, David Goldberg, Steve Kleiman, Dan Walsh, and Bob Lyon. "Design and Implementation of the Sun Network Filesystem." Proceedings of the Summer 1985 USENIX Conference, Portland, OR, June 1985, pp. 119-130.
Saltzer, Jerome H., and M. Frans Kaashoek. Principles of Computer System Design. Morgan Kaufmann, 2009.

Oracle ASM on 11g R2: Installing Grid Infrastructure



Note: For steps on how to configure Oracle ASM before installing Grid infrastructure, check here.



This Grid Infrastructure installation on a standalone server will perform the following steps:
  1. Install Oracle ASM software
  2. Install Oracle Restart software
  3. Install and configure the Listener
  4. Create an ASM Disk group
  5. Create and configure an ASM Instance on the machine

Before proceeding, make sure that you set the path to the Oracle base directory.
On bash shell:
$ ORACLE_BASE=/u01/app/oracle; export ORACLE_BASE
$ echo $ORACLE_BASE
/u01/app/oracle

  • Logged in as the Grid Infrastructure owner user, change to the grid infrastructure media directory, run the installation program, and follow the installation steps below.
  • In our case, we will set up a single-owner environment, so make sure you're logged in as the user oracle.
$ ./runInstaller

Starting Oracle Universal Installer...

Checking Temp space: must be greater than 80 MB.   Actual 5902 MB    Passed
Checking swap space: must be greater than 150 MB.   Actual 2047 MB    Passed
Checking monitor: must be configured to display at least 256 colors.    Actual 16777216    Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2010-09-18_08-01-12PM. Please wait ...
$ 

Select installation type
  • Select option to install and configure Grid Infrastructure for a standalone server.
Select language
  • In the next screen, select the language.
Select disks to form the disk group
  • The next screen should list all the disks previously configured for ASM use.
  • These candidate disks should have been discovered at boot time by ASMLib.
  • If no disks are listed:
(a) Check if disk devices ownership is appropriately configured.
The disk devices must be owned by the user performing the grid installation.
Check user and group ownership with the command:
# ls -l /dev/oracleasm/disks/
total 0
brw-rw---- 1 oracle dba 8, 17 Set 18 22:33 DISK1
brw-rw---- 1 oracle dba 8, 33 Set 18 22:52 DISK2
brw-rw---- 1 oracle dba 8, 49 Set 18 22:52 DISK3
brw-rw---- 1 oracle dba 8, 65 Set 18 22:53 DISK4
(b) check whether ASMLib driver is loaded:
# oracleasm status
Checking if ASM is loaded: yes
Checking if /dev/oracleasm is mounted: yes

# oracleasm listdisks
DISK1
DISK2
DISK3
DISK4
(c) Check the default discovery string on the installer.
In Linux, the default discovery string is '/dev/raw*'.
Click on the Change Discovery Path button and type '/dev/oracleasm/disks/*' (without quotes!).
This should list all the disks you have previously configured.

Configure ASM Disk Group
  • Select name for the disk group being created and select the disks that will compose this group.
  • Here we choose normal redundancy and create the oradata_dskgrp disk group with disk1 (/dev/sdb1, 3 GB) and disk3 (/dev/sdd1, 3 GB).
  • Each Oracle ASM disk is divided into allocation units (AUs).
  • An allocation unit is the fundamental unit of allocation within a disk group; by default it is 1 MB.
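
Once the ASM instance is up, you can confirm the AU size of each disk group; a small sketch:

SQL> select name, allocation_unit_size from v$asm_diskgroup;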












Specify the passwords for the SYS and ASMSNMP users.
  • These users are created in the ASM instance.
  • To manage an ASM instance, a user needs the SYSASM role, which grants full access to all ASM disks (including the authority to create and delete ASM disks).
  • The ASMSNMP user, with only the SYSDBA role, can monitor the instance but does not have full access to the ASM disks.
Select the names of the OS groups to be used for OS authentication to ASM.
Select the installation location.
In the next two screens, accept or change the location for the Oracle grid home directory, and accept the location for the inventory directory (if this is the first Oracle install on the machine).
Check whether all installation prerequisites were met. If so, proceed.
Review the contents and click Install.
Run the Post-installation scripts (as root)
# ./root.sh 
Running Oracle 11g root.sh script...

The following environment variables are set as:
ORACLE_OWNER= oracle
ORACLE_HOME=  /u01/app/oracle/product/11.2.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]: 
Copying dbhome to /usr/local/bin ...
Copying oraenv to /usr/local/bin ...
Copying coraenv to /usr/local/bin ...

Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.
2010-09-18 00:16:18: Checking for super user privileges
2010-09-18 00:16:18: User has super user privileges
2010-09-18 00:16:18: Parsing the host name
Using configuration parameter file: /u01/app/oracle/product/11.2.0/grid/crs/install/crsconfig_params
Creating trace directory
LOCAL ADD MODE 
Creating OCR keys for user 'oracle', privgrp 'oinstall'..
Operation successful.
CRS-4664: Node quark successfully pinned.
Adding daemon to inittab
CRS-4123: Oracle High Availability Services has been started.
ohasd is starting
ADVM/ACFS is not supported on oraclelinux-release-5-6.0.1

quark     2010-09-18 00:16:54     /u01/app/oracle/product/11.2.0/grid/cdata/quark/backup_20100918_001654.olr
Successfully configured Oracle Grid Infrastructure for a Standalone Server
Updating inventory properties for clusterware
...

  • When the installation completes, you should have an ASM instance up and running.
  • Some of the processes running include:
$ ps -ef |grep ora
...
oracle   17900     1  0 00:16 ?        00:00:03 /u01/app/oracle/product/11.2.0/grid/bin/ohasd.bin reboot
          --> This is the Oracle Restart (Oracle High Availability Service) daemon.
oracle   18356     1  0 00:18 ?        00:00:01 /u01/app/oracle/product/11.2.0/grid/bin/oraagent.bin
          --> Extends clusterware to support Oracle-specific requirements and complex resources.
          --> Runs server callout scripts when FAN events occur. 
          --> The process was known as RACG in Oracle Clusterware 11g release 1 (11.1).
oracle   18375     1  0 00:18 ?        00:00:00 /u01/app/oracle/product/11.2.0/grid/bin/tnslsnr LISTENER -inherit
oracle   18563     1  0 00:18 ?        00:00:00 /u01/app/oracle/product/11.2.0/grid/bin/cssdagent
          --> Starts, stops and monitors Oracle Clusterware
oracle   18565     1  0 00:18 ?        00:00:00 /u01/app/oracle/product/11.2.0/grid/bin/orarootagent.bin
          --> specialized oraagent process that helps crsd manage resources 
          (con't) owned by root, such as the network, and the Grid virtual IP address.
oracle   18599     1  0 00:18 ?        00:00:00 /u01/app/oracle/product/11.2.0/grid/bin/diskmon.bin -d -f
          --> I/O Fencing and SKGXP HA monitoring daemon
oracle   18600     1  0 00:18 ?        00:00:00 /u01/app/oracle/product/11.2.0/grid/bin/ocssd.bin 
          --> Oracle Cluster Synchronization Service Daemon (OCSSD). 
          --> performs some of the clusterware functions on UNIX-based systems
          --> ocssd.bin is required for ASM Instance. 

oracle   18884     1  0 00:19 ?        00:00:00 asm_pmon_+ASM  ----
oracle   18888     1  0 00:19 ?        00:00:00 asm_vktm_+ASM      |
oracle   18894     1  0 00:19 ?        00:00:00 asm_gen0_+ASM      |
oracle   18898     1  0 00:19 ?        00:00:00 asm_diag_+ASM      |
oracle   18902     1  0 00:19 ?        00:00:00 asm_psp0_+ASM      |
oracle   18906     1  0 00:19 ?        00:00:00 asm_dia0_+ASM      |
oracle   18910     1  0 00:19 ?        00:00:00 asm_mman_+ASM      |==================
oracle   18914     1  0 00:19 ?        00:00:00 asm_dbw0_+ASM      |=>     +ASM Instance 
oracle   18918     1  0 00:19 ?        00:00:00 asm_lgwr_+ASM      |=> background processes
oracle   18922     1  0 00:19 ?        00:00:00 asm_ckpt_+ASM      |==================
oracle   18926     1  0 00:19 ?        00:00:00 asm_smon_+ASM      |
oracle   18930     1  0 00:19 ?        00:00:00 asm_rbal_+ASM      |
oracle   18934     1  0 00:19 ?        00:00:00 asm_gmon_+ASM      |
oracle   18938     1  0 00:19 ?        00:00:00 asm_mmon_+ASM      |
oracle   18942     1  0 00:19 ?        00:00:00 asm_mmnl_+ASM  ----
oracle   19119 13210  0 00:23 pts/2    00:00:00 ps -ef
oracle   19120 13210  0 00:23 pts/2    00:00:00 grep ora
$


Using Oracle Restart
  • When created, a new database instance will automatically register with Oracle Restart.
  • Once added to the Oracle Restart configuration, if the database accesses data in an Oracle ASM disk group, a dependency between the database and that disk group is created.
  • Oracle Restart then ensures that the disk group is mounted before attempting to start the database.

About SRVCTL
  • You can use SRVCTL commands to add, remove, start, stop, modify, enable, and disable a number of entities, such as databases, instances, listeners, SCAN listeners, services, grid naming service (GNS), and Oracle ASM.
  • The SRVCTL utility can be used to start and stop the Oracle Restart components manually.
  • When you start/stop a component with SRVCTL, any components on which this component depends are automatically started/stopped first, and in the proper order.
  • Important Note:
    • To manage Oracle ASM on Oracle Database 11g R2 installations, use the SRVCTL binary in the Oracle Grid Infrastructure home for a cluster (Grid home).
    • If you have Oracle RAC or Oracle Database installed, then you cannot use the SRVCTL binary in the database home to manage Oracle ASM.

Usage: srvctl <command> <object> [<options>]
    commands: enable|disable|start|stop|status|add|remove|modify|getenv|setenv|unsetenv|config
    objects: database|service|asm|diskgroup|listener|home|ons|eons

(a) check status of grid services and objects
jdoe@quark $ srvctl status asm
ASM is running on quark

jdoe@quark $ srvctl status diskgroup -g oradata_dskgrp
Disk Group oradata_dskgrp is running on quark

jdoe@quark $ srvctl status listener
Listener LISTENER is enabled
Listener LISTENER is running on node(s): quark

-- Display the running status of all components managed by Oracle Restart in the specified Oracle home.
-- The Oracle home can be an Oracle Database home or an Oracle Grid Infrastructure home.

jdoe@quark $ ./srvctl status home -o /u01/app/oracle/product/11.2.0/grid -s /home/oracle/statefile
Disk Group ora.ORADATA_DSKGRP.dg is running on quark
ASM is running on quark
Listener LISTENER is running on node quark


(b) The srvctl config command displays the Oracle Restart configuration of the specified component or set of components
jdoe@quark $ srvctl config asm -a
ASM home: /u01/app/oracle/product/11.2.0/grid
ASM listener: LISTENER
Spfile: +ORADATA_DSKGRP/asm/asmparameterfile/registry.253.768442773
ASM diskgroup discovery string: /dev/oracleasm/disks
ASM is enabled.

jdoe@quark $ srvctl config listener
Name: LISTENER
Home: /u01/app/oracle/product/11.2.0/grid
End points: TCP:1521

-- Display configuration and enabled/disabled status for the database with DB_UNIQUE_NAME orcl:
jdoe@quark $ srvctl config database -d orcl -a

Database unique name: orcl
Database name: orcl
Oracle home: /u01/app/oracle/product/11.2.0/dbhome_1
Oracle user: oracle
Spfile: +DATA/orcl/spfileorcl.ora
Domain: us.example.com
Start options: open
Stop options: immediate
Database role:
Management policy: automatic
Disk Groups: DATA
Services: mfg,sales
Database is enabled