Oracle DBMS_FLASHBACK package




Oracle Flashback technology
Logical Flashback (does not depend on RMAN; relies on undo data)
Physical flashback
New on Oracle 11g:




Using DBMS_FLASHBACK package
  • Released with Oracle 9i
  • Provides the same functionality as Oracle Flashback Query (Oracle 10g)
  • Acts as a time machine:
    • 1. Set the session time to a previous point in time.
    • 2. Perform queries without the "AS OF" and "VERSIONS BETWEEN" clauses.
    • 3. Return the session time to the "present".

Requirements:
  • You need EXECUTE privilege on DBMS_FLASHBACK package
  • The UNDO_RETENTION parameter defines how far back in time an object can be flashed back.
  • UNDO_RETENTION can be set in the init.ora or with:
  • ALTER SYSTEM SET UNDO_RETENTION = num_seconds;

Using the DBMS_FLASHBACK package:
  1. Specify a past time with DBMS_FLASHBACK.ENABLE_AT_TIME or DBMS_FLASHBACK.ENABLE_AT_SYSTEM_CHANGE_NUMBER.
  2. Perform regular queries (queries without flashback-feature syntax such as AS OF).
    Do not perform DDL or DML operations.
  3. Return to the present by invoking DBMS_FLASHBACK.DISABLE (a minimal sketch of the three steps follows).
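For example, a minimal sketch of these three steps using ENABLE_AT_TIME (the timestamp and the hr.employees table are only placeholders for illustration):

SQL> execute dbms_flashback.enable_at_time(to_timestamp('12-08-10 10:58:45', 'MM-DD-YY HH24:MI:SS'));
SQL> select count(*) from hr.employees;    -- a regular query; returns data as of the enabled time
SQL> execute dbms_flashback.disable;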
(a) Check current data in the emp table

SQL> select * from emp;

 EMPNO                  EMPNAME    SALARY                 
---------------------- ---------- ---------------------- 
2                      Mary       4000                   
1                      Jack       101140                 
100                    Joseph     900      

SQL> select current_scn, to_char(systimestamp, 'MM-DD-YY HH24:MI:SS') date_time
     from v$database;
CURRENT_SCN            DATE_TIME         
---------------------- ----------------- 
1432709                12-08-10 10:58:45  

(b) modify data in emp table

SQL> Insert into emp values (3, 'Mark', 5000);
SQL> commit;

SQL> Update emp set salary = 0 where empno=1;
SQL> commit;

SQL> Delete from emp where empno=2;
SQL> update emp set empname = 'Lenny' where empno=100;
SQL> commit;

SQL> select * from emp;

EMPNO                  EMPNAME    SALARY                 
---------------------- ---------- ---------------------- 
1                      Jack       0                      
100                    Lenny      900                    
3                      Mark       5000   

SQL> select current_scn, to_char(systimestamp, 'MM-DD-YY HH24:MI:SS') date_time
     from v$database;
CURRENT_SCN            DATE_TIME         
---------------------- ----------------- 
1432739                12-08-10 10:59:30 

To view and restore the original data, you can:
(1) create a cursor to hold the original data;
(2) go back in time with dbms_flashback.enable_at_system_change_number(n);
(3) open the cursor (it now sees the data as of that SCN);
(4) return to the present with dbms_flashback.disable;
(5) fetch the rows from the cursor and insert them into the restore table.
(c) Create a table to restore original data

SQL> create table emp_restore
     as select * from emp where 1 = 0;


set serveroutput on
DECLARE
  cursor c_emp is 
     select * from emp;
  v_row c_emp%rowtype;
BEGIN
  dbms_output.put_line('enabling dbms_flashback..');
  dbms_flashback.enable_at_system_change_number(1432709);
  Open c_emp;
  dbms_output.put_line('disabling dbms_flashback..');  
  dbms_flashback.disable;
  
  loop 
    fetch c_emp into v_row;
    exit when c_emp%notfound;
    insert into emp_restore values
      (v_row.empno, 
       v_row.empname,
       v_row.salary);
  end loop;
  close c_emp;
  commit;
END;
/
enabling dbms_flashback..
disabling dbms_flashback..

SQL> select * from emp_restore; 

EMPNO                  EMPNAME    SALARY                 
---------------------- ---------- ---------------------- 
2                      Mary       4000                   
1                      Jack       101140                 
100                    Joseph     900 

Oracle data storage: ROWIDs


About Rowids
  • ROWIDs uniquely identify a row in the database (although there are some exceptions).
  • ROWID may refer to a datatype and a pseudocolumn. In its different forms it is a representation of the physical location of a row within the database.
  • The ROWID for a row can change as the result of dumping and reloading the database.
  • For this reason ROWID values should not be used across transaction lines.
  • You cannot set the value of a ROWID pseudocolumn.
  • ROWID is not physically stored in the database. It is inferred from the file and block address of the data.

ROWIDs are used in the construction of indexes. In addition to this,
  • Rowids are the fastest means of accessing particular rows.
  • Rowids provide the ability to see how a table is organized.
  • Rowids are unique identifiers for rows in a given table.

Types of ROWIDs: Physical Rowid, Logical Rowid and Foreign Rowid

Physical ROWIDs:
  • Store the addresses of rows in heap-organized tables, clustered tables, and table and index partitions.
  • Provide the fastest access to a row.
  • Contain the physical address of a row and allow its retrieval in a single block access.
  • Rowid corresponds to the physical address of a row's row piece (or the initial row piece if the row is chained in multiple row pieces).
  • In the case of clustered tables, rows in different tables that are in the same data block can have the same rowid.
  • Physical rowids have two formats: restricted and extended.

Logical ROWIDs:
  • Store the addresses of rows in index-organized tables (IOTs).
  • Logical rowids are based on the table's primary key.
  • Logical rowids change when the IOT's primary key is updated.
  • This is because table data is stored in the index leaves and thus does not have a permanent physical address.
  • Since logical rowids do not indicate the exact physical location of a row, they cannot be used to see how a table is organized.
  • Indexes that use logical rowids include a physical guess: the guess identifies the block in which the row was stored in the IOT at the time the guess was made, and the database uses it to search that block directly.
  • However, as new rows are inserted, guesses can become stale.
  • Guess staleness can be monitored with DBMS_STATS (index statistics capture the percentage of rows with valid guesses).
  • The statistic is stored in DBA_INDEXES (PCT_DIRECT_ACCESS column).
  • To refresh guess information (see the sketch below):
    • Rebuild secondary indexes (resource intensive), or
    • ALTER INDEX...UPDATE BLOCK REFERENCES
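A minimal sketch of checking and refreshing guesses (IOT_EMP_IDX is a hypothetical secondary index on an IOT; statistics must have been gathered for PCT_DIRECT_ACCESS to be populated):

SQL> select index_name, pct_direct_access
     from dba_indexes
     where index_name = 'IOT_EMP_IDX';

-- If the percentage of valid guesses is low, refresh them without a full rebuild:
SQL> alter index iot_emp_idx update block references;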


Restricted vs. Extended ROWIDs
Restricted ROWIDs:
  • Provided for backward compatibility with applications developed with Oracle 7 and earlier
  • Until Oracle 7, ROWIDs used 8 bytes
  • The old format is known as Restricted rowid format.
  • Restricted ROWID format stores (block number, row number, file number)
  • Restricted rowid (8 bytes): Block# (bytes 1-4) . Row# (bytes 5-6) . File# (bytes 7-8)


Extended ROWIDs
  • Starting with Oracle 8 Rowid's format changed.
  • Extended rowid: four-piece format.
  • Uses a base-64 encoding (A-Z, a-z, 0-9, +, /)
  • It is now 10 bytes long and includes the ID of the Object to which the row belongs.
  • Until Oracle 7 file numbers were unique within a database.
  • With the new format, the ROWID stores the relative file number (the file number within the tablespace in which the object is stored).
  • Until Oracle 7, the 2-byte representation of file numbers limited the maximum number of files a database could have to 65533.
  • This limit now is for the number of files each tablespace can have.
  • An extended rowid includes a data object number. This rowid type uses a base 64 encoding of the physical address for each row.




  • Example: the extended rowid AAAC9EAAEAAAABXAAA breaks down as follows:
  • AAAC9E: the data object number; identifies the segment. Note:
    • A data object number is assigned to every database segment (table, index, etc.).
    • However, schema objects in the same segment (such as a table cluster) have the same data object number.
  • AAE: the data file number (tablespace-relative); identifies the file that contains the row.
  • AAAABX: the data block number; identifies the block that contains the row. Relative to the datafile.
  • AAA: the row number; identifies the row in the block.
  • ObjID (bytes 1-4) . RelativeFN (byte 5 + 4 bits) . Block# (4 bits + byte 7) . Row# (bytes 9-10)


ROWID Pseudocolumn
  • Every Oracle table has a pseudocolumn named ROWID. Its value, however, is not actually stored in the table.
  • You can select from pseudocolumns, but you cannot insert, update, or delete their values.
  • Values of the ROWID pseudocolumn are strings representing the address of each row (displayed using the base-64 encoding described above).
  • You can parse the value returned from rowid to understand the physical storage of rows in the database.

(a) Query the extended rowid
SQL> select rowid, last_name from hr.employees where employee_id=100;

ROWID               LAST_NAME
------------------  -----------
AAAR5VAAFAAAADPAAA  King

(b) To see the old rowid format (restricted rowid), you can use the DBMS_ROWID package:
SQL> select dbms_rowid.rowid_to_restricted(rowid,1) old_rowid
  2  from hr.employees
  3* where employee_id = 100


OLD_ROWID
------------------
000000CF.0000.0005

Restricted rowid format (block . row . file):
  • 000000CF: the data block that contains the row. Block numbers are relative to the datafile, not the tablespace.
  • 0000: the row number within the block (row numbers start with 0).
  • 0005: the data file that contains the row.

  • With the restricted format, to find out which segment (table, index) a rowid belonged to, it was necessary to compare the ROWID's block and file numbers with entries in the DBA_EXTENTS view.
  • With extended format, you can use DBMS_ROWID as below:
SQL> select first_name, dbms_rowid.rowid_object(rowid) "ObjectID",
  2                     dbms_rowid.rowid_relative_fno(rowid) "File#",
  3                     dbms_rowid.rowid_block_number(rowid) "Block#",
  4                     dbms_rowid.rowid_row_number(rowid)   "Row#"
  5  from hr.employees
  6  where employee_id = 100;

FIRST_NAME           ObjectID   File#      Block#     Row#
-------------------- ---------- ---------- ---------- ----------
Steven                    73301          5        207          0

Now query DBA_OBJECTS to find out what database object the row belongs to

SQL> select owner, object_name, object_type, data_object_id, object_id
  2  from dba_objects
  3* where data_object_id =  73301;

OWNER       OBJECT_NAME     OBJECT_TYPE    DATA_OBJECT_ID  OBJECT_ID
----------- --------------- -------------- --------------- ----------
HR          EMPLOYEES       TABLE          73301           73933


When does a ROWID change?
  • If row movement is enabled: ROWID can change because of partition key updates, Flashback Table operations, shrink table operations, and so on.
  • If row movement is disabled: ROWID can change if the row is exported and imported using Oracle Database utilities

Can you use ROWIDs as a table's primary key?
  • No. Oracle won't let you create a PK constraint on a rowid column.
  • In addition, besides not being always unique (rows in clustered tables may share a rowid), a row's rowid can change (i.e., after and export/import or partition movement operation).

Can I store ROWIDs in a table column?
  • Yes. You can create a table column using the ROWID data type (see the sketch after this list).
  • However, ROWID values should not be used across transaction lines.
  • Storing a rowid with the intent of using it later (e.g., as an audit trail record) may lead to errors and incorrect information, since rowids may change as the result of:
    • Export/import operation
    • ALTER TABLE...MOVE
    • ALTER TABLE...SHRINK SPACE
    • FLASHBACK TABLE...
    • Partition operation (split, combination)
    • Data update that results in row being moved to another partition
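A minimal sketch of how a stored rowid goes stale (emp_audit is a hypothetical table; moving hr.employees also marks its indexes UNUSABLE, so treat this purely as an illustration):

SQL> create table emp_audit (empno number, emp_rowid rowid, changed_on date);

SQL> insert into emp_audit
       select employee_id, rowid, sysdate from hr.employees where employee_id = 100;
SQL> commit;

-- Moving the segment relocates the rows, so the stored rowid no longer matches
SQL> alter table hr.employees move;

SQL> select e.employee_id
     from hr.employees e, emp_audit a
     where e.rowid = a.emp_rowid;           -- typically returns no rows after the move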

Using Log Miner (II) - Example



Example 1: Find all modifications in the last archived redo log
  • Mining and source db are the same
  • Using Online catalog as dictionary

Steps: (the various steps are part of a script file logmnr_sql.sh)
  1. Configure supplemental logs and NLS_DATE_FORMAT (optional)
  2. Get user credentials (UserA, UserB, System)
  3. UserA creates table UserA.EMP
  4. UserA updates rows on table UserA.EMP
  5. UserB updates rows on table UserA.EMP
  6. User System:
    • (a) archives current redo log;
    • (b) Add list of log files for mining;
    • (c) starts log miner;
    • (d) query log miner information; and
    • (e) Ends the LogMiner session
(Initial configuration): Enable supplemental logging and set data format
SQL> alter database add supplemental log data;
SQL> alter session set nls_date_format = 'dd-mon-yyyy hh24:mi:ss';
(1) Get user credentials (UserA, UserB, System)
#!/bin/bash
# logmnr_sql.sh

LOGFILE=logminer.log

###############################################################
# Get users credentials
##############################################################

usersystem=system
echo "Enter password for system"
read -s passwds
usera=usera
echo "Enter password for usera"
read -s passwda
userb=userb
echo "Enter password for userb"
read -s passwdb
sid=${ORACLE_SID}
(cont..)
(2) UserA creates table UserA.EMP
(cont..)
###############################################################
# Log in as userA. Recreate EMP table
##############################################################

sqlplus -s /nolog << EndOfSQL >> ${LOGFILE}
connect ${usera}/${passwda} 
whenever sqlerror exit sql.sqlcode;

prompt "Connected: UserA - recreating EMP.."

drop table emp;
create table emp as
  select employee_id as empno,
         first_name || ' ' || last_name as ename,
         department_id as deptno,
         salary as salary
  from hr.employees;
grant select, update on emp to userb;

exit;
EndOfSQL

ERRORCODE=$?

#check the return code from SQL Plus
if [ $ERRORCODE != 0 ]
then
 echo ********************************
 echo "ERROR: SQL*Plus failed with error code: $ERRORCODE"
else
 echo ********************************
 echo "SQL*Plus completed successfully. Error code: $ERRORCODE"
fi
(cont..)
(3) UserA updates some rows on table UserA.EMP
(cont..)
###############################################################
# Log in as usera. update data on table EMP
##############################################################

sqlplus -s /nolog << EndOfSQL >> ${LOGFILE}
connect ${usera}/${passwda} 
whenever sqlerror exit sql.sqlcode;

prompt "Connected: UserA - Updating rows..."

set serveroutput on  
begin
  update emp set salary = salary*1.05 where deptno = 50;
  commit;
  dbms_output.put_line('salaries on dept 50 updated..');  
  declare
   cursor c_emp is
     select empno, ename, deptno, salary from emp
     where deptno = 30
     for update;
  begin
   for c_reg in c_emp loop
     update emp
      set ename = initcap(c_reg.ename)
      where current of c_emp;
     update emp
      set salary = c_reg.salary *1.10
      where current of c_emp;
   end loop;
   commit;
   dbms_output.put_line('names and salaries on dept 30 updated..');  
  end;
end;
/

exit;
EndOfSQL

ERRORCODE=$?

# Check the return code from SQL Plus
if [ $ERRORCODE != 0 ]
then
 echo ********************************
 echo "ERROR: SQL*Plus failed with error code: $ERRORCODE"
else
 echo ********************************
 echo "SQL*Plus completed successfully. Error code: $ERRORCODE"
fi
(cont..)
(4) UserB updates rows on table UserA.EMP
###############################################################
# Log in as userB. update data on table EMP
##############################################################

sqlplus -s /nolog << EndOfSQL >> ${LOGFILE}
connect ${userb}/${passwdb} 
whenever sqlerror exit sql.sqlcode;

prompt "Connected: USERB"

set serveroutput on  
begin
  update usera.emp set salary = salary*1.05 where deptno = 60;
  dbms_output.put_line('SQL%ROWCOUNT: ['|| SQL%ROWCOUNT || '] rows returned');
  commit;
  dbms_output.put_line('salaries on dept 60 updated..');  
  declare
   cursor c_emp2 is
     select empno, ename, deptno, salary from usera.emp
     where deptno = 90
     for update;
  begin
   for c_reg in c_emp2 loop
     update usera.emp
      set ename = initcap(c_reg.ename)
      where current of c_emp2;
     update usera.emp
      set salary = c_reg.salary *1.10
      where current of c_emp2;
   end loop;
   commit;
   dbms_output.put_line('names and salaries on dept 90 updated..');  
  end;
end;
/
exit;
EndOfSQL

ERRORCODE=$?

# Check the return code from SQL Plus
if [ $ERRORCODE != 0 ]
then
 echo ********************************
 echo "ERROR: SQL*Plus failed with error code: $ERRORCODE"
else
 echo ********************************
 echo "SQL*Plus completed successfully. Error code: $ERRORCODE"
fi
(cont..)
(5) Log in as SYSTEM: (a) archive the current redo log; (b) add the list of log files for mining; (c) start LogMiner; (d) query LogMiner information; and (e) end the LogMiner session
###############################################################
# Log in as SYSTEM. Run Log Miner 
##############################################################
echo "Configuring Log Miner section..."

sqlplus -s /nolog << EndOfSQL >> ${LOGFILE}
connect ${usersystem}/${passwds} 
whenever sqlerror exit sql.sqlcode;

prompt "Connected: SYSTEM"


prompt " (a) Switching current log file.."
alter system archive log current;

prompt "Most recently archived log file is"
set linesize 150
select name, first_change# from v\$archived_log
 where first_time = (select max(first_time) 
                       from v\$archived_log);

prompt "(b) Configuring the list of redo log files to be mined.."

declare
  v_logname varchar2(200);
begin
  select name into v_logname
   from v\$archived_log
   where first_time = (select max(first_time) 
                         from v\$archived_log);
 dbms_logmnr.add_logfile(LOGFILENAME => v_logname, OPTIONS => dbms_logmnr.new);
end;
/

prompt "List of log files added to log miner: "
select   to_char(low_time, 'dd-mon-yy hh24:mi:ss') lowtime,
         to_char(high_time, 'dd-mon-yy hh24:mi:ss') hightime,
         type, filesize/1024/1024 Mb, filename 
from v\$logmnr_logs;

prompt "(c) Starting log miner using online catalog.."
execute dbms_logmnr.start_logmnr(options => dbms_logmnr.dict_from_online_catalog);
prompt "(d) Querying log miner results"

set linesize 150
set pagesize 80
column usr format a10
column osusr format a10
column machine_name format a10
column xid format a12
col operation format a15
col table_name format a10
col sql_redo format a30
col sql_undo format a30

select username as usr,
       os_username as osusr,
       (xidusn ||'.'|| xidslt ||'.'||xidsqn) as xid,
       operation, 
       table_name,
       sql_redo, 
       sql_undo
  from v\$logmnr_contents
  where username in ('USERA', 'USERB');

USR     OSUSR    XID        OPERATION       TABLE_NAME  SQL_REDO                       SQL_UNDO
------  -------- ---------- --------------- ----------  ------------------------------ ------------------------------
USERA   jdoe     10.12.988  START                       set transaction read write;
USERA   jdoe     1.7.928    DDL              EMP        ALTER TABLE "USERA"."EMP" RENA
                                                        ME TO "BIN$sKOUXNE/r7TgQAB/AQE
                                                        yrA==$0" ;
...
USERA   jdoe     1.7.928    DDL              EMP        drop table emp AS "BIN$sKOUXNE
                                                        /r7TgQAB/AQEyrA==$0" ;
...
USERA   jdoe     4.32.983   DDL              EMP       create table emp as
                                                        select employee_id as empno,
                                                        first_name || ' ' ||
                                                        last_name as ename,
                                                        department_id as deptno,
                                                        salary as salary
                                                        from hr.employees;
..
USERA   jdoe     4.23.984   START                       set transaction read write;
USERA   jdoe     4.32.983   DIRECT INSERT    EMP        insert into "USERA"."EMP"("EMP delete from "USERA"."EMP" wher
                                                        NO","ENAME","DEPTNO","SALARY") e "EMPNO" = '198' and "ENAME"
                                                        values ('198','Donald OConnel  = 'Donald OConnell' and "DEPTN
                                                        l','50','2600');               O" = '50' and "SALARY" = '2600
                                                                                       ' and ROWID = 'AAASdlAAEAAAAMb
                                                                                       AAA';

...
USERA   jdoe     4.32.983   DIRECT INSERT   EMP         insert into "USERA"."EMP"("EMP delete from "USERA"."EMP" wher
                                                        NO","ENAME","DEPTNO","SALARY") e "EMPNO" = '197' and "ENAME"
                                                        values ('197','Kevin Feeney',  = 'Kevin Feeney' and "DEPTNO"
                                                        '50','3000');                  = '50' and "SALARY" = '3000' a
                                                                                       nd ROWID = 'AAASdlAAEAAAAMbABq
                                                                                       ';
...
USERA   jdoe     10.31.988  DDL             EMP         grant select, update on emp to
                                                        userb;
...
USERA   jdoe     10.31.988  UPDATE          OBJ$        update "SYS"."OBJ$" set "OBJ#" update "SYS"."OBJ$" set "OBJ#"
                                                        = '75621', "DATAOBJ#" = '7562  = '75621', "DATAOBJ#" = '7562
                                                        1', "TYPE#" = '2', "CTIME" = T 1', "TYPE#" = '2', "CTIME" = T
                                                        O_DATE('22-NOV-10', 'DD-MON-RR O_DATE('22-NOV-10', 'DD-MON-RR
                                                        '), "MTIME" = TO_DATE('22-NOV- '), "MTIME" = TO_DATE('22-NOV-
                                                        10', 'DD-MON-RR'), "STIME" = T 10', 'DD-MON-RR'), "STIME" = T
                                                        O_DATE('22-NOV-10', 'DD-MON-RR O_DATE('22-NOV-10', 'DD-MON-RR
                                                        '), "STATUS" = '1', "FLAGS" =  '), "STATUS" = '1', "FLAGS" =
                                                        '0', "OID$" = NULL, "SPARE1" = '0', "OID$" = NULL, "SPARE1" =
                                                        '6', "SPARE2" = '1' where "OB  '6', "SPARE2" = '1' where "OB
                                                        J#" = '75621' and "DATAOBJ#" = J#" = '75621' and "DATAOBJ#" =
                                                        '75621' and "OWNER#" = '92' a  '75621' and "OWNER#" = '92' a
                                                        nd "TYPE#" = '2' and "CTIME" = nd "TYPE#" = '2' and "CTIME" =
                                                        TO_DATE('22-NOV-10', 'DD-MON-  TO_DATE('22-NOV-10', 'DD-MON-
                                                        RR') and "MTIME" = TO_DATE('22 RR') and "MTIME" = TO_DATE('22
                                                        -NOV-10', 'DD-MON-RR') and "ST -NOV-10', 'DD-MON-RR') and "ST
                                                        IME" = TO_DATE('22-NOV-10', 'D IME" = TO_DATE('22-NOV-10', 'D
                                                        D-MON-RR') and "STATUS" = '1'  D-MON-RR') and "STATUS" = '1'
                                                        and "FLAGS" = '0' and "OID$" I and "FLAGS" = '0' and "OID$" I
                                                        S NULL and "SPARE1" = '6' and  S NULL and "SPARE1" = '6' and
                                                        "SPARE2" = '1' and ROWID = 'AA "SPARE2" = '1' and ROWID = 'AA
                                                        AAASAABAAAU6JAAi';             AAASAABAAAU6JAAi';
..
USERA   jdoe     10.31.988    COMMIT                    commit;
USERA   jdoe     7.26.985     START                     set transaction read write;
...
USERA   jdoe     3.23.1299    START                     set transaction read write;
USERA   jdoe     3.23.1299    UPDATE    EMP          update "USERA"."EMP" set "SALA update "USERA"."EMP" set "SALA
                                                        RY" = '2730' where "EMPNO" = ' RY" = '2600' where "EMPNO" = '
                                                        198' and "ENAME" = 'Donald OCo 198' and "ENAME" = 'Donald OCo
                                                        nnell' and "DEPTNO" = '50' and nnell' and "DEPTNO" = '50' and
                                                        "SALARY" = '2600' and ROWID =  "SALARY" = '2730' and ROWID =
                                                        'AAASdlAAEAAAAMbAAA';          'AAASdlAAEAAAAMbAAA';
...
USERA   jdoe     3.23.1299    UPDATE    EMP          update "USERA"."EMP" set "SALA update "USERA"."EMP" set "SALA
                                                        RY" = '2250' where "EMPNO" = ' RY" = '3000' where "EMPNO" = '
                                                        197' and "ENAME" = 'Kevin Feen 197' and "ENAME" = 'Kevin Feen
                                                        ey' and "DEPTNO" = '50' and "S ey' and "DEPTNO" = '50' and "S
                                                        ALARY" = '3000' and ROWID = 'A ALARY" = '2250' and ROWID = 'A
                                                        AASdlAAEAAAAMbABq';            AASdlAAEAAAAMbABq';

USERA   jdoe     3.23.1299   COMMIT                     commit;
USERA   jdoe     1.12.927    START                      set transaction read write;
USERA   jdoe     1.12.92     SELECT_FOR_UPDATE EMP      select * from "USERA"."EMP" wh
                                                     ere ROWID = 'AAASdlAAEAAAAMbAA
                                                        X' for update;
...
USERA   jdoe     1.12.927    UPDATE       EMP           update "USERA"."EMP" set "ENAM update "USERA"."EMP" set "ENAM
                                                        E" = 'Den Raphaely' where "EMP E" = 'Den Raphaely' where "EMP
                                                        NO" = '114' and "ENAME" = 'Den NO" = '114' and "ENAME" = 'Den
                                                        Raphaely' and "DEPTNO" = '30'  Raphaely' and "DEPTNO" = '30'
                                                        and "SALARY" = '11000' and RO  and "SALARY" = '11000' and RO
                                                        WID = 'AAASdlAAEAAAAMbAAX';    WID = 'AAASdlAAEAAAAMbAAX';
...
USERA   jdoe     1.12.927    UPDATE       EMP           update "USERA"."EMP" set "SALA update "USERA"."EMP" set "SALA
                                                        RY" = '2750' where "EMPNO" = ' RY" = '2500' where "EMPNO" = '
                                                        119' and "ENAME" = 'Karen Colm 119' and "ENAME" = 'Karen Colm
                                                        enares' and "DEPTNO" = '30' an enares' and "DEPTNO" = '30' an
                                                        d "SALARY" = '2500' and ROWID  d "SALARY" = '2750' and ROWID
                                                        = 'AAASdlAAEAAAAMbAAc';        = 'AAASdlAAEAAAAMbAAc';

USERB   jdoe     1.12.927    COMMIT                     commit;
...
USERB   jdoe     8.21.1521   START                      set transaction read write;
USERB   jdoe     6.16.1306   UPDATE       EMP           update "USERA"."EMP" set "SALA update "USERA"."EMP" set "SALA
                                                        RY" = '9450' where "EMPNO" = ' RY" = '9000' where "EMPNO" = '
                                                        103' and "ENAME" = 'Alexander  103' and "ENAME" = 'Alexander
                                                        Hunold' and "DEPTNO" = '60' an Hunold' and "DEPTNO" = '60' an
                                                        d "SALARY" = '9000' and ROWID  d "SALARY" = '9450' and ROWID
                                                        = 'AAASdlAAEAAAAMbAAM';        = 'AAASdlAAEAAAAMbAAM';
...
USERB   jdoe     6.16.1306    COMMIT                    commit;
USERB   jdoe     4.5.983      START                     set transaction read write;
USERB   jdoe     4.5.983      SELECT_FOR_UPDA EMP       select * from "USERA"."EMP" wh
                                           TE           ere ROWID = 'AAASdlAAEAAAAMbAA
...
USERB   jdoe     4.5.983      UPDATE     EMP         update "USERA"."EMP" set "ENAM update "USERA"."EMP" set "ENAM
                                                        E" = 'Steven King' where "EMPN E" = 'Steven King' where "EMPN
                                                        O" = '100' and "ENAME" = 'Stev O" = '100' and "ENAME" = 'Stev
                                                        en King' and "DEPTNO" = '90' a en King' and "DEPTNO" = '90' a
                                                        nd "SALARY" = '24000' and ROWI nd "SALARY" = '24000' and ROWI
                                                        D = 'AAASdlAAEAAAAMbAAJ';      D = 'AAASdlAAEAAAAMbAAJ';
...
USERB   jdoe     4.5.983      COMMIT                    commit;
...
188 rows selected.

(e) End the LogMiner Session
prompt "(e) Ending LogMiner session.."
execute dbms_logmnr.end_logmnr();
exit;
EndOfSQL


ERRORCODE=$?

# Check the return code from SQL Plus
if [ $ERRORCODE != 0 ]
then
 echo ********************************
 echo "ERROR: SQL*Plus failed with error code: $ERRORCODE"
else
 echo ********************************
 echo "SQL*Plus completed successfully. Error code: $ERRORCODE"
fi

Using the Log Miner (I)



Using the LogMiner
  • First appeared in Oracle 8i
  • Log Miner allows for analysis of online and archived redo log files.
  • LogMiner provides a well-defined, easy-to-use, and comprehensive relational interface to redo log files

Enhancements in Oracle 11g
  • LogMiner was integrated with OEM
  • Also integrated with the new flashback transaction feature
  • Can be used from command line or through OEM

Possible uses:
  • Find out when a logical error happened and who did it.
  • This information in turn can be used to determine when to initiate time-based or change-based recovery (V$LOGMNR_CONTENTS)
  • Determine what actions are needed to perform fine-grained recovery at the transaction level (flashback transaction)
  • Support for trend analysis (find out which tables get the most DML)
  • Support for post-facto auditing.
  • Useful to find out who has updated a given table (or set of tables).
  • Explore what happens when a DML/DDL is issued against the database

Configuring the Log Miner
Four objects are used in a LogMiner configuration:
  1. The Source database
  2. The Mining database
  3. The LogMiner dictionary - used to translate internal object identifiers to object names and external data formats.
  4. Redo log files that will be analyzed

Managing LogMiner operation:
  • Use SYS.DBMS_LOGMNR and SYS.DBMS_LOGMNR_D packages to (a) select redo log files for analysis, (b) begin and (c) end a LogMiner session.
  • Use V$LOGMNR_CONTENTS view to query the contents of redo logs.
  • You must be connected as SYS or have EXECUTE_CATALOG_ROLE role

DBMS_LOGMNR package subprograms:
SQL> dbms_logmnr.add_logfile(LOGFILENAME => logname, OPTIONS => dbms_logmnr.new);
-- Add redo log file to a list of redo logs for LogMiner to process.
    OPTIONS:
    • new -- implicitly calls dbms_logmnr.end_logmnr and starts a new LogMiner session. Creates a new list and adds the redo log.
    • addfile -- Adds the redo log to the existing list
SQL> dbms_logmnr.remove_logfile(LOGFILENAME => logname);
-- Remove redo from list
SQL>  dbms_logmnr.start_logmnr(OPTIONS => dbms_logmnr.dict_from_online_catalog);
-- Initializes a LogMiner session (options can be combined with +; see the sketch after this listing)
    OPTIONS:
    • committed_data_only
      • Return committed statements only (none rolled back or in progress).
      • Groups the statements of each committed transaction.
      • Transactions are returned in commit order.
    • skip_corruption
      • skips corruptions in the redo log (redo log block corruption)
    • ddl_dict_tracking
      • Used when the dictionary is in a flat file. Updates dictionary references as DDL statements are processed
    • dict_from_online_catalog
      • directs log miner to use the online dictionary of the database when translating object names
    • dict_from_redo_logs
    • no_sql_delimiter
      • Useful if you want to use the reconstructed statements directly in an open cursor
    • no_rowid_in_stmt
    • print_pretty_sql
    • continuous_mine
      • Instructs LogMiner to add redo log files, as needed, until it finds the data for the specified SCN or timestamp
    • string_literals_in_stmt
SQL> dbms_logmnr.end_logmnr();
-- Finishes a LogMiner session
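As an illustration (not part of the example script), several START_LOGMNR options can be combined by adding the constants together:

begin
  dbms_logmnr.start_logmnr(
    options => dbms_logmnr.dict_from_online_catalog +
               dbms_logmnr.committed_data_only +
               dbms_logmnr.print_pretty_sql);
end;
/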
Requirements to use LogMiner
  1. Source and Mining databases
    • Both must run on the same platform
    • The mining db can be the same as, or separate from, the source db
    • The mining db must be the same or a later release than the source db
    • The mining db must use the same character set (or a superset) as the source db
  2. LogMiner dictionary
    • Must be produced from the source db
    • Used to translate redo log information into a meaningful format
    • Used to display table and column names instead of their internal IDs.
    • Without a dictionary, LogMiner displays all values in hexadecimal format.
  3. Redo Logs
    • must be produced by the same source db
    • associated with the same database RESETLOGS SCN
    • Must be from Oracle 8.0 or later (most features require 9i or later)
  4. Supplemental Logging
    • must be enabled before redo logs are generated
    • Enable with: ALTER DATABASE ADD SUPPLEMENTAL LOG DATA; (see also the sketch after the query below)
    • Logs the extra information necessary to reconstruct the SQL statement that made a change,
      as well as the statement needed to revert it.
    • Without supplemental logging, instance and media recovery are still possible, but log mining
      (reconstruction of the original SQL statements) is not.
SQL> select supplemental_log_data_min,supplemental_log_data_pk, 
     supplemental_log_data_fk, supplemental_log_data_all
     from v$database;

SUPPLEMENTAL_LOG_DATA_MIN SUPPLEMENTAL_LOG_DATA_PK SUPPLEMENTAL_LOG_DATA_FK SUPPLEMENTAL_LOG_DATA_ALL 
------------------------- ------------------------ ------------------------ ------------------------- 
YES                       YES                      YES                      NO                       
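The PK/FK/ALL columns above correspond to identification-key logging, which can be enabled separately if needed; a minimal sketch:

-- Log primary key columns (or all columns) of updated rows in addition to minimal logging
SQL> alter database add supplemental log data (primary key) columns;
SQL> alter database add supplemental log data (all) columns;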
About the LogMiner dictionary
Used to translate the information in the redo log files into object names and to display the data in alphanumeric instead of hexadecimal format.
(a) You can use the online catalog of the source database
  • For this, the source database must be online and available
  • To use the online catalog: SQL> execute dbms_logmnr.start_logmnr( options => dbms_logmnr.dict_from_online_catalog);
(b) Can be extracted from source database into the redo log files
  • Used when the source database is not available
  • In order to extract dictionary information, the source database must be (1) open, (2) in archivelog mode, and (3) with archiving enabled.
  • To extract dictionary: SQL> execute dbms_logmnr_d.build( options => dbms_logmnr_d.store_in_redo_logs);
  • The dictionary information is stored in the archived redo logs. To find out which:
    • SQL> select name from v$archived_log where dictionary_begin = 'YES';
    • SQL> select name from v$archived_log where dictionary_end = 'YES';
(c) Can be extracted from the source db to a flat file (a sketch follows)
  • Available for backward compatibility only; no transactional consistency.
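A minimal sketch of the flat-file extraction (the directory path is hypothetical and must be accessible to the database, e.g. via UTL_FILE_DIR):

begin
  dbms_logmnr_d.build(
    dictionary_filename => 'dictionary.ora',
    dictionary_location => '/u01/app/oracle/logmnr',   -- hypothetical path
    options             => dbms_logmnr_d.store_in_flat_file);
end;
/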



Example 1: Find all modifications in the last archived redo log
  • Mining and source db are the same
  • Using Online catalog as dictionary

Oracle Flashback Technology (I)




Oracle Flashback technology
Logical Flashback (does not depend on RMAN; relies on undo data)
Physical flashback
New on Oracle 11g:



  • Oracle Flashback features use Automatic Undo Management to obtain metadata and historical transaction data.
  • Undo data is persistent and survives database shutdown.
  • You can use the Flashback options to
    • recover data from user errors,
    • compare table data at two points in time,
    • view transaction actions (the set of actions performed in a given transaction),
    • undo table drops, and
    • revert the entire database to a previous point in time.

Configuring the database for Flashback options
Database must be configured for Automatic Undo Management (default on 11g).
For this you need:
  • (1) Size undo tablespace; and
  • (2) Enable automatic undo management

(Step 1) Create or resize the undo tablespace appropriately
The undo tablespace must have enough space to keep the data required for flashback operations (a resize sketch appears after the checks below).
(a) check the size of the existing undo tablespaces

SQL> select t.tablespace_name, t.status, t.contents, t.extent_management, 
            t.allocation_type,   t.retention, s.sizemb 
     from dba_tablespaces t, 
           (select tablespace_name, sum(bytes)/1024/1024 sizemb  
              from dba_data_files
              group by tablespace_name) s
     where t.tablespace_name = s.tablespace_name
     and t.contents = 'UNDO';

TABLESPACE_NAME                STATUS    CONTENTS  EXTENT_MANAGEMENT ALLOCATION_TYPE RETENTION   SIZEMB                 
------------------------------ --------- --------- ----------------- --------------- ---------   ---------------------- 
UNDOTBS1                       ONLINE    UNDO      LOCAL             SYSTEM          NOGUARANTEE 60      

(b) Check the current retention period: 
Query v$undostat to check the amount of time (seconds) for which undo will not be recycled.
    
SQL> select to_char(begin_time, 'dd-MON  hh:mi') begin_time, 
       to_char(end_time, 'dd-MON  hh:mi') end_time, 
       tuned_undoretention undoretention_sec
from v$undostat;

BEGIN_TIME    END_TIME      UNDORETENTION_SEC      
------------- ------------- ---------------------- 
19-OCT  09:51 19-OCT  09:58 1911     --  At current workload, undo is preserved for about 30 minutes
19-OCT  09:41 19-OCT  09:51 1608                   
...
19-OCT  03:11 19-OCT  03:21 1787                   
19-OCT  03:01 19-OCT  03:11 1183       
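If the undo tablespace turns out to be too small, it can be resized or allowed to grow; a minimal sketch (the datafile path is hypothetical):

SQL> alter database datafile '/u01/app/oracle/oradata/test112/undotbs01.dbf' resize 500M;

-- or let the datafile grow on demand
SQL> alter database datafile '/u01/app/oracle/oradata/test112/undotbs01.dbf'
     autoextend on next 10M maxsize 2G;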
(Step 2) Enable automatic undo management (undo parameters)
  • UNDO_MANAGEMENT = [ AUTO (default) | MANUAL ]
  • UNDO_TABLESPACE - specifies the name of the undo tablespace
  • UNDO_RETENTION - specifies the minimum amount of time that Oracle attempts to retain old undo information before overwriting it.
    • The value for UNDO_RETENTION is ignored for a fixed size undo tablespace
    • If AUTOEXTEND is enabled, database attempts to honor UNDO_RETENTION.
    • 'snapshot too old' errors indicate that undo retention is not enough for flashback operations.
    • To accommodate Oracle Flashback features, you can set the UNDO_RETENTION parameter to a value equal to the longest expected Oracle Flashback operation.
    • If retention is very important, use ALTER TABLESPACE...RETENTION GUARANTEE (see the sketch below). Here the database never overwrites unexpired undo data, even if it means that transactions fail due to lack of undo space.
    • To set: ALTER SYSTEM SET UNDO_RETENTION = 2400;
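A minimal sketch of guaranteeing retention (assuming the undo tablespace is UNDOTBS1, as shown earlier):

SQL> alter tablespace undotbs1 retention guarantee;

-- revert to the default behavior
SQL> alter tablespace undotbs1 retention noguarantee;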
(a) check instance parameters

SQL> select name, value, isdefault, description
     from v$parameter
     where name like '%undo%';

NAME            VALUE    ISDEFAULT DESCRIPTION
--------------- ------   --------- ---------------------------------------------------
undo_management AUTO     TRUE      instance runs in SMU mode if TRUE, else RBU mode -- default: AUTO (11g), Manual (before 11g).
undo_tablespace UNDOTBS1 FALSE     use/switch undo tablespace
undo_retention  900      TRUE      undo retention in seconds


Notes on Flashback Query
  • Flashback query can be used with remote tables (table@dblink)
  • Queries on V$ views ALWAYS return current data
  • Flashback query can be used with static dictionary views
  • To query past data at a precise time, use an SCN.
  • With a timestamp, the actual time queried might be up to 3 seconds earlier than the time you specify (see the mapping sketch after this list).
  • Oracle Flashback Drop - Reverse the effects of DROP TABLE
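Where that 3-second granularity matters, SCNs and timestamps can be mapped to each other; a small sketch (SCN_TO_TIMESTAMP raises an error if the SCN is too old to be mapped):

SQL> select timestamp_to_scn(systimestamp) scn_now,
            scn_to_timestamp(1385809)      ts_of_scn    -- SCN taken from the example below
     from dual;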

Using FLASHBACK QUERY
  • Use SELECT.. AS OF [ TIMESTAMP timestamp| SCN scn ]
  • The query returns committed data as it existed at that point.
  • To use flashback query on a table, the user needs to have FLASHBACK and SELECT privileges on that object.
Case:
(1) Some rows from emp table are deleted by mistake at 11 AM.
(2) Error discovered at 11:20AM.
(3) Use Flashback query to see the data as it was before the 11am delete.
(1) Check table rows at 11:00AM
SQL> select current_scn, to_char(systimestamp, 'mm-dd-yy  HH24:mi:ss') time
     from v$database;

CURRENT_SCN            TIME               
---------------------- ------------------ 
1385809                12-08-10  11:00:22

SQL> select employee_id, last_name,  salary
     from hr.emp
     where department_id =30;

EMPLOYEE_ID            LAST_NAME                 SALARY                 
---------------------- ------------------------- ---------------------- 
114                    Raphaely                  11000                  
...
119                    Colmenares                2500    


(2) At 11:20, rows are incorrectly deleted from emp.

SQL> delete from hr.emp where department_id=30;
SQL> commit;
SQL> select current_scn, to_char(systimestamp, 'mm-dd-yy  HH24:mi:ss') time
     from v$database;
CURRENT_SCN            TIME               
---------------------- ------------------ 
1386301                12-08-10  11:20:03

SQL> select employee_id, last_name,  salary
     from hr.emp
     where department_id =30;

EMPLOYEE_ID            LAST_NAME                 SALARY                 
---------------------- ------------------------- ---------------------- 

(3) Use flashback query to see emp as it was at 11 AM

SQL> select employee_id, last_name, salary
     from hr.emp AS OF TIMESTAMP to_timestamp('12-08-10 11:00:00', 'mm-dd-yy HH24:MI:SS')
     where department_id = 30;

EMPLOYEE_ID            LAST_NAME                 SALARY                 
---------------------- ------------------------- ---------------------- 
114                    Raphaely                  11000                  
...
119                    Colmenares                2500    

(4) Alternatively you can use the SCN 

SQL> select employee_id, last_name, salary
     from hr.emp AS OF SCN 1385809
     where department_id = 30;

EMPLOYEE_ID            LAST_NAME                 SALARY                 
---------------------- ------------------------- ---------------------- 
114                    Raphaely                  11000                  
...
119                    Colmenares                2500 

(5) Restore the data copying it from the earlier version of the table:

SQL> insert into hr.emp 
       (select * 
        from hr.emp AS OF TIMESTAMP to_timestamp('12-08-10 11:00:00', 'mm-dd-yy HH24:MI:SS')
        where department_id = 30 );

6 rows inserted.

Using Flashback Version Query
  • You can retrieve the different versions of a given row that existed during a given time interval.
  • Use VERSIONS { BETWEEN {SCN | TIMESTAMP} start AND end}
  • Returns one row for each version of the row that existed at any time in the specified interval
(a) check the initial values of the row data, current scn and timestamp
SQL> select employee_id, first_name, last_name, salary, commission_pct
     from hr.emp
     where employee_id = 116;

EMPLOYEE_ID            FIRST_NAME           LAST_NAME                 SALARY                 COMMISSION_PCT         
---------------------- -------------------- ------------------------- ---------------------- ---------------------- 
116                    Shelli               Baida                     2900                                          
                     
SQL> select current_scn, to_char(systimestamp, 'mm-dd-yy  HH24:mi:ss') time
     from v$database;

CURRENT_SCN            TIME               
---------------------- ------------------ 
1400261                12-08-10  16:15:20

(b) update salary, and commission
SQL> update hr.emp
     set salary = salary*1.5
     where employee_id = 116;
SQL> commit;

SQL> update hr.emp
     set commission_pct = .12
     where employee_id = 116;
SQL> commit;

(c) Query the emp table using the VERSIONS clause and its pseudocolumns
SQL> select versions_startscn as ver_startscn, 
            versions_starttime as ver_startime, 
            versions_endscn as ver_endscn, 
            versions_endtime as ver_endtime,
            versions_xid as ver_xid, 
            versions_operation as oper,
            employee_id as emp_id, salary, commission_pct as comm_pct
      from hr.emp
        versions between timestamp
             to_timestamp('12-08-10 16:14:00', 'mm-dd-yy HH24:MI:SS')
        and  to_timestamp('12-08-10 16:16:30', 'mm-dd-yy HH24:MI:SS')
     where employee_id = 116;

VER_STARTSCN VERS_STARTTIME        VER_ENDSCN  VER_ENDTIME            VER_XID          OPER EMP_ID  SALARY   COMM_PCT 
------------ --------------------- ----------- ---------------------- ---------------- ---- ------- -------- -------- 
1400294      08-DEC-10 04.16.13 PM  1400301    08-DEC-10 04.16.25 PM  0A00100044030000 U    116     4350     0.12 
1400273      08-DEC-10 04.15.49 PM  1400294    08-DEC-10 04.16.13 PM  0A00030043030000 U    116     4350
                                    1400273    08-DEC-10 04.15.49 PM                        116     2900
The pseudocolumns are:
  • VERSIONS_STARTSCN, VERSIONS_STARTTIME: the SCN or TIMESTAMP when the row took on this value. NULL if the row version was created before the lower bound SCN or TIMESTAMP.
  • VERSIONS_ENDSCN, VERSIONS_ENDTIME: the last SCN or TIMESTAMP when the row had this value. NULL if the value is still current at the upper bound SCN or TIMESTAMP.
  • VERSIONS_XID: ID of the transaction that created the row version.
  • VERSIONS_OPERATION: the operation performed by the transaction ((I)nsert, (U)pdate or (D)elete).


Using Flashback Transaction Query
  • Extension to the Flashback Query Version option
  • Used to retrieve information for a given transaction or for all transactions in a given time interval.
  • The transaction IDs (listed in the VERSIONS_XID column) are used to query the FLASHBACK_TRANSACTION_QUERY view
  • The database must have at least minimal supplemental logging enabled
(a) Query the transaction that updated salary of employee_id 116.
col logon_user format a10
col table_name format a10
col table_owner format a10
col operation format a10
col undo_sql format a50

SQL> select logon_user, operation, table_name, 
       to_char(start_timestamp, 'mm-dd-yy hh24:MI:ss') time, row_id, undo_sql
     from flashback_transaction_query
     where xid = hextoraw('0A00030043030000');

LOGON_USER OPERATION  TABLE_NAME TIME               ROW_ID              UNDO_SQL
---------- ---------- ---------- ------------------ ------------------- --------------------------------------------------
SYSTEM     UPDATE     EMP       12-08-10 16:15:46  AAASNxAAEAAAAJHAAB  update "HR"."EMP" set "SALARY" = '2900' where
                                                                         ROWID = 'AAASNxAAEAAAAJHAAB';
SYSTEM     BEGIN

Example: check the various actions performed by a given transaction
SQL> select current_scn from v$database;

CURRENT_SCN            
---------------------- 
1424010

SQL> create table emp
     (empno number, 
      empname varchar2(10),
      salary number);

SQL> select current_scn from v$database;
SQL> insert into emp values (1, 'John', 20000);
SQL> commit;

SQL> update emp set salary = salary*1.2 where empno=1;
SQL> commit;

SQL> delete from emp where empno = 1;
SQL> commit;

SQL> insert into emp values (1, 'Jack', 45000);   -- transaction performs multiple actions 
SQL> update emp set salary = salary*1.05;
SQL> commit;

SQL> update emp set salary = salary + 3320;
SQL> commit;

SQL> select current_scn from v$database;

CURRENT_SCN            
---------------------- 
1424312

(a) To retrieve the various versions of the row: 
SQL> select versions_xid xid, versions_startscn start_scn,
        versions_endscn end_scn, versions_operation operation,
        empname, salary
     from emp
       versions between scn minvalue and maxvalue
     where empno = 1;
XID              START_SCN              END_SCN                OPERATION EMPNAME    SALARY                 
---------------- ---------------------- ---------------------- --------- ---------- ---------------------- 
0200060044040000 1424201                                       U         Jack       50570                  
0600080064040000 1424197                1424201                I         Jack       47250                  
05000A006F040000 1424192                                       D         John       24000                  
                                        1424192                          John       24000


(b) The transaction '0600080064040000' inserted the new record with a wrong name. 
To audit all the changes made by this transaction: 

SQL> select xid, start_scn, commit_scn, operation, logon_user, undo_sql
     from flashback_transaction_query
     where xid = hextoraw('0600080064040000');

XID              START_SCN  COMMIT_SCN OPERATION  LOGON_USER UNDO_SQL
---------------- ---------- ---------- ---------- ---------- --------------------------------------------------
0600080064040000 1424195    1424197    UPDATE     SYSTEM     update "SYSTEM"."EMP" set "SALARY" = '45000' where
                                                             ROWID = 'AAASXdAABAAAVBJAAB';

0600080064040000 1424195    1424197    INSERT     SYSTEM    delete from "SYSTEM"."EMP" where ROWID = 'AAASXdAA
                                                            BAAAVBJAAB';

0600080064040000 1424195    1424197    BEGIN      SYSTEM


Using Flashback Table

Flashback Table vs. Flashback Drop
  • FLASHBACK TABLE to SCN uses undo data to revert DMLs:
    • Delete rows inserted since scn_num
    • Update back rows modified since scn_num
    • Insert rows deleted since scn_num
  • FLASHBACK TABLE to BEFORE DROP uses the recycle bin to restore a dropped table (see the sketch after this list).
  • Neither of them, however, recovers TRUNCATED rows.
  • For this you need to use FLASHBACK DATABASE
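A minimal sketch of the BEFORE DROP form (assumes the recycle bin is enabled and the table does not live in the SYSTEM tablespace):

SQL> drop table emp;
SQL> flashback table emp to before drop;

-- a different name can be given on restore:
--   flashback table emp to before drop rename to emp_old;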

Allows point-in-time recovery for a table:
  • restore point specified by timestamp or SCN
  • table can stay online while operation executes
  • maintains all table attributes (indexes, triggers, constraints, etc)
  • maintains any remote state in a distributed environment
  • maintains data integrity specified by constraints

Requirements:
  • User must have the FLASHBACK ANY TABLE system privilege or the FLASHBACK object privilege on the table
  • Must have SELECT, INSERT, DELETE and ALTER privileges on the table
  • Must have the SELECT ANY DICTIONARY or FLASHBACK ANY TABLE system privilege or the SELECT_CATALOG_ROLE role

For an object to be eligible to be flashed back:
  • The object must not be part of a cluster, a materialized view, an AQ table, a static data dictionary table, a system table, a remote table, an object table, a nested table, or an individual partition or subpartition.
  • The table structure must not have been changed between now and the restore point.
  • Row movement must be enabled (rowids will change).
  • Enough undo data to perform the flashback must exist in the undo tablespace.

(a) enable row movement

SQL> alter table emp enable row movement;

SQL> select * from emp;
EMPNO                  EMPNAME    SALARY                 
---------------------- ---------- ---------------------- 
1                      Jack       50570        

(b) check current scn (before flashback)

SQL> select current_scn from v$database;
CURRENT_SCN            
---------------------- 
1426179

(c) Insert and update row data

SQL> Insert into emp values (2, 'Mary', 2000);
SQL> commit;
SQL> select current_scn from v$database;

CURRENT_SCN            
---------------------- 
1426185 

SQL> insert into emp values(100, 'Joseph', 450);
SQL> commit;
SQL> select current_scn from v$database;

CURRENT_SCN            
---------------------- 
1426200   

SQL> update emp set salary = salary*2;
SQL> commit;
SQL> select current_scn from v$database;

CURRENT_SCN            
---------------------- 
1426208 

(d) ensure that enough undo data exists to flash back the table

SQL>  select name, value/60 minutes_retained
      from v$parameter
      where name = 'undo_retention';

NAME              MINUTES_RETAINED       
----------------- ---------------------- 
undo_retention    80                     
 
(e) determine whether the table has dependencies on other tables. 
If dependencies exist, these may need to be flashed back as well

SQL>  select other.owner, other.table_name
      from sys.all_constraints this,
           sys.all_constraints other
      where this.owner = 'SYSTEM'
      and   this.table_name = 'EMP'
      and   this.r_owner = other.owner
      and   this.r_constraint_name = other.constraint_name
      and   this.constraint_type = 'R';


(f) Check new data

SQL>  select * from emp;

 EMPNO                  EMPNAME    SALARY                 
---------------------- ---------- ---------------------- 
2                      Mary       4000                   
1                      Jack       101140                 
100                    Joseph     900  


(g) flashback table to point in time (SCN) before  data modification

SQL> flashback table emp to scn 1426179;

SQL> select * from emp;

EMPNO                  EMPNAME    SALARY                 
---------------------- ---------- ---------------------- 
1                      Jack       50570                  


(h) flashback table forward to after data modification

SQL> flashback table emp to scn 1426208;

SQL>  select * from emp;
 EMPNO                  EMPNAME    SALARY                 
---------------------- ---------- ---------------------- 
2                      Mary       4000                   
1                      Jack       101140                 
100                    Joseph     900  


Data Movement: Transporting Partitions for Data Warehousing



            Transporting a partition


  • In this example, we see how to use the transportable tablespaces feature to periodically load data into a large fact table in a data warehousing database.
  • For this, suppose that an enterprise data warehouse has a large table sales.
  • The table is partitioned by date, with data for each month stored in its own partition.
  • In addition, local indexes are built so that you don't have to rebuild global indexes when dropping the oldest partition from the table.
  • Every month you need to load one month of data into the sales table, stored in its own partition.

Set up the fact table: sales. In the data warehouse database (dwr-db).
  • For simplicity, all the initial partitions are kept in the same tablespace.
  • As the oldest partitions are deleted and new are added, eventually each partition will be on its own tablespace.
  • A local index is also created in each partition.

SQL> create table dwr.sales (invoice_no number,
  2    sale_year int not null,
  3    sale_month int not null,
  4    sale_day int not null)
  5  partition by range (sale_year, sale_month, sale_day)
  6    (partition jan05 values less than (2005, 2, 1),
  7     partition feb05 values less than (2005, 3, 1),
  8     partition mar05 values less than (2005, 4, 1),
  9     partition apr05 values less than (2005, 5, 1),
 10     partition may05 values less than (2005, 6, 1),
 11     partition jun05 values less than (2005, 7, 1))
 12     tablespace users;

-- create local index.
SQL> create index dwr.sales_index on dwr.sales(invoice_no) local;

-- Now let's insert some data into table sales.
Run the PL/SQL block below to insert some sample data into sales
declare
  month number := 1;
  day   number := 1;
begin
  for i in 1..30 loop
    insert into dwr.sales values (i, 2005, month, day);
    if mod(i,5) = 0 then
      month := month + 1;
      day   := day + 1;
    end if;
  end loop;
  commit;
end;
/

-- Check that all partitions (table and index) are in the same tablespace (users):
SQL> select tablespace_name, segment_name, segment_type, partition_name
  2  from dba_segments
  3  where owner = 'DWR';

TABLESPACE_NAME    SEGMENT_NAME    SEGMENT_TYPE       PARTITION_NAME 
------------------ --------------- ------------------ ------------------
USERS              SALES           TABLE PARTITION    JAN05  
USERS              SALES           TABLE PARTITION    FEB05  
USERS              SALES           TABLE PARTITION    MAR05  
USERS              SALES           TABLE PARTITION    APR05  
USERS              SALES           TABLE PARTITION    MAY05  
USERS              SALES           TABLE PARTITION    JUN05  
USERS              SALES_INDEX     INDEX PARTITION    JAN05  
USERS              SALES_INDEX     INDEX PARTITION    FEB05  
USERS              SALES_INDEX     INDEX PARTITION    MAR05  
USERS              SALES_INDEX     INDEX PARTITION    APR05  
USERS              SALES_INDEX     INDEX PARTITION    MAY05  
USERS              SALES_INDEX     INDEX PARTITION    JUN05  

12 rows selected.
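
-- If you want to see how the sample rows were distributed, you can also query an
-- individual partition using the partition-extended table name; a minimal sketch
-- for one partition:
SQL> select count(*) from dwr.sales partition (jan05);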

Prepare the new data for loading into the warehouse

The month of July has ended and you need to load the July data into the sales table.
In the staging database, perform the steps below:
  1. Create a new tablespace (ts_jul). It will hold the table with the July data and will be transported to the data warehouse database.
  2. Create a table jul_sales in the new tablespace. It will hold the July data and must have the same structure as the sales table.
  3. Load data into the jul_sales table.
  4. Create an index jul_sale_idx on the same column as the local index on the sales table.
SQL> Create smallfile tablespace ts_jul 
  datafile '/u01/app/oracle/oradata/tempdb/ts_jul_1.dbf' 
  size 5M logging 
  extent management local 
  segment space management auto; 

SQL> create table hr.jul_sales (invoice_no number,
      sale_year int not null,
      sale_month int not null,
      sale_day int not null)
      tablespace ts_jul;

-- Load the July data (run the PL/SQL block below):
declare
 day   number := 1;
begin
 for i in 40..70 loop
   insert into hr.jul_sales values (i, 2005, 7, day);
   day := day + 1;
 end loop;
 commit;
end;
/

-- Create index on jul_sales table: 
SQL> create index hr.jul_sale_idx on hr.jul_sales(invoice_no) tablespace ts_jul;

-- Check table and index location
SQL> select tablespace_name, segment_name, segment_type, bytes
   from dba_segments
   where owner = 'HR'
   order by segment_name, segment_type;
TABLESPACE_NAME   SEGMENT_NAME   SEGMENT_TYPE       BYTES
----------------- --------------- ------------------ ----------
...
TS_JUL            JUL_SALES       TABLE              65536
TS_JUL            JUL_SALE_IDX    INDEX              65536
...


Transport the tablespace to the data warehouse database
Perform the steps below:
  1. Make the tablespace ts_jul read only.
  2. Export the metadata of the objects in the tablespace being transported.
  3. Copy the datafiles to the data warehouse server. Make sure to copy the files to their final location on the server: when you import the tablespace, the paths to the datafiles are stored in the database.
  4. Copy the export (dump) file to the directory pointed to by the data_pump_dir directory object (or to another directory referenced by some directory object in the database).
  5. Import the transported tablespace.
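Before making the tablespace read only, it is good practice to verify that the tablespace set is self-contained (no objects inside ts_jul reference objects outside it). A minimal sketch, run as a suitably privileged user (e.g. SYSTEM):

SQL> exec dbms_tts.transport_set_check('ts_jul', true);
SQL> select * from transport_set_violations;

No rows returned means the tablespace can be transported on its own.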
(1) Make tablespace read only
SQL> alter tablespace ts_jul read only;

Tablespace altered.

(2) export metadata
oracle@quark:~$ expdp system dumpfile=expts_jul.dmp directory=data_pump_dir transport_tablespaces=ts_jul logfile=tts_exptsjul.log

Export: Release 11.2.0.1.0 - Production on Fri Oct 08 15:34:45 2010

Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.
Password: 

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01":  system/******** dumpfile=expts_jul.dmp directory=data_pump_dir transport_tablespaces=ts_jul logfile=tts_exptsjul.log 
Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
Processing object type TRANSPORTABLE_EXPORT/TABLE
Processing object type TRANSPORTABLE_EXPORT/INDEX
Processing object type TRANSPORTABLE_EXPORT/INDEX_STATISTICS
Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
Master table "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.SYS_EXPORT_TRANSPORTABLE_01 is:
  /u01/app/oracle/admin/tempdb/dpdump/expts_jul.dmp
******************************************************************************
Datafiles required for transportable tablespace TS_JUL:
  /u01/app/oracle/oradata/tempdb/ts_jul_1.dbf
Job "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01" successfully completed at 15:35:35

(3)-(4) Copy the export dump file and the tablespace datafile to the dwr server

(5) Import the tablespace ts_jul into the dwr database
oracle@dwr-srv:$ impdp system dumpfile=expts_jul.dmp directory=data_pump_dir transport_datafiles=/u01/app/oracle/oradata/dwr/ts_jul_1.dbf remap_schema=hr:dwr logfile=tts_import.log

Import: Release 11.2.0.1.0 - Production on Fri Oct 08 18:10:20 2010

Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.
Password: 

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table "SYSTEM"."SYS_IMPORT_TRANSPORTABLE_01" successfully loaded/unloaded
Starting "SYSTEM"."SYS_IMPORT_TRANSPORTABLE_01":  system/******** dumpfile=expts_jul.dmp directory=data_pump_dir transport_datafiles=/u01/app/oracle/oradata/dwr/ts_jul_1.dbf remap_schema=hr:dwr logfile=tts_import.log 
Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
Processing object type TRANSPORTABLE_EXPORT/TABLE
Processing object type TRANSPORTABLE_EXPORT/INDEX
Processing object type TRANSPORTABLE_EXPORT/INDEX_STATISTICS
Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
Job "SYSTEM"."SYS_IMPORT_TRANSPORTABLE_01" successfully completed at 18:11:10

-- Check the import
SQL> select count(*) from dwr.jul_sales;

  COUNT(*)
----------
  31
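
-- Optionally, confirm that the tablespace was plugged in (read only) and where its
-- datafile now lives; a minimal sketch against the standard dictionary views:
SQL> select tablespace_name, status from dba_tablespaces
     where tablespace_name = 'TS_JUL';

SQL> select file_name from dba_data_files
     where tablespace_name = 'TS_JUL';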


Attach the July data to the sales table
At this point the tablespace ts_jul has been transported and the table jul_sales is available in the dwr database.
It is not, however, available yet as a partition of the large sales table.
To attach it, we need to:
  1. Add a partition to the sales table for the July data. This will also create another partition for the local index.
  2. Attach the transported table jul_sales to the table sales by exchanging it with the new partition.
(1) In the dwr server, add a partition to the sales table for the July sales data.
SQL> alter table dwr.sales add partition jul05 values less than (2005, 8,1);

Table altered.
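
-- Adding the table partition also adds a matching (still empty) partition to the
-- local index sales_index; a minimal sketch to confirm it via the dictionary:
SQL> select index_name, partition_name, status
     from dba_ind_partitions
     where index_owner = 'DWR' and index_name = 'SALES_INDEX';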

(2) Attach the transported table jul_sales to the table sales by exchanging it with the new partition:
SQL> alter table dwr.sales exchange partition jul05 with table dwr.jul_sales
  2  including indexes 
  3  without validation;

Table altered.
 

  • The exchange partition statement assigns the jul_sales data to the jul05 partition of the sales table.
  • The data is not physically moved to the new partition; instead, the data dictionary pointers and metadata are updated.
  • In fact, after the statement executes successfully, you can check that the data is now assigned to the sales table, and the jul_sales table is empty.
SQL> select count(*) from dwr.jul_sales;

  COUNT(*)
----------
  0

SQL> select sale_month, count(*) 
  2  from dwr.sales 
  3  group by sale_month;

SALE_MONTH   COUNT(*)
---------- ----------
         1          5
         6          5
         2          5
         4          5
         5          5
         3          5
         7         31

7 rows selected.

SQL>   select tablespace_name, segment_name, segment_type, partition_name
  2    from dba_segments
  3    where owner = 'DWR';

TABLESPACE_NAME   SEGMENT_NAME   SEGMENT_TYPE    PARTITION_NAME
----------------- -------------- --------------- ------------------------------
USERS             SALES          TABLE PARTITION  JAN05
USERS             SALES          TABLE PARTITION  FEB05
USERS             SALES          TABLE PARTITION  MAR05
USERS             SALES          TABLE PARTITION  APR05
USERS             SALES          TABLE PARTITION  MAY05
USERS             SALES          TABLE PARTITION  JUN05
TS_JUL            SALES          TABLE PARTITION  JUL05
USERS             SALES_INDEX    INDEX PARTITION  JAN05
USERS             SALES_INDEX    INDEX PARTITION  FEB05
USERS             SALES_INDEX    INDEX PARTITION  MAR05
USERS             SALES_INDEX    INDEX PARTITION  APR05
USERS             SALES_INDEX    INDEX PARTITION  MAY05
USERS             SALES_INDEX    INDEX PARTITION  JUN05
TS_JUL            SALES_INDEX    INDEX PARTITION  JUL05

14 rows selected.
  • You can now drop the jul_sales table (see the cleanup sketch after the example below).
  • You can also, if needed, make the tablespace ts_jul read write.
SQL> alter tablespace ts_jul read write;

Tablespace altered.
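
A minimal sketch of the cleanup mentioned above, assuming the names used in this example. In the rolling-window scheme described at the start of this section, you would also eventually age out the oldest month; because the indexes are local, dropping that partition does not force a global index rebuild.

SQL> -- the exchanged staging table is now empty and no longer needed
SQL> drop table dwr.jul_sales;

SQL> -- optional: drop the oldest partition to keep the rolling window
SQL> alter table dwr.sales drop partition jan05;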