
Q&As: Data Warehousing and Partitioning (II)



(Q) What is a Partition-Wise Join? How can SQL statements benefit from Full Partition-Wise Joins?

  • A Partition-Wise (P-W) Join divides a large join into smaller joins between pairs of partitions from the two joined tables.
  • To use Full P-W Joins, you must either:
    • (a) equipartition BOTH tables on their JOIN keys, or
    • (b) use Reference partitioning
  • P-W Joins reduce query response time by minimizing the amount of data exchanged among parallel execution servers when joins execute in parallel. An equipartitioned example is sketched below.
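
A minimal sketch of option (a), using hypothetical tables: both tables are hash-partitioned on the join key with the same number of partitions, so a parallel join on cust_id can proceed partition pair by partition pair:

SQL> CREATE TABLE customers_h
     ( cust_id   NUMBER PRIMARY KEY,
       cust_name VARCHAR2(100) )
     PARTITION BY HASH (cust_id) PARTITIONS 16;

SQL> CREATE TABLE sales_h
     ( sale_id NUMBER,
       cust_id NUMBER NOT NULL,
       amount  NUMBER )
     PARTITION BY HASH (cust_id) PARTITIONS 16;

     -- each parallel execution server joins one pair of matching hash partitions
SQL> SELECT /*+ PARALLEL(c,4) PARALLEL(s,4) */ c.cust_name, SUM(s.amount)
     FROM customers_h c JOIN sales_h s ON s.cust_id = c.cust_id
     GROUP BY c.cust_name;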


(Q) What are the two most common types of indexes in Data Warehouse environments?

  • B-tree and bitmap indexes
  • Both can be created as local indexes on a partitioned table, inheriting the table's partitioning strategy
  • Local nonprefixed indexes can improve performance because many index partitions can be scanned in parallel by range queries on the index key.
SQL> create index cust_last_name_ix ON customers(last_name) LOCAL PARALLEL NOLOGGING;


(Q) How can bitmap indexes improve performance in a Data Warehouse environment?

  • Bitmap indexes use a very efficient storage mechanism for low cardinality columns.
  • Fact table foreign key columns are ideal candidates for bitmap indexes, since they generally have few distinct values relative to the total number of rows in the fact table (dimension tables should have far fewer rows than the fact table). An example is sketched below.
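
A hedged example (index name hypothetical; sales is the partitioned fact table used throughout this post):

-- channel_id has few distinct values relative to the number of rows in sales
SQL> CREATE BITMAP INDEX sales_channel_bix ON sales(channel_id) LOCAL NOLOGGING;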


(Q) What is Partition Exchange Load (PEL)? When is it used? What benefits does it provide?

  • Oracle strongly recommends that the larger tables or fact tables in a data warehouse are partitioned.
  • With a partitioned table, you can load data quickly and easily, with minimal impact on business users, by using the EXCHANGE PARTITION command.
  • With the exchange partition command you can swap the data in a non-partitioned table into a particular partition in your partitioned table.
  • NO physical data movement occurs with the exchange partition command.
  • Instead the data dictionary is updated to exchange a pointer from the partition to the table and vice versa.
  • Because there is no physical movement of data, an exchange does not generate redo and undo, making it a VERY fast operation.
  • You can convert by exchanging their data segments:
    • (a) a partition (or subpartition) into a nonpartitioned table and
    • (b) a nonpartitioned table into a partition (or subpartition) of a partitioned table
  • In DWH environments, exchanging partitions facilitates high-speed data loading of new, incremental data into an existing partitioned table.
  • OLTP and DWH environments can also benefit from exchanging old data partitions out of a partitioned table.
  • In the latter case, the data is purged from the partitioned table without being deleted and can be archived separately afterwards.

(1) Create a separate, non-partitioned table with exactly the same structure as a single partition of the existing table (including indexes and constraints)
(2) Load the data into the new table, build indexes, and enable constraints. This does not impact current users of the existing table
(3) Exchange the existing table partition with the newly created table
i.e. (a) create new partition on existing table 
 SQL> Alter table sales add partition p_sales_jun_2007 
                                                     values less than (to_date('01-JUL-2007', 'DD-MON-YYYY'));

(b) create temp table that looks like existing partition 
 SQL> create table sales_jun_2007 COMPRESS FOR OLTP as select * from sales where 1=0;

(c) Populate table and create indexes and constraints 
 SQL> create bitmap index time_id_jun_2007_bix ON sales_jun_2007(time_id) nologging;
 SQL> create bitmap index cust_id_jun_2007_bix ON sales_jun_2007(cust_id) nologging;
 SQL> create bitmap index prod_id_jun_2007_bix ON sales_jun_2007(prod_id) nologging;
 SQL> create bitmap index promo_id_jun_2007_bix ON sales_jun_2007(promo_id) nologging;
 SQL> create bitmap index channel_id_jun_2007_bix ON sales_jun_2007(channel_id) nologging;
 
 SQL> ALTER TABLE sales_jun_2007 ADD CONSTRAINT prod_id_fk 
                                            FOREIGN KEY (prod_id) REFERENCES products(prod_id);
 SQL> ALTER TABLE sales_jun_2007 ADD CONSTRAINT cust_id_fk 
                                            FOREIGN KEY (cust_id) REFERENCES customers(cust_id);
 SQL> ALTER TABLE sales_jun_2007 ADD CONSTRAINT promo_id_fk 
                                            FOREIGN KEY (promo_id) REFERENCES promotions(promo_id);
 SQL> ALTER TABLE sales_jun_2007 ADD CONSTRAINT time_id_fk 
                                            FOREIGN KEY (time_id) REFERENCES times(time_id);
 SQL> ALTER TABLE sales_jun_2007 ADD CONSTRAINT channel_id_fk 
                                            FOREIGN KEY (channel_id) REFERENCES channels(channel_id);

(d) Exchange partition 
 SQL> Alter table sales EXCHANGE PARTITION p_sales_jun_2007 WITH TABLE sales_jun_2007
   including indexes;

Partition Exchange Load (PEL)



Q&As: Data Warehousing and Partitioning (I)




(Q) In which ways does partitioning facilitate administrative operations?

  • Partitioning enables administrative operations to work on subsets of data.
  • You can add, reorganize, or drop a partition with minimal or zero interruption to read-only applications
  • Adding a partition: you can defer segment creation, which is useful when the segment is very large
  • Changing partition granularity: you can split or merge partitions.
  • Swapping partitions: you can add, remove, or swap a large amount of data quickly
  • Partitioning also enhances data access and improves overall application performance:
    • More users can query the system, since each SQL statement is likely to access smaller amounts of data
    • Complex queries run faster, since they require less I/O


(Q) List three advantages of using partitioned objects

(1) Backup and recovery can be performed on a lower level of granularity (at the partition level)
(2) Part of the database objects can be placed in compressed storage, while others remain uncompressed
(3) You can transparently store data on different storage tiers to lower costs


(Q) How can the database take advantage of partitioning when executing a SQL statement?

  • Using parallel execution to speed up queries, DML, and DDL.
  • Individual parallel execution servers can work on their own data sets, identified by partition boundaries


(Q) What is partition pruning?

  • When executing a SQL statement, the Optimizer analyzes FROM and WHERE clauses to eliminate unneeded partitions from the execution plan's access list.
  • For example, in the static pruning example further below, the database scans only four partitions (Pstart=13, Pstop=16) when executing the query. In a non-partitioned table, all data would have to be scanned.


(Q) What is the difference between static pruning and dynamic pruning?

  • Static pruning - occurs at compile-time.
    • example: SQL stmt with WHERE clause with a constant literal on the partition key column
  • Dynamic pruning - occurs at run-time, when the exact partitions to be accessed are not known beforehand
    • example: SQL stmt with an operator or function applied to the partition key in the WHERE clause


(Q) How can you identify when static pruning is used in an execution plan?

  • In the execution plan, the OPERATION column shows PARTITION RANGE SINGLE when a single partition is accessed (PARTITION RANGE ITERATOR when a contiguous range of partitions is accessed), and the Pstart/Pstop columns show literal partition numbers rather than KEY.
  • If OPERATION = PARTITION RANGE ALL, then ALL partitions are being accessed
select * from sales where time_id = to_date('01-jan-2001', 'dd-mon-yyyy');
... Returns 766 rows

explain plan for select * from sales where time_id = to_date('01-jan-2001', 'dd-mon-yyyy');
plan FOR succeeded

select * from table(dbms_xplan.display);

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------
Plan hash value: 3733446669

---------------------------------------------------------------------------------------------------------------------  
| Id | Operation                          | Name           | Rows | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
--------------------------------------------------------------------------------------------------------------------- 
|  0 | SELECT STATEMENT                   |                |  766 | 22980 |    27   (0)| 00:00:01 |       |       |
|  1 |  PARTITION RANGE SINGLE            |                |  766 | 22980 |    27   (0)| 00:00:01 |    17 |    17 |
|  2 |   TABLE ACCESS BY LOCAL INDEX ROWID| SALES          |  766 | 22980 |    27   (0)| 00:00:01 |    17 |    17 |
|  3 |    BITMAP CONVERSION TO ROWIDS     |                |      |       |            |          |       |       |
|* 4 |     BITMAP INDEX SINGLE VALUE      | SALES_TIME_BIX |      |       |            |          |    17 |    17 |
--------------------------------------------------------------------------------------------------------------------- 

Predicate Information (identified by operation id):                
----------------------------------------
   4 - access("TIME_ID"=TO_DATE(' 2001-01-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))      

 16 rows selected 


(Q) What types of SQL statements are candidates for partition pruning?

  • On Range or List partitioned columns: When you use range, LIKE, equality, and IN-list predicates
  • On Hash-partitioned columns: When you use equality and IN-list predicates


(Q) How can datatype conversions affect the type of partition pruning that may be used?

  • Datatype conversions typically lead to dynamic pruning when static pruning would otherwise have been possible
  • Static pruning provides better performance than dynamic pruning
  • example: Proper use of TO_DATE function guarantees that the database can uniquely determine the date value and thus the partition that should be accessed
Compare the following queries:
SELECT SUM(amount_sold) total_revenue FROM sh.sales
(clause 1)   WHERE time_id between '01-JAN-00' and '31-DEC-00' ;

(clause 2)   WHERE time_id between '01-JAN-2000' and '31-DEC-2000' ;

(clause 3)   WHERE time_id between to_date('01-JAN-2000', 'dd-MON-yyyy') and 
                                   to_date('31-DEC-2000', 'dd-MON-yyyy');

(Clause 1:) 
Leads to DYNAMIC pruning: the date literals must be implicitly converted, and they do not match NLS_DATE_FORMAT, so the partitions cannot be determined at compile time (note Pstart/Pstop = KEY below)

SQL> explain plan for 
 SELECT SUM(amount_sold) total_revenue 
 FROM sales
 WHERE time_id between '01-JAN-00' and '31-DEC-00';

plan FOR succeeded.

SQL> select * from table(dbms_xplan.display);

PLAN_TABLE_OUTPUT
-----------------------------------------------------------------------------------------------------
Plan hash value: 3772476237   

---------------------------------------------------------------------------------------------------- 
| Id  | Operation                  | Name  | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop | 
---------------------------------------------------------------------------------------------------- 
|   0 | SELECT STATEMENT           |       |     1 |    13 |   592  (19)| 00:00:08 |       |       | 
|   1 |  SORT AGGREGATE            |       |     1 |    13 |            |          |       |       | 
|*  2 |   FILTER                   |       |       |       |            |          |       |       | 
|   3 |    PARTITION RANGE ITERATOR|       |   230K|  2932K|   592  (19)| 00:00:08 |   KEY |   KEY | 
|*  4 |     TABLE ACCESS FULL      | SALES |   230K|  2932K|   592  (19)| 00:00:08 |   KEY |   KEY | 
----------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
                                                  
   2 - filter(TO_DATE('01-JAN-00')<=TO_DATE('31-DEC-00'))
   4 - filter("TIME_ID">='01-JAN-00' AND "TIME_ID"<='31-DEC-00')

 17 rows selected 


(Clause 2:) 
Uses STATIC pruning: although the date literals are implicitly converted, they DO match NLS_DATE_FORMAT, so the partitions are known at compile time (Pstart=13, Pstop=16)

SQL> explain plan for 
 SELECT SUM(amount_sold) total_revenue 
 FROM sales
 WHERE time_id between '01-JAN-2000' and '31-DEC-2000';

plan FOR succeeded.

SQL> select * from table(dbms_xplan.display);
PLAN_TABLE_OUTPUT 
-------------------------------------------------------------------------------------------------
Plan hash value: 1500327972 

--------------------------------------------------------------------------------------------------- 
| Id  | Operation                 | Name  | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop | 
--------------------------------------------------------------------------------------------------- 
|   0 | SELECT STATEMENT          |       |     1 |    13 |   136  (11)| 00:00:02 |       |       | 
|   1 |  SORT AGGREGATE           |       |     1 |    13 |            |          |       |       | 
|   2 |   PARTITION RANGE ITERATOR|       |   230K|  2932K|   136  (11)| 00:00:02 |    13 |    16 | 
|*  3 |    TABLE ACCESS FULL      | SALES |   230K|  2932K|   136  (11)| 00:00:02 |    13 |    16 | 
--------------------------------------------------------------------------------------------------- 

Predicate Information (identified by operation id):  
---------------------------------------------------  

   3 - filter("TIME_ID"<=TO_DATE(' 2000-12-31 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))

 15 rows selected 

(Clause 3:) 
Uses STATIC pruning: the dates are explicitly converted with TO_DATE, which guarantees that the database can uniquely determine the date values, and hence the partitions, at compile time

SQL> Explain plan for
SELECT SUM(amount_sold) total_revenue 
FROM sales
   WHERE time_id between to_date('01-JAN-2000', 'dd-MON-yyyy') and 
                         to_date('31-DEC-2000', 'dd-MON-yyyy');

plan FOR succeeded.

SQL> select * from table(dbms_xplan.display);
PLAN_TABLE_OUTPUT 
-------------------------------------------
Plan hash value: 1500327972

---------------------------------------------------------------------------------------------------
| Id  | Operation                 | Name  | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
---------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT          |       |     1 |    13 |   136  (11)| 00:00:02 |       |       |
|   1 |  SORT AGGREGATE           |       |     1 |    13 |            |          |       |       |
|   2 |   PARTITION RANGE ITERATOR|       |   230K|  2932K|   136  (11)| 00:00:02 |    13 |    16 |
|*  3 |    TABLE ACCESS FULL      | SALES |   230K|  2932K|   136  (11)| 00:00:02 |    13 |    16 |
---------------------------------------------------------------------------------------------------
                                                                                                  
Predicate Information (identified by operation id): 
--------------------------------------------------- 

   3 - filter("TIME_ID"<=TO_DATE(' 2000-12-31 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))

 15 rows selected 

Q&As: Data Guard, Role Transitions (I)



(Q) Which 6 steps must be performed to prepare for a role transition?


(1) Verify that each database is properly configured for the role that it is about to assume

  • Check that necessary initialization parameters are properly configured in each database
    • FAL_SERVER, DB_FILE_NAME_CONVERT, LOG_FILE_NAME_CONVERT
    • LOG_ARCHIVE_DEST_n and LOG_ARCHIVE_DEST_STATE_n
  • Check that ONLINE REDO LOGs and STANDBY REDO LOGs exist and are properly configured on all databases
SQL> select status, type, member, group#
  2* from v$logfile;

STATUS TYPE MEMBER         GROUP#
------- ------- ------------------------------------------------------------ ----------
 ONLINE /u01/app/oracle/oradata/fresno/redo03.log         3
 ONLINE /u01/app/oracle/oradata/fresno/redo02.log         2
 ONLINE /u01/app/oracle/oradata/fresno/redo01.log         1
 STANDBY /u01/app/oracle/oradata/fresno/standby_redo01.log        4
 STANDBY /u01/app/oracle/oradata/fresno/standby_redo02.log        5
 STANDBY /u01/app/oracle/oradata/fresno/standby_redo03.log        6
 STANDBY /u01/app/oracle/oradata/fresno/standby_redo04.log        7

7 rows selected.


(2) Verify that there are no Redo Transport errors or Redo Gaps at the standby database

  • For LOG_ARCHIVE_DEST_2, STATUS should be VALID and GAP_STATUS should be NO GAP
(On the Physical Standby:)
SQL> select dest_name, status, type, gap_status
  2  from v$archive_dest_status;

DEST_NAME            STATUS    TYPE            GAP_STATUS
------------------- --------- --------------- ----------
LOG_ARCHIVE_DEST_1   VALID     LOCAL
LOG_ARCHIVE_DEST_2   VALID     UNKNOWN         NO GAP
...
STANDBY_ARCHIVE_DEST VALID     UNKNOWN         NO GAP

(On the primary database)
SQL> select dest_name, status, type, gap_status
  2* from v$archive_dest_status;

DEST_NAME            STATUS    TYPE           GAP_STATUS
-------------------- --------- -------------- ------------------------
LOG_ARCHIVE_DEST_1   VALID     LOCAL
LOG_ARCHIVE_DEST_2   VALID     PHYSICAL       NO GAP
LOG_ARCHIVE_DEST_3   INACTIVE  LOCAL


(3) Verify that temporary files exist on the Standby that match the temporary files on the primary database
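
One way to compare them (a sketch): run the query below on the primary and on the standby and check that the results match.

SQL> select tablespace_name, file_name, bytes
  2  from dba_temp_files
  3  order by tablespace_name, file_name;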



(4) Remove any delay in applying redo that may be in effect on the standby database that will become the new primary database

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE NODELAY DISCONNECT FROM SESSION;


(5) Before performing a switchover from an Oracle RAC primary database to a physical standby, shut down all but one primary db instance



(6) If the physical standby that will become primary on the switchover is in real-time query mode, consider bringing all instances of the standby database to the mounted but not open state. This will allow:

  • (a) the fastest possible role transition, and
  • (b) to cleanly terminate any user sessions connected to the physical standby prior to the role transition
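
A sketch of the commands involved, run on each real-time query standby instance:

SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP MOUNT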


Q&As: Fast Recovery Area (III)



(Q) What types of database files can be stored in the FAST RECOVERY AREA?

  • control file, online redo logs, archived redo logs, flashback logs, and RMAN backups


(Q) What two types of files, with respect to their use by the database, are kept in the FAST RECOVERY AREA?

  • Permanent files – Active files used by the database instance (control file, online redo logs)
  • Transient files – files the database can eventually delete, such as archived redo logs and backups that become obsolete under the retention policy



(Q) What happens if the instance cannot write to a multiplexed copy of the control file stored in the fast recovery area?

  • The instance fails.
  • The failure occurs EVEN if multiplexed copies of the control file are accessible outside the recovery area.


(Q) What is the difference between FLASH RECOVERY AREA and FAST RECOVERY AREA?

  • Just the name: the FLASH RECOVERY AREA was renamed FAST RECOVERY AREA in 11g Release 2


(Q) When configuring the FAST RECOVERY AREA, what happens if you specify DB_RECOVERY_FILE_DEST but DO NOT specify DB_RECOVERY_FILE_DEST_SIZE?

  • Specifying DB_RECOVERY_FILE_DEST without specifying DB_RECOVERY_FILE_DEST_SIZE is not allowed.
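
A sketch of configuring both parameters (the location is illustrative; an ASM disk group such as '+FRA' also works). Note that the size is set first:

SQL> ALTER SYSTEM SET db_recovery_file_dest_size = 10G SCOPE=BOTH;
SQL> ALTER SYSTEM SET db_recovery_file_dest = '/u01/app/oracle/flash_recovery_area' SCOPE=BOTH;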





Q&As: Hot Backups


(Q) Which statements about Open (hot/online) database backups are correct?

  • ( ) To perform HOT backups, the database MUST be in ARCHIVELOG mode
  • ( ) To perform a user-managed HOT backup, you MUST use BEGIN Backup.... END Backup.
  • ( ) You CAN perform closed database backups using RMAN.
  • ( ) You CAN run a Data Pump Export to perform a CONSISTENT LOGICAL backup of the database.
  • ( ) INCREMENTAL backups are supported, using RMAN.
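
A minimal sketch of the user-managed HOT backup mentioned above (paths illustrative):

SQL> ALTER DATABASE BEGIN BACKUP;
 $ cp /u01/app/oracle/oradata/orcl/*.dbf /backup/orcl/      # OS-level copy of the data files
SQL> ALTER DATABASE END BACKUP;
SQL> ALTER SYSTEM ARCHIVE LOG CURRENT;                      -- archive the redo covering the backup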

Q&As: Fast Recovery Area (I)



(Q) What is a Fast Recovery Area? What does it contain?

  • An Oracle-managed disk area that can be used to hold RMAN disk backups, control file autobackups, and archived redo logs
  • Files here are maintained by Oracle and use Oracle Managed Files (OMF) naming
  • Disk location in which the database can store and manage files related to backup and recovery. It contains:
  • (a) multiplexed copies of current control file and online redo logs, (b) archived redo logs (c) Foreign archived redo logs (d) flashback logs (e) image copies of datafiles and control files (f) backup pieces


(Q) What does Oracle recommend about the location of the Fast Recovery Area?

  • Should be on a separate disk from the database area (database files, control file, redo logs)
  • DB_RECOVERY_FILE_DEST should be different than DB_CREATE_FILE_DEST or any DB_CREATE_ONLINE_LOG_DEST_n



(Q) How can you find out the size of the Fast Recovery Area?

SQL> show parameter db_recovery

NAME        TYPE  VALUE
---------------------------- ----------- ------------------------------
db_recovery_file_dest      string  /u01/app/oracle/flash_recovery_area
db_recovery_file_dest_size   big integer 4977M


(Q) What is the default size of an online log created in the Fast Recovery Area?

  • 100 MB


(Q) Which statements about Fast Recovery Area are correct?

  • ( ) Fast Recovery Area automates management of backup-related files.
  • ( ) It minimizes the need to manually manage disk space for backup-related files
  • ( ) It balances the use of space among the different types of files
  • ( ) When you create a Fast recovery area:
  • (a) you choose a location on disk and set an upper bound for storage space. (b) you set a backup retention policy
  • ( ) RMAN retains the data file backups required to satisfy the current retention policy AND any archived redo logs required for complete recovery of those data file backups

Q&As: Cold database backups

(Q) Which statements about Closed (cold/offline) database backups are correct?

  • ( ) You CAN perform closed database backups using RMAN.
  • ( ) To perform closed db backups with RMAN, the instance MUST be MOUNTED
  • ( ) You CAN perform user-managed closed database backups (OFFLINE backups), following the steps below:
    • Run shell script to collect location of all datafiles, control files, redo log files, and parameter files
    • If using server parameter file, run command to backup spfile
    • Shut down the database with NORMAL or IMMEDIATE
    • Run OS script to copy all datafiles, control files, redo logs and parameter files to backup area
    • Startup the database
  • ( ) You CANNOT run a Data Pump Export on a closed database.
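
For comparison, a sketch of an RMAN closed (consistent) backup, taken with the instance mounted:

 $ rman
RMAN> connect target /
RMAN> SHUTDOWN IMMEDIATE;
RMAN> STARTUP MOUNT;
RMAN> BACKUP DATABASE;
RMAN> ALTER DATABASE OPEN;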

Q&As: Block Media Recovery


(Q) What is Block Media Recovery?

Recovery of specified blocks within a data file
RMAN> RECOVER ... BLOCK
Leaves the affected datafiles ONLINE and recovers only the damaged/corrupted data blocks.


$ rman
RMAN> connect target /
connected to target database: ORCL11G (DBID=123456789)

RMAN> recover datafile 2 block 13;
...

     (optionally)
RMAN> recover … 
        From backupset …
        Exclude flashback log

     (To recover ALL blocks logged in V$DATABASE_BLOCK_CORRUPTION)
RMAN> recover corruption list;

Q&As: Fast Recovery Area (II)


(Q) What initialization parameters are used to configure a Fast Recovery Area?

  • DB_RECOVERY_FILE_DEST - specifies default location for fast recovery area.
  • DB_RECOVERY_FILE_DEST_SIZE - specifies (in bytes) the hard limit on the total space to be used by target database recovery files created in the fast recovery area
  • (optional) DB_FLASHBACK_RETENTION_TARGET -Specifies the upper limit (minutes) on how far back in time the database may be flashed back.


(Q) Can you configure the Fast Recovery Area on an ASM disk group?

  • Yes


(Q) How large should you configure the Fast Recovery Area?


  • At an ABSOLUTE MINIMUM, it should be large enough to contain the archived redo logs not yet on tape.
  • Scenario:
    • If you use incrementally updated backups and set the backup retention policy to REDUNDANCY 1, the fast recovery area must be able to hold an image copy of the database, the most recent incremental backup, and any archived logs not yet backed up.
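
A sketch of an incrementally updated backup under that policy (the tag name is arbitrary); each run rolls the previous level 1 changes into the image copy, so only one copy plus one incremental needs to fit in the fast recovery area:

RMAN> CONFIGURE RETENTION POLICY TO REDUNDANCY 1;
RMAN> RUN {
        RECOVER COPY OF DATABASE WITH TAG 'incr_upd';
        BACKUP INCREMENTAL LEVEL 1 FOR RECOVER OF COPY WITH TAG 'incr_upd' DATABASE;
      }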



Q&As: Backup with Recovery Manager



(Q) Which statements about using RMAN to perform backups are correct?


  • ( ) You CAN perform incremental backups with RMAN
  • ( ) RMAN detects corrupt blocks and logs them in V$DATABASE_BLOCK_CORRUPTION
  • ( ) RMAN automatically establishes the names and locations of all files to be backed up
  • ( ) Backups are recorded in the CONTROL FILE, which is the main repository of RMAN metadata.
  • ( ) Optionally, you CAN store backup metadata in a recovery catalog, which should be a schema in a database different from the one backed up.
  • ( ) The primary storage for RMAN repository information for a database is always in the control file of the database. 
  • ( ) RMAN supports proxy copy, a feature that allows a media manager to manage completely the transfer of data between disk and backup media
  • ( ) RMAN backs up the server parameter file (SPFILE)
  • ( ) RMAN DOES NOT back up password files or networking files.
  • ( ) RMAN supports a platform-independent language for backups
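
A minimal example combining a few of these points (control file autobackup, then a whole-database backup including archived redo logs):

RMAN> CONFIGURE CONTROLFILE AUTOBACKUP ON;
RMAN> BACKUP DATABASE PLUS ARCHIVELOG;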

Q&As: Incremental Backup



(Q) What are the characteristics of Incremental Backups?


  • Store only blocks changed since a previous incremental (Level 0 or Level 1) backup
  • Provide more compact backups and faster recovery
  • Fewer archived redo logs are applied during media recovery
  • If you enable block change tracking, RMAN does not scan entire data files during incremental backups; only changed blocks are read (see the sketch after this list)
  • You can take a Level 0 or Level 1 Incremental Backup
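
A sketch of enabling block change tracking (the file path is illustrative):

SQL> ALTER DATABASE ENABLE BLOCK CHANGE TRACKING
  2  USING FILE '/u01/app/oracle/oradata/orcl/bct.chg';

SQL> SELECT status, filename FROM v$block_change_tracking;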

$ rman
RMAN> CONNECT TARGET /
RMAN> BACKUP
       INCREMENTAL LEVEL 1 CUMULATIVE
       SKIP INACCESSIBLE
       DATABASE;

The example above:
  • Backs up all blocks changed since the most recent LEVEL 0 incremental backup.
  • If no LEVEL 0 backup exists, RMAN makes a LEVEL 0 automatically.
  • Inaccessible files are skipped

Q&As: Parallelism and Advanced Compression




(Q) How can you manually set the degree of parallelism at object level?

  • ALTER TABLE sales PARALLEL 8;
  • You can set a fixed DOP at a table or index level


(Q) Which of the operations below can make use of parallel execution?

(1) When accessing objects: table scans, index fast full scans, partitioned index range scans
(2) Joins: nested loops, sort merge, hash, star transformations
(3) DDL statements: CTAS, Create Index, Rebuild Index, Rebuild Index Partition, Move/Split/Coalesce Partition
(4) DML statements
(5) Parallel Query
(6) Other SQL operations: group by, not in, select distinct, union, union all, cube, and rollup, aggregate and table functions
(7) SQL*Loader, i.e. $ sqlldr CONTROL=load1.ctl DIRECT=true PARALLEL=true
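
A short sketch combining some of the operations above (sales_copy is a hypothetical table); note that parallel DML must be enabled at the session level:

SQL> CREATE TABLE sales_copy PARALLEL 4 NOLOGGING
     AS SELECT * FROM sales;

SQL> ALTER SESSION ENABLE PARALLEL DML;
SQL> UPDATE /*+ PARALLEL(s, 4) */ sales_copy s SET amount_sold = amount_sold * 1.1;
SQL> COMMIT;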


(Q) For which types of objects can parallel execution NOT be used?

Parallel DDL cannot be used on tables with object or LOB columns


(Q) How can you gather I/O Calibration statistics? How often should it be done?

  • Use DBMS_RESOURCE_MANAGER.CALIBRATE_IO procedure
  • I/O calibration is a one-time action if the physical hardware does not change.
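
A minimal sketch of running the calibration (num_physical_disks and max_latency are assumptions to adjust for your storage; the procedure requires TIMED_STATISTICS=TRUE and asynchronous I/O):

SET SERVEROUTPUT ON
DECLARE
  l_max_iops PLS_INTEGER;
  l_max_mbps PLS_INTEGER;
  l_latency  PLS_INTEGER;
BEGIN
  DBMS_RESOURCE_MANAGER.CALIBRATE_IO(
    num_physical_disks => 4,    -- assumption: number of spindles/LUNs backing the database
    max_latency        => 10,   -- assumption: tolerated I/O latency in milliseconds
    max_iops           => l_max_iops,
    max_mbps           => l_max_mbps,
    actual_latency     => l_latency);
  DBMS_OUTPUT.PUT_LINE('max_iops='||l_max_iops||' max_mbps='||l_max_mbps||
                       ' actual_latency='||l_latency);
END;
/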

Advanced Compression


(Q) What is Advanced Compression?

Introduced in 11g, includes compression for
  • structured data (numbers, chars)
  • Unstructured data (documents, images, etc)
  • backups (RMAN and Data Pump) and
  • Network transport (redo log transport during Data Guard gap resolution)



(Q) What are the benefits of Advanced Compression?

(a) storage reduction – compression of all types
(b) performance improvement – compressed blocks result in higher I/O throughput
(c) Memory efficiency – Oracle keeps data compressed in memory
(d) backups – enhanced compression capabilities
(e) Data Guard – allows faster synchronization of databases during gap resolution process.



(Q) What improvement does Advanced Compression bring over the table compression feature introduced in Oracle 9i?

With the original table compression feature, data could be compressed ONLY during bulk load operations.
With Advanced Compression, data is also compressed during INSERTs and UPDATEs. It also adds compression and deduplication of SecureFiles.


(Q) Does table data in compressed tables get decompressed before it is read?

No. Oracle reads directly from compressed blocks in memory.


(Q) What features are included in the Advanced Compression option?

  • OLTP table compression – improved query performance with minimal write perf overhead
  • SecureFiles – SecureFiles compression for any unstructured content. Deduplication to reduce redundancy
  • RMAN – Multiple backup compression levels (trading backup speed against compression ratio)
  • Data Pump Compression – Exports can be compressed
  • Data Guard – Can compress redo data (reduced network traffic, faster gap resolution)


(Q) What types of data compression can be done with RMAN (using Advanced Compression Option)

  • HIGH – Good for backups over slower networks
  • MEDIUM – Recommended for most environments. (about the same as regular compression)
  • LOW – Least effect on backup throughput


(Q) How to enable Advanced Compression option?

  • Set parameter enable_option_advanced_compression = TRUE
  • With Advanced compression option enabled, you can:
    • RMAN> CONFIGURE COMPRESSION ALGORITHM 'HIGH'|'MEDIUM'|'LOW';
  • V$RMAN_COMPRESSION_ALGORITHM describes supported algorithms

(Q) How can the various features under Advanced Compression be turned on?

For Table Compression - methods of table compression in 11gR2:
Basic compression – direct-path load only
  • i.e. CREATE/ALTER TABLE … COMPRESS [BASIC]
OLTP compression – compresses data during all DML operations
  • i.e. CREATE/ALTER TABLE … COMPRESS FOR OLTP
Warehouse compression (Hybrid Columnar Compression) – COMPRESS FOR QUERY [LOW|HIGH]
Online archival compression (Hybrid Columnar Compression) – COMPRESS FOR ARCHIVE [LOW|HIGH]
For SecureFiles -
i.e. CREATE TABLE t1 (a CLOB) LOB(a) STORE AS SECUREFILE( COMPRESS LOW [MEDIUM|HIGH] DEDUPLICATE [KEEP_DUPLICATES] )
For RMAN -
RMAN> BACKUP AS COMPRESSED BACKUPSET DATABASE PLUS ARCHIVELOG;
or
RMAN> CONFIGURE DEVICE TYPE [DISK | TAPE] BACKUP TYPE TO COMPRESSED BACKUPSET;
For Data Pump -
COMPRESSION = [ALL|DATA_ONLY|METADATA_ONLY|NONE]
  • ALL and DATA_ONLY requires ACO enabled.
  • i.e expdp hr DIRECTORY=dpump_dir1 DUMPFILE=hr_comp.dmp COMPRESSION=DATA_ONLY

Q&As: Parallelism




(Q) What types of statements and tasks can be improved through parallel execution?

  • Queries requiring: large table scans, joins, or partitioned index scans
  • Creation of large indexes
  • Creation of large tables (and Materialized Views)
  • Bulk Inserts, Updates, Merges and Deletes
  • Scanning large objects (LOBs)


(Q) What are some of the characteristics of systems that can benefit from parallel execution?

  • Have SMPs, clusters, or MPPs
  • Sufficient IO bandwidth
  • Underutilized CPUs (<30%)
  • Enough memory free


(Q) When can OLTP systems mostly benefit from parallel execution?

  • OLTP systems may benefit during batch processing and schema maintenance operations.


(Q) What are three key parameters controlling automatic parallel execution?

  • PARALLEL_DEGREE_LIMIT
  • PARALLEL_DEGREE_POLICY = { MANUAL | LIMITED | AUTO } <== MANUAL => disabled, AUTO => Auto DOP on
  • PARALLEL_MIN_TIME_THRESHOLD = AUTO (default, about 10sec)


(Q) What is Intra-parallelism and Inter-parallelism?

  • Intra-operation parallelism: parallelism of an individual operation
  • Inter-operation parallelism: parallelism between operations in a data flow tree (in an execution plan)


(Q) What is the default degree of parallelism assumed by an Oracle Server?

  • By Default, PARALLEL_DEGREE_POLICY = Manual (NO PARALLELISM)
  • By default, the system only uses parallel execution when a parallel degree has been explicitly set on an object or if a parallel hint is specified in the SQL statement
  • If PARALLEL_DEGREE_POLICY = AUTO, then
    • Single Instance: DOP = PARALLEL_THREADS_PER_CPU X CPU_COUNT
    • RAC: DOP = PARALLEL_THREADS_PER_CPU X CPU_COUNT x INSTANCE_COUNT
    • important: in a multiuser environment, default parallelism is not recommended


(Q) What is the function of the PARALLEL_DEGREE_POLICY parameter? What values does it take?

  • PARALLEL_DEGREE_POLICY = { MANUAL | LIMITED | AUTO }
  • Enable/Disable (a) automatic degree of parallelism, (b) statement queueing, and (c) in-memory parallel execution
  • MANUAL => Default. Revert to behavior prior to 11g. No parallelism automatically enabled
  • LIMITED => Enables automatic degree of parallelism for some stmts only.
  • AUTO => all three enabled.


(Q) How does the optimizer determine the degree of parallelism for a statement?

  • Based on the resource requirements of the statement.
  • Limit on parallelization is set by
    • (a) PARALLEL_DEGREE_LIMIT (default = PARALLEL_THREADS_PER_CPU * CPU_COUNT * num instances)
    • (b) PARALLEL_MIN_TIME_THRESHOLD (default = 10 sec)

(Q) Which parameters affect the automatic degree of parallelism adopted in the system?

  • PARALLEL_DEGREE_POLICY = { MANUAL | LIMITED | AUTO }
  • PARALLEL_DEGREE_LIMIT
  • PARALLEL_MIN_TIME_THRESHOLD
==> The default value of PARALLEL_DEGREE_POLICY is MANUAL, i.e. no automatic parallelism

SQL> show parameter parallel

NAME                           TYPE        VALUE
------------------------------ ----------- -------------
..
parallel_degree_limit           string      CPU
parallel_degree_policy      string      MANUAL    
...
parallel_min_time_threshold    string      AUTO
...

SQL> alter system set parallel_degree_policy=auto;
System altered.

SQL> conn sh/sh
Connected.

SQL> select degree from user_tables where table_name ='SALES';

DEGREE
----------
  1

SQL> alter table sales parallel 4;
Table altered.

SQL> select degree from user_tables where table_name ='SALES';

DEGREE
----------
  4

SQL> explain plan for select * from sales;
Explained.

SQL> select * from table(dbms_xplan.display);

PLAN_TABLE_OUTPUT
---------------------------------------------------------------------------------------------
Plan hash value: 1550251865

---------------------------------------------------------------------------------------------
| Id  | Operation     | Name  | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
---------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |       | 918K|  25M| 526   (9)| 00:00:07 |     |     |
|   1 |  PARTITION RANGE ALL|       | 918K|  25M| 526   (9)| 00:00:07 |   1 |  28 |
|   2 |   TABLE ACCESS FULL | SALES | 918K|  25M| 526   (9)| 00:00:07 |   1 |  28 |
---------------------------------------------------------------------------------------------
Note
PLAN_TABLE_OUTPUT
---------------------------------------------------------------------------------------------
   - automatic DOP: Computed Degree of Parallelism is 1 because of parallel threshold
13 rows selected.

==> Note here that no parallel execution took place. 
    The estimated serial execution time (7sec) is still below the 10sec threshold used when the parallel_min_time_threshold is set to AUTO.  


==> Now, let's change the threshold to 1 sec:

SQL> conn / as sysdba
Connected.
SQL> show parameter parallel

NAME                           TYPE        VALUE
------------------------------ ----------- -------------
..
parallel_degree_limit           string      CPU
parallel_degree_policy      string      AUTO    
...
parallel_min_time_threshold    string      AUTO
...

SQL> alter system set parallel_min_time_threshold=1;
System altered.

SQL> show parameter parallel_min
NAME                             TYPE        VALUE
-------------------------------- ----------- -------
parallel_min_time_threshold      string      1

SQL> Conn sh/sh
Connected
SQL> explain plan for select * from sales;
Explained.

SQL> select * from table(dbms_xplan.display);

PLAN_TABLE_OUTPUT
---------------------------------------------------------------------------------------------------------------------------------
Plan hash value: 3060979429

------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation            | Name     | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |    TQ  |IN-OUT| PQ Distrib |
------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |          |   918K|    25M|   291   (9)| 00:00:04 |       |       |        |      |            |
|   1 |  PX COORDINATOR      |          |       |       |            |          |       |       |        |      |            |
|   2 |   PX SEND QC (RANDOM)| :TQ10000 |   918K|    25M|   291   (9)| 00:00:04 |       |       |  Q1,00 | P->S | QC (RAND)  |
|   3 |    PX BLOCK ITERATOR |          |   918K|    25M|   291   (9)| 00:00:04 |     1 |    28 |  Q1,00 | PCWC |            |
|   4 |     TABLE ACCESS FULL| SALES    |   918K|    25M|   291   (9)| 00:00:04 |     1 |    28 |  Q1,00 | PCWP |            |
------------------------------------------------------------------------------------------------------------------------------

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------
Note
-----
   - automatic DOP: Computed Degree of Parallelism is 2
15 rows selected.


Q&As: Partitioning




(Q) What are the basic data distribution methods and partitioning strategies available in Oracle?

Basic data distribution methods: Range, Hash, and List
(a) Single-Level Partitioning
  • Range partitioning
  • Hash Partitioning
  • List Partitioning
(b) Composite Partitioning
  • Range-Range
  • Range-Hash
  • Range-List
  • List-Range
  • List-Hash
  • List-List
(c) Interval Partitioning
(d) System Partitioning
(e) Reference Partitioning
(f) Virtual column-based partitioning
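
As one illustration of these strategies, a minimal interval partitioning sketch (hypothetical table): partitions for new months are created automatically as rows are inserted:

SQL> CREATE TABLE orders
     ( order_id   NUMBER,
       order_date DATE )
     PARTITION BY RANGE (order_date)
     INTERVAL (NUMTOYMINTERVAL(1,'MONTH'))
     ( PARTITION p0 VALUES LESS THAN (TO_DATE('01-JAN-2010','DD-MON-YYYY')) );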


(Q) Which of the following sentences about partitioning and data compression are true?

( ) Data in a partitioned table can be compressed on a partition-by-partition basis
( ) When you alter a partition to enable compression, compression is applied only to future data to be inserted
( ) When you enable compression on a partition, all data in the partition, including previously inserted data, is compressed
( ) Before adding a compressed partition to a partitioned table, you must either drop or make UNUSABLE all existing bitmap indexes
( ) You DO NOT have to drop or make unusable b-tree indexes in a table before adding a compressed partition
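
A sketch illustrating the last two items (partition and index names hypothetical): the bitmap index is marked UNUSABLE before the table's first compressed partition is added, then rebuilt:

SQL> ALTER INDEX sales_channel_bix UNUSABLE;
SQL> ALTER TABLE sales ADD PARTITION sales_h1_2002
     VALUES LESS THAN (TO_DATE('01-JUL-2002','DD-MON-YYYY')) COMPRESS;
SQL> ALTER TABLE sales MODIFY PARTITION sales_h1_2002 REBUILD UNUSABLE LOCAL INDEXES;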