
SQL Functions (11gR2)





Categories: Single-row F | Aggregate F | Analytic F | Object Reference F | Model F | OLAP F


Single-row functions

  •  Numeric functions  (26, 11gR2)
  • ABS
  • ACOS
  •  ASIN
  • ATAN
  • ATAN2
  • BITAND
  • CEIL
  • COS
  • COSH
  • EXP
  • FLOOR
  • LN
  • LOG
  • MOD
  • NANVL
  • POWER
  • REMAINDER
  • ROUND (number)
  • SIGN
  • SIN
  • SINH
  • SQRT
  • TAN
  • TANH
  • TRUNC (number)
  • WIDTH_BUCKET
  •  Character F returning character values (22, 11gR2)
  • CHR
  • CONCAT
  • INITCAP
  • LOWER
  • LPAD
  • LTRIM
  • NCHR
  • NLS_INITCAP
  • NLS_LOWER
  • NLS_UPPER
  • NLSSORT
  • REGEXP_REPLACE
  • REGEXP_SUBSTR
  • REPLACE
  • RPAD
  • RTRIM
  • SOUNDEX
  • SUBSTR
  • TRANSLATE
  • TREAT
  • TRIM
  • UPPER
  •  Character F returning number values (5, 11gR2)
  • ASCII
  • INSTR
  • LENGTH
  • REGEXP_COUNT
  • REGEXP_INSTR

  •  NLS Character F (3, 11gR2)
  • NLS_CHARSET_DECL_LEN
  • NLS_CHARSET_ID
  • NLS_CHARSET_NAME

  • Datetime F (28, 11gR2)
  •  ADD_MONTHS
  • CURRENT_DATE
  • CURRENT_TIMESTAMP
  • DBTIMEZONE
  • EXTRACT (datetime)
  • FROM_TZ
  • LAST_DAY
  • LOCALTIMESTAMP
  • MONTHS_BETWEEN
  • NEW_TIME
  • NEXT_DAY
  • NUMTODSINTERVAL
  • NUMTOYMINTERVAL
  • ORA_DST_AFFECTED
  • ORA_DST_CONVERT
  • ORA_DST_ERROR
  • ROUND (date)
  • SESSIONTIMEZONE
  • SYS_EXTRACT_UTC
  • SYSDATE
  • SYSTIMESTAMP
  • TO_CHAR (datetime)
  • TO_DSINTERVAL
  • TO_TIMESTAMP
  • TO_TIMESTAMP_TZ
  • TO_YMINTERVAL
  • TRUNC (date)
  • TZ_OFFSET

  • General comparison F (2, 11gR2)
  • GREATEST
  • LEAST
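A minimal illustration of the two comparison functions (the literal values are made up):

```sql
-- GREATEST/LEAST compare a list of expressions of the same datatype
-- and return the largest/smallest one
SELECT GREATEST (10, 2, 33)        AS g,   -- 33
       LEAST ('apple', 'banana')   AS l    -- 'apple'
  FROM dual;
```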

  •  Conversion F (38, 11gR2)
  • ASCIISTR
  • BIN_TO_NUM
  • CAST
  • CHARTOROWID
  • COMPOSE
  • CONVERT
  • DECOMPOSE
  • HEXTORAW
  • NUMTODSINTERVAL
  • NUMTOYMINTERVAL
  • RAWTOHEX
  • RAWTONHEX
  • ROWIDTOCHAR
  • ROWIDTONCHAR
  • SCN_TO_TIMESTAMP
  • TIMESTAMP_TO_SCN
  • TO_BINARY_FLOAT
  • TO_BINARY_DOUBLE
  • TO_BLOB
  • TO_CHAR (character)
  • TO_CHAR (datetime)
  • TO_CHAR (number)
  • TO_CLOB
  • TO_DATE
  • TO_DSINTERVAL
  • TO_LOB
  • TO_MULTI_BYTE
  • TO_NCHAR (character)
  • TO_NCHAR (datetime)
  • TO_NCHAR (number)
  • TO_NCLOB
  • TO_NUMBER
  • TO_SINGLE_BYTE
  • TO_TIMESTAMP
  • TO_TIMESTAMP_TZ
  • TO_YMINTERVAL
  • TRANSLATE ... USING
  • UNISTR

  •  Large Object F (3, 11gR2)
  • BFILENAME
  • EMPTY_BLOB
  • EMPTY_CLOB

  •  Collection F (5, 11gR2)
  • CARDINALITY
  • COLLECT
  • POWERMULTISET
  • POWERMULTISET_BY_CARDINALITY
  • SET

  •  Hierarchical F (1, 11gR2)
  • SYS_CONNECT_BY_PATH
  •  Data Mining F (12, 11gR2)
  • CLUSTER_ID
  • CLUSTER_PROBABILITY
  • CLUSTER_SET
  • FEATURE_ID
  • FEATURE_SET
  • FEATURE_VALUE
  • PREDICTION
  • PREDICTION_BOUNDS
  • PREDICTION_COST
  • PREDICTION_DETAILS
  • PREDICTION_PROBABILITY
  • PREDICTION_SET

  •  XML F (26, 11gR2)

  • Encoding and Decoding F (4, 11gR2)
  • DECODE
  • DUMP
  • ORA_HASH
  • VSIZE

  • NULL-related F (6, 11gR2)
  • COALESCE
  • LNNVL
  • NANVL
  • NULLIF
  • NVL
  • NVL2
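A quick sketch of the most common NULL-related functions (the values are illustrative):

```sql
-- NVL returns the second argument when the first is NULL;
-- COALESCE returns the first non-NULL argument in the list;
-- NULLIF returns NULL when its two arguments are equal
SELECT NVL (NULL, 'default')       AS nvl_result,       -- 'default'
       COALESCE (NULL, NULL, 'x')  AS coalesce_result,  -- 'x'
       NULLIF (1, 1)               AS nullif_result     -- NULL
  FROM dual;
```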

  • Environment and Identifier F (6, 11gR2)
  • SYS_CONTEXT
  • SYS_GUID
  • SYS_TYPEID
  • UID
  • USER
  • USERENV
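The environment functions are typically queried against the USERENV namespace, for example:

```sql
-- SYS_CONTEXT reads session attributes from the built-in USERENV namespace
SELECT SYS_CONTEXT ('USERENV', 'SESSION_USER') AS who,
       SYS_CONTEXT ('USERENV', 'DB_NAME')      AS db,
       USER                                    AS usr   -- same as SESSION_USER here
  FROM dual;
```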


    Aggregate Functions
    (41, 11gR2)
    Operate on groups of rows
    Appear in SELECT, ORDER BY and HAVING

    • AVG
    • COLLECT
    • CORR
    • CORR_* (CORR_K, CORR_S)
    • COUNT
    • COVAR_POP
    • COVAR_SAMP
    • CUME_DIST
    • DENSE_RANK
    • FIRST
    • GROUP_ID
    • GROUPING
    • GROUPING_ID
    • LAST
    • LISTAGG
    • MAX
    • MEDIAN
    • MIN
    • PERCENT_RANK
    • PERCENTILE_CONT
    • PERCENTILE_DISC
    • RANK
    • STATS_BINOMIAL_TEST
    • STATS_CROSSTAB
    • STATS_F_TEST
    • STATS_KS_TEST
    • STATS_MODE
    • STATS_MW_TEST
    • STATS_ONE_WAY_ANOVA
    • STATS_T_TEST *
    • STATS_WSR_TEST
    • STDDEV
    • STDDEV_POP
    • STDDEV_SAMP
    • SUM
    • SYS_XMLAGG
    • VAR_POP
    • VAR_SAMP
    • VARIANCE
    • XMLAGG
    • Linear Regression Functions (9, 11gR2)
    • REGR_SLOPE
    • REGR_INTERCEPT
    • REGR_COUNT
    • REGR_R2
    • REGR_AVGX
    • REGR_AVGY
    • REGR_SXX
    • REGR_SYY
    • REGR_SXY


    Analytic Functions
    (32, 11gR2)
    Commonly used in data warehousing environments
    Analytic functions are the last set of operations performed in a query, followed only by the final ORDER BY clause
    All joins and the WHERE, GROUP BY and HAVING clauses are completed before the analytic functions are processed
    Well suited to computing cumulative, moving, centered, and reporting aggregates

    • AVG *
    • CORR *
    • COUNT *
    • COVAR_POP *
    • COVAR_SAMP *
    • CUME_DIST
    • DENSE_RANK
    • FIRST
    • FIRST_VALUE *
    • LAG
    • LAST
    • LAST_VALUE *
    • LEAD
    • LISTAGG
    • MAX *
    • MIN *
    • NTH_VALUE *
    • NTILE
    • PERCENT_RANK
    • PERCENTILE_CONT
    • PERCENTILE_DISC
    • RANK
    • RATIO_TO_REPORT
    • ROW_NUMBER
    • STDDEV *
    • STDDEV_POP *
    • STDDEV_SAMP *
    • SUM *
    • VAR_POP *
    • VAR_SAMP *
    • VARIANCE *
    • Linear Regression Functions *

    • AVG



    (a) Reporting  each employee's salary and the average salary of the employees within the same department.
    
    SELECT first_name || ' ' || last_name AS emp_name,
           department_id,
           salary,
           AVG (salary) OVER (PARTITION BY department_id) AS avg_dept_sal
      FROM hr.employees;
    
    EMP_NAME            DEPARTMENT_ID   SALARY AVG_DEPT_SAL
    ------------------- ------------- -------- ------------
    Adam Fripp                     50     8200   3475.55556
    Alana Walsh                    50     3100   3475.55556
    Alberto Errazuriz              80    12000   8955.88235
    Alexander Hunold               60     9000         5760
    Alexander Khoo                 30     3100         4150
    Alexis Bull                    50     4100   3475.55556
    Allan McEwen                   80     9000   8955.88235
    Alyssa Hutton                  80     8800   8955.88235
    ...
    


    (a) Lists manager_id, employee's name, hire_date, salary and the average salary of all employees that have the same manager
    
    -- note the AVG is calculated over all rows within each partition.
    -- no windowing clause is present
      SELECT manager_id,
             last_name,
             hire_date,
             salary,
             AVG (salary) OVER (PARTITION BY manager_id) AS c_mavg
        FROM hr.employees
    ORDER BY manager_id, hire_date, salary;
    
    MANAGER_ID LAST_NAME                 HIRE_DATE     SALARY     C_MAVG
    ---------- ------------------------- --------- ---------- ----------
           100 De Haan                   13-JAN-01      17000      11100
           100 Raphaely                  07-DEC-02      11000      11100
           100 Kaufling                  01-MAY-03       7900      11100
           100 Hartstein                 17-FEB-04      13000      11100
           100 Weiss                     18-JUL-04       8000      11100
           ...
           100 Zlotkey                   29-JAN-08      10500      11100
           101 Mavris                    07-JUN-02       6500     8983.2
           ...
           101 Whalen                    17-SEP-03       4400     8983.2
           102 Hunold                    03-JAN-06       9000       9000
           103 Austin                    25-JUN-05       4800       4950
           ...
    
    
    
    (b) Lists employee salary and the average salary of the employees reporting to the same manager who were hired in the range just before through just after the employee.
    
    -- note the query has
    -- 1. a partition clause (partition rows by manager)
    -- 2. a windowing clause (within each partition, AVG considers only the rows immediately preceding and following each employee's row).
    
      SELECT manager_id,
             last_name,
             hire_date,
             salary,
             AVG (
                salary)
             OVER (PARTITION BY manager_id
                   ORDER BY hire_date
                   ROWS BETWEEN 1 PRECEDING AND 1 FOLLOWING)
                AS c_mavg
        FROM hr.employees
    ORDER BY manager_id, hire_date, salary;
    
    MANAGER_ID LAST_NAME                 HIRE_DATE     SALARY     C_MAVG
    ---------- ------------------------- --------- ---------- ----------
           100 De Haan                   13-JAN-01      17000      14000
           100 Raphaely                  07-DEC-02      11000 11966.6667
           ...
           100 Cambrault                 15-OCT-07      11000 7766.66667
           100 Mourgos                   16-NOV-07       5800       9100
           100 Zlotkey                   29-JAN-08      10500       8150
           101 Mavris                    07-JUN-02       6500       8250
           101 Baer                      07-JUN-02      10000 9502.66667
           ...
           101 Whalen                    17-SEP-03       4400       8204
           102 Hunold                    03-JAN-06       9000       9000
           103 Austin                    25-JUN-05       4800       4800
           ...
    




    • FIRST_VALUE and LAST_VALUE
    (a) For each employee, return employee salary and the name of the employee that works in the same department and has the lowest salary 
    
      SELECT department_id,
             last_name,
             salary,
             FIRST_VALUE (last_name)
             OVER (PARTITION BY department_id
                   ORDER BY salary ASC, hire_date
                   ROWS UNBOUNDED PRECEDING)
                AS lowest_sal
        FROM hr.employees
    ORDER BY department_id, last_name;
    
    DEPARTMENT_ID LAST_NAME                     SALARY LOWEST_SAL               
    ------------- ------------------------- ---------- -------------------------
               10 Whalen                          4400 Whalen                   
               20 Fay                             6000 Fay                      
               20 Hartstein                      13000 Fay                      
               30 Baida                           2900 Colmenares               
               30 Colmenares                      2500 Colmenares               
               30 Himuro                          2600 Colmenares               
               30 Khoo                            3100 Colmenares               
               30 Raphaely                       11000 Colmenares               
               30 Tobias                          2800 Colmenares               
               40 Mavris                          6500 Mavris                   
     ...
               80 Zlotkey                        10500 Kumar                    
               90 De Haan                        17000 De Haan                  
               90 King                           24000 De Haan                  
               90 Kochhar                        17000 De Haan                  
              100 Chen                            8200 Popp                     
              100 Faviet                          9000 Popp                     
              100 Greenberg                      12008 Popp                     
              100 Popp                            6900 Popp                     
              100 Sciarra                         7700 Popp                     
              100 Urman                           7800 Popp                     
    ...
    
    
    (b) For each employee of department 50, return salary of employees on the previous and following rows of the record set.
      SELECT department_id,
             last_name,
             salary,
             FIRST_VALUE (salary)
             OVER (ORDER BY salary ASC
                   ROWS BETWEEN 1 PRECEDING AND CURRENT ROW) AS previous_sal,
             LAST_VALUE (salary)
             OVER (ORDER BY salary ASC
                   ROWS BETWEEN CURRENT ROW AND 1 FOLLOWING) AS next_sal
        FROM hr.employees
       WHERE department_id = 50;
    
    DEPARTMENT_ID LAST_NAME                     SALARY PREVIOUS_SAL   NEXT_SAL
    ------------- ------------------------- ---------- ------------ ----------
               50 Olson                           2100         2100       2200
               50 Philtanker                      2200         2100       2200
               50 Markle                          2200         2200       2400
               50 Gee                             2400         2200       2400
               50 Landry                          2400         2400       2500
               50 Patel                           2500         2400       2500
               50 Vargas                          2500         2500       2500
               50 Marlow                          2500         2500       2500
               50 Perkins                         2500         2500       2500
               50 Sullivan                        2500         2500       2600
    ...
    
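The same previous/next salary lookup can be written more directly with LAG and LEAD. A sketch against the same hr.employees table (note that LAG/LEAD default to NULL at the partition edges, so the row's own salary is supplied as the default to match the FIRST_VALUE/LAST_VALUE behavior above):

```sql
SELECT department_id,
       last_name,
       salary,
       LAG  (salary, 1, salary) OVER (ORDER BY salary) AS previous_sal,
       LEAD (salary, 1, salary) OVER (ORDER BY salary) AS next_sal
  FROM hr.employees
 WHERE department_id = 50;
```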







    • RANK
    SELECT department_id,
           last_name,
           salary,
           commission_pct,
           RANK ()
           OVER (PARTITION BY department_id
                 ORDER BY salary DESC, commission_pct DESC)
              "Rank"
      FROM hr.employees
     WHERE department_id = 80;
    


    Object Reference Functions
    • DEREF
    • MAKE_REF
    • REF
    • REFTOHEX
    • VALUE

    Model Functions
    • CV
    • ITERATION_NUMBER
    • PRESENTNNV
    • PRESENTV
    • PREVIOUS

    OLAP Functions
    • CUBE_TABLE

    Data Cartridge Functions
    • DATAOBJ_TO_PARTITION


    User-Defined Functions















































    Q&As: Hot Backups


    (Q) Which statements about Open (hot/online) database backups are correct?

    • ( ) To perform HOT backups, the database MUST be in ARCHIVELOG mode
    • ( ) To perform a user-managed HOT backup, you MUST use BEGIN BACKUP ... END BACKUP.
    • ( ) You CAN perform closed database backups using RMAN.
    • ( ) You CAN run a Data Pump Export to perform a CONSISTENT LOGICAL backup of the database.
    • ( ) INCREMENTAL backups are supported, using RMAN.
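A sketch of the user-managed hot backup sequence referenced above (the tablespace name and file paths are illustrative):

```sql
ALTER TABLESPACE users BEGIN BACKUP;

-- copy the tablespace's datafiles at the OS level while the backup mode is on, e.g.:
-- $ cp /u01/app/oracle/oradata/orcl/users01.dbf /backup/

ALTER TABLESPACE users END BACKUP;

-- archive the redo generated during the copy, so the backup is recoverable
ALTER SYSTEM ARCHIVE LOG CURRENT;
```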

    Q&As: Fast Recovery Area (I)



    (Q) What is a Fast Recovery Area? What does it contain?

    • An Oracle-managed disk space that can be used to hold RMAN disk backups, control file autobackups, and archived redo logs
    • Files stored there are maintained by Oracle and follow the Oracle Managed Files (OMF) naming format
    • It is a disk location in which the database can store and manage files related to backup and recovery. It contains:
    • (a) multiplexed copies of the current control file and online redo logs, (b) archived redo logs, (c) foreign archived redo logs, (d) flashback logs, (e) image copies of datafiles and control files, (f) backup pieces


    (Q) What does Oracle recommend about the location of the Fast Recovery Area?

    • Should be on a separate disk from the database area (database files, control file, redo logs)
    • DB_RECOVERY_FILE_DEST should be different from DB_CREATE_FILE_DEST and from any DB_CREATE_ONLINE_LOG_DEST_n



    (Q) How can you find out the size of the Fast Recovery Area?

    SQL> show parameter db_recovery
    
    NAME                         TYPE        VALUE
    ---------------------------- ----------- -----------------------------------
    db_recovery_file_dest        string      /u01/app/oracle/flash_recovery_area
    db_recovery_file_dest_size   big integer 4977M
    


    (Q) What is the default size of an online log created in the Fast Recovery Area?

    • 100 MB


    (Q) Which statements about Fast Recovery Area are correct?

    • ( ) Fast Recovery Area automates management of backup-related files.
    • ( ) It minimizes the need to manually manage disk space for backup-related files
    • ( ) It balances the use of space among the different types of files
    • ( ) When you create a Fast recovery area:
    • (a) you choose a location on disk and set an upper bound for storage space. (b) you set a backup retention policy
    • ( ) RMAN retains the data file backups required to satisfy the current retention policy AND any archived redo logs required for complete recovery of those data file backups

    Q&As: Cold database backups

    (Q) Which statements about Closed (cold/offline) database backups are correct?

    • ( ) You CAN perform closed database backups using RMAN.
    • ( ) To perform closed db backups with RMAN, the instance MUST be MOUNTED
    • ( ) You CAN perform user-managed closed database backups (OFFLINE backups), following the steps below:
      • Run shell script to collect location of all datafiles, control files, redo log files, and parameter files
      • If using server parameter file, run command to backup spfile
      • Shutdown the database normal or immediate
      • Run OS script to copy all datafiles, control files, redo logs and parameter files to backup area
      • Startup the database
    • ( ) You CANNOT run a Data Pump Export on a closed database.

    Q&As: Block Media Recovery


    (Q) What is Block Media Recovery?

    Recovery of specified blocks within a data file
    RMAN> RECOVER ... BLOCK
    Leaves the affected datafiles ONLINE and recovers only the damaged/corrupted data blocks.


    $ rman
    RMAN> connect target /
    connected to target database: ORCL11G (DBID=123456789)
    
    RMAN> recover datafile 2 block 13;
    ...
    
         (optionally)
    RMAN> recover … 
            From backupset …
            Exclude flashback log
    
         (To recover ALL blocks logged in V$DATABASE_BLOCK_CORRUPTION)
    RMAN> recover corruption list;
    

    Q&As: Fast Recovery Area (II)


    (Q) What initialization parameters are used to configure a Fast Recovery Area?

    • DB_RECOVERY_FILE_DEST - specifies default location for fast recovery area.
    • DB_RECOVERY_FILE_DEST_SIZE - specifies (in bytes) the hard limit on the total space to be used by target database recovery files created in the fast recovery area
    • (optional) DB_FLASHBACK_RETENTION_TARGET - specifies the upper limit (in minutes) on how far back in time the database may be flashed back.
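The parameters above can be set dynamically; a minimal sketch (size, path, and retention values are illustrative). Note that DB_RECOVERY_FILE_DEST_SIZE must be set before DB_RECOVERY_FILE_DEST:

```sql
ALTER SYSTEM SET db_recovery_file_dest_size = 10G SCOPE=BOTH;
ALTER SYSTEM SET db_recovery_file_dest = '/u01/app/oracle/fra' SCOPE=BOTH;

-- optional: flashback window of 1440 minutes (24 hours)
ALTER SYSTEM SET db_flashback_retention_target = 1440 SCOPE=BOTH;
```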


    (Q) Can you configure the Fast Recovery Area on an ASM disk group?

    • Yes


    (Q) How large should you configure the Fast Recovery Area?


    • At an ABSOLUTE MINIMUM, it should be large enough to contain the archived redo logs not yet backed up to tape.
    • Scenario:
      • If you use incrementally updated backups and set the backup retention policy to REDUNDANCY 1, size the fast recovery area to hold a copy of all datafiles, the most recent incremental backup, and any archived logs not yet backed up to tape.



    Q&As: Incremental Backup



    (Q) What are the characteristics of Incremental Backups?


    • Store only blocks changed since a previous incremental (Level 0 or Level 1) backup
    • Provide more compact backups and faster recovery
    • Fewer archived redo logs need to be applied during recovery
    • If you enable block change tracking, RMAN avoids full scans of the datafiles during incremental backups
    • You can take a Level 0 or Level 1 Incremental Backup

    $ rman
    RMAN> CONNECT TARGET /
    RMAN> BACKUP
           INCREMENTAL LEVEL 1 CUMULATIVE
           SKIP INACCESSIBLE
           DATABASE;
    

    The example above:
    • Backs up all blocks changed since the most recent LEVEL 0 incremental backup.
    • If no LEVEL 0 backup exists, RMAN makes a LEVEL 0 automatically.
    • Inaccessible files are skipped

    Oracle Scheduler: Use credentials for a detached job that performs cold backups



    Here are the steps to create a daily detached job that performs a cold backup of an Oracle 11g database.
    This example is a bit more detailed than the one provided in the administration guide.

    Steps:
    (1) Create the shell script that invokes RMAN.
    • This script will be executed by the detached job.
    • The script should be executable by the user who has backup privileges in the OS. (usually oracle).
    (2) create the RMAN script
    (3) Write a PL/SQL block that performs the following steps:
    3.1 create a credential object with username/password of the OS user who should have permission to run the shell script.
    3.2 create a program object, which should use the credential just created to run the shell script.
    3.3 alter the program object setting its attribute to detached.
    3.4 create a job object


    (1) Create the script that invokes RMAN
    • Create a shell script that calls an RMAN script to perform a cold backup.
    • The shell script is located in $ORACLE_BASE/scripts/coldbackup.sh.
    • It must be executable by the user who installed Oracle Database (typically the user oracle).
    #!/bin/sh
    export ORACLE_HOME=/u01/app/oracle/product/11.2.0/db_1
    export ORACLE_SID=orcl
    export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$ORACLE_HOME/lib
    $ORACLE_HOME/bin/rman TARGET / @$ORACLE_HOME/scripts/coldbackup.rman trace /u01/app/oracle/backup/coldbackup.out &
    exit 0
    

    (2)Create the RMAN Script
    • Create an RMAN script that performs the cold backup and then ends the job.
    • At the end, the script should call the Scheduler routine to inform the job completion status
    • The script is located in $ORACLE_BASE/scripts/coldbackup.rman.
    run {
    # Shut down database for backups and put into MOUNT mode
    shutdown immediate
    startup mount
    
    # Perform full database backup
    backup full format "/u01/app/oracle/backup/%d_FULL_%U" (database) ;
    
    # Open database after backup
    alter database open;
    
    # Call notification routine to indicate job completed successfully
    sql " BEGIN DBMS_SCHEDULER.END_DETACHED_JOB_RUN(''sys.backup_job'', 0, null); END; ";
    }
    

    (3) Create credentials, detached program and job with inline schedule
    Submit the following PL/SQL block:
    -- 3.1 create a credential for the job to use
    -- 3.2 create the program object, specifying the shell script created earlier as the external executable. Specify it as detached program.
    -- 3.3 create the job, specifying the credential object created earlier.
    Begin 
     dbms_scheduler.create_credential('credrman', 'oracle', '');
    
     dbms_scheduler.create_program (
       program_name   => 'sys.backup_program',
       program_type   => 'executable',
       program_action => '/u01/app/oracle/scripts/coldbackup.sh',
       enabled        => TRUE,
       detached       => TRUE);
       
     dbms_scheduler.create_job (
       job_name     => 'sys.backup_job',
       program_name => 'sys.backup_program',
       repeat_interval => 'FREQ=DAILY; BYHOUR=13; BYMINUTE=20',
       credential_name => 'credrman');
       
     dbms_scheduler.enable('sys.backup_job');
    end;
    

    You can check the status of the created job with the query below:
    SQL> select owner, job_name, job_style, program_name, schedule_type, 
           to_char(start_date, 'yyyy-mm-dd hh:mi:ss') start_date_, enabled, state, 
           run_count runs, failure_count failures
    from dba_scheduler_jobs
    where job_name = 'BACKUP_JOB';
    
    JOB_NAME     JOB_STYLE   PROGRAM_NAME     SCHEDULE_TYPE  START_DATE_         ENABLED STATE     RUNS FAILURES
    ------------ ----------- ---------------- ------------- -------------------- ------- --------- ---- --------
    BACKUP_JOB   REGULAR     BACKUP_PROGRAM   CALENDAR       2011-12-09 01:12:49 TRUE    SCHEDULED 0    0       
    
    

    Some troubleshooting:
    • If a credential object is not created and assigned to the job, it will fail.
    • The result will be an error entry in the alert file and a trace generated:
    -//- alert.log  -//-
    (...)
    Fri Dec 09 14:25:51 2011
    Errors in file /u01/app/oracle/diag/rdbms/orcl11r2/orcl11r2/trace/orcl11r2_j000_20165.trc:
    ORA-12012: error on auto execute of job 12693
    ORA-27369: job of type EXECUTABLE failed with exit code: 274670
    (...)
    

    • If a credential object is not created, Oracle will try to run the external program using the credentials specified in the $ORACLE_HOME/rdbms/admin/externaljob.ora file.
    • The credentials specified in that file (in a unix/linux installation) are user:nobody, group:nobody.
    • The consequence is that the job will likely fail with the error 'ORA-27369: job of type EXECUTABLE failed with exit code: 274670.'
    • This somewhat cryptic error message suggests that the problem is due to OS permissions in the execution of the external program.



    SQL> select status, error#, run_duration, credential_name
         from dba_scheduler_job_run_details 
         where job_name='BACKUP_JOB';
    STATUS     ERROR#  RUN_DURATION CREDENTIAL_NAME
    ---------- ------- ------------ ---------------
    FAILED     274670  0 0:0:2.0
    SUCCEEDED  0       0 0:7:26.0   CREDRMAN
    
    SQL> select log_id, operation, status, credential_name 
         from dba_scheduler_job_log 
         where job_name='BACKUP_JOB';
    LOG_ID                 OPERATION                      STATUS     CREDENTIAL_NAME                                                   
    ---------------------- ------------------------------ ---------- ------------------ 
    192                    RUN                            FAILED
    194                    RUN                            SUCCEEDED  CREDRMAN                                                          

    More on Oracle 11g ASM and Grid Configuration

    "Como a veces no puede dormir, en vez de contar corderitos contesta mentalmente la correspondencia atrasada, porque su mala conciencia tiene tanto insomnio como él."
    Un tal Lucas (Cortázar)




    Oracle Grid Infrastructure and Oracle ASM: Configuration changes in 11g R2



    • Oracle 11g Release 2 introduced the Oracle Grid Infrastructure installation.
    • Prior to 11g R2, Oracle Automatic Storage Management (ASM) software was automatically installed when you installed the Oracle database software.
    • Since 11g R2, if you need to use Oracle ASM, you must first install the Oracle Grid Infrastructure software.

    In a single instance database environment
    • An Oracle Grid Infrastructure installation includes:
    1. Oracle Automatic Storage Management (ASM),
    2. the Listener and
    3. Oracle Restart


    • A new method of installing Automatic Storage Management (ASM) with Oracle 11g R2:
      • In a cluster configuration: Oracle ASM shares an Oracle home with Oracle Clusterware
      • In a single-instance database configuration: Oracle ASM shares an Oracle home with Oracle Restart
    • To upgrade an existing Oracle ASM installation:
      • Upgrade Oracle ASM by running an Oracle Grid Infrastructure upgrade

    For a clustered environment
    • An Oracle Grid Infrastructure installation includes:
      1. Oracle Clusterware
      2. Oracle Automatic Storage Management (ASM), and
      3. the Listener









    About Oracle ASM:
    • Oracle ASM is a volume manager and file system.
    • Like other volume managers, Oracle ASM groups disks into one or more disk groups.
    • While the administrator manages the disk groups, they operate like black boxes: the placement of datafiles within each disk group is managed automatically by Oracle.

    • Separation between database and ASM administration:
      • Oracle ASM administration requires SYSASM privilege.
      • Besides creating a division of responsibilities between ASM and database administration, this also helps prevent different databases that share the same storage from accidentally overwriting each other's files.

    • Starting in Oracle 11g R2, Oracle ASM can also store Oracle Cluster Registry and voting disks files.
    • Oracle ASM Cluster File System (ACFS) is a new file system and storage management design that extends Oracle ASM technology to support data that cannot be stored in Oracle ASM (both in single-instance and cluster configurations).
    • ACFS is installed with Oracle Grid Infrastructure.


    Q&As: Parallelism and Advanced Compression




    (Q) How can you manually set the degree of parallelism at object level?

    • ALTER TABLE sales PARALLEL 8;
    • You can set a fixed DOP at a table or index level


    (Q) Which of the operations below can make use of parallel execution?

    (1) When accessing objects: table scans, index fast full scans, partitioned index range scans
    (2) Joins: nested loops, sort merges, hash joins, star transformations
    (3) DDL statements: CTAS, Create Index, Rebuild Index, Rebuild Index Partition, Move/Split/Coalesce Partition
    (4) DML statements
    (5) Parallel Query
    (6) Other SQL operations: group by, not in, select distinct, union, union all, cube, and rollup, aggregate and tables functions
    (7) SQL*Loader, e.g. $ sqlldr CONTROL=load1.ctl DIRECT=true PARALLEL=true
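A few sketches of how these categories are invoked (object names and the DOP of 4 are illustrative):

```sql
-- (5) parallel query via a statement-level hint
SELECT /*+ PARALLEL(e, 4) */ department_id, COUNT(*)
  FROM hr.employees e
 GROUP BY department_id;

-- (3) parallel DDL, e.g. a parallel CTAS
CREATE TABLE emp_copy PARALLEL 4 AS SELECT * FROM hr.employees;

-- (4) parallel DML must first be enabled in the session
ALTER SESSION ENABLE PARALLEL DML;
```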


    (Q) On which types of objects can parallel execution NOT be used?

    Parallel DDL cannot be used on tables with object or LOB columns


    (Q) How can you gather I/O Calibration statistics? How often should it be done?

    • Use DBMS_RESOURCE_MANAGER.CALIBRATE_IO procedure
    • I/O calibration is a one-time action if the physical hardware does not change.
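A minimal sketch of the calibration call (disk count and latency target are illustrative; the run requires TIMED_STATISTICS=TRUE and asynchronous I/O enabled):

```sql
DECLARE
  l_max_iops    PLS_INTEGER;
  l_max_mbps    PLS_INTEGER;
  l_actual_lat  PLS_INTEGER;
BEGIN
  DBMS_RESOURCE_MANAGER.CALIBRATE_IO(
    num_physical_disks => 4,    -- number of physical disks backing the database
    max_latency        => 10,   -- maximum tolerable latency in ms
    max_iops           => l_max_iops,
    max_mbps           => l_max_mbps,
    actual_latency     => l_actual_lat);
  DBMS_OUTPUT.PUT_LINE('max_iops=' || l_max_iops || ' max_mbps=' || l_max_mbps);
END;
/
```

Results are also visible afterwards in DBA_RSRC_IO_CALIBRATE.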

    Advanced Compression


    (Q) What is Advanced Compression?

    Introduced in 11g, includes compression for
    • structured data (numbers, chars)
    • Unstructured data (documents, images, etc)
    • backups (RMAN and Data Pump) and
    • Network transport (redo log transport during Data Guard gap resolution)



    (Q) What are the benefits of Advanced Compression?

    (a) storage reduction – compression of all types
    (b) performance improvement – compressed blocks result in higher I/O throughput
    (c) Memory efficiency – oracle keeps data compressed in memory
    (d) backups – enhanced compression capabilities
    (e) Data Guard – allows faster synchronization of databases during gap resolution process.



    (Q) What improvements does Advanced Compression bring to the table compression feature introduced in Oracle9i?

    With the Table Compression feature, data could be compressed ONLY during bulk load operations
    With Advanced Compression, data is also compressed during INSERTs and UPDATEs; it also adds compression and deduplication of SecureFiles


    (Q) Does table data in compressed tables get decompressed before it is read?

    No. Oracle reads directly from compressed blocks in memory.


    (Q) What features are included in the Advanced Compression option?

    • OLTP table compression – improved query performance with minimal write perf overhead
    • SecureFiles – SecureFiles compression for any unstructured content. Deduplication to reduce redundancy
    • RMAN – Multiple backup compression levels (faster --- better ratio)
    • Data Pump Compression – Exports can be compressed
    • Data Guard – Can compress redo data (reduced network traffic, faster gap resolution)


    (Q) What types of data compression can be done with RMAN (using Advanced Compression Option)

    • HIGH – Good for backups over slower networks
    • MEDIUM – Recommended for most environments. (about the same as regular compression)
    • LOW – Least effect on backup throughput


    (Q) How to enable Advanced Compression option?

    • Set parameter enable_option_advanced_compression = TRUE
    • With Advanced compression option enabled, you can:
      • RMAN> CONFIGURE COMPRESSION ALGORITHM [HIGH|MEDIUM|LOW]
    • V$RMAN_COMPRESSION_ALGORITHM describes supported algorithms
    (Q) How can the various features under Advanced Compression be turned on?
    For table compression - methods of table compression on 11gR2:
    Basic compression – direct-path load only
    • i.e. CREATE/ALTER TABLE … COMPRESS [BASIC]
    OLTP compression – compresses during all DML operations
    • i.e. CREATE/ALTER TABLE … COMPRESS FOR OLTP
    Warehouse compression (Hybrid Columnar Compression) and Online archival compression (Hybrid Columnar Compression)
    For SecureFiles -
    i.e. CREATE TABLE t1 (a CLOB) LOB(a) STORE AS SECUREFILE (COMPRESS LOW [MEDIUM|HIGH] DEDUPLICATE [KEEP_DUPLICATES])
    For RMAN -
    RMAN> BACKUP AS COMPRESSED BACKUPSET DATABASE PLUS ARCHIVELOG;
    or
    RMAN> CONFIGURE DEVICE TYPE [DISK | TAPE] BACKUP TYPE TO COMPRESSED BACKUPSET;
    For Data Pump -
    COMPRESSION = [ALL|DATA_ONLY|METADATA_ONLY|NONE]
    • ALL and DATA_ONLY require the Advanced Compression option.
    • e.g. expdp hr DIRECTORY=dpump_dir1 DUMPFILE=hr_comp.dmp COMPRESSION=DATA_ONLY
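    The table-compression methods above can be verified from the data dictionary. A minimal sketch (the table name t_comp is hypothetical; the COMPRESSION and COMPRESS_FOR columns of USER_TABLES exist in 11gR2):

    ```sql
    -- Create an OLTP-compressed table (requires the Advanced Compression option)
    SQL> CREATE TABLE t_comp (id NUMBER, payload VARCHAR2(200)) COMPRESS FOR OLTP;

    -- Confirm how the table is compressed
    SQL> SELECT table_name, compression, compress_for
         FROM user_tables
         WHERE table_name = 'T_COMP';
    ```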

    Oracle Flashback Technology (II) - Flashback Transactions




    Oracle Flashback technology
    Logical flashback (does not depend on RMAN; relies on undo data)
    Physical flashback
    New in Oracle 11g:





    Oracle Flashback Transaction
    • Allows you to roll back changes made by one or more transactions.
    • Flashback Transaction generates and executes the DML statements necessary to undo a given transaction and (optionally) all its dependent transactions.
    • Requisites:
      • ARCHIVELOG mode
      • At least one archive log already generated
      • Minimal and PK (plus FK) supplemental logging enabled
      Privileges required:
      • FLASHBACK and SELECT on the object or
      • FLASHBACK ANY TABLE.

    Configuring the database to use Flashback Transaction
    
    (1) Check the database log mode
    
    SQL> Select log_mode from v$database;
    
    LOG_MODE     
    ------------ 
    NOARCHIVELOG
    
    (2) Change Log mode to archivelog:
    
    SQL> SHUTDOWN IMMEDIATE;
    SQL> STARTUP MOUNT;
    SQL> ALTER DATABASE ARCHIVELOG;
    SQL> ALTER DATABASE OPEN;
    
    SQL> Select log_mode from v$database;
    
    LOG_MODE     
    ------------ 
    ARCHIVELOG
    
    (3) Switch logfile
    
    SQL> ALTER SYSTEM ARCHIVE LOG CURRENT;
    
    (4) Check log files
    
    SQL> select group#, thread#, bytes/1024/1024 Mb, members, archived, status
         from v$log;
    
    GROUP#                 THREAD#                MB                     MEMBERS                ARCHIVED STATUS           
    ---------------------- ---------------------- ---------------------- ---------------------- -------- ---------------- 
    1                      1                      50                     1                      YES      ACTIVE           
    2                      1                      50                     1                      NO       CURRENT          
    3                      1                      50                     1                      YES      INACTIVE       
    
    SQL> select name, archived, applied, status, completion_time, compressed
         from v$archived_log;
    
    NAME                                                      ARCHIVED APPLIED   STATUS COMPLETION_TIME           COMPRESSED 
    --------------------------------------------------------- -------- --------- ------ ------------------------- ---------- 
    /u01/.../flash_recovery_area/SID/archivelog/date/name.arc YES      NO        A      08-DEC-10                 NO         
    
    (5) Enable Minimal and PK (and FK) supplemental logging 
    
    SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
    SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;
    SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (FOREIGN KEY) COLUMNS;
    
    (6) Check the current setting for supplemental logging
      (V$DATABASE, DBA_LOG_GROUPS, DBA_LOG_GROUP_COLUMNS)
    
    SQL> Select supplemental_log_data_min, supplemental_log_data_all,
                supplemental_log_data_UI,  supplemental_log_data_FK
         from v$database;
    
    SUPPLEMENTAL_LOG_DATA_MIN SUPPLEMENTAL_LOG_DATA_ALL SUPPLEMENTAL_LOG_DATA_UI SUPPLEMENTAL_LOG_DATA_FK 
    ------------------------- ------------------------- ------------------------ ------------------------ 
    YES                       NO                        NO                       YES                     
    

    Using Flashback Transaction
    • Use DBMS_FLASHBACK.TRANSACTION_BACKOUT procedure to rollback a transaction and its dependent transactions.
    • Can be done with the database online.
    Restrictions
    • DDL operations cannot be flashed back
    • Transactions that use LOB datatypes (BFILE, BLOB, CLOB, NCLOB) cannot be flashed back
    • Transactions cannot use features or datatypes not supported by LogMiner
    • Ensure that undo segments are large enough to maintain undo information for the period you may need to flashback.

    Example 1: Flashback transactions that have no dependent transactions.
    1. Drop and create tables dept and emp
    2. Transaction 1 and Transaction 2 insert into dept and emp
    3. To flash back Transaction 2 you need to:
      • Find the transaction ID (XID): you can use OEM with LogMiner, or use a Flashback Transaction Query (VERSIONS BETWEEN).
      • Identify the transaction to be flashed back and call dbms_flashback.transaction_backout

    (1) Drop and recreate dept and emp tables.
    SQL> drop table emp;
    SQL> drop table dept;
    
    -- Check the current database SCN
    SQL> select current_scn scn_before_tblcreation 
         from v$database;
    
    SCN_BEFORE_TBLCREATION 
    ---------------------- 
    1473663               
    
    
    SQL> create table dept
        (deptno number primary key,
         deptname varchar2(20));
      
    SQL> create table emp
         (empno number primary key, 
          empname varchar2(10),
          deptno number, 
          salary number,
          constraint fk_dept foreign key(deptno)
                             references dept(deptno));
    
    -- Check the current database SCN
    SQL> select current_scn scn_before_transactions 
         from v$database;
    
    SCN_BEFORE_TRANSACTIONS 
    ----------------------- 
    1473743   
    
    (2) Insert data into emp and dept
    -- Transaction 1: Insert new row (id=1)
    insert into dept values (1, 'accounting');
    insert into dept values (2, 'sales');
    insert into emp values (1, 'John', 1, 2000);
    commit;
    
    -- Transaction 2: Insert new rows (ids=2,3)
    insert into emp values (2, 'Mary', 1, 2500);
    insert into emp values (3, 'Jack', 2, 2000);
    insert into dept values (3, 'IT');
    commit;
    
    -- Check the current database SCN
    SQL> select current_scn scn_after_transactions 
         from v$database;
    
    SCN_AFTER_TRANSACTIONS 
    ---------------------- 
    1473781 
    
    -- Check inserted data
    SQL> select * from dept;
    
    DEPTNO                 DEPTNAME             
    ---------------------- -------------------- 
    1                      accounting           
    2                      sales                
    3                      IT                   
    
    SQL> select * from emp;
    
    EMPNO                  EMPNAME    DEPTNO                 SALARY                 
    ---------------------- ---------- ---------------------- ---------------------- 
    1                      John       1                      2000                   
    2                      Mary       1                      2500                   
    3                      Jack       2                      2000  
    
    (3) Now use the flashback transaction query to find information about all transactions in a time or SCN interval
    SQL> select versions_xid xid, versions_startscn start_scn,
                versions_endscn end_scn, versions_operation operation,
                empno, empname, deptno, salary
         from emp
            versions between scn 1473743 and 1473781;
    
    XID              START_SCN   END_SCN  OPERATION EMPNO   EMPNAME    DEPTNO   SALARY  
    ---------------- ----------- -------- --------- ------- ---------- -------- -------- 
    05000D0099040000 1473777              I         3       Jack       2         2000   
    05000D0099040000 1473777              I         2       Mary       1         2500   -- Transaction 2
    07000A006A030000 1473747              I         1       John       1         2000   
    

    Identify the transaction you need to flashback. For example, if you choose to flashback transaction 2 only its three insert statements will be flashed back. Also, transaction 2 has no dependents, since:
    • No other subsequent transaction modified any data touched by transaction 2
    • No other subsequent transaction modified data linked through referential integrity with the rows touched by transaction 2.

    To flashback a transaction, use DBMS_FLASHBACK.TRANSACTION_BACKOUT (numtxns, xids, options).
    The procedure accepts one or more transaction IDs (XIDs).
    It has also an OPTIONS parameter to indicate how dependent transactions should be treated. The possible values are:
    • NOCASCADE - The transaction is expected to have no dependencies. If a dependency is found, an error is raised.
    • NOCASCADE_FORCE - The transaction is backed out without affecting dependent transactions. This option succeeds only if no constraints are violated in the process.
    • NONCONFLICT_ONLY - Backs out only the changes to the non-conflicting rows of the given transactions.
    • CASCADE - The transaction and all its dependent transactions are backed out.
    SQL> conn / as sysdba;
    
    SQL> declare
     v_xid sys.xid_array;
    begin
     v_xid := sys.xid_array('05000D0099040000');
     dbms_flashback.transaction_backout( numtxns => 1,
                                         xids    => v_xid,
                                         options => dbms_flashback.cascade);
    end;
    /
    anonymous block completed
    
    • TRANSACTION_BACKOUT analyzes the transactional dependencies, performs the DML operations, and generates reports.
    • It DOES NOT commit the actions performed.
    • To make the transaction backout permanent, you need to explicitly commit the compensating transaction.
    (4) Check the effects of the flashback. Query the table contents. Commit the flashback transaction.
    SQL> select * from dept;
    
    DEPTNO                 DEPTNAME             
    ---------------------- -------------------- 
    1                      accounting           
    2                      sales                
    
    
    SQL> select * from emp;
    
    EMPNO                  EMPNAME    DEPTNO                 SALARY                 
    ---------------------- ---------- ---------------------- ---------------------- 
    1                      John       1                      2000           
    
    SQL> commit;
    

    TRANSACTION_BACKOUT Reports
    The backout procedure populates two static data dictionary views.
    • DBA_FLASHBACK_TXN_STATE
    • Shows information about the transactions backed out by the procedure.
      • COMPENSATING_XID - Transaction ID of the compensating transaction. You will need to explicitly commit this transaction in order to make the backout permanent.
      • XID - Backed-out transaction
      • DEPENDENT_XID - Dependent transaction
    (a) Query DBA_FLASHBACK_TXN_STATE view
    
    SQL> select *
    from dba_flashback_txn_state
    where xid = '05000D0099040000';
    
    COMPENSATING_XID XID              DEPENDENT_XID    BACKOUT_MODE     USERNAME                       
    ---------------- ---------------- ---------------- ---------------- ------------------------------ 
    0800160056050000 05000D0099040000                  CASCADE          SYSTEM                
    

    (b) Use the compensating_xid returned to query the DBA_FLASHBACK_TXN_REPORT view:
    
    sql> select xid_report
         from dba_flashback_txn_report
         where compensating_xid = '0800160056050000';
    
    XID_REPORT   
    ------------------------------------------------------------------------------------------------
    <?xml version="1.0" encoding="ISO-8859-1"?> 
    <COMP_XID_REPORT XID="0800160056050000" NAME="_SYS_COMP_TXN_1114255_TIM_1319522919"> 
     <TRANSACTION XID="05000D0099040000"> 
     <CHARACTERISTICS> 
     </CHARACTERISTICS> 
     <UNDO_SQL> 
      <USQL exec="yes"> 
       delete from "SYSTEM"."DEPT" where "DEPTNO" = '3' and "DEPTNAME" = 'IT' 
      </USQL>
      <USQL exec="yes">
       delete from "SYSTEM"."EMP" where "EMPNO" = '3' and "EMPNAME" = 'Jack' and "DEPTNO" = '2' and "SALARY" = '2000'
      </USQL> 
      <USQL exec="yes"> 
       delete from "SYSTEM"."EMP" where "EMPNO" = '2' and "EMPNAME" = 'Mary' and "DEPTNO" = '1' and "SALARY" = '2500' 
      </USQL>  
     </UNDO_SQL> 
     <DEPENDENT_XIDS> 
     </DEPENDENT_XIDS>
     </TRANSACTION>
    <EXECUTED_UNDO_SQL>
    <EXEC_USQL>delete from "SYSTEM"."DEPT" where "DEPTNO" = '3' and "DEPTNAME" = 'IT' 
    </EXEC_USQL> 
    <EXEC_USQL>delete from "SYSTEM"."EMP" where "EMPNO" = '3' and "EMPNAME" = 'Jack' and "DEPTNO" = '2' and "SALARY" = '2000' 
    </EXEC_USQL>
    <EXEC_USQL>delete from "SYSTEM"."EMP" where "EMPNO" = '2' and "EMPNAME" = 'Mary' and "DEPTNO" = '1' and "SALARY" = '2500'
    </EXEC_USQL>             
    </EXECUTED_UNDO_SQL>
    </COMP_XID_REPORT>  
    


    Oracle Database Security


    source: xkcd

    Oracle default security features
    User accounts
    • naming convention, password profiles, password policies
    • Unlike SQL Server (2005/2008), there is no clear separation between database user and database schema
    Authentication methods
    • database level, OS level, network level
    Privileges and roles
    • restrict data access
    Application security
    • User session information using Application CONTEXT
    • Application Context: name-value pair that holds session info.
    • You can retrieve info about a user (i.e., username/terminal, username/deptid) and restrict database and application access based on this information.
    Virtual Private Database: restrict database access on the row and column levels.
    • VPD policy: dynamically embeds a WHERE clause into SQL statements
    Encryption: DBMS_CRYPTO and DBMS_SQLHASH
    Audit database
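    The application-context and encryption bullets above can be illustrated with two one-liners (a minimal sketch; the USERENV attributes shown are standard, and DBMS_CRYPTO requires an explicit EXECUTE grant from SYS):

    ```sql
    -- Read session attributes from the built-in USERENV application context
    SQL> SELECT SYS_CONTEXT('USERENV', 'SESSION_USER') AS who,
                SYS_CONTEXT('USERENV', 'HOST')         AS host
         FROM dual;

    -- Hash a value with DBMS_CRYPTO (SHA-1 here; input must be RAW)
    SQL> SELECT DBMS_CRYPTO.HASH(UTL_RAW.CAST_TO_RAW('secret'),
                                 DBMS_CRYPTO.HASH_SH1)
         FROM dual;
    ```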





    Additional database security resources
    1. Oracle Advanced Security
      • TDE, Wallet Mgmt, Network Encryption, RADIUS, Kerberos, Secure Sockets Layer authentication
    2. Oracle Label Security
      • Secures database tables at the row level
    3. Oracle Database Vault
      • Provides fine-grained access control to sensitive data
    4. Oracle Audit Vault
      • Collect data from audit trail tables, OS audit files and redo logs.
    5. Oracle Enterprise User Security
    6. Oracle Total Recall
    7. Oracle Database Firewall
    8. Oracle Data Masking


    Managing Oracle Security
    Database Security involves Authentication, Authorization, Data Access, and Audit


    (1) Authentication Methods
    • Database Authentication (Default)
      • Oracle implements authentication through a user ID and password.
    • Operating System authentication
      • Sometimes referred to as OPS$ accounts.
      • Oracle looks for a user ID that matches your OS login ID and prepends an OPS$ to it.
      • Additional benefit: OS scripts can log into the database without hard coding user names and passwords.
    • Third Party Authentication
      • You can tie authentication to third-party providers like Kerberos or DCE (called network authentication) (Needs EE+Security Pack)
      • or provide it from the middle tier (called multitier authentication).
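    OS authentication can be sketched as follows (the OS account jsmith is hypothetical; the prefix is controlled by the os_authent_prefix parameter, which defaults to "ops$"):

    ```sql
    -- Check the configured prefix
    SQL> SHOW PARAMETER os_authent_prefix

    -- Create a database user matching the OS login "jsmith";
    -- that OS user can then connect with "sqlplus /" (no password)
    SQL> CREATE USER ops$jsmith IDENTIFIED EXTERNALLY;
    SQL> GRANT CREATE SESSION TO ops$jsmith;
    ```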


    (2) Authorization
    • Giving access to certain objects, processes or resources.
    • Implemented using GRANTS on objects to ROLES or USERS.
    • Oracle implements authorization via users, roles and permissions
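    A minimal sketch of grant-based authorization (the role and user names are hypothetical):

    ```sql
    -- Bundle object privileges into a role, then grant the role to users
    SQL> CREATE ROLE hr_reader;
    SQL> GRANT SELECT ON hr.employees TO hr_reader;
    SQL> GRANT hr_reader TO userB;
    ```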




    (3) Securing Data Access
    • The ability to access specific data values associated with an object (across records or columns)
    • STORED PROCEDURES are commonly used to secure data access.
    • Benefits of using stored procedures for all programmatic data access: (a) secure the data; (b) provide consistent application access to the data; (c) Hide the data structure.
    • Secure VIEWS can also be used to restrict access. (use WITH CHECK OPTION Constraint)
    • Controlling access at the record level:
      • Previous versions (before Oracle 10g): using secure views
      • More recently:
    • (a) Use Virtual Private Databases (VPDs)
    • (b) Use Fine-Grained Access Control (FGAC) - uses DBMS_RLS package (Row Level Security)
    • With FGAC (DBMS_RLS), you can transparently include a WHERE clause on any combinations of DML and SELECT when a user queries the database.

    Securing DATA ACCESS
    (a) Option: use Secure Views
    userA owns EMPLOYEES table.
    userB needs access to EMPLOYEES but must not see the SALARY column or the records of managers.
    userA may create a view and grant access to userB on that view.

    -- User A:
     SQL> create or replace view emp_no_sal as 
          select employee_id, last_name, first_name, email, job_id, manager_id
          from iranmr.employees a
          where employee_id NOT IN (
               select nvl(manager_id, -1)
               from iranmr.employees b);
    
     SQL> grant select on emp_no_sal to userB;
    

    (b) Option: Use Virtual Private Database (VPD)
    • With Virtual Private Databases (VPDs), Oracle allows column masking to hide columns.
    • When you select the row, Oracle will only display NULL for the secure columns.
    • If you're securing at the row level and column level, it's probably easier to just implement VPDs and not the secure views.

    What's a Virtual Private Databases (VPDs)?
    • A VPD is just asking Oracle to put a where clause on DML against an object with a security policy on it.
    • A security policy is defined with DBMS_RLS package.
    • A security policy is normally defined in a CONTEXT (a piece of data that says how the where clause should be built).
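    A VPD policy can be sketched with DBMS_RLS.ADD_POLICY. This is a hypothetical example (the function emp_policy_fn, the policy name, and the HR.EMP object are assumptions): the policy function must accept schema and object names and return the predicate text Oracle appends to matching statements.

    ```sql
    -- Hypothetical policy function: each user sees only their own rows
    SQL> CREATE OR REPLACE FUNCTION emp_policy_fn (
           p_schema IN VARCHAR2,
           p_object IN VARCHAR2)
         RETURN VARCHAR2
         IS
         BEGIN
           RETURN 'empname = SYS_CONTEXT(''USERENV'',''SESSION_USER'')';
         END;
         /

    -- Attach the function to HR.EMP as a row-level security policy
    SQL> BEGIN
           DBMS_RLS.ADD_POLICY(
             object_schema   => 'HR',
             object_name     => 'EMP',
             policy_name     => 'emp_own_rows',
             function_schema => 'HR',
             policy_function => 'EMP_POLICY_FN',
             statement_types => 'SELECT,UPDATE,DELETE');
         END;
         /
    ```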


    (4) Audit
    • Older versions: audit implemented through triggers.
    • Trigger limitations: (a) triggers can easily be disabled or modified; (b) triggers do not fire on SELECT; (c) require extensive coding and testing.
    • Newer versions: Use Fine Grained Auditing (FGA). (DBMS_FGA package)
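    An FGA policy can be sketched with DBMS_FGA.ADD_POLICY (a minimal sketch; the policy name and the HR.EMP object are assumptions):

    ```sql
    -- Audit every SELECT that references the SALARY column of HR.EMP
    SQL> BEGIN
           DBMS_FGA.ADD_POLICY(
             object_schema   => 'HR',
             object_name     => 'EMP',
             policy_name     => 'audit_salary_reads',
             audit_condition => NULL,       -- NULL = audit all matching statements
             audit_column    => 'SALARY',
             statement_types => 'SELECT');
         END;
         /

    -- Audited statements are recorded in DBA_FGA_AUDIT_TRAIL
    SQL> SELECT db_user, object_name, sql_text
         FROM dba_fga_audit_trail;
    ```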