Data Movement: Transporting Partitions for Data Warehousing
Transporting a partition
- In this example, we see how to use the transportable tablespaces feature to periodically load data into a large fact table in a data warehousing database.
- Suppose that an enterprise data warehouse has a large table sales. The table is partitioned by date, with data for each month stored in its own partition.
- In addition, local indexes are built so that you don't have to rebuild global indexes when dropping the oldest partition from the table.
- Every month you need to load one month of data into the sales table, stored in its own partition.
Set up the fact table sales in the data warehouse database (dwr-db):
SQL> create table dwr.sales (
       invoice_no number,
       sale_year int not null,
       sale_month int not null,
       sale_day int not null)
     partition by range (sale_year, sale_month, sale_day)
     (partition jan05 values less than (2005, 2, 1),
      partition feb05 values less than (2005, 3, 1),
      partition mar05 values less than (2005, 4, 1),
      partition apr05 values less than (2005, 5, 1),
      partition may05 values less than (2005, 6, 1),
      partition jun05 values less than (2005, 7, 1))
     tablespace users;

-- Create a local index.
SQL> create index dwr.sales_index on dwr.sales(invoice_no) local;

-- Now let's insert some data into table sales.
-- Run the PL/SQL block below to insert sample data (5 rows per month, Jan-Jun):
declare
  month number := 1;
  day   number := 1;
begin
  for i in 1..30 loop
    insert into dwr.sales values (i, 2005, month, day);
    if mod(i,5) = 0 then
      month := month + 1;
      day   := day + 1;
    end if;
  end loop;
end;
/
-- Check that all partitions (table and index) are in the same tablespace (users):
SQL> select tablespace_name, segment_name, segment_type, partition_name
     from dba_segments
     where owner = 'DWR';

TABLESPACE_NAME    SEGMENT_NAME    SEGMENT_TYPE       PARTITION_NAME
------------------ --------------- ------------------ ------------------
USERS              SALES           TABLE PARTITION    JAN05
USERS              SALES           TABLE PARTITION    FEB05
USERS              SALES           TABLE PARTITION    MAR05
USERS              SALES           TABLE PARTITION    APR05
USERS              SALES           TABLE PARTITION    MAY05
USERS              SALES           TABLE PARTITION    JUN05
USERS              SALES_INDEX     INDEX PARTITION    JAN05
USERS              SALES_INDEX     INDEX PARTITION    FEB05
USERS              SALES_INDEX     INDEX PARTITION    MAR05
USERS              SALES_INDEX     INDEX PARTITION    APR05
USERS              SALES_INDEX     INDEX PARTITION    MAY05
USERS              SALES_INDEX     INDEX PARTITION    JUN05

12 rows selected.
Prepare the new data for loading into the warehouse
The month of July has ended and you need to load the July data into the sales table. In the staging database, perform the steps below:
SQL> create smallfile tablespace ts_jul
     datafile '/u01/app/oracle/oradata/tempdb/ts_jul_1.dbf' size 5M
     logging
     extent management local
     segment space management auto;

SQL> create table hr.jul_sales (
       invoice_no number,
       sale_year int not null,
       sale_month int not null,
       sale_day int not null)
     tablespace ts_jul;

-- Load the July data (run the PL/SQL block):
declare
  day number := 1;
begin
  for i in 40..70 loop
    insert into hr.jul_sales values (i, 2005, 7, day);
    day := day + 1;
  end loop;
end;
/

-- Create an index on the jul_sales table:
SQL> create index hr.jul_sale_idx on hr.jul_sales(invoice_no) tablespace ts_jul;

-- Check the table and index location:
SQL> select tablespace_name, segment_name, segment_type, bytes
     from dba_segments
     where owner = 'HR'
     order by segment_name, segment_type;

TABLESPACE_NAME   SEGMENT_NAME    SEGMENT_TYPE       BYTES
----------------- --------------- ------------------ ----------
...
TS_JUL            JUL_SALES       TABLE              65536
TS_JUL            JUL_SALE_IDX    INDEX              65536
...
Transport the tablespace to the data warehouse database
Perform the steps below:
(1) Make the tablespace read only:
SQL> alter tablespace ts_jul read only;
Tablespace altered.
(2) Export the metadata:
oracle@quark:~$ expdp system dumpfile=expts_jul.dmp directory=data_pump_dir transport_tablespaces=ts_jul logfile=tts_exptsjul.log

Export: Release 11.2.0.1.0 - Production on Fri Oct 08 15:34:45 2010
Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.
Password:

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01":  system/******** dumpfile=expts_jul.dmp directory=data_pump_dir transport_tablespaces=ts_jul logfile=tts_exptsjul.log
Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
Processing object type TRANSPORTABLE_EXPORT/TABLE
Processing object type TRANSPORTABLE_EXPORT/INDEX
Processing object type TRANSPORTABLE_EXPORT/INDEX_STATISTICS
Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
Master table "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.SYS_EXPORT_TRANSPORTABLE_01 is:
  /u01/app/oracle/admin/tempdb/dpdump/expts_jul.dmp
******************************************************************************
Datafiles required for transportable tablespace TS_JUL:
  /u01/app/oracle/oradata/tempdb/ts_jul_1.dbf
Job "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01" successfully completed at 15:35:35
(3)-(4) Copy the export dump file and the tablespace datafile to the dwr server.
(5) Import the tablespace ts_jul into the dwr database:
oracle@dwr-srv:$ impdp system dumpfile=expts_jul.dmp directory=data_pump_dir transport_datafiles=/u01/app/oracle/oradata/dwr/ts_jul_1.dbf remap_schema=hr:dwr logfile=tts_import.log

Import: Release 11.2.0.1.0 - Production on Fri Oct 08 18:10:20 2010
Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.
Password:

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table "SYSTEM"."SYS_IMPORT_TRANSPORTABLE_01" successfully loaded/unloaded
Starting "SYSTEM"."SYS_IMPORT_TRANSPORTABLE_01":  system/******** dumpfile=expts_jul.dmp directory=data_pump_dir transport_datafiles=/u01/app/oracle/oradata/dwr/ts_jul_1.dbf remap_schema=hr:dwr logfile=tts_import.log
Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
Processing object type TRANSPORTABLE_EXPORT/TABLE
Processing object type TRANSPORTABLE_EXPORT/INDEX
Processing object type TRANSPORTABLE_EXPORT/INDEX_STATISTICS
Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
Job "SYSTEM"."SYS_IMPORT_TRANSPORTABLE_01" successfully completed at 18:11:10

-- Check the import
SQL> select count(*) from dwr.jul_sales;

  COUNT(*)
----------
        30
Attach the July data to the sales table
At this point the tablespace ts_jul has been transported and the table jul_sales is available in the dwr database. It is not yet, however, available as a partition of the large sales table. For this we need to:
(1) Add a new partition to the sales table:
SQL> alter table dwr.sales add partition jul05 values less than (2005, 8, 1);
Table altered.
(2) Attach the transported table jul_sales to the table sales by exchanging it with the new partition:
SQL> alter table dwr.sales exchange partition jul05 with table dwr.jul_sales
     including indexes
     without validation;
Table altered.
- The exchange partition statement assigns the jul_sales data to the jul05 partition of the sales table.
- The data is not physically moved to the new partition. Instead database pointers and metadata information are updated.
- In fact, after the statement executes successfully, you can check that the data is now assigned to the sales table, and the jul_sales table is empty.
SQL> select count(*) from dwr.jul_sales;

  COUNT(*)
----------
         0

SQL> select sale_month, count(*)
     from dwr.sales
     group by sale_month;

SALE_MONTH   COUNT(*)
---------- ----------
         1          5
         6          5
         2          5
         4          5
         5          5
         3          5
         7         31

7 rows selected.

SQL> select tablespace_name, segment_name, segment_type, partition_name
     from dba_segments
     where owner = 'DWR';

TABLESPACE_NAME   SEGMENT_NAME   SEGMENT_TYPE      PARTITION_NAME
----------------- -------------- ----------------- ------------------
USERS             SALES          TABLE PARTITION   JAN05
USERS             SALES          TABLE PARTITION   FEB05
USERS             SALES          TABLE PARTITION   MAR05
USERS             SALES          TABLE PARTITION   APR05
USERS             SALES          TABLE PARTITION   MAY05
USERS             SALES          TABLE PARTITION   JUN05
TS_JUL            SALES          TABLE PARTITION   JUL05
USERS             SALES_INDEX    INDEX PARTITION   JAN05
USERS             SALES_INDEX    INDEX PARTITION   FEB05
USERS             SALES_INDEX    INDEX PARTITION   MAR05
USERS             SALES_INDEX    INDEX PARTITION   APR05
USERS             SALES_INDEX    INDEX PARTITION   MAY05
USERS             SALES_INDEX    INDEX PARTITION   JUN05
TS_JUL            SALES_INDEX    INDEX PARTITION   JUL05

14 rows selected.
- You can now drop the jul_sales table (see the sketch below).
- You can also, if needed, make the tablespace ts_jul read/write:
SQL> alter tablespace ts_jul read write;
Tablespace altered.
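A minimal sketch of the cleanup: since the exchange left dwr.jul_sales holding only the empty segment that was the freshly added jul05 partition, dropping it loses no data.

SQL> drop table dwr.jul_sales;  -- removes only the empty exchanged segment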
Labels: Administration, DataPump, Oracle, Transportable Tablespaces
Data Movement: Transporting Tablespaces
Requirements:
- Oracle 8i or later: Enterprise Edition is required to generate a transportable tablespace set
- Since 9i: block size may differ across databases
- Since 10g: tablespaces can be transported across platforms (both databases need the COMPATIBLE parameter set to at least 10.0.0; see the check below)
- Since 11gR1: you must use the Data Pump utilities (expdp, impdp) to transport tablespaces
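A quick sanity check of the COMPATIBLE requirement, run on both the source and the destination database:

SQL> show parameter compatible  -- should report at least 10.0.0 on both sides for cross-platform transport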
When to use:
- To export/import partitions of large tables (see the example above)
- For publishing structured data on CDs
- For copying multiple read-only versions of a tablespace to multiple databases
- For archiving historical data
- For performing tablespace point-in-time recovery
Procedure for transporting a tablespace set:
- Determine the Transportable Set. A transportable set is formed by the datafiles of all tablespaces being transported plus the export file containing metadata for the entire set.
- Check that the transportable set is self-contained (there should be no references from objects within the tablespace set to objects outside the set).
- If you will be migrating across platforms, check whether you will need to convert the datafiles to the same endianness.
- Make the tablespaces that will be transported READ ONLY.
- Export the metadata for the transportable set
- Copy the datafiles and export file to the destination host.
- Import the metadata and datafiles
- Make tablespaces READ WRITE
Transporting tablespaces. Example:
(1) Check the endian format of both platforms (if they are different):
Query the V$TRANSPORTABLE_PLATFORM view to see the platforms that are supported and to determine each platform's endian format (byte ordering). You can check the endian format for your specific platform:
SQL> select d.platform_name, endian_format
     from v$transportable_platform tp, v$database d
     where tp.platform_name = d.platform_name;

PLATFORM_NAME                            ENDIAN_FORMAT
---------------------------------------- --------------
Linux IA (32-bit)                        Little
(2) Set up the tablespaces to be transported.
This example creates two tablespaces on database DB1, plus a table and an index in the tablespaces to be transported to database DB2.
SQL> create smallfile tablespace hr1
     datafile '/u01/app/oracle/oradata/db1/hr1_1.dff' size 5M
     logging
     extent management local
     segment space management auto;

SQL> create smallfile tablespace hr2
     datafile '/u01/app/oracle/oradata/db1/hr2_1.dbf' size 5M reuse
     autoextend on next 1M maxsize 10M
     logging
     extent management local
     segment space management auto;

SQL> create table hr.hr_emp
     tablespace hr1
     as select * from hr.employees;

SQL> create index hr.idx_hr_emp on hr.hr_emp(first_name, last_name) tablespace hr2;

SQL> alter table hr.hr_emp
     add constraint fk_hr_emp_dept
     foreign key (department_id) references hr.departments(department_id);

-- Check the location of the created objects.
-- We'll transport HR1 and HR2 between two Oracle 11gR2 databases running on Linux 32-bit.
SQL> select owner, segment_name, segment_type, tablespace_name
     from dba_segments
     where tablespace_name in ('HR1', 'HR2');

OWNER   SEGMENT_NAME    SEGMENT_TYPE       TABLESPACE_NAME
------- --------------- ------------------ ------------------------------
HR      HR_EMP          TABLE              HR1
HR      IDX_HR_EMP      INDEX              HR2
(3) Select the self-contained tablespace set.
- You can only transport a self-contained tablespace set: there may be no references from an object inside the set of tablespaces to an object in a tablespace outside the set.
- In our example, because table HR_EMP in tablespace HR1 has an index in tablespace HR2, both tablespaces need to be transported.
- In addition, since HR.HR_EMP has a foreign key referencing HR.DEPARTMENTS (in another tablespace), that third tablespace should also be included in the set; otherwise a violation will occur.
Violations of self-containment can be caused by:
- An index inside the set that belongs to a table outside the set
- A partitioned table that is only partially contained in the set
- A referential integrity constraint that points to a table outside the set
- A table inside the set with a LOB column whose LOB segment is outside the set
(4) Check that the set is self-contained.
- Use DBMS_TTS.TRANSPORT_SET_CHECK to validate the set of tablespaces.
- If any violation is found, it will be listed in the TRANSPORT_SET_VIOLATIONS view.
- In this case, HR.HR_EMP has a foreign key referencing HR.DEPARTMENTS (in tablespace EXAMPLE). A violation will be found, since we are not including that third tablespace in the transportable set.
SQL> execute dbms_tts.transport_set_check('hr1, hr2', true);
PL/SQL procedure successfully completed.

SQL> select * from transport_set_violations;

VIOLATIONS
----------------------------------------------------------------------------------------------------
ORA-39906: Constraint FK_HR_EMP_DEPT between table HR.DEPARTMENTS in tablespace EXAMPLE and table HR.HR_EMP in tablespace HR1.

So let's drop the foreign key constraint:

SQL> alter table hr.hr_emp drop constraint fk_hr_emp_dept;
Table altered.
(5) Make the tablespaces read only.
SQL> alter tablespace hr1 read only;
SQL> alter tablespace hr2 read only;
(6) Invoke Data Pump as system and export the tablespaces in the transportable set.
- directory: the directory object pointing to the directory where you want the export file to be generated.
- transport_tablespaces: parameter indicating a transportable tablespace export. Only the metadata for the tables and dependent objects within the transportable set is exported.
oracle@quark:~$ expdp system dumpfile=exphr1_2.dmp directory=data_pump_dir transport_tablespaces=hr1,hr2 logfile=tts_exphr1_2.log

Export: Release 11.2.0.1.0 - Production on Thu Oct 07 15:22:51 2010
Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.
Password:

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01":  system/******** dumpfile=exphr1_2.dmp directory=data_pump_dir transport_tablespaces=hr1,hr2 logfile=tts_exphr1_2.log
Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
Processing object type TRANSPORTABLE_EXPORT/TABLE
Processing object type TRANSPORTABLE_EXPORT/INDEX
Processing object type TRANSPORTABLE_EXPORT/INDEX_STATISTICS
Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
Master table "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.SYS_EXPORT_TRANSPORTABLE_01 is:
  /u01/app/oracle/admin/db1/dpdump/exphr1_2.dmp
******************************************************************************
Datafiles required for transportable tablespace HR1:
  /u01/app/oracle/oradata/db1/hr1_1.dff
Datafiles required for transportable tablespace HR2:
  /u01/app/oracle/oradata/db1/hr2_1.dbf
Job "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01" successfully completed at 15:24:01
(7) If the tablespaces are being transported to a different platform, you need to convert the datafiles before copying them.
For example, if the destination database is on a Windows machine:
oracle@quark:/$ rman target /

Recovery Manager: Release 11.2.0.1.0 - Production on Thu Oct 07 15:31:55 2010
Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.
connected to target database: DB1 (DBID=64992127)

RMAN> convert tablespace hr1, hr2
2> to platform 'Microsoft Windows IA (32-bit)'
3> format '/tmp/%U';

Starting conversion at source at 07-OCT-10
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=18 device type=DISK
channel ORA_DISK_1: starting datafile conversion
input datafile file number=00007 name=/u01/app/oracle/oradata/db1/hr1_1.dff
converted datafile=/tmp/data_D-DB1_I-54881118_TS-HR1_FNO-6_01mrt1cn
channel ORA_DISK_1: datafile conversion complete, elapsed time: 00:00:01
channel ORA_DISK_1: starting datafile conversion
input datafile file number=00008 name=/u01/app/oracle/oradata/db1/hr2_1.dbf
converted datafile=/tmp/data_D-DB1_I-54881118_TS-HR2_FNO-7_02mrt1co
channel ORA_DISK_1: datafile conversion complete, elapsed time: 00:00:01
Finished conversion at source at 07-OCT-10
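Alternatively, the conversion can be deferred and done on the destination host with RMAN's CONVERT DATAFILE ... FROM PLATFORM. A sketch, assuming the unconverted files were copied to C:\tmp on the Windows server (all paths here are illustrative):

RMAN> convert datafile 'C:\tmp\hr1_1.dff', 'C:\tmp\hr2_1.dbf'
2> from platform 'Linux IA (32-bit)'
3> format 'C:\oracle\oradata\db2\%U';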
(8) Transport the tablespace set
- Step 1: Copy the export file to the directory pointed to by the DATA_PUMP_DIR directory object.
- Step 2: Copy the datafiles to the location of the existing datafiles of the destination database.
(9) Import the tablespaces
oracle@proton:$ impdp system dumpfile=exphr1_2.dmp directory=data_pump_dir transport_datafiles=/u01/app/oracle/oradata/DB2/hr1_1.dff,/u01/app/oracle/oradata/DB2/hr2_1.dbf remap_schema=hr:crm1 logfile=tts_import.log

Import: Release 11.2.0.1.0 - Production on Thu Oct 07 17:18:56 2010
Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.
Password:

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table "SYSTEM"."SYS_IMPORT_TRANSPORTABLE_01" successfully loaded/unloaded
Starting "SYSTEM"."SYS_IMPORT_TRANSPORTABLE_01":  system/******** dumpfile=exphr1_2.dmp directory=data_pump_dir transport_datafiles=/u01/app/oracle/oradata/DB2/hr1_1.dff,/u01/app/oracle/oradata/DB2/hr2_1.dbf remap_schema=hr:crm1 logfile=tts_import.log
Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
Processing object type TRANSPORTABLE_EXPORT/TABLE
Processing object type TRANSPORTABLE_EXPORT/INDEX
Processing object type TRANSPORTABLE_EXPORT/INDEX_STATISTICS
Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
Job "SYSTEM"."SYS_IMPORT_TRANSPORTABLE_01" successfully completed at 17:19:16

-- Check the imported tablespaces
SQL> select owner, segment_name, segment_type, tablespace_name
     from dba_segments
     where tablespace_name in ('HR1', 'HR2');

OWNER    SEGMENT_NAME         SEGMENT_TYPE       TABLESPACE_NAME
-------- -------------------- ------------------ ------------------------------
CRM1     HR_EMP               TABLE              HR1
CRM1     IDX_HR_EMP           INDEX              HR2
Labels: Administration, DataPump, Oracle, Transportable Tablespaces
Oracle Memory Management Methods
Automatic SGA Management
- You specify the total amount of SGA available using SGA_TARGET.
- Oracle automatically distributes the memory among the various SGA components.
- SGA Components: Pools of memory used to satisfy different classes of memory allocation requests.
- Shared Pool (library cache, data dictionary cache, server result cache, reserved pool, private SQL areas, etc.)
- Large Pool
- Database Buffer Cache
- Java Pool
- Streams Pool
- Fixed SGA
- If you are using an SPFILE, the database remembers the sizes of the various components across instance shutdowns (no need to relearn workload characteristics at each startup).
- Oracle allocates/deallocates memory in chunks of allocation units (granules):
- If total SGA size < 1 GB, granule = 4 MB.
- If total SGA size > 1 GB, granule = 16 MB.
1. Automatic Instance tuning
- Set MEMORY_TARGET (and optionally MEMORY_MAX_TARGET); Oracle tunes the SGA and the PGA together as a whole.
2. Automatic SGA and PGA tuning
- Set MEMORY_TARGET = 0, SGA_TARGET for the SGA, and PGA_AGGREGATE_TARGET for the instance PGA.
- You can query the V$SGAINFO view to check the size of the various SGA components and see the allocation unit size (granule size) being used by the instance.
- The granule size is platform-specific and is determined by the total SGA size.
SQL> select name, bytes, resizeable from v$sgainfo;

NAME                                 BYTES RES
------------------------------- ---------- ---
Fixed SGA Size                     1336260 No
Redo Buffers                      12582912 No
Buffer Cache Size                318767104 Yes
Shared Pool Size                 385875968 Yes
Large Pool Size                   16777216 Yes
Java Pool Size                    16777216 Yes
Streams Pool Size                 33554432 Yes
Shared IO Pool Size                      0 Yes
Granule Size                      16777216 No
Maximum SGA Size                1272213504 No
Startup overhead in Shared Pool   67108864 No
Free SGA Memory Available        486539264

12 rows selected.
3. Automatic SGA and Manual PGA tuning
- Set SGA_TARGET for the SGA; set WORKAREA_SIZE_POLICY = MANUAL and size the work areas yourself (SORT_AREA_SIZE, HASH_AREA_SIZE, etc.).
4. Manual SGA and Automatic PGA tuning
- Set SGA_TARGET = 0 and size each SGA component individually (SHARED_POOL_SIZE, DB_CACHE_SIZE, LARGE_POOL_SIZE, ...); set PGA_AGGREGATE_TARGET for the PGA.
5. Fully manual (SGA and PGA) tuning
- Size each SGA component individually and use manual work areas for the PGA.
Labels: Administration, Memory, Oracle
Memory Management on Oracle 11g
About the Shared Global Area (SGA)
- SGA_MAX_SIZE sets a hard upper bound on the size of the SGA for the instance:
SQL> show parameter sga_max_size

NAME           TYPE        VALUE
-------------- ----------- ------
sga_max_size   big integer 1216M
- The entire SGA should fit in real memory; if virtual memory has to be used, performance decreases dramatically.
- Use V$SGA_TARGET_ADVICE to help decide on a value for SGA_TARGET.
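For example, a sketch of the advisor query (columns as documented for the view):

SQL> select sga_size, sga_size_factor, estd_db_time, estd_physical_reads
     from v$sga_target_advice
     order by sga_size;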
To set the value for SGA_TARGET:
(1) Find out how much space the SGA is currently using:
SQL> select (
       ( select sum(value) from v$sga ) -
       ( select current_size from v$sga_dynamic_free_memory )
     ) "sga_target"
     from dual;

sga_target
----------
 785674240
(2) Edit the text initialization parameter file and restart the database, or issue the statements below:
(a) SQL> ALTER SYSTEM SET SGA_TARGET=value [SCOPE={SPFILE|MEMORY|BOTH}]
(b) SQL> ALTER SYSTEM SET <sga_component_parameter> = 0;  -- for every component
Database Buffer Cache
- Optimizes physical I/O:
- On COMMIT, only the redo is written to disk; the changed data blocks are written later.
- DBWn performs lazy writes in the background (writes cold, dirty blocks to disk).
- Buffer states can be:
- Unused - buffer is available for use
- Clean - Used earlier and now contains a read-consistent version of a block as of a point in time. Does not need to be checkpointed. Can be pinned to the cache so it does not age out of memory.
- Dirty - Contains modified data not yet written to disk. Must be checkpointed before reuse.
- Buffer Modes
- Current mode get - same as db block get: reads the block as it currently is in the buffer, even if it contains uncommitted changes
- Consistent mode - provides a read-consistent version of a block (read undo data if needed)
- Buffer reads
- Cache hit - the block is in the buffer cache and a logical read occurs. If the block is in the flash cache, an optimized physical read takes place.
- Cache miss - the block is not in memory; a physical read is required.
- Buffer cache hit ratio - the relative frequency of logical versus physical reads (one way to compute it is sketched below).
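A sketch of the classic calculation from V$BUFFER_POOL_STATISTICS (assumes the pool has nonzero gets, otherwise the division is undefined):

SQL> select name,
            1 - (physical_reads / (db_block_gets + consistent_gets)) "hit_ratio"
     from v$buffer_pool_statistics;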
- Buffer pools
- Default pool - where blocks are normally cached. (only pool by default)
- Keep pool - To keep frequently accessed blocks that would have to be aged out for lack of space. Pinned blocks stay here.
- Recycle pool - for infrequently used blocks. Full table scan blocks go here.
- Non-default block size pools - tablespaces with non-default block sizes have their blocks stored in size-specific pools (a sketch of pool assignment follows below).
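A sketch of steering objects into the non-default pools; the table names are hypothetical, and the KEEP/RECYCLE pools must be sized first via DB_KEEP_CACHE_SIZE and DB_RECYCLE_CACHE_SIZE:

SQL> alter system set db_keep_cache_size=64M;
SQL> alter system set db_recycle_cache_size=32M;
-- assign objects to the pools (hypothetical tables):
SQL> alter table hr.lookup_codes storage (buffer_pool keep);
SQL> alter table hr.audit_log storage (buffer_pool recycle);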
Log Buffer
- LGWR writes redo sequentially to disk.
- DBWn performs scattered writes of data blocks to disk.
- The LOG_BUFFER parameter specifies the amount of memory used for the redo log buffer (a quick sizing check is sketched below).
- This value is taken out of the area specified by SGA_TARGET.
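A sketch of the sizing check: if 'redo buffer allocation retries' keeps growing, sessions are waiting for space in the log buffer and LOG_BUFFER may be too small:

SQL> select name, value
     from v$sysstat
     where name = 'redo buffer allocation retries';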
Shared Pool
- Stores parsed SQL, PL/SQL code, system parameters and data dictionary information
- Library Cache - stores executable SQL, parsed PL/SQL code, locks, library cache handles, and private SQL areas (in shared server environments).
- Library cache hit (soft parse): occurs when a parsed representation of a SQL statement exists in the library cache and is reused.
- Library cache miss (hard parse): the parsed representation is not found in the library cache and has to be recreated.
- Use ALTER SYSTEM FLUSH SHARED_POOL to manually remove all information in the shared pool.
- Use the RESULT_CACHE hint to indicate that the results of a SQL query should be stored in the result cache (see the sketch below).
- The RESULT_CACHE_MODE parameter specifies whether the SQL query result cache is used for all queries or only for annotated queries.
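A minimal sketch of the hint in use (hr.employees as the sample table):

SQL> select /*+ RESULT_CACHE */ department_id, count(*)
     from hr.employees
     group by department_id;

-- cached result sets can be inspected in V$RESULT_CACHE_OBJECTS:
SQL> select name, type, status from v$result_cache_objects;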
Large Pool
- Optional area intended for large memory allocations (e.g., buffers for RMAN I/O slaves)
Java Pool
- Stores all session-specific Java code and data within the Java Virtual Machine (JVM).
- It includes Java objects that are migrated to the Java session space at end-of-call.
- Includes the shared part of each Java class (methods and read-only memory such as code vectors) (dedicated server connections only).
Streams Pool
- Stores buffered queue messages and provides memory for Oracle Streams capture and apply processes
Labels: Administration, Memory, Oracle