11g

diff-hidden parameter between 10g and 11g

April 16, 2012 11g, oracle No comments

Reposting an article I came across by Mr. Liu:

Reference:http://www.oracledatabase12g.com/archives/diff-hidden-parameter-between-10g-and-11g.html

Create a database link with the new host:port/service syntax

March 8, 2012 11g, maintain, oracle 1 comment

Create a database link with the new host:port/service syntax

Submitted by tanelpoder on Mon, 2012-03-05 15:57

I just noticed that (finally) in 11.2 this syntax is supported:

SQL> CREATE DATABASE LINK demo_x2
2 CONNECT TO tanel IDENTIFIED BY password
3 USING 'exadb03:1521/DEMO';
Database link created.
This just makes life a bit easier as there’s no need to use the long TNS format entry (or a tnsnames.ora/LDAP alias). It might work in 11.1 too (haven’t tested) but it didn’t work on 10.2.0.4 …

Update: This feature works for dblinks in 10.2 onwards – when I tested it on my 10.2, I got an error initially, but it was because the hostname I used didn’t resolve to an IP. Thanks to Randolf Geist for pointing this out.

In case you didn’t know, SQL*Plus has supported this easy connect method since 10g:

tanel@mac02:~$ sqlplus tanel/password@exadb03/DEMO
SQL*Plus: Release 10.2.0.4.0 – Production on Mon Mar 5 09:51:27 2012
Copyright (c) 1982, 2007, Oracle. All Rights Reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 – 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options
SQL>
If you omit the port from the syntax, it defaults to 1521.

e.g. my own test:

SQL> create database link test11g connect to user_data2 identified by xxxxxx using '10.0.0.42:1521/user';

Database link created.

SQL> select * from global_name@test11g;

GLOBAL_NAME
——————————————————————————–
USER
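Since the port defaults to 1521, the same link could also be created without it. A hypothetical variant of the test above (the link name is illustrative, and this exact form was not run here):

```sql
-- Port omitted: the listener is assumed to be on the default port 1521
create database link test11g_noport
connect to user_data2 identified by xxxxxx
using '10.0.0.42/user';
```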

11g new feature Index Invisible

February 15, 2012 11g, oracle No comments

11g gives us the Invisible Index feature, which lets you easily take an index out of use. Compared with the old approach of marking an index UNUSABLE, the biggest improvement is that an UNUSABLE index is no longer maintained as the table's data changes, so bringing it back into service requires a rebuild or similar operation. For a large OLTP system that is a serious challenge; an invisible index has no such problem.

A quick test:

SQL> create table t_test1 as select * from all_objects;

Table created.

SQL> create index t_test1_idx on t_test1(object_name)invisible ;

Index created.

SQL> analyze table t_test1 compute statistics for table for all indexes;

Table analyzed.

SQL> set autot trace exp

SQL> select * from t_test1 where object_name like 'T_%';

Execution Plan
———————————————————-
Plan hash value: 1883417357

—————————————————————————–
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
—————————————————————————–
| 0 | SELECT STATEMENT | | 1805 | 170K| 137 (1)| 00:00:02 |
|* 1 | TABLE ACCESS FULL| T_TEST1 | 1805 | 170K| 137 (1)| 00:00:02 |
—————————————————————————–

Predicate Information (identified by operation id):
—————————————————

1 - filter("OBJECT_NAME" LIKE 'T_%')

SQL>

At this point the CBO chose a full table scan.

SQL> alter index t_test1_idx visible;

Index altered.

The index is now visible.

SQL> select * from t_test1 where object_name like 'T_%';

Execution Plan
———————————————————-
Plan hash value: 3466041839

——————————————————————————————-
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
——————————————————————————————-
| 0 | SELECT STATEMENT | | 1805 | 170K| 135 (0)| 00:00:02 |
| 1 | TABLE ACCESS BY INDEX ROWID| T_TEST1 | 1805 | 170K| 135 (0)| 00:00:02 |
|* 2 | INDEX RANGE SCAN | T_TEST1_IDX | 325 | | 3 (0)| 00:00:01 |
——————————————————————————————-

Predicate Information (identified by operation id):
—————————————————

2 - access("OBJECT_NAME" LIKE 'T_%')
filter("OBJECT_NAME" LIKE 'T_%')

There is another way to get the CBO to use the index:

SQL> alter index t_test1_idx invisible;

Index altered.

SQL> alter session set optimizer_use_invisible_indexes=true;

Session altered.

SQL> select * from t_test1 where object_name like 'T_%';

Execution Plan
———————————————————-
Plan hash value: 3466041839

——————————————————————————————-
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
——————————————————————————————-
| 0 | SELECT STATEMENT | | 1805 | 170K| 135 (0)| 00:00:02 |
| 1 | TABLE ACCESS BY INDEX ROWID| T_TEST1 | 1805 | 170K| 135 (0)| 00:00:02 |
|* 2 | INDEX RANGE SCAN | T_TEST1_IDX | 325 | | 3 (0)| 00:00:01 |
——————————————————————————————-

Predicate Information (identified by operation id):
—————————————————

2 - access("OBJECT_NAME" LIKE 'T_%')
filter("OBJECT_NAME" LIKE 'T_%')

Changing optimizer_use_invisible_indexes can likewise force the use of an invisible index. Note also that while an index is invisible, even an explicit index hint has no effect (unless this parameter is set to true).
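The hint behavior is easy to verify with the objects from the test above (a sketch; with the index invisible and the parameter at its default of false, the plan stays a full table scan):

```sql
alter index t_test1_idx invisible;
alter session set optimizer_use_invisible_indexes = false;

-- Even an explicit index hint cannot reach an invisible index:
select /*+ index(t t_test1_idx) */ *
from   t_test1 t
where  object_name like 'T_%';
```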


Beginning with Release 11g, you can create invisible indexes. An invisible index is an index that is ignored by the optimizer unless you explicitly set the OPTIMIZER_USE_INVISIBLE_INDEXES initialization parameter to TRUE at the session or system level. Making an index invisible is an alternative to making it unusable or dropping it. Using invisible indexes, you can do the following:

Test the removal of an index before dropping it.

Use temporary index structures for certain operations or modules of an application without affecting the overall application.

Unlike unusable indexes, an invisible index is maintained during DML statements.
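The first use case above (testing the removal of an index before dropping it) can be sketched like this, reusing the index from the demo:

```sql
alter index t_test1_idx invisible;  -- optimizer ignores it, but DML still maintains it

-- ... run the representative workload and watch for plan regressions ...

alter index t_test1_idx visible;    -- instant undo if something regressed
-- drop index t_test1_idx;          -- or drop it for good once proven unnecessary
```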

11g nested loop faster than 10g

February 3, 2012 11g, oracle No comments

I saw a test at Pythian: does it prove Oracle's official claim that 11g is x% faster than 10g?

First, you have to create and fill a table to run your query :

create table gark
(id1 number not null,
id2 number not null,
id3 number not null);

begin
for i in 1..100000 loop
insert into gark(id1, id2, id3)
values (i, i, i);
end loop;
commit;
end;
/

create unique index gark_idx on gark(id1, id3);

begin
dbms_stats.gather_table_stats(
user,
'GARK',
cascade=>true,
estimate_percent=>100,
method_opt=>'FOR ALL COLUMNS SIZE 254',
no_invalidate=>false);
end;
/
Then, run the query below on a 10g database :

set autotrace traceonly

select /*+ use_nl(A B) */count(a.id3)
from gark a, gark b
where a.id1=b.id2;

Execution Plan
—————————-
Plan hash value: 3137705415

——————————————————–
| Id | Operation | Name | Rows | Cost |
——————————————————–
| 0 | SELECT STATEMENT | | 1 | 100K |
| 1 | SORT AGGREGATE | | 1 | |
| 2 | NESTED LOOPS | | 100K| 100K |
| 3 | TABLE ACCESS FULL| GARK | 100K| 65 |
|* 4 | INDEX RANGE SCAN | GARK_IDX | 1 | 1 |
——————————————————–

Predicate Information (identified by operation id):
—————————————————

4 - access("A"."ID1"="B"."ID2")

Statistics
———————————————————-
0 recursive calls
0 db block gets
100556 consistent gets
0 physical reads
0 redo size
415 bytes sent via SQL*Net to client
385 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed
Run the same query on 11g (there is no trick here, the plan is the right plan; I’ve checked it with dbms_xplan.display_cursor):

set autotrace traceonly

select /*+ use_nl(A B) */count(a.id3)
from gark a, gark b
where a.id1=b.id2;

Execution Plan
———————————————————-
Plan hash value: 3137705415

——————————————————-
| Id | Operation | Name | Rows | Cost |
——————————————————-
| 0 | SELECT STATEMENT | | 1 | 100K |
| 1 | SORT AGGREGATE | | 1 | |
| 2 | NESTED LOOPS | | 100K| 100K |
| 3 | TABLE ACCESS FULL| GARK | 100K| 105 |
|* 4 | INDEX RANGE SCAN | GARK_IDX | 1 | 11 |
——————————————————-

Predicate Information (identified by operation id):
—————————————————

4 - access("A"."ID1"="B"."ID2")

Statistics
———————————————————-
0 recursive calls
0 db block gets
3373 consistent gets
0 physical reads
0 redo size
422 bytes sent via SQL*Net to client
415 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed
If you know how the NL algorithm works in 10g, it’s quite easy to understand how it has been modified. The good news is that this won’t only speed up queries with optimal plans, but will also speed up queries with suboptimal plans that use nested loops.

What is the algorithm change being referred to here? Searching the Oracle documentation, we find the difference between nested loops in 11g and 10g:


Original and New Implementation for Nested Loop Joins

Oracle Database 11g Release 1 (11.1) introduces a new implementation for nested loop joins. As a result, execution plans that include nested loops might appear different than they did in previous releases of Oracle Database. Both the new implementation and the original implementation for nested loop joins are possible in Oracle Database 11g Release 1 (11.1). So, when analyzing execution plans, it is important to understand that the number of NESTED LOOPS join row sources might be different.

Original Implementation for Nested Loop Joins

Consider the following query:

SELECT e.first_name, e.last_name, e.salary, d.department_name
FROM hr.employees e, hr.departments d
WHERE d.department_name IN ('Marketing', 'Sales')
AND e.department_id = d.department_id;
Before Oracle Database 11g Release 1 (11.1), the execution plan for this query might appear similar to the following execution plan:

————————————————————————————————-
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
————————————————————————————————-
| 0 | SELECT STATEMENT | | 19 | 722 | 3 (0)| 00:00:01 |
| 1 | TABLE ACCESS BY INDEX ROWID| EMPLOYEES | 10 | 220 | 1 (0)| 00:00:01 |
| 2 | NESTED LOOPS | | 19 | 722 | 3 (0)| 00:00:01 |
|* 3 | TABLE ACCESS FULL | DEPARTMENTS | 2 | 32 | 2 (0)| 00:00:01 |
|* 4 | INDEX RANGE SCAN | EMP_DEPARTMENT_IX | 10 | | 0 (0)| 00:00:01 |
————————————————————————————————-

Predicate Information (identified by operation id):
—————————————————
3 - filter("D"."DEPARTMENT_NAME"='Marketing' OR "D"."DEPARTMENT_NAME"='Sales')
4 - access("E"."DEPARTMENT_ID"="D"."DEPARTMENT_ID")
In this example, the outer side of the join consists of a scan of the hr.departments table that returns the rows that match the condition department_name IN (‘Marketing’, ‘Sales’). The inner loop retrieves the employees in the hr.employees table that are associated with those departments.

New Implementation for Nested Loop Joins
Oracle Database 11g Release 1 (11.1) introduces a new implementation for nested loop joins to reduce overall latency for physical I/O. When an index or a table block is not in the buffer cache and is needed to process the join, a physical I/O is required. In Oracle Database 11g Release 1 (11.1), Oracle Database can batch multiple physical I/O requests and process them using a vector I/O instead of processing them one at a time. As part of the new implementation for nested loop joins, two NESTED LOOPS join row sources might appear in the execution plan where only one would have appeared in prior releases. In such cases, Oracle Database allocates one NESTED LOOPS join row source to join the values from the table on the outer side of the join with the index on the inner side. A second row source is allocated to join the result of the first join, which includes the rowids stored in the index, with the table on the inner side of the join.

Consider the query in “Original Implementation for Nested Loop Joins”. In Oracle Database 11g Release 1 (11.1), with the new implementation for nested loop joins, the execution plan for this query might appear similar to the following execution plan:

————————————————————————————————
| Id | Operation | Name | Rows | Bytes | Cost(%CPU)| Time |
————————————————————————————————-
| 0 | SELECT STATEMENT | | 19 | 722 | 3 (0)| 00:00:01 |
| 1 | NESTED LOOPS | | | | | |
| 2 | NESTED LOOPS | | 19 | 722 | 3 (0)| 00:00:01 |
|* 3 | TABLE ACCESS FULL | DEPARTMENTS | 2 | 32 | 2 (0)| 00:00:01 |
|* 4 | INDEX RANGE SCAN | EMP_DEPARTMENT_IX | 10 | | 0 (0)| 00:00:01 |
| 5 | TABLE ACCESS BY INDEX ROWID| EMPLOYEES | 10 | 220 | 1 (0)| 00:00:01 |
————————————————————————————————-

Predicate Information (identified by operation id):
—————————————————
3 - filter("D"."DEPARTMENT_NAME"='Marketing' OR "D"."DEPARTMENT_NAME"='Sales')
4 - access("E"."DEPARTMENT_ID"="D"."DEPARTMENT_ID")
In this case, the rows from the hr.departments table constitute the outer side of the first join. The inner side of the first join is the index emp_department_ix. The results of the first join constitute the outer side of the second join, which has the hr.employees table as its inner side.

There are cases where a second join row source is not allocated, and the execution plan looks the same as it did in prior releases. The following list describes such cases:

All of the columns needed from the inner side of the join are present in the index, and there is no table access required. In this case, Oracle Database allocates only one join row source.

The order of the rows returned might be different than it was in previous releases. Hence, when Oracle Database tries to preserve a specific ordering of the rows, for example to eliminate the need for an ORDER BY sort, Oracle Database might use the original implementation for nested loop joins.

The OPTIMIZER_FEATURES_ENABLE initialization parameter is set to a release before Oracle Database 11g Release 1 (11.1). In this case, Oracle Database uses the original implementation for nested loop joins.


In 11g, Oracle uses the collected rowids to fetch the data from disk in batches, which is far faster than the 10g nested loop behavior.

Back to the test above: in 11g Oracle fetches the data differently ("Oracle Database can batch multiple physical I/O requests and process them using a vector I/O instead of processing them one at a time"), replacing the old one-row-at-a-time reads with batched physical I/O.
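One way to tell which implementation a given cursor used is the row-source shape of its actual plan: two NESTED LOOPS operators indicate the new batching form, a single one the original form. A sketch (the SQL_ID is a placeholder to be taken from v$sql):

```sql
-- Replace &sql_id with the statement's real SQL_ID
select * from table(dbms_xplan.display_cursor('&sql_id', null, 'BASIC'));
```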

11g partition new features

February 3, 2012 11g, oracle No comments

A summary of the new partitioning features in 11g:
11G Introduced partition extensions:
-Interval partitioning
-REF partitioning
-More Composite Partitioning
-Virtual Column-based partitioning
-System Partitioning
-Introduced Partition Advisor.


Partition Types
Let us discuss each of the above features briefly:

Range partitioning: The data is distributed based on a range of values of the partitioning key. For example, if we choose a date column as the partitioning key, the partition “JAN-2007” will contain all the rows that have the partition key values between 01-JAN-2007 and 31-JAN-2007 (assuming the range of the partition is from first of the month to the last date in the month).

Hash Partitioning: A hash algorithm is applied to the partitioning key to determine the partition for a given row. This provides I/O balancing, but cannot be used for range or inequality queries.

List Partitioning: The data distribution is defined by a list of values of the partitioning key. This is useful for discrete lists. e.g: Regions, States etc.

Composite Partitioning: A combination of 2 data distribution methods are used to create a composite partition. The table is initially partitioned by the first data distribution method and then each partition is sub-partitioned by the second data distribution method. The following composite partitions are available:
Range-Hash, Range-List, Range-Range, List-Range, List-List and List-Hash.

Index-organized tables (tables where index and data are stored together) support partitioning by Range, List, or Hash in 10g. However, composite partitioning is not supported on index-organized tables.

1.Interval Partitioning: Introduced in 11g, interval partitions are extensions to range partitioning. These provide automation for equi-sized range partitions. Partitions are created as metadata and only the start partition is made persistent. The additional segments are allocated as the data arrives. The additional partitions and local indexes are automatically created.


CREATE TABLE SALES_PART
(TIME_ID NUMBER,
REGION_ID NUMBER,
ORDER_ID NUMBER,
ORDER_DATE DATE,
SALES_QTY NUMBER(10,2),
SALES_AMOUNT NUMBER(12,2)
)
PARTITION BY RANGE (ORDER_DATE)
INTERVAL (NUMTOYMINTERVAL(1,'MONTH')) STORE IN (x,y)
(PARTITION p_first VALUES LESS THAN ('01-JAN-2006'));

The NUMTOYMINTERVAL function converts a number to an INTERVAL YEAR TO MONTH literal (unit 'YEAR' or 'MONTH').
The NUMTODSINTERVAL function converts a number to an INTERVAL DAY TO SECOND literal (unit 'DAY', 'HOUR', 'MINUTE' or 'SECOND').
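For example, NUMTODSINTERVAL gives day-based intervals; a hypothetical daily variant of the table above might look like this (table and partition names are illustrative):

```sql
-- One new partition is created automatically per day of arriving data
CREATE TABLE SALES_PART_DAILY
(ORDER_DATE DATE,
SALES_AMOUNT NUMBER(12,2)
)
PARTITION BY RANGE (ORDER_DATE)
INTERVAL (NUMTODSINTERVAL(1,'DAY'))
(PARTITION p_first VALUES LESS THAN (TO_DATE('2006-01-01','YYYY-MM-DD')));
```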
I have not yet seen detailed documentation on how Oracle controls the attributes of the automatically created partitions, such as how they are distributed across tablespaces, their sizing, and so on.

2.REF Partitioning: This partitioning scheme has been introduced with the assumption that related tables would benefit from same partitioning strategy. The detail table inherits the partitioning strategy of the master table through PK-FK relationship. There is no need for the partitioning key to be stored in the detail table and by specifying “PARTITION BY REFERENCE” keyword, the detail table inherits the partitioning strategy of the master table.


CREATE TABLE orders
( order_id NUMBER(12),
order_date DATE,
order_mode VARCHAR2(8),
customer_id NUMBER(6),
CONSTRAINT orders_pk PRIMARY KEY(order_id)
)
PARTITION BY RANGE(order_date)
(PARTITION Q1_2005 VALUES LESS THAN
(TO_DATE('2005-4-1','yyyy-mm-dd')),
PARTITION Q2_2005 VALUES LESS THAN
(TO_DATE('2005-7-1','yyyy-mm-dd')),
PARTITION Q3_2005 VALUES LESS THAN
(TO_DATE('2005-10-1','yyyy-mm-dd')),
PARTITION Q4_2005 VALUES LESS THAN
(TO_DATE('2006-1-1','yyyy-mm-dd')))

CREATE TABLE order_items
( order_id NUMBER(12) NOT NULL,
line_item_id NUMBER(3) NOT NULL,
product_id NUMBER(6) NOT NULL,
unit_price NUMBER(8,2),
quantity NUMBER(8),
CONSTRAINT order_items_fk
FOREIGN KEY(order_id) REFERENCES orders(order_id)
)
PARTITION BY REFERENCE(order_items_fk);


When the parent table's partitioning changes, the child table's partitions adapt automatically; modifying the child table's partitions on their own is not allowed.

3.Virtual Column Based partitioning: In the previous versions of Oracle, a table could be partitioned only if the partition key physically existed in the table. The new functionality in Oracle 11G, “Virtual columns”, removes this restriction and allows partitioning key to be defined by an expression that uses one or more columns of a table. The virtual columns are stored as metadata only.

Borrowing an example from a fellow blogger:


create table sales
(
sales_id number,
cust_id number,
sales_amt number,
sale_category varchar2(6)
generated always as
(
case
  when sales_amt <= 10000 then 'LOW'
  when sales_amt > 10000 and sales_amt <= 100000 then
    case
      when cust_id < 101 then 'LOW'
      when cust_id between 101 and 200 then 'MEDIUM'
      else 'MEDIUM'
    end
  when sales_amt > 100000 and sales_amt <= 1000000 then
    case
      when cust_id < 101 then 'MEDIUM'
      when cust_id between 101 and 200 then 'HIGH'
      else 'ULTRA'
    end
  else 'ULTRA'
end
) virtual
)
partition by list (sale_category)
( partition p_low values ('LOW'),
  partition p_medium values ('MEDIUM'),
  partition p_high values ('HIGH'),
  partition p_ultra values ('ULTRA')
);

Table created.
SQL> insert into sales (sales_id,cust_id,sales_amt) values (1,1,100);

1 row created.

SQL> commit;

Commit complete.

SQL> select * from sales partition (p_low);

SALES_ID CUST_ID SALES_AMT SALE_C
———- ———- ———- ——
1 1 100 LOW

4.System Partitioning

CREATE TABLE syspar (c1 int, c2 int)
PARTITION BY SYSTEM
(
PARTITION p1 TABLESPACE tbs_1,
PARTITION p2 TABLESPACE tbs_2,
PARTITION p3 TABLESPACE tbs_3,
PARTITION p4 TABLESPACE tbs_4
);


The target partition is chosen by the SQL statement itself, like: insert into syspar partition (x) values (1,2); This type of partitioning supports neither partition split operations nor create table as select.

5.More Composite Partitioning

Range, List, and Interval can now all serve as the top-level partitioning method, and the second level can be Range, List, or Hash, so 11g offers 3*3=9 composite partitioning combinations to meet a wider range of business needs.
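As a sketch of one of the new combinations, a List-Range composite table (all names and boundaries are illustrative):

```sql
CREATE TABLE regional_orders
( region     VARCHAR2(10),
  order_date DATE,
  amount     NUMBER
)
PARTITION BY LIST (region)
SUBPARTITION BY RANGE (order_date)
( PARTITION p_east VALUES ('EAST')
  ( SUBPARTITION p_east_2011 VALUES LESS THAN (TO_DATE('2012-01-01','YYYY-MM-DD')),
    SUBPARTITION p_east_2012 VALUES LESS THAN (TO_DATE('2013-01-01','YYYY-MM-DD'))
  ),
  PARTITION p_west VALUES ('WEST')
  ( SUBPARTITION p_west_all VALUES LESS THAN (MAXVALUE) )
);
```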

6.Introduced Partition Advisor

The Partition Advisor is part of the SQL Access Advisor of EM in 11g . The Partition Advisor can recommend a partitioning strategy for a table based on a supplied workload of SQL statements which can be supplied by the SQL Cache, a SQL Tuning set, or be defined by the user.
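Outside EM, the same advisor can be driven from PL/SQL; a minimal sketch (the task name and statement are placeholders, and DBMS_ADVISOR.QUICK_TUNE analyzes a single supplied statement):

```sql
BEGIN
  DBMS_ADVISOR.QUICK_TUNE(
    DBMS_ADVISOR.SQLACCESS_ADVISOR,   -- the SQL Access Advisor
    'part_advice_task',               -- arbitrary task name
    'SELECT * FROM sales WHERE order_date > SYSDATE - 30');
END;
/
-- Recommendations appear in DBA_ADVISOR_RECOMMENDATIONS / DBA_ADVISOR_ACTIONS
```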

11g Oradebug Infrastructure (Internal only)

January 2, 2012 11g, Internals, oracle No comments

11g strengthens oradebug: the oradebug doc command now shows the internal trace commands. Below is a brief walk-through of each command's output. Since this is undocumented, much of it remains to be explored.


Contents
ORADEBUG DOC
ORADEBUG DOC EVENT
ORADEBUG DOC EVENT NAME
ORADEBUG DOC EVENT NAME <event_name>
ORADEBUG DOC EVENT SCOPE
ORADEBUG DOC EVENT SCOPE SQL
ORADEBUG DOC EVENT FILTER
ORADEBUG DOC EVENT ACTION
ORADEBUG DOC COMPONENT


Refer to the SQL*Plus User’s Guide and Reference for more information.
[oracle@testdb ~]$ sqlplus '/as sysdba'

SQL*Plus: Release 11.2.0.2.0 Production on Mon Jan 2 15:04:43 2012

Copyright (c) 1982, 2010, Oracle. All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 – 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

ORADEBUG DOC:

SQL> oradebug doc

Internal Documentation
**********************

EVENT                Help on events (syntax, event list, ...)
COMPONENT [<name>]   List all components or describe <name>

ORADEBUG DOC EVENT:

SQL> oradebug doc event

Event Help:
***********

Formal Event Syntax
--------------------
<event_spec>   ::= '<event_id> [<event_scope>]
                    [<event_filter_list>]
                    [<event_parameters>]
                    [<action_list>]
                    [off]'

<event_id>     ::= <event_name | event_number> [<event_target>]

<event_scope>  ::= [<scope_name>: scope_parameters]

<event_filter> ::= {<filter_name>: filter_parameters}

<action>       ::= <action_name>(action_parameters)

<*_parameters> ::= <parameter_name> = <value> [, <*_parameters>]

Some Examples
————-
* Set event 10235 level 1:
alter session set events '10235';

* Set events SQL_TRACE (a.k.a. 10046) level 1:
alter session set events 'sql_trace';

* Turn off event SQL_TRACE:
alter session set events 'sql_trace off';

* Set events SQL_TRACE with parameter <plan_stat> set to 'never'
and parameter <wait> set to 'true':
alter session set events 'sql_trace wait=true, plan_stat=never';

* Trace in-memory the SQL_MONITOR component (the target) and all its
sub-components at level high. Get high resolution time for each
trace:
alter session set events 'trace[sql_mon.*] memory=high,
get_time=highres';

* On-disk trace PX servers p000 and p005 for components 'sql_mon'
and 'sql_optimizer' (including sub-components) at level highest:
alter system set events 'trace[sql_mon | sql_optimizer.*]
{process: pname = p000 | p005}';

* Same as above but only when SQL id '7ujay4u33g337' is executed:
alter system set events 'trace[sql_mon | sql_optimizer.*]
[sql: 7ujay4u33g337]
{process: pname = p000 | p005}';

* Execute an action immediately by using 'immediate' for the event
name:
alter session set events 'immediate eventdump(system)'

* Create an incident labeled 'table_missing' when external error
942 is signaled by process id 14534:
alter session set events '942 {process: 14534}
incident(table_missing)';

Notes
-----
* Implicit parameter <level> is 1 by default
e.g. '10053' is same as '10053 level 1'

* Event target (see [<event_target>] construct) is only
supported by specific events like the TRACE[] event

* <event_scope> and/or <event_filter> are constructs
that can be used for any event

* Same event can be set simultaneously for a different scope or
target but not for different filters.

* '|' character can be used to select multiple targets, scope or
filters.

E.g. 'sql_trace [sql: sql_id=g3yc1js3g2689 | sql_id=7ujay4u33g337]'

* '=' sign is optional in <*_parameters>

E.g. 'sql_trace level 12';

* Like PL/SQL, no need to specify the parameter name for target,
scope, filters and action. Resolution is done by position in
that case:

E.g. 'sql_trace [sql: g3yc1js3g2689 | 7ujay4u33g337]'

Help sub-topics
—————

NAME   [<event_name>]    List all events or describe <event_name>
SCOPE  [<scope_name>]    List all scopes or describe <scope_name>
FILTER [<filter_name>]   List all filters or describe <filter_name>
ACTION [<action_name>]   List all actions or describe <action_name>

ORADEBUG DOC EVENT NAME:

SQL> oradebug doc event name

Events in library DIAG:
——————————
trace[] Main event to control UTS tracing
disable_dde_action[] Event used by DDE to disable actions
ams_trace[] Event to dump ams performance trace records
ams_rowsrc_trace[] Event to dump ams row source tracing
sweep_verification Event to enable sweep file verification
enable_xml_inc_staging Event to enable xml incident staging format

Events in library RDBMS:
——————————
alert_text event for textual alerts
trace_recursive event to force tracing recursive SQL statements
clientid_overwrite event to overwrite client_identifier when client_info is set
sql_monitor event to force monitoring SQL statements
eventsync_tac Event posted from events syncing tac
sql_trace event for sql trace
pmon_startup startup of pmon process
background_startup startup of background processes
db_open_begin start of db open operation
test_gvtf test GV$() Table Function

Events in library GENERIC:
——————————
kg_event[] Support old error number events (use err# for short)

Events in library CLIENT:
——————————
oci_trace event for oci trace

Events in library LIBCELL:
——————————
libcell_stat libcell statistics level specification
cellclnt_skgxp_trc_ops Controls to trace SKGXP operations
cellclnt_ossnet_trc Controls to trace IP affinity in ossnet
cellclnt_high_lat_ops Control to trace High-latency I/O operations

Events in library ADVCMP:
——————————
arch_comp_level[] arch_comp_level[]
ccmp_debug columnar compression debug event
ccmp_align columnar compression enable alignment
ccmp_countstar columnar compression enable count(*) optimization
ccmp_dumpunaligned columnar compression dump dbas of unaligned CUs

ORADEBUG DOC EVENT NAME <event_name>

SQL> ORADEBUG DOC EVENT NAME sql_trace

sql_trace: event for sql trace

Usage
——-
sql_trace
wait < false | true >,
bind < false | true >,
plan_stat < never | first_execution | all_executions | adaptive >,
level

SQL> ORADEBUG DOC EVENT NAME db_open_begin

db_open_begin: start of db open operation

Usage
——-
db_open_begin
delay

ORADEBUG DOC EVENT SCOPE:

SQL> oradebug doc event scope

Event scopes in library RDBMS:
——————————
SQL[<sql_id>] sql scope for RDBMS

ORADEBUG DOC EVENT SCOPE SQL:

SQL> oradebug doc event scope sql

SQL: sql scope for RDBMS

Usage
——-
[SQL: sql_id <sql_id>]

ORADEBUG DOC EVENT FILTER

SQL> ORADEBUG DOC EVENT FILTER

Event filters in library DIAG:
——————————
occurence filter to implement counting for event checks
callstack filter to only fire an event when a function is on the stack
tag filter to only fire an event when a tag is set

Event filters in library RDBMS:
——————————
process filter to set events only for a specific process
pgadep filter to only fire an event when the pgadep matches a given value or falls within a range

Event filters in library GENERIC:
——————————
errarg filter to set error events only for a specific error argument

ORADEBUG DOC EVENT ACTION

SQL> ORADEBUG DOC EVENT ACTION

Actions in library DIAG:
—————————
dumpFrameContext – Dump Frame Context contents
dumpBuckets
kgsfdmp
dumpDiagCtx
dumpDbgecPopLoc
dumpDbgecMarks
dumpGenralConfiguration
dumpADRLockTable
act1
action1
action2
UTDumpGC
dbgvci_action_signal_crash

Actions in library RDBMS:
—————————
incident – Create an Incident
sqlmon_dump – SQL Monitor Dump SGA Action
flashfreeze
oradebug – debug process using ORADEBUG
debugger – debug process using System Debugger
debug
– alias for ‘debugger’ – debug process using System Debugger
crash – crash the instance
eventdump – list events that are set in the group
kdlut_bucketdump_action
kzxt_dump_action
dumpKernelDiagState
HMCHECK (async)
DATA_BLOCK_INTEGRITY_CHECK (async)
CF_BLOCK_INTEGRITY_CHECK (async)
DB_STRUCTURE_INTEGRITY_CHECK (async)
REDO_INTEGRITY_CHECK (async)
TRANSACTION_INTEGRITY_CHECK (async)
SQL_TESTCASE_REC (async)
SQL_TESTCASE_REC_DATA (async)
ORA_12751_DUMP
sqladv_dump_dumpctx
ORA_4030_DUMP
– dump summary of PGA memory usage, largest allocations
HNGDET_MEM_USAGE_DUMP_NOARGS – dump hang detection memory usage
kcfis_action – kcfis actions
exadata_dump_modvers – Exadata dump module versions
QUERY_BLOCK_DUMP – Debug action for dumping a qbcdef tree
ASM_MOUNT_FAIL_CHECK (async)
ASM_ALLOC_FAIL_CHECK (async)
ASM_ADD_DISK_CHECK (async)
ASM_FILE_BUSY_CHECK (async)
KJZN_ASYNC_SYSTEM_STATE (async)
KSI_GET_TRACE – Get lmd0 traces for ksi issues
TRACE_BUFFER_ON – Allocate trace output buffer for ksdwrf()
TRACE_BUFFER_OFF
– Flush and deallocate trace output buffer for ksdwrf()
LATCHES – Dump Latches
XS_SESSION_STATE – Dump XS session state
PROCESSSTATE – Dump process state
SYSTEMSTATE – Dump system state
INSTANTIATIONSTATE – Dump instantiation state
CONTEXTAREA – Dump cursor context area
HEAPDUMP
– Dump memory heap (1-PGA, 2-SGA, 4-UGA, +1024-Content)
POKE_LENGTH – Set length before poking value
POKE_VALUE – Poke a value into memory
POKE_VALUE0 – Poke 0 value into memory
GLOBAL_AREA
– Dump fixed global area(s) (1=PGA/2=SGA/3=UGA, add +8 for pointer content)
REALFREEDUMP – Dump PGA real free memory allocator state
FLUSH_JAVA_POOL – Flush Java pool
PGA_DETAIL_GET
– Ask process to publish PGA detail info (level is pid)
PGA_DETAIL_DUMP
– Dump PGA detail information for process (level is pid)
PGA_DETAIL_CANCEL – Free PGA detail request (level is pid)
PGA_SUMMARY – Summary of PGA memory usage, largest allocations
MODIFIED_PARAMETERS – Dump parameters modifed by session (level unused)
ERRORSTACK
– Dump state (ksedmp). Use INCIDENT action to create incident
CALLSTACK – Dump call stack (level > 1 to dump args)
RECORD_CALLSTACK
– Record or dump call stack, level = #frames (level += 1000000 go to trc)
BG_MESSAGES – Dump routine for background messages
ENQUEUES
– Dump enqueues (level >=2 adds resources, >= 3 adds locks)
KSTDUMPCURPROC
– Dump current process trace buffer (1 for all events)
KSTDUMPALLPROCS
– Dump all processes trace buffers (1 for all events)
KSTDUMPALLPROCS_CLUSTER
– Dump all processes (cluster wide) trace buffers (1 for all events)
KSKDUMPTRACE – Dumping KSK KST tracing (no level)
DBSCHEDULER – Dump ressource manager state
LDAP_USER_DUMP – Dump LDAP user mode
LDAP_KERNEL_DUMP – Dump LDAP kernel mode
DUMP_ALL_OBJSTATS – Dump database objects statistics
DUMPGLOBALDATA – Rolling migration DUMP GLOBAL DATA
HANGANALYZE – Hang analyze
HANGANALYZE_PROC – Hang analyze current process
HANGANALYZE_GLOBAL – Hang analyze system
HNGDET_MEM_USAGE_DUMP – dump hang detection memory usage
GES_STATE – Dump DML state
OCR – OCR client side tracing
CSS – CSS client side tracing
CRS – CRS client side tracing
SYSTEMSTATE_GLOBAL – Perform cluster wide system state dump (via DIAG)
DUMP_ALL_COMP_GRANULE_ADDRS
– MMAN dump all granule addresses of all components (no level)
DUMP_ALL_COMP_GRANULES
– MMAN dump all granules of all components (1 for partial list)
DUMP_ALL_REQS
– MMAN dump all pending memory requests to alert log
DUMP_TRANSFER_OPS – MMAN dump transfer and resize operations history
DUMP_ADV_SNAPSHOTS
– MMAN dump all snapshots of advisories (level unused)
CONTROLF – Dump control file info
FLUSH_CACHE
– Flush buffer cache without shuting down the instance
BUFFERS – Dump all buffers in the buffer cache at level l
SET_TSN_P1
– Set tablespace # for buffer dump (level = ts# + 1)
BUFFER
– Dump all buffers for full relative dba at lvl 10
BC_SANITY_CHECK – Run buffer cache sanity check (level = 0xFF for full)
SET_NBLOCKS – Set number of blocks for range reuse checks
CHECK_ROREUSE_SANITY – Check range/object reuse sanity (level = ts#)
DUMP_PINNED_BUFFER_HISTORY – kcb Dump pinned buffers history (level = # buffers)
REDOLOGS – Dump all online logs according to the level
LOGHIST – Dump the log history (1: dump earliest/latest entries, >1: dump most recent 2**level entries)
REDOHDR – Dump redo log headers
LOCKS – Dump every lock element to the trace file
GC_ELEMENTS – Dump every lock element to the trace file
FILE_HDRS – Dump database file headers
KRB_TRACE – Set krb trace options
FBINC – Dump flashback logs of the current incarnation and all its ancestors.
FBHDR – Dump all the flashback logfile headers
FLASHBACK_GEN – Dump flashback generation state
KTPR_DEBUG – Parallel txn recovery (1: cleanup check, 2: dump ptr reco ctx, 3: dump recent smon runs)
DUMP_TEMP – Dump temp space management state (no level)
DROP_SEGMENTS – Drop unused temporary segments
TREEDUMP – Dump an index tree rooted at dba BLOCKDBA ()
KDLIDMP – Dump 11glob inodes states (level = what to dump)
ROW_CACHE – Dump all cache objects
LIBRARY_CACHE – Dump the library cache (level > 65535 => level = obj @)
CURSORDUMP – Dump session cursors
CURSOR_STATS – Dump all statistics information for cursors
SHARED_SERVER_STATE – Dump shared server state
LISTENER_REGISTRATION – Dump listener registration state
JAVAINFO – Dump Oracle Java VM
KXFPCLEARSTATS – Clear all Parallel Query messaging statistics
KXFPDUMPTRACE – Dump Parallel Query in-memory traces
KXFXSLAVESTATE – Dump PX slave state (1: uga; 2: current cursor state; 3: all cursors)
KXFXCURSORSTATE – Dump PX slave cursor state
WORKAREATAB_DUMP – Dump SQL Memory Manager workarea table
OBJECT_CACHE – Dump the object cache
SAVEPOINTS – Dump savepoints
RULESETDUMP – Dump rule set
FAILOVER – Set condition failover immediate
OLAP_DUMP – Dump OLAP state
AWR_FLUSH_TABLE_ON – Enable flush of table id (ids in X$KEWRTB)
AWR_FLUSH_TABLE_OFF – Disable flush of table id (ids in X$KEWRTB)
ASHDUMP – Dump ASH data (level = # of minutes)
ASHDUMPSECONDS – Dump ASH data (level = # of seconds)
HM_FW_TRACE – DIAG health monitor set tracing level
IR_FW_TRACE – DIAG intelligent repair set/clear trace
HEAPDUMP_ADDR – Heap dump by address routine (level > 1 dump content)
POKE_ADDRESS – Poke specified address (level = value)
CURSORTRACE – Trace cursor by hash value (hash value is address)
RULESETDUMP_ADDR – Dump rule set by address

Actions in library GENERIC:
—————————
xdb_dump_buckets
dumpKGERing – Dump contents of KGE ring buffer
dumpKGEState – Dump KGE state information for debugging

Actions in library CLIENT:
—————————
kpuActionDefault – dump OCI data
kpuActionSignalCrash – crash and produce a core dump (if supported and possible)
kpudpaActionDpapi – DataPump dump action

ORADEBUG DOC COMPONENT

SQL> ORADEBUG DOC COMPONENT

Components in library DIAG:
————————–
diag_uts Unified Tracing Service (dbgt, dbga)
uts_vw UTS viewer toolkit (dbgtp, dbgtn)
diag_adr Automatic Diagnostic Repository (dbgr)
ams_comp ADR Meta-data Repository (dbgrm)
ame_comp ADR Export/Import Services (dbgre)
ami_comp ADR Incident Meta-data Services (dbgri)
diag_ads Diagnostic Directory and File Services (dbgrf, sdbgrf, sdbgrfu, sdbgrfb)
diag_hm Diagnostic Health Monitor
diag_ips Diagnostic Incident Packaging System
diag_dde Diagnostic Data Extractor (dbge)
diag_fmwk Diagnostic Framework (dbgc)
diag_ilcts Diagnostic Inter-Library Compile-time Service (dbgf)
diag_attr Diagnostic Attributes Management
diag_comp Diagnostic Components Management
diag_testp Diagnostic component test parent (dbgt)
diag_testc1 Diagnostic component test child 1
diag_testc2 Diagnostic component test child 2
KGSD Kernel Generic Service Debugging (kgsd)
diag_events Diagnostic Events (dbgd)
diag_adl Diagnostic ARB Alert Log (dbgrl, dbgrlr)
diag_vwk Diagnostic viewer toolkit (dbgv)
diag_vwk_parser Diagnostic viewer parser (dbgvp, dbgvl)
diag_vwk_uts Diagnostic viewer for UTS traces and files (dbgvf)
diag_vwk_ams Diagnostic viewer for AMS metadata (dbgvm)
diag_vwk_ci Diagnostic viewer for command line (dbgvci)
kghsc KGHSC Compact Stream (kghsc)
dbgxtk DBGXTK xml toolkit (dbgxtk)

Components in library RDBMS:
————————–
SQL_Compiler SQL Compiler
SQL_Parser SQL Parser (qcs)
SQL_Semantic SQL Semantic Analysis (kkm)
SQL_Optimizer SQL Optimizer
SQL_Transform

SQL_MVRW SQL Materialized View Rewrite
SQL_VMerge SQL View Merging (kkqvm)
SQL_Virtual SQL Virtual Column (qksvc, kkfi)
SQL_APA SQL Access Path Analysis (apa)
SQL_Costing SQL Cost-based Analysis (kko, kke)
SQL_Parallel_Optimization SQL Parallel Optimization (kkopq)
SQL_Code_Generator SQL Code Generator (qka, qkn, qke, kkfd, qkx)
SQL_Parallel_Compilation SQL Parallel Compilation (kkfd)
SQL_Expression_Analysis SQL Expression Analysis (qke)
SQL_Plan_Management SQL Plan Managment (kkopm)
MPGE MPGE (qksctx)
SQL_Execution SQL Execution (qer, qes, kx, qee)
Parallel_Execution Parallel Execution (qerpx, qertq, kxfr, kxfx, kxfq, kxfp)
PX_Messaging Parallel Execution Messaging (kxfp)
PX_Group Parallel Execution Slave Group (kxfp)
PX_Affinity Parallel Affinity (ksxa)
PX_Buffer Parallel Execution Buffers (kxfpb)
PX_Granule Parallel Execution Granules (kxfr)
PX_Control Parallel Execution Control (kxfx)
PX_Table_Queue Parallel Execution Table Queues (kxfq)
PX_Scheduler Parallel Execution Scheduler (qerpx)
PX_Queuing Parallel Execution Queuing (kxfxq)
Bloom_Filter Bloom Filter (qerbl, qesbl)
PX_Blackbox Parallel Execution Blackbox (kxf)
PGA_Manage PGA Memory Management
PGA_Compile PGA Memory Compilation
PGA_IMM PGA Memory Instance Manage
PGA_CMM PGA Memory Cursor Manage
PGA_ADV PGA Memory Advisor
rdbms_dde RDBMS Diagnostic Data Extractor (dbke)
VOS VOS (ks)
hang_analysis Hang Analysis (ksdhng)
background_proc Background Processes (ksb, ksbt)
system_param System Parameters (ksp, kspt)
ksu Kernel Service User (ksu)
ksutac KSU Timeout Actions
ksv_trace Kernel Services Slave Management (ksv)
sql_mon SQL Monitor (keswx)
sql_mon_deamon SQL Monitor Deamon
sql_mon_query SQL Monitor Query
CACHE_RCV Cache Recovery (kcv, kct, kcra, kcrp, kcb)
DIRPATH_LOAD Direct Path Load (kl, kdbl, kpodp)
DIRPATH_LOAD_BIS Direct Path Kpodpbis Routine (kpodp)
RAC Real Application Clusters
GES Global Enqueue Service
GCS Global Cache Service (kjb)
GSIPC Global Enqueue/Cache Service IPC
KSI Kernel Service Instance locking (ksi)
RAC_ENQ Enqueue Operations
RAC_RCFG Reconfiguration
RAC_DRM Dynamic Remastering
RAC_MRDOM Multiple Recovery Domains
CGS Cluster Group Services (kjxg)
CGSIMR Instance Membership Recovery (kjxgr)
DD GES Deadlock Detection
GCS_BSCN Broadcast SCN (kjb, kcrfw)
RAC_WLM Work Load Management (wlm)
RAC_MLMDS RAC Multiple LMS (kjm)
GCS_READMOSTLY GCS Read-mostly (kjb)
GCS_READER_BYPASS GCS Reader Bypass (kjb)
GCS_DELTAPUSH GCS Delta Push (kjb)
db_trace RDBMS server only tracing
kst server trace layer tracing (kst)
ddedmp RDBMS Diagnostic Data Extractor Dumper (dbked)
cursor Shared Cursor (kxs, kks)
Bind_Capture Bind Capture Tracing
KSM Kernel Service Memory (ksm)
KSE Kernel Service Error Manager (kse)
explain SQL Explain Plan (xpl)
rdbms_event RDBMS Events (dbkd)
LOB_INODE Lob Inode (kdli)
rdbms_adr RDBMS ADR (dbkr)
ASM Automatic Storage Management (kf)
KFK KFK (kfk)
KFKIO KFK IO (kfkio)
KFKSB KFK subs (kfksubs)
KFN ASM Networking subsystem (kfn)
KFNU ASM Umbillicus (kfnm, kfns, kfnb)
KFNS ASM Server networking (kfns)
KFNC ASM Client networking (kfnc)
KFIS ASM Intelligent Storage interfaces (kfis)
KFM ASM Node Monitor Interface Implementation (kfm)
KFMD ASM Node Monitor Layer for Diskgroup Registration (kfmd)
KFMS ASM Node Monitor Layers Support Function Interface (kfms)
KFFB ASM Metadata Block (kffb)
KFFD ASM Metadata Directory (kffd)
KFZ ASM Zecurity subsystem (kfz)
KFC ASM Cache (kfc)
KFR ASM Recovery (kfr)
KFE ASM attributes (kfe)
KFDP ASM PST (kfdp)
KFG ASM diskgroups (kfg)
DML DML Drivers (ins, del, upd)
Health_Monitor Health Monitor
DRA Data Repair Advisor
DIRACC Direct access to fixed tables (kqfd)
ASH Active Session History (kewa)
PART Partitioning (kkpo, qespc, qesma, kkpa, qergi)
PART_IntPart Interval Partitioning
PART_Dictionary Partitioning Dictionary (kkpod)
LOB_KDLW Lob kdlw (kdlw)
LOB_KDLX Lob xfm (kdlx)
LOB_KDLXDUP Lob dedup (kdlxdup)
LOB_KDLRCI Lob rci (kdlrci)
LOB_KDLA SecureFile Archive (kdla)
SQL_Manage SQL Manageability (kes)
SQL_Manage_Infra Other SQL Manageability Infrastructure (kesai, kesqs, kesatm, kesutl, kessi, keswat, keswts, keswsq)
SQL_Tune SQL Tuning Advisor (kest)
SQL_Tune_Auto SQL Tuning Advisor (auto-tune) (kestsa)
SQL_Tune_Index SQL Tuning Advisor (index-tune) (kestsi)
SQL_Tune_Plan SQL Tuning Advisor (plan node analysis) (kestsp)
SQL_Tune_Px SQL Tuning Advisor (parallel execution) (kestsa)
SQL_Tune_Fr SQL Tuning Advisor (fix regression) (kestsa)
SQL_Test_Exec SQL Test-Execute Service (kestse)
SQL_Perf SQL Performance Analyzer (kesp, keswpi)
SQL_Repair SQL Repair Advisor (kesds)
Auto_Tune_Opt Auto Tuning Optimizer (kkoat)
SQL_trace_parser SQL trace parser (kesstp)
SQL_Analyze SQL Analyze (qksan)
SQL_DS SQL Dynamic Sampling Services (qksds)
SQL_DDL SQL DDL (atb, ctc, dtb)
RAT_WCR Real Application Test: Workload Capture and Replay (kec)
Spatial Spatial (md)
Spatial_IND Spatial Indexing (mdr)
Spatial_GR Spatial GeoRaster (mdgr)
Text Text (dr)
rdbms_gc RDBMS Diagnostic Generic Configuration (dbkgc)
XS XS Fusion Security (kzx)
XSSESSION XS Session (kzxs)
XSPRINCIPAL XS Principal (kzxu)
XSSECCLASS XS Security Class (kzxc)
XSXDS XS Data Security (kzxd)
XSVPD XS VPD
XSXDB_DEFAULT XS XDB
XS_MIDTIER XS Midtier (kpuzxs)
AQ Streams Advanced Queuing (kwq, kkcn, kpon, kpoaq, kpce, kpcm, kpun, kpuaq)
AQ_DEQ Streams Advanced Queuing Dequeue (kwqid, kwqdl)
AQ_TM Streams Advanced Queuing Time Manager (kwqit, kwqmn)
KSFM Kernel Service File Mapping (ksfm)
KXD Exadata specific Kernel modules (kxd)
KXDAM Exadata Disk Auto Manage (kxdam)
KCFIS Exadata Predicate Push (kcfis)

Components in library GENERIC:
————————–
Generic_VOS Generic VOS
VOS_Heap_Manager VOS Heap Manager
VOS_Latches VOS Latches
VOS_GST VOS Generic Stack Trace (kgds)
XML XML (qmxt, qmxq)
Generic_XDB Generic XDB
XDB_Repository XDB Repository (qme)
XDB_Protocols XDB Protocols (qmh, qmf, qmn)
XDB_Query XDB Query (qmxtra, qerxp)
XDB_XMLIndex XDB XMLIndex (qmxtri, qmixq)
XDB_Schema XDB Schema (qmxtri, qmixq)
XDB_XOB XDB XOB (qmx)
XDB_CSX XDB CSX (qmcxe, qmcxd)
XDB_Default XDB Default
LOB LOB (koll, kola)
LOB_Refcount LOB Refcount (kolr)
LOB_Default LOB Default (kole, kokl, koxs, kokla, koklm, koklv)
KGH KGH Memory Allocator (kgh)
KGF ASM Automatic Storage Management (kgf)
LIBCACHE LIBCACHE (kgl, kql)
OBJECTS OBJECTS
OBJECTS_DDL OBJECTS DDL (kokt)
OBJECTS_Types OBJECTS Types (kot, ko, ort)
OBJECTS_Images OBJECTS Images (koke, kot, kad)
OBJECTS_Anydata OBJECTS Anydata (kokla, kolo, kot, kad)
OBJECTS_Streams OBJECTS Streams (koxs)
OBJECTS_Dict OBJECTS Dictionary (kkdo, qcdo)
OBJECTS_Semanal OBJECTS Semantic Analysis (koks, qcso, qcto)
OBJECTS_Default OBJECTS Default

Components in library CLIENT:
————————–
Client_KPU Client KPU
KPU_Memory KPU Memory
KPU_TTC KPU TTC
KPU_Relational KPU Relational
KPU_Objects KPU Objects
KPU_LOBS KPU LOBS
SQLLDR_Load SQLLDR Load (ul)
DPAPI_Load DPAPI Load (kpudp)

Components in library LIBCELL:
————————–
Client_Library Client Library
Disk_Layer Disk Layer
Network_Layer Network Layer
IPC_Layer IPC Layer

Components in library ORANET:
————————–
TNSLSNR OraNet Listener
NET_NSGR Network Service Generic Registration
NET_NSGI TNI Network Service Generic Listener User-defined class
CMAN OraNet Connection Manager
NET OraNet Services
NET_NI Network Interface Layer
NET_NS Network Session Layer
NET_NT Network Transport Layer
NET_NTM Network Transport Mailbox Layer
NET_NTP Network Transport IPC Layer
NET_NTT Network Transport TCP/IP Layer
NET_NTUS Network Transport Unix Domain Sockets Layer
NET_NL Network Library
NET_NA Network Authentication
NET_NZ Network Zecurity
NET_NTZ Network SSL
NET_NU Network Trace Route Utility
NET_NN Network Names

Components in library ADVCMP:
————————–
ADVCMP_MAIN Archive Compression (kdz)
ADVCMP_COMP Archive Compression: Compression (kdzc, kdzh, kdza)
ADVCMP_DECOMP Archive Compression: Decompression (kdzd, kdzs)

Combine this with oradebug doc event name trace:

SQL> oradebug doc event name trace

trace: Main event to control UTS tracing

Usage
——-
trace [ component <component_name> ]
disk < default | lowest | low | medium | high | highest | disable >,
memory < default | lowest | low | medium | high | highest | disable >,
get_time < disable | default | seq | highres | seq_highres >,
get_stack < disable | default | force >,
operation <operation_name>,
function <function_name>,
file <file_name>,
line <line_number>

example:

alter session set events 'trace[RDBMS.SQL_Transform] [SQL: 548447mzsjars] disk=high';
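When the diagnosis is done, the same UTS event syntax can switch the trace off again. A hedged sketch (assumed syntax; verify it against the oradebug doc event name trace output on your own release):

```sql
-- Disable the component trace enabled above (sketch, not verified on all versions)
alter session set events 'trace[RDBMS.SQL_Transform] off';
```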

How to create a very big index

December 29, 2011 11g, maintain, oracle No comments


We regularly run into cases that call for building an index, sometimes as large as 1 TB. Creating such an index is a thorny problem, and for partitioned indexes 11g offers a good solution.


Let’s now re-create the index as a globally partitioned index, with one partition defined to contain all values of interest and another partition defined to contain the vast number of processed values. Initially, the index is created in an unusable state so no segments and no storage is allocated to any of the partitions

Creating the partitioned index with the UNUSABLE option defers allocating storage and segments, which solves the space-allocation problem:

Richard Foote's example shows the effect nicely:

SQL> create table bowie_stuff (id number, processed varchar2(10));

Table created.

SQL> insert into bowie_stuff select rownum, 'YES' from dual connect by level <= 1000000;

1000000 rows created.

SQL> commit;

Commit complete.

SQL> update bowie_stuff set processed = 'NO' where id in (999990, 999992, 999994, 999996, 999998);

5 rows updated.

SQL> commit;

Commit complete.

SQL> create index bowie_stuff_i on bowie_stuff(processed) pctfree 0;

Index created.

SQL> select index_name, leaf_blocks from dba_indexes where index_name = 'BOWIE_STUFF_I';

INDEX_NAME LEAF_BLOCKS

—————————— ———–

BOWIE_STUFF_I 1877

SQL> select segment_name, blocks from dba_segments where segment_name = 'BOWIE_STUFF_I';

SEGMENT_NAME BLOCKS

——————– ———-

BOWIE_STUFF_I 1920

Building a normal index allocated 1920 blocks.

——————————————————


SQL> drop index bowie_stuff_i;

Index dropped.

SQL> create index bowie_stuff_i on bowie_stuff(processed)
2 global partition by range (processed)
3 (partition not_processed_part values less than ('YES'),
4 partition processed_part values less than (MAXVALUE))
5 unusable;

Index created.

SQL> alter index bowie_stuff_i rebuild partition not_processed_part;

Index altered.

SQL> select index_name, partition_name, leaf_blocks from dba_ind_partitions where index_name = 'BOWIE_STUFF_I';

INDEX_NAME PARTITION_NAME LEAF_BLOCKS

——————– ——————– ———–

BOWIE_STUFF_I PROCESSED_PART 0

BOWIE_STUFF_I NOT_PROCESSED_PART 1

SQL> select segment_name, partition_name, blocks from dba_segments where segment_name = 'BOWIE_STUFF_I';

SEGMENT_NAME PARTITION_NAME BLOCKS

——————– ——————– ———-

BOWIE_STUFF_I NOT_PROCESSED_PART 8

Here only 8 blocks were allocated.
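A note on how such a partial index then gets used, as a hedged sketch with the demo's own names and skip_unusable_indexes left at its default of TRUE: a predicate on the rarely occurring value prunes to the one rebuilt partition, so the optimizer can still use the index even though the bulk partition has no segment at all.

```sql
-- Only NOT_PROCESSED_PART has a usable segment; this predicate prunes to it,
-- so the index serves the rows of interest while PROCESSED_PART stays
-- unusable and storage-free (sketch).
select id from bowie_stuff where processed = 'NO';
```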

According to Richard Foote's analysis, what ALTER INDEX ... UNUSABLE does to an already-built index is also completely different between 10g and 11g:

Starting with a simple Oracle 10g example, we create a table and associated index:

SQL> create table bowie as select rownum id, 'BOWIE' name from dual connect by level <= 1000000;

Table created.

SQL> create index bowie_id_i on bowie(id);

Index created.

SQL> exec dbms_stats.gather_table_stats(ownname=>null, tabname=>'BOWIE', cascade=> true, estimate_percent=> null, method_opt=> 'FOR ALL COLUMNS SIZE 1');

PL/SQL procedure successfully completed.

If we now make the index unusable:


SQL> alter index bowie_id_i unusable;

Index altered.

SQL> select index_name, blevel, leaf_blocks, num_rows, status, dropped from dba_indexes where index_name = 'BOWIE_ID_I';

INDEX_NAME BLEVEL LEAF_BLOCKS NUM_ROWS STATUS DRO

———- ———- ———– ———- ——– —

BOWIE_ID_I 2 2226 1000000 UNUSABLE NO

SQL> select segment_name, bytes, blocks, extents from dba_segments where segment_name = 'BOWIE_ID_I';

SEGMENT_NAME BYTES BLOCKS EXTENTS

———— ———- ———- ———-

BOWIE_ID_I 18874368 2304 18

In 10g, when the index is made unusable its blocks remain allocated.

Using the same demo as before but running Oracle 11g R2:

SQL> create table bowie as select rownum id, 'BOWIE' name from dual connect by level <= 1000000;

Table created.

SQL> create index bowie_id_i on bowie(id);

Index created.

SQL> exec dbms_stats.gather_table_stats(ownname=>null, tabname=>'BOWIE', cascade=> true, estimate_percent=> null, method_opt=> 'FOR ALL COLUMNS SIZE 1');

PL/SQL procedure successfully completed.

SQL> alter index bowie_id_i unusable;

Index altered.

SQL> select index_name, blevel, leaf_blocks, num_rows, status, dropped from dba_indexes where index_name = 'BOWIE_ID_I';

INDEX_NAME BLEVEL LEAF_BLOCKS NUM_ROWS STATUS DRO

———- ———- ———– ———- ——– —

BOWIE_ID_I 2 2226 1000000 UNUSABLE NO

SQL> select segment_name, bytes, blocks, extents from dba_segments where segment_name = 'BOWIE_ID_I';

no rows selected

In 11gR2, ALTER INDEX ... UNUSABLE frees all of the blocks the index occupied.

Below is the partitioned-table test:

SQL> CREATE TABLE big_album_sales(id number, album_id number, country_id number,
release_date date, total_sales number) PARTITION BY RANGE (release_date)
(PARTITION ALBUMS_2007 VALUES LESS THAN (TO_DATE('01-JAN-2008', 'DD-MON-YYYY')),
PARTITION ALBUMS_2008 VALUES LESS THAN (TO_DATE('01-JAN-2009', 'DD-MON-YYYY')),
PARTITION ALBUMS_2009 VALUES LESS THAN (TO_DATE('01-JAN-2010', 'DD-MON-YYYY')),
PARTITION ALBUMS_2010 VALUES LESS THAN (MAXVALUE));

Table created.

SQL> INSERT INTO big_album_sales SELECT rownum, mod(rownum,5000)+1, mod(rownum,100)+1, sysdate-mod(rownum,2000), ceil(dbms_random.value(1,500000)) FROM dual CONNECT BY LEVEL <= 1000000;

1000000 rows created.

SQL> commit;

Commit complete.

We first create a Non-Partitioned Index:

SQL> CREATE INDEX big_album_tot_sales_i ON big_album_sales(total_sales);

Index created.

Next a Global Partitioned Index:

SQL> CREATE INDEX big_album_country_id_i ON big_album_sales(country_id)
GLOBAL PARTITION BY RANGE (country_id)
(PARTITION TS1 VALUES LESS THAN (26),
PARTITION TS2 VALUES LESS THAN (51),
PARTITION TS3 VALUES LESS THAN (76),
PARTITION TS4 VALUES LESS THAN (MAXVALUE));

Index created.

Finally, a Local Partitioned index:

SQL> CREATE INDEX big_album_album_id_i ON big_album_sales(album_id) local;

Index created.

SQL> exec dbms_stats.gather_table_stats(ownname=> 'BOWIE', tabname=> 'BIG_ALBUM_SALES', estimate_percent=> null, method_opt=> 'FOR ALL COLUMNS SIZE 1');

PL/SQL procedure successfully completed.

If we now split the last table partition, this will effectively make:

1) the Non-Partitioned index unusable
2) all partitions of the Global Partitioned index unusable
3) just the last 2 partitions of the Local Partitioned index unusable

SQL> ALTER TABLE big_album_sales SPLIT PARTITION ALBUMS_2010
AT (TO_DATE('01-JAN-2011', 'DD-MON-YYYY'))
INTO (PARTITION ALBUMS_2010, PARTITION ALBUMS_2011);

Table altered.

SQL> select index_name, status from dba_indexes where table_name = 'BIG_ALBUM_SALES';

INDEX_NAME STATUS

———————— ——–

BIG_ALBUM_TOT_SALES_I UNUSABLE

BIG_ALBUM_COUNTRY_ID_I N/A

BIG_ALBUM_ALBUM_ID_I N/A

SQL> select index_name, partition_name, status, leaf_blocks from dba_ind_partitions where index_name like 'BIG_ALBUM_%';

INDEX_NAME PARTITION_NAME STATUS LEAF_BLOCKS

———————– ————– ——– ———–

BIG_ALBUM_ALBUM_ID_I ALBUMS_2007 USABLE 807

BIG_ALBUM_ALBUM_ID_I ALBUMS_2008 USABLE 381

BIG_ALBUM_ALBUM_ID_I ALBUMS_2009 USABLE 383

BIG_ALBUM_ALBUM_ID_I ALBUMS_2010 UNUSABLE

BIG_ALBUM_ALBUM_ID_I ALBUMS_2011 UNUSABLE

BIG_ALBUM_COUNTRY_ID_I TS1 UNUSABLE 629

BIG_ALBUM_COUNTRY_ID_I TS2 UNUSABLE 629

BIG_ALBUM_COUNTRY_ID_I TS3 UNUSABLE 629

BIG_ALBUM_COUNTRY_ID_I TS4 UNUSABLE 629

SQL> select segment_name, partition_name, bytes, blocks from dba_segments where segment_name like 'BIG_ALBUM_%' and segment_type like 'INDEX%';

SEGMENT_NAME PARTITION_NAME BYTES BLOCKS

——————— ————– ——– ——

BIG_ALBUM_ALBUM_ID_I ALBUMS_2007 7340032 896

BIG_ALBUM_ALBUM_ID_I ALBUMS_2008 3145728 384

BIG_ALBUM_ALBUM_ID_I ALBUMS_2009 4194304 512

BIG_ALBUM_TOT_SALES_I 23068672 2816

Notice that the blocks for the UNUSABLE partitions of BIG_ALBUM_ALBUM_ID_I are gone, and because every partition of BIG_ALBUM_COUNTRY_ID_I became UNUSABLE, the storage for that entire index was released.
BIG_ALBUM_TOT_SALES_I, being a normal index, still holds its blocks, which contradicts the heap-table test above: there the blocks were dropped as soon as the index became unusable, yet here on a partitioned table the blocks remain!

Quickly upgrading Oracle 11.2.0.2 RAC to 11.2.0.3 on Windows 2008

December 18, 2011 11g, oracle, RAC No comments

Upgrading RAC from 11.2.0.2.0 to 11.2.0.3.0 on the Windows 2008 platform.

OS environment: Windows 2008 x64. Database environment: Oracle 11g RAC, version 11.2.0.2.0, 64-bit.

SQL> conn / as sysdba
Connected.
SQL> select * from v$version;

BANNER
——————————————————————————–
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 – 64bit Production
PL/SQL Release 11.2.0.2.0 – Production
CORE 11.2.0.2.0 Production
TNS for 64-bit Windows: Version 11.2.0.2.0 – Production
NLSRTL Version 11.2.0.2.0 – Production

1. Upgrading CRS

Download the 11.2.0.3 grid software and unzip it to a staging directory. This is a rolling upgrade, so the CRS and database services do not need to be stopped; back up the database and CRS before upgrading.
Run setup.exe to start the installation and choose to skip software updates.

Select "Upgrade Oracle Grid Infrastructure or Oracle Automatic Storage Management" and click Next.

Select the product languages (Simplified Chinese and English) and click Next.

Select the database nodes to upgrade.

Choose a grid installation path different from the 11.2.0.2 grid home.

Run the prerequisite checks.

Issue 1: the firewall.

Issue 2: patch 12539000.
This requires downloading p12849789 and the latest OPatch tool.
First back up the previous OPatch, then unzip the latest OPatch patch p6880880 into the grid home.

C:\app\11.2.0\grid_2\OPatch>opatch version

OPatch Version: 11.2.0.1.9

OPatch succeeded.

The latest OPatch tool is needed on both nodes. Next, apply patch 12539000 in a rolling fashion, starting by shutting down the database services on node 1.
C:\Users\Administrator>sqlplus /nolog

SQL*Plus: Release 11.2.0.2.0 Production on 星期五 12月 16 14:33:55 2011

Copyright (c) 1982, 2010, Oracle. All rights reserved.

SQL> conn / as sysdba
Connected.
SQL> alter system checkpoint local;

System altered.

C:\Users\Administrator>srvctl stop instance -d honcho -i honcho1 -o abort

C:\Users\Administrator>crsctl stop crs

Stop all remaining Oracle services with the Windows services manager or net stop.

C:\Users\Administrator>cd /

C:\>cd app

C:\app>set ORACLE_HOME=c:\app\11.2.0\grid_2

C:\app>cd 12849789

C:\app\12849789>c:\app\11.2.0\grid_2\OPatch\opatch apply

Oracle 中间补丁程序安装程序版本 11.2.0.1.9
版权所有 (c) 2011, Oracle Corporation。保留所有权利。

Oracle Home : c:\app\11.2.0\grid_2
Central Inventory : C:\Program Files\Oracle\Inventory
from : n/a
OPatch version : 11.2.0.1.9
OUI version : 11.2.0.2.0
Log file location : c:\app\11.2.0\grid_2\cfgtoollogs\opatch\opatch2011-12-16_14-51-24下午.log

Applying interim patch ‘12849789’ to OH ‘c:\app\11.2.0\grid_2’
Verifying environment and performing prerequisite checks…
Patch 12849789: Optional component(s) missing : [ oracle.ntoledb.odp_net_2, 11.2.0.2.0 ] , [ oracle.ctx, 11.2.0.2.0 ] ,
[ oracle.rdbms.oci, 11.2.0.2.0 ] , [ oracle.rdbms.ic, 11.2.0.2.0 ] , [ oracle.sysman.console.db, 11.2.0.2.0 ] , [ oracle
.sysman.oms.core, 10.2.0.4.3 ] , [ oracle.rdbms.dv, 11.2.0.2.0 ] , [ oracle.oraolap, 11.2.0.2.0 ] , [ oracle.precomp.com
mon, 11.2.0.2.0 ] , [ oracle.precomp.lang, 11.2.0.2.0 ] , [ oracle.odbc.ic, 11.2.0.2.0 ] , [ oracle.ntoledb, 11.2.0.2.0
] , [ oracle.ntoramts, 11.2.0.2.0 ] , [ oracle.sysman.plugin.db.main.repository, 11.2.0.2.0 ] , [ oracle.rdbms.tg4db2, 1
1.2.0.2.0 ] , [ oracle.rdbms.tg4msql, 11.2.0.2.0 ]
All checks passed.

此节点是 Oracle Real Application Cluster 的一部分。
远程节点: ‘rac2’
本地节点: ‘rac1’
请关闭本地系统上在此 ORACLE_HOME 之外运行的 Oracle 实例。
(Oracle 主目录 = ‘c:\app\11.2.0\grid_2’)

本地系统是否已准备打补丁? [y|n]
y
User Responded with: Y
Backing up files…

正在为组件 oracle.rdbms.rsf, 11.2.0.2.0 打补丁…

正在为组件 oracle.rdbms, 11.2.0.2.0 打补丁…

正在为组件 oracle.rdbms.dbscripts, 11.2.0.2.0 打补丁…

正在为组件 oracle.rdbms.plsql, 11.2.0.2.0 打补丁…

正在为组件 oracle.rdbms.rsf.ic, 11.2.0.2.0 打补丁…

正在为组件 oracle.has.rsf, 11.2.0.2.0 打补丁…

正在为组件 oracle.has.crs, 11.2.0.2.0 打补丁…

正在为组件 oracle.network.rsf, 11.2.0.2.0 打补丁…

正在为组件 oracle.ldap.rsf.ic, 11.2.0.2.0 打补丁…

正在为组件 oracle.ldap.rsf, 11.2.0.2.0 打补丁…

正在为组件 oracle.precomp.rsf, 11.2.0.2.0 打补丁…

正在为组件 oracle.rdbms.rman, 11.2.0.2.0 打补丁…

正在为组件 oracle.rdbms.util, 11.2.0.2.0 打补丁…

正在为组件 oracle.nlsrtl.rsf, 11.2.0.2.0 打补丁…

正在为组件 oracle.nlsrtl.rsf.ic, 11.2.0.2.0 打补丁…

正在为组件 oracle.xdk.rsf, 11.2.0.2.0 打补丁…

正在为组件 oracle.oraolap.dbscripts, 11.2.0.2.0 打补丁…

正在为组件 oracle.usm.deconfig, 11.2.0.2.0 打补丁…

正在为组件 oracle.has.common, 11.2.0.2.0 打补丁…

正在为组件 oracle.sdo.locator, 11.2.0.2.0 打补丁…

正在为组件 oracle.usm, 11.2.0.2.0 打补丁…

正在为组件 oracle.rdbms.deconfig, 11.2.0.2.0 打补丁…

正在为组件 oracle.has.db, 11.2.0.2.0 打补丁…

正在为组件 oracle.has.deconfig, 11.2.0.2.0 打补丁…

正在为组件 oracle.has.common.cvu, 11.2.0.2.0 打补丁…

正在为组件 oracle.has.cvu, 11.2.0.2.0 打补丁…

本地系统已打补丁。您可以在本地系统上重新启动 Oracle 实例。

在滚动模式下打补丁。

接下来将为节点 ‘rac2’ 打补丁。

请关闭 ‘rac2’ 上在此 ORACLE_HOME 之外运行的 Oracle 实例。
(Oracle 主目录 = ‘c:\app\11.2.0\grid_2’)

该节点是否已准备打补丁? [y|n]

Before moving on to patch the second node, restart the services on node 1 and run the following script:

C:\app\11.2.0\grid_2\BIN>acfsroot.bat install
ACFS-9300: 已找到 ADVM/ACFS 分发文件。
ACFS-9312: 已检测到现有 ADVM/ACFS 安装。
ACFS-9314: 正在删除以前的 ADVM/ACFS 安装。
ACFS-9315: 已成功删除以前的 ADVM/ACFS 组件。
ACFS-9307: 正在安装请求的 ADVM/ACFS 软件。
ACFS-9308: 正在加载已安装的 ADVM/ACFS 驱动程序。
ACFS-9327: 正在验证 ADVM/ACFS 设备。
ACFS-9309: 已验证 ADVM/ACFS 安装的正确性。

Then stop all services on node 2 and patch it:

该节点是否已准备打补丁? [y|n]
y
User Responded with: Y
Updating nodes ‘rac2’
Apply-related files are:
FP = “c:\app\11.2.0\grid_2\.patch_storage\12849789_Aug_26_2011_22_05_04\rac\copy_files.txt”
DP = “c:\app\11.2.0\grid_2\.patch_storage\12849789_Aug_26_2011_22_05_04\rac\copy_dirs.txt”
MP = “c:\app\11.2.0\grid_2\.patch_storage\12849789_Aug_26_2011_22_05_04\rac\make_cmds.txt”
RC = “c:\app\11.2.0\grid_2\.patch_storage\12849789_Aug_26_2011_22_05_04\rac\remote_cmds.txt”

Instantiating the file “c:\app\11.2.0\grid_2\.patch_storage\12849789_Aug_26_2011_22_05_04\rac\copy_files.txt.instantiate
d” by replacing %ORACLE_HOME% in “c:\app\11.2.0\grid_2\.patch_storage\12849789_Aug_26_2011_22_05_04\rac\copy_files.txt”
with actual path.
Propagating files to remote nodes…
Instantiating the file “c:\app\11.2.0\grid_2\.patch_storage\12849789_Aug_26_2011_22_05_04\rac\copy_dirs.txt.instantiated
” by replacing %ORACLE_HOME% in “c:\app\11.2.0\grid_2\.patch_storage\12849789_Aug_26_2011_22_05_04\rac\copy_dirs.txt” wi
th actual path.
Propagating directories to remote nodes…

节点 ‘rac2’ 已打补丁。您可以在该节点上重新启动 Oracle 实例。

Patch 12849789 successfully applied
Log file location: c:\app\11.2.0\grid_2\cfgtoollogs\opatch\opatch2011-12-16_14-51-24下午.log

OPatch succeeded.

Re-run the prerequisite checks.

Ignore the domain-user warning and click Next.

Click Install to start the upgrade installation.

C:\Users\Administrator>crsctl query crs activeversion
Oracle Clusterware active version on the cluster is [11.2.0.3.0]

2. Upgrading the database software

Download the 11.2.0.3 database software and unzip it to a staging directory. The database is upgraded in a rolling fashion, so the database services do not need to be stopped.
Run setup.exe from the staging directory to start the installation.

Select "Skip software updates" and click Next.

Select an Oracle Real Application Clusters database installation and click Next.

Select the product languages (Simplified Chinese and English) and click Next.

Select Enterprise Edition and click Next.

Here too, choose a new installation path and click Next.

Ignore the warning and click Next.

Click Install to start the installation.

The database instances can be upgraded silently or with the DBUA tool; DBUA is used here. Click Next.

Select the database to upgrade.

Click "Yes" to start upgrading the database.

Disable database archiving for the duration of the upgrade and click Next.

The database upgrade begins.

Check the final database version information:

C:\Users\Administrator>crs_stat -t
Name           Type           Target    State     Host
————————————————————
ora.CRS.dg ora….up.type ONLINE ONLINE rac1
ora.DATA.dg ora….up.type ONLINE ONLINE rac1
ora….ER.lsnr ora….er.type ONLINE ONLINE rac1
ora….N1.lsnr ora….er.type ONLINE ONLINE rac1
ora….VERY.dg ora….up.type ONLINE ONLINE rac1
ora.asm ora.asm.type ONLINE ONLINE rac1
ora.cvu ora.cvu.type ONLINE ONLINE rac1
ora.gsd ora.gsd.type OFFLINE OFFLINE
ora.honcho.db ora….se.type ONLINE ONLINE rac1
ora….network ora….rk.type ONLINE ONLINE rac1
ora.oc4j ora.oc4j.type ONLINE ONLINE rac2
ora.ons ora.ons.type ONLINE ONLINE rac1
ora….SM1.asm application ONLINE ONLINE rac1
ora….C1.lsnr application ONLINE ONLINE rac1
ora.rac1.gsd application OFFLINE OFFLINE
ora.rac1.ons application ONLINE ONLINE rac1
ora.rac1.vip ora….t1.type ONLINE ONLINE rac1
ora….SM2.asm application ONLINE ONLINE rac2
ora….C2.lsnr application ONLINE ONLINE rac2
ora.rac2.gsd application OFFLINE OFFLINE
ora.rac2.ons application ONLINE ONLINE rac2
ora.rac2.vip ora….t1.type ONLINE ONLINE rac2
ora….ry.acfs ora….fs.type ONLINE ONLINE rac1
ora.scan1.vip ora….ip.type ONLINE ONLINE rac1

C:\Users\Administrator>sqlplus /nolog

SQL*Plus: Release 11.2.0.3.0 Production on 星期五 12月 16 20:08:08 2011

Copyright (c) 1982, 2011, Oracle. All rights reserved.

SQL> conn / as sysdba
Connected.
SQL> select * from v$version;

BANNER
——————————————————————————–
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 – 64bit Production
PL/SQL Release 11.2.0.3.0 – Production
CORE 11.2.0.3.0 Production
TNS for 64-bit Windows: Version 11.2.0.3.0 – Production
NLSRTL Version 11.2.0.3.0 – Production

SQL> set linesize 120
SQL> col comp_name for a40
SQL> select comp_name,version from dba_server_registry;

COMP_NAME VERSION
—————————————- ——————————
Oracle Workspace Manager 11.2.0.3.0
Oracle Database Catalog Views 11.2.0.3.0
Oracle Database Packages and Types 11.2.0.3.0
Oracle Real Application Clusters 11.2.0.3.0

[Repost] 11g Release 2 RMAN Backup Compression

December 16, 2011 11g, backup, oracle No comments

11g Release 2 RMAN Backup Compression
Oracle 11g Release 2 introduced compression algorithm levels which can be used for compressing table data, Data Pump exports and RMAN backups as well.

The compression levels are BASIC, LOW, MEDIUM and HIGH, and each offers a trade-off between backup throughput and the degree of compression achieved.

To use this option, we can run the following RMAN commands

RMAN> CONFIGURE DEVICE TYPE DISK BACKUP TYPE TO COMPRESSED BACKUPSET;

followed by ..

RMAN> CONFIGURE COMPRESSION ALGORITHM 'HIGH';
or
RMAN> CONFIGURE COMPRESSION ALGORITHM 'MEDIUM';
or
RMAN> CONFIGURE COMPRESSION ALGORITHM 'LOW';
or
RMAN> CONFIGURE COMPRESSION ALGORITHM 'BASIC';

Tests carried out on a small 1.5 GB database returned compression ratios of about 4.7 in the best case. Note, however, that use of LOW, MEDIUM and HIGH requires the Advanced Compression license. The backupset sizes and backup durations are shown below.

Compression Level 'HIGH'

backupset size: 226.18M
time: 00:02:21

Compression Level 'MEDIUM'

backupset size: 293.80M
time: 00:00:30

Compression Level 'LOW'

backupset size: 352.59M
time: 00:00:20

Compression Level 'BASIC'

backupset size: 276.55M
time: 00:00:50

To summarise we can conclude:

LOW – corresponds to LZO (11gR2) – smallest compression ratio, fastest
MEDIUM – corresponds to ZLIB (11gR1) – good compression ratio, slower than LOW
HIGH – corresponds to unmodified BZIP2 (11gR2) – highest compression ratio, slowest
BASIC (which is free) – corresponds to BZIP2 (10g style compression) – compression ratio in the range of MEDIUM, but slower
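The ratio-versus-speed trade-off behind these levels can be illustrated outside the database with the corresponding general-purpose codecs. A minimal Python sketch, with zlib standing in for MEDIUM and bz2 for HIGH/BASIC (LZO is not in the standard library); the sample payload is an assumption, not RMAN data:

```python
import bz2
import time
import zlib

# Repetitive sample payload, loosely standing in for redundant datafile content.
data = b"SQL> select * from big_album_sales where country_id = 42;\n" * 5000

for name, compress in [
    ("zlib (MEDIUM-like)", lambda d: zlib.compress(d, 9)),
    ("bz2  (HIGH-like)", lambda d: bz2.compress(d, 9)),
]:
    start = time.perf_counter()
    out = compress(data)
    elapsed = time.perf_counter() - start
    print(f"{name}: {len(data)} -> {len(out)} bytes "
          f"({len(data) / len(out):.0f}x) in {elapsed * 1000:.1f} ms")
```

On redundant data both codecs compress heavily; bz2 typically gains a somewhat better ratio at a noticeably higher CPU cost, mirroring the HIGH-versus-MEDIUM figures above.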

reference:http://gavinsoorma.com/2009/12/11g-release-2-rman-backup-compression/

Evaluating the 11g new feature I/O Calibration

December 13, 2011 11g, hardware, oracle No comments

The 11g I/O Calibration feature helps estimate the read performance of the storage. Before using it, a few prerequisites must be met.

On Linux, asynchronous I/O is not enabled by default:
SQL> show parameter filesystemio_options

NAME TYPE VALUE
———————————— ———– ——————————
filesystemio_options string none

The following query shows whether asynchronous I/O is enabled:

SQL> col name format a50
select name,asynch_io from v$datafile f,v$iostat_file i
where f.file#=i.file_no
and (filetype_name='Data File' or filetype_name='Temp File');

NAME ASYNCH_IO
————————————————– ———
/data/oracle/oradata/yhddb1/system.dbf ASYNC_OFF
/data/oracle/oradata/yhddb1/system.dbf ASYNC_OFF
/data/oracle/oradata/yhddb1/sysaux.dbf ASYNC_OFF
/data/oracle/oradata/yhddb1/sysaux.dbf ASYNC_OFF
/data/oracle/oradata/yhddb1/undotbs01.dbf ASYNC_OFF
/data/oracle/oradata/yhddb1/undotbs01.dbf ASYNC_OFF
/data/oracle/oradata/yhddb1/qipei_data01.dbf ASYNC_OFF
/data/oracle/oradata/yhddb1/lg_index01.dbf ASYNC_OFF
/data/oracle/oradata/yhddb1/qipei_index01.dbf ASYNC_OFF
/data/oracle/oradata/yhddb1/undotbs02.dbf ASYNC_OFF
/data/oracle/oradata/yhddb1/qipei_data02.dbf ASYNC_OFF
/data/oracle/oradata/yhddb1/md_data01.dbf ASYNC_OFF
/data/oracle/oradata/yhddb1/qipei_data03.dbf ASYNC_OFF
/data/oracle/oradata/yhddb1/qipei_index02.dbf ASYNC_OFF
/data/oracle/oradata/yhddb1/tms_data01.dbf ASYNC_OFF
/data/oracle/oradata/yhddb1/undotbs03.dbf ASYNC_OFF
/data/oracle/oradata/yhddb1/lg_index02.dbf ASYNC_OFF
/data/oracle/oradata/yhddb1/tms_idx01.dbf ASYNC_OFF
/data/oracle/oradata/yhddb1/tms_data02.dbf ASYNC_OFF
/data/oracle/oradata/yhddb1/pos_data_01.dbf ASYNC_OFF
/data/oracle/oradata/yhddb1/md_data02.dbf ASYNC_OFF
/data/oracle/oradata/yhddb1/lg_index03.dbf ASYNC_OFF
/data/oracle/oradata/yhddb1/pos_data_02.dbf ASYNC_OFF
/data/oracle/oradata/yhddb1/pos_index_01.dbf ASYNC_OFF
/data/oracle/oradata/yhddb1/lg_index04.dbf ASYNC_OFF
/data/oracle/oradata/yhddb1/lg_index05.dbf ASYNC_OFF
/data/oracle/oradata/yhddb1/ims_data01.dbf ASYNC_OFF
/data/oracle/oradata/yhddb1/lg_index06.dbf ASYNC_OFF
/data/oracle/oradata/yhddb1/ims_index01.dbf ASYNC_OFF
/data/oracle/oradata/yhddb1/md_data05.dbf ASYNC_OFF
/data/oracle/oradata/yhddb1/md_data03.dbf ASYNC_OFF
/data/oracle/oradata/yhddb1/tms_idx02.dbf ASYNC_OFF
/data/oracle/oradata/yhddb1/lg_index07.dbf ASYNC_OFF
/data/oracle/oradata/yhddb1/md_data04.dbf ASYNC_OFF
/data/oracle/oradata/yhddb1/ttuser01.dbf ASYNC_OFF
/data/oracle/oradata/yhddb1/tms_data03.dbf ASYNC_OFF
/data/oracle/oradata/yhddb1/ttuser02.dbf ASYNC_OFF
/data/oracle/oradata/yhddb1/lg_index08.dbf ASYNC_OFF
/data/oracle/oradata/yhddb1/lg_index09.dbf ASYNC_OFF
/data/oracle/oradata/yhddb1/lg_index10.dbf ASYNC_OFF
/data/oracle/oradata/yhddb1/lg_index11.dbf ASYNC_OFF
/data/oracle/oradata/yhddb1/lg_index12.dbf ASYNC_OFF
/data/oracle/oradata/yhddb1/lg_index13.dbf ASYNC_OFF
/data/oracle/oradata/yhddb1/ttuser03.dbf ASYNC_OFF
/data/oracle/oradata/yhddb1/tms_data04.dbf ASYNC_OFF
/data/oracle/oradata/yhddb1/tms_data05.dbf ASYNC_OFF
/data/oracle/oradata/yhddb1/ttuser04.dbf ASYNC_OFF
/data/oracle/oradata/yhddb1/ttuser05.dbf ASYNC_OFF
/data/oracle/oradata/yhddb1/ttuser06.dbf ASYNC_OFF

We need to enable asynchronous I/O:
SQL> show parameter FILESYSTEMIO_OPTIONS;

NAME TYPE VALUE
———————————— ———– ——————————
filesystemio_options string SETALL

Changing this parameter requires a database restart. The filesystemio_options parameter supports four values:
ASYNCH: enables asynchronous I/O on datafiles;
DIRECTIO: enables direct I/O on datafiles;
SETALL: enables both asynchronous I/O and direct I/O;
NONE: disables both asynchronous I/O and direct I/O.
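The change itself is a one-liner followed by a restart, since the parameter is not dynamically modifiable. A hedged sketch (value from the list above):

```sql
-- Enable both asynchronous and direct I/O, then restart the instance (sketch)
alter system set filesystemio_options = SETALL scope=spfile;
shutdown immediate
startup
```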


Syntax

DBMS_RESOURCE_MANAGER.CALIBRATE_IO (
num_physical_disks IN PLS_INTEGER DEFAULT 1,
max_latency IN PLS_INTEGER DEFAULT 20,
max_iops OUT PLS_INTEGER,
max_mbps OUT PLS_INTEGER,
actual_latency OUT PLS_INTEGER);

num_physical_disks: Approximate number of physical disks in the database storage.
max_latency: Maximum tolerable latency in milliseconds for database-block-sized I/O requests.
max_iops: Maximum number of I/O requests per second that can be sustained. The I/O requests are randomly-distributed, database-block-sized reads.
max_mbps: Maximum sustainable I/O throughput, in megabytes per second. The I/O requests are randomly-distributed, 1-megabyte reads.
actual_latency: Average latency of database-block-sized I/O requests at the max_iops rate, in milliseconds.

We can use DBMS_RESOURCE_MANAGER.CALIBRATE_IO to measure the performance of the storage: num_physical_disks (disk_count in the commented template below) is the actual number of physical disks, and max_latency is the maximum tolerable latency, which we set to 10 here. Note that CALIBRATE_IO requires asynchronous I/O to be enabled for the datafiles — which is why we set filesystemio_options above — and TIMED_STATISTICS must be TRUE.

--------------------------

SQL> set serveroutput on;
SQL> DECLARE
2 lat INTEGER;
3 iops INTEGER;
4 mbps INTEGER;
5 BEGIN
6 -- DBMS_RESOURCE_MANAGER.CALIBRATE_IO (disk_count, max_latency, iops, mbps, lat);
7 DBMS_RESOURCE_MANAGER.CALIBRATE_IO (2, 10, iops, mbps, lat);
8
9 DBMS_OUTPUT.PUT_LINE ('max_iops = ' || iops);
10 DBMS_OUTPUT.PUT_LINE ('latency = ' || lat);
11 dbms_output.put_line('max_mbps = ' || mbps);
12 end;
13 /

max_iops = 901
latency = 15
max_mbps = 800

The I/O calibration results can be viewed through the following views:

SQL> desc V$IO_CALIBRATION_STATUS
Name Null? Type
----------------------------------------- -------- ----------------------------
STATUS VARCHAR2(13)
CALIBRATION_TIME TIMESTAMP(3)

SQL> desc gv$io_calibration_status
Name Null? Type
----------------------------------------- -------- ----------------------------
INST_ID NUMBER
STATUS VARCHAR2(13)
CALIBRATION_TIME TIMESTAMP(3)

Column explanation:
——————-
STATUS:
IN PROGRESS : Calibration in Progress (Results from previous calibration
run displayed, if available)
READY : Results ready and available from earlier run
NOT AVAILABLE : Calibration results not available.

CALIBRATION_TIME: End time of the last calibration run

DBA table that stores I/O Calibration results:

SQL> desc DBA_RSRC_IO_CALIBRATE
Name Null? Type
----------------------------------------- -------- ----------------------------
START_TIME TIMESTAMP(6)
END_TIME TIMESTAMP(6)
MAX_IOPS NUMBER
MAX_MBPS NUMBER
MAX_PMBPS NUMBER
LATENCY NUMBER
NUM_PHYSICAL_DISKS NUMBER
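
A quick way to check the stored results (a sketch, using the columns shown in the desc output above):

```sql
-- Check whether the calibration run has finished ...
SELECT status, calibration_time FROM v$io_calibration_status;

-- ... then read the persisted results.
SELECT max_iops, max_mbps, max_pmbps, latency, num_physical_disks
  FROM dba_rsrc_io_calibrate;
```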

comment on table DBA_RSRC_IO_CALIBRATE is
'Results of the most recent I/O calibration'

comment on column DBA_RSRC_IO_CALIBRATE.START_TIME is
'start time of the most recent I/O calibration'

comment on column DBA_RSRC_IO_CALIBRATE.END_TIME is
'end time of the most recent I/O calibration'

comment on column DBA_RSRC_IO_CALIBRATE.MAX_IOPS is
'maximum number of data-block read requests that can be sustained per second'

comment on column DBA_RSRC_IO_CALIBRATE.MAX_MBPS is
'maximum megabytes per second of maximum-sized read requests that can be
sustained'

comment on column DBA_RSRC_IO_CALIBRATE.MAX_PMBPS is
'maximum megabytes per second of large I/O requests that
can be sustained by a single process'

comment on column DBA_RSRC_IO_CALIBRATE.LATENCY is
'latency for data-block read requests'

comment on column DBA_RSRC_IO_CALIBRATE.NUM_PHYSICAL_DISKS is
'number of physical disks in the storage subsystem (as specified by user)'