Internals

11g Oradebug Infrastructure (Internal only)

January 2, 2012 11g, Internals, oracle

In 11g Oracle enhanced oradebug: the oradebug doc command can now be used to view the internal trace commands. Below is a brief walk-through of what each command prints; since all of this is undocumented, much of it is still open to research.


Contents
ORADEBUG DOC
ORADEBUG DOC EVENT
ORADEBUG DOC EVENT NAME
ORADEBUG DOC EVENT NAME
ORADEBUG DOC EVENT SCOPE
ORADEBUG DOC EVENT SCOPE SQL
ORADEBUG DOC EVENT FILTER
ORADEBUG DOC EVENT ACTION
ORADEBUG DOC COMPONENT


Refer to the SQL*Plus User’s Guide and Reference for more information.
[oracle@testdb ~]$ sqlplus '/as sysdba'

SQL*Plus: Release 11.2.0.2.0 Production on Mon Jan 2 15:04:43 2012

Copyright (c) 1982, 2010, Oracle. All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 – 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

ORADEBUG DOC:

SQL> oradebug doc

Internal Documentation
**********************

EVENT                 Help on events (syntax, event list, ...)
COMPONENT [<name>]    List all components or describe <name>

ORADEBUG DOC EVENT:

SQL> oradebug doc event

Event Help:
***********

Formal Event Syntax
——————–
<event_spec>     ::= '<event_id> [<event_scope>]
                       [<event_filter_list>]
                       [<event_parameters>]
                       [<action_list>]
                       [off]'

<event_id>       ::= <event_name | event_number> [<target_parameters>]

<event_scope>    ::= [<scope_name>: scope_parameters]

<event_filter>   ::= {<filter_name>: filter_parameters}

<action>         ::= <action_name>(action_parameters)

<*_parameters>   ::= <parameter_name> = <parameter_value> [, <*_parameters>]

Some Examples
————-
* Set event 10235 level 1:
alter session set events '10235';

* Set events SQL_TRACE (a.k.a. 10046) level 1:
alter session set events 'sql_trace';

* Turn off event SQL_TRACE:
alter session set events 'sql_trace off';

* Set events SQL_TRACE with parameter <plan_stat> set to 'never'
and parameter <wait> set to 'true':
alter session set events 'sql_trace wait=true, plan_stat=never';

* Trace in-memory the SQL_MONITOR component (the target) and all its
sub-components at level high. Get high resolution time for each
trace:
alter session set events 'trace[sql_mon.*] memory=high,
get_time=highres';

* On-disk trace PX servers p000 and p005 for components 'sql_mon'
and 'sql_optimizer' (including sub-components) at level highest:
alter system set events 'trace[sql_mon | sql_optimizer.*]
{process: pname = p000 | p005}';

* Same as above but only when SQL id '7ujay4u33g337' is executed:
alter system set events 'trace[sql_mon | sql_optimizer.*]
[sql: 7ujay4u33g337]
{process: pname = p000 | p005}';

* Execute an action immediately by using 'immediate' for the event
name:
alter session set events 'immediate eventdump(system)'

* Create an incident labeled 'table_missing' when external error
942 is signaled by process id 14534:
alter session set events '942 {process: 14534}
incident(table_missing)';

Notes
—–
* Implicit parameter <level> is 1 by default
e.g. '10053' is same as '10053 level 1'

* Event target (see <target_parameters> construct) is only
supported by specific events like the TRACE[] event

* <event_scope> and/or <event_filter_list> are constructs
that can be used for any event

* Same event can be set simultaneously for a different scope or
target but not for different filters.

* '|' character can be used to select multiple targets, scope or
filters.

E.g. 'sql_trace [sql: sql_id=g3yc1js3g2689 | sql_id=7ujay4u33g337]'

* '=' sign is optional in <*_parameters>

E.g. 'sql_trace level 12';

* Like PL/SQL, no need to specify the parameter name for target,
scope, filters and action. Resolution is done by position in
that case:

E.g. 'sql_trace [sql: g3yc1js3g2689 | 7ujay4u33g337]'

Help sub-topics
—————

NAME   [<event_name>]    List all events or describe <event_name>
SCOPE  [<scope_name>]    List all scopes or describe <scope_name>
FILTER [<filter_name>]   List all filters or describe <filter_name>
ACTION [<action_name>]   List all actions or describe <action_name>

ORADEBUG DOC EVENT NAME:

SQL> oradebug doc event name

Events in library DIAG:
——————————
trace[] Main event to control UTS tracing
disable_dde_action[] Event used by DDE to disable actions
ams_trace[] Event to dump ams performance trace records
ams_rowsrc_trace[] Event to dump ams row source tracing
sweep_verification Event to enable sweep file verification
enable_xml_inc_staging Event to enable xml incident staging format

Events in library RDBMS:
——————————
alert_text event for textual alerts
trace_recursive event to force tracing recursive SQL statements
clientid_overwrite event to overwrite client_identifier when client_info is set
sql_monitor event to force monitoring SQL statements
eventsync_tac Event posted from events syncing tac
sql_trace event for sql trace
pmon_startup startup of pmon process
background_startup startup of background processes
db_open_begin start of db open operation
test_gvtf test GV$() Table Function

Events in library GENERIC:
——————————
kg_event[] Support old error number events (use err# for short)

Events in library CLIENT:
——————————
oci_trace event for oci trace

Events in library LIBCELL:
——————————
libcell_stat libcell statistics level specification
cellclnt_skgxp_trc_ops Controls to trace SKGXP operations
cellclnt_ossnet_trc Controls to trace IP affinity in ossnet
cellclnt_high_lat_ops Control to trace High-latency I/O operations

Events in library ADVCMP:
——————————
arch_comp_level[] arch_comp_level[]
ccmp_debug columnar compression debug event
ccmp_align columnar compression enable alignment
ccmp_countstar columnar compression enable count(*) optimization
ccmp_dumpunaligned columnar compression dump dbas of unaligned CUs

ORADEBUG DOC EVENT NAME

SQL> ORADEBUG DOC EVENT NAME sql_trace

sql_trace: event for sql trace

Usage
——-
sql_trace
   wait       < false | true >,
   bind       < false | true >,
   plan_stat  < never | first_execution | all_executions | adaptive >,
   level      <ub4>
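
Based on the usage above, a sketch of a fully spelled-out sql_trace event (the parameter values are only illustrative):

alter session set events 'sql_trace wait=true, bind=true, plan_stat=first_execution';
alter session set events 'sql_trace off';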

SQL> ORADEBUG DOC EVENT NAME db_open_begin

db_open_begin: start of db open operation

Usage
——-
db_open_begin
   delay      <ub4>
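
This event can presumably be used to delay the open phase for testing; a sketch, assuming delay takes a numeric value (the unit is not documented here), set while the instance is still mounted:

alter system set events 'db_open_begin delay=10';
alter database open;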

ORADEBUG DOC EVENT SCOPE:

SQL> oradebug doc event scope

Event scopes in library RDBMS:
——————————
SQL[] sql scope for RDBMS

ORADEBUG DOC EVENT SCOPE SQL:

SQL> oradebug doc event scope sql

SQL: sql scope for RDBMS

Usage
——-
[SQL: sql_id ]
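
Combined with an event, the SQL scope restricts tracing to a single statement, as in the earlier examples; a sketch reusing the sql_id from Oracle's own example:

alter session set events 'sql_trace [sql: 7ujay4u33g337] wait=true, bind=true';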

ORADEBUG DOC EVENT FILTER

SQL> ORADEBUG DOC EVENT FILTER

Event filters in library DIAG:
——————————
occurence filter to implement counting for event checks
callstack filter to only fire an event when a function is on the stack
tag filter to only fire an event when a tag is set

Event filters in library RDBMS:
——————————
process filter to set events only for a specific process
pgadep filter to only fire an event when the pgadep matches a given value or falls within a range

Event filters in library GENERIC:
——————————
errarg filter to set error events only for a specific error argument
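
Filters are appended to an event specification in curly braces, as in the examples from the event help above; a sketch of the process filter (the process names and pid below are only illustrative):

alter system set events 'sql_trace {process: pname = p000 | p001}';
alter system set events 'sql_trace {process: 14534}';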

ORADEBUG DOC EVENT ACTION

SQL> ORADEBUG DOC EVENT ACTION

Actions in library DIAG:
—————————
dumpFrameContext – Dump Frame Context contents
dumpBuckets
kgsfdmp
dumpDiagCtx
dumpDbgecPopLoc
dumpDbgecMarks
dumpGenralConfiguration
dumpADRLockTable
act1
action1
action2
UTDumpGC
dbgvci_action_signal_crash

Actions in library RDBMS:
—————————
incident – Create an Incident
sqlmon_dump – SQL Monitor Dump SGA Action
flashfreeze
oradebug – debug process using ORADEBUG
debugger – debug process using System Debugger
debug
– alias for ‘debugger’ – debug process using System Debugger
crash – crash the instance
eventdump – list events that are set in the group
kdlut_bucketdump_action
kzxt_dump_action
dumpKernelDiagState
HMCHECK (async)
DATA_BLOCK_INTEGRITY_CHECK (async)
CF_BLOCK_INTEGRITY_CHECK (async)
DB_STRUCTURE_INTEGRITY_CHECK (async)
REDO_INTEGRITY_CHECK (async)
TRANSACTION_INTEGRITY_CHECK (async)
SQL_TESTCASE_REC (async)
SQL_TESTCASE_REC_DATA (async)
ORA_12751_DUMP
sqladv_dump_dumpctx
ORA_4030_DUMP
– dump summary of PGA memory usage, largest allocations
HNGDET_MEM_USAGE_DUMP_NOARGS – dump hang detection memory usage
kcfis_action – kcfis actions
exadata_dump_modvers – Exadata dump module versions
QUERY_BLOCK_DUMP – Debug action for dumping a qbcdef tree
ASM_MOUNT_FAIL_CHECK (async)
ASM_ALLOC_FAIL_CHECK (async)
ASM_ADD_DISK_CHECK (async)
ASM_FILE_BUSY_CHECK (async)
KJZN_ASYNC_SYSTEM_STATE (async)
KSI_GET_TRACE – Get lmd0 traces for ksi issues
TRACE_BUFFER_ON – Allocate trace output buffer for ksdwrf()
TRACE_BUFFER_OFF
– Flush and deallocate trace output buffer for ksdwrf()
LATCHES – Dump Latches
XS_SESSION_STATE – Dump XS session state
PROCESSSTATE – Dump process state
SYSTEMSTATE – Dump system state
INSTANTIATIONSTATE – Dump instantiation state
CONTEXTAREA – Dump cursor context area
HEAPDUMP
– Dump memory heap (1-PGA, 2-SGA, 4-UGA, +1024-Content)
POKE_LENGTH – Set length before poking value
POKE_VALUE – Poke a value into memory
POKE_VALUE0 – Poke 0 value into memory
GLOBAL_AREA
– Dump fixed global area(s) (1=PGA/2=SGA/3=UGA, add +8 for pointer content)
REALFREEDUMP – Dump PGA real free memory allocator state
FLUSH_JAVA_POOL – Flush Java pool
PGA_DETAIL_GET
– Ask process to publish PGA detail info (level is pid)
PGA_DETAIL_DUMP
– Dump PGA detail information for process (level is pid)
PGA_DETAIL_CANCEL – Free PGA detail request (level is pid)
PGA_SUMMARY – Summary of PGA memory usage, largest allocations
MODIFIED_PARAMETERS – Dump parameters modifed by session (level unused)
ERRORSTACK
– Dump state (ksedmp). Use INCIDENT action to create incident
CALLSTACK – Dump call stack (level > 1 to dump args)
RECORD_CALLSTACK
– Record or dump call stack, level = #frames (level += 1000000 go to trc)
BG_MESSAGES – Dump routine for background messages
ENQUEUES
– Dump enqueues (level >=2 adds resources, >= 3 adds locks)
KSTDUMPCURPROC
– Dump current process trace buffer (1 for all events)
KSTDUMPALLPROCS
– Dump all processes trace buffers (1 for all events)
KSTDUMPALLPROCS_CLUSTER
– Dump all processes (cluster wide) trace buffers (1 for all events)
KSKDUMPTRACE – Dumping KSK KST tracing (no level)
DBSCHEDULER – Dump ressource manager state
LDAP_USER_DUMP – Dump LDAP user mode
LDAP_KERNEL_DUMP – Dump LDAP kernel mode
DUMP_ALL_OBJSTATS – Dump database objects statistics
DUMPGLOBALDATA – Rolling migration DUMP GLOBAL DATA
HANGANALYZE – Hang analyze
HANGANALYZE_PROC – Hang analyze current process
HANGANALYZE_GLOBAL – Hang analyze system
HNGDET_MEM_USAGE_DUMP – dump hang detection memory usage
GES_STATE – Dump DML state
OCR – OCR client side tracing
CSS – CSS client side tracing
CRS – CRS client side tracing
SYSTEMSTATE_GLOBAL – Perform cluster wide system state dump (via DIAG)
DUMP_ALL_COMP_GRANULE_ADDRS
– MMAN dump all granule addresses of all components (no level)
DUMP_ALL_COMP_GRANULES
– MMAN dump all granules of all components (1 for partial list)
DUMP_ALL_REQS
– MMAN dump all pending memory requests to alert log
DUMP_TRANSFER_OPS – MMAN dump transfer and resize operations history
DUMP_ADV_SNAPSHOTS
– MMAN dump all snapshots of advisories (level unused)
CONTROLF – DuMP control file info
FLUSH_CACHE
– Flush buffer cache without shuting down the instance
BUFFERS – Dump all buffers in the buffer cache at level l
SET_TSN_P1
– Set tablespace # for buffer dump (level = ts# + 1)
BUFFER
– Dump all buffers for full relative dba at lvl 10
BC_SANITY_CHECK
– Run buffer cache sanity check (level = 0xFF for full)
SET_NBLOCKS – Set number of blocks for range reuse checks
CHECK_ROREUSE_SANITY – Check range/object reuse sanity (level = ts#)
DUMP_PINNED_BUFFER_HISTORY
– kcb Dump pinned buffers history (level = # buffers)
REDOLOGS – Dump all online logs according to the level
LOGHIST
– Dump the log history (1: dump earliest/latest entries, >1: dump most recent 2**level entries)
REDOHDR – Dump redo log headers
LOCKS – Dump every lock element to the trace file
GC_ELEMENTS – Dump every lock element to the trace file
FILE_HDRS – Dump database file headers
KRB_TRACE – Set krb trace options
FBINC
– Dump flashback logs of the current incarnation and all its ancestors.
FBHDR – Dump all the flashback logfile headers
FLASHBACK_GEN – Dump flashback generation state
KTPR_DEBUG
– Parallel txn recovery (1: cleanup check, 2: dump ptr reco ctx, 3: dump recent smon runs)
DUMP_TEMP – Dump temp space management state (no level)
DROP_SEGMENTS – Drop unused temporary segments
TREEDUMP
– Dump an index tree rooted at dba BLOCKDBA ()
KDLIDMP – Dump 11glob inodes states (level = what to dump)
ROW_CACHE – Dump all cache objects
LIBRARY_CACHE
– Dump the library cache (level > 65535 => level = obj @)
CURSORDUMP – Dump session cursors
CURSOR_STATS – Dump all statistics information for cursors
SHARED_SERVER_STATE – Dump shared server state
LISTENER_REGISTRATION – Dump listener registration state
JAVAINFO – Dump Oracle Java VM
KXFPCLEARSTATS – Clear all Parallel Query messaging statistics
KXFPDUMPTRACE – Dump Parallel Query in-memory traces
KXFXSLAVESTATE
– Dump PX slave state (1: uga; 2: current cursor state; 3: all cursors)
KXFXCURSORSTATE – Dump PX slave cursor state
WORKAREATAB_DUMP – Dump SQL Memory Manager workarea table
OBJECT_CACHE – Dump the object cache
SAVEPOINTS – Dump savepoints
RULESETDUMP – Dump rule set
FAILOVER – Set condition failover immediate
OLAP_DUMP – Dump OLAP state
AWR_FLUSH_TABLE_ON
– Enable flush of table id (ids in X$KEWRTB)
AWR_FLUSH_TABLE_OFF
– Disable flush of table id (ids in X$KEWRTB)
ASHDUMP – Dump ASH data (level = # of minutes)
ASHDUMPSECONDS – Dump ASH data (level = # of seconds)
HM_FW_TRACE – DIAG health monitor set tracing level
IR_FW_TRACE – DIAG intelligent repair set/clear trace
HEAPDUMP_ADDR
– Heap dump by address routine (level > 1 dump content)
POKE_ADDRESS – Poke specified address (level = value)
CURSORTRACE
– Trace cursor by hash value (hash value is address)
RULESETDUMP_ADDR – Dump rule set by address

Actions in library GENERIC:
—————————
xdb_dump_buckets
dumpKGERing – Dump contents of KGE ring buffer
dumpKGEState – Dump KGE state information for debugging

Actions in library CLIENT:
—————————
kpuActionDefault – dump OCI data
kpuActionSignalCrash
– crash and produce a core dump (if supported and possible)
kpudpaActionDpapi – DataPump dump action
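
Many of these actions can also be invoked directly from ORADEBUG without attaching them to an event; a sketch using the standard oradebug dump syntax (the action name and level are only examples):

SQL> oradebug setmypid
SQL> oradebug dump HEAPDUMP 1
SQL> oradebug tracefile_name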

ORADEBUG DOC COMPONENT

SQL> ORADEBUG DOC COMPONENT

Components in library DIAG:
————————–
diag_uts Unified Tracing Service (dbgt, dbga)
uts_vw UTS viewer toolkit (dbgtp, dbgtn)
diag_adr Automatic Diagnostic Repository (dbgr)
ams_comp ADR Meta-data Repository (dbgrm)
ame_comp ADR Export/Import Services (dbgre)
ami_comp ADR Incident Meta-data Services (dbgri)
diag_ads Diagnostic Directory and File Services (dbgrf, sdbgrf, sdbgrfu, sdbgrfb)
diag_hm Diagnostic Health Monitor
diag_ips Diagnostic Incident Packaging System
diag_dde Diagnostic Data Extractor (dbge)
diag_fmwk Diagnostic Framework (dbgc)
diag_ilcts Diagnostic Inter-Library Compile-time Service (dbgf)
diag_attr Diagnostic Attributes Management
diag_comp Diagnostic Components Management
diag_testp Diagnostic component test parent (dbgt)
diag_testc1 Diagnostic component test child 1
diag_testc2 Diagnostic component test child 2
KGSD Kernel Generic Service Debugging (kgsd)
diag_events Diagnostic Events (dbgd)
diag_adl Diagnostic ARB Alert Log (dbgrl, dbgrlr)
diag_vwk Diagnostic viewer toolkit (dbgv)
diag_vwk_parser Diagnostic viewer parser (dbgvp, dbgvl)
diag_vwk_uts Diagnostic viewer for UTS traces and files (dbgvf)
diag_vwk_ams Diagnostic viewer for AMS metadata (dbgvm)
diag_vwk_ci Diagnostic viewer for command line (dbgvci)
kghsc KGHSC Compact Stream (kghsc)
dbgxtk DBGXTK xml toolkit (dbgxtk)

Components in library RDBMS:
————————–
SQL_Compiler SQL Compiler
SQL_Parser SQL Parser (qcs)
SQL_Semantic SQL Semantic Analysis (kkm)
SQL_Optimizer SQL Optimizer
SQL_Transform

SQL_MVRW SQL Materialized View Rewrite
SQL_VMerge SQL View Merging (kkqvm)
SQL_Virtual SQL Virtual Column (qksvc, kkfi)
SQL_APA SQL Access Path Analysis (apa)
SQL_Costing SQL Cost-based Analysis (kko, kke)
SQL_Parallel_Optimization SQL Parallel Optimization (kkopq)
SQL_Code_Generator SQL Code Generator (qka, qkn, qke, kkfd, qkx)
SQL_Parallel_Compilation SQL Parallel Compilation (kkfd)
SQL_Expression_Analysis SQL Expression Analysis (qke)
SQL_Plan_Management SQL Plan Managment (kkopm)
MPGE MPGE (qksctx)
SQL_Execution SQL Execution (qer, qes, kx, qee)
Parallel_Execution Parallel Execution (qerpx, qertq, kxfr, kxfx, kxfq, kxfp)
PX_Messaging Parallel Execution Messaging (kxfp)
PX_Group Parallel Execution Slave Group (kxfp)
PX_Affinity Parallel Affinity (ksxa)
PX_Buffer Parallel Execution Buffers (kxfpb)
PX_Granule Parallel Execution Granules (kxfr)
PX_Control Parallel Execution Control (kxfx)
PX_Table_Queue Parallel Execution Table Queues (kxfq)
PX_Scheduler Parallel Execution Scheduler (qerpx)
PX_Queuing Parallel Execution Queuing (kxfxq)
Bloom_Filter Bloom Filter (qerbl, qesbl)
PX_Blackbox Parallel Execution Blackbox (kxf)
PGA_Manage PGA Memory Management
PGA_Compile PGA Memory Compilation
PGA_IMM PGA Memory Instance Manage
PGA_CMM PGA Memory Cursor Manage
PGA_ADV PGA Memory Advisor
rdbms_dde RDBMS Diagnostic Data Extractor (dbke)
VOS VOS (ks)
hang_analysis Hang Analysis (ksdhng)
background_proc Background Processes (ksb, ksbt)
system_param System Parameters (ksp, kspt)
ksu Kernel Service User (ksu)
ksutac KSU Timeout Actions
ksv_trace Kernel Services Slave Management (ksv)
sql_mon SQL Monitor (keswx)
sql_mon_deamon SQL Monitor Deamon
sql_mon_query SQL Monitor Query
CACHE_RCV Cache Recovery (kcv, kct, kcra, kcrp, kcb)
DIRPATH_LOAD Direct Path Load (kl, kdbl, kpodp)
DIRPATH_LOAD_BIS Direct Path Kpodpbis Routine (kpodp)
RAC Real Application Clusters
GES Global Enqueue Service
GCS Global Cache Service (kjb)
GSIPC Global Enqueue/Cache Service IPC
KSI Kernel Service Instance locking (ksi)
RAC_ENQ Enqueue Operations
RAC_RCFG Reconfiguration
RAC_DRM Dynamic Remastering
RAC_MRDOM Multiple Recovery Domains
CGS Cluster Group Services (kjxg)
CGSIMR Instance Membership Recovery (kjxgr)
DD GES Deadlock Detection
GCS_BSCN Broadcast SCN (kjb, kcrfw)
RAC_WLM Work Load Management (wlm)
RAC_MLMDS RAC Multiple LMS (kjm)
GCS_READMOSTLY GCS Read-mostly (kjb)
GCS_READER_BYPASS GCS Reader Bypass (kjb)
GCS_DELTAPUSH GCS Delta Push (kjb)
db_trace RDBMS server only tracing
kst server trace layer tracing (kst)
ddedmp RDBMS Diagnostic Data Extractor Dumper (dbked)
cursor Shared Cursor (kxs, kks)
Bind_Capture Bind Capture Tracing
KSM Kernel Service Memory (ksm)
KSE Kernel Service Error Manager (kse)
explain SQL Explain Plan (xpl)
rdbms_event RDBMS Events (dbkd)
LOB_INODE Lob Inode (kdli)
rdbms_adr RDBMS ADR (dbkr)
ASM Automatic Storage Management (kf)
KFK KFK (kfk)
KFKIO KFK IO (kfkio)
KFKSB KFK subs (kfksubs)
KFN ASM Networking subsystem (kfn)
KFNU ASM Umbillicus (kfnm, kfns, kfnb)
KFNS ASM Server networking (kfns)
KFNC ASM Client networking (kfnc)
KFIS ASM Intelligent Storage interfaces (kfis)
KFM ASM Node Monitor Interface Implementation (kfm)
KFMD ASM Node Monitor Layer for Diskgroup Registration (kfmd)
KFMS ASM Node Monitor Layers Support Function Interface (kfms)
KFFB ASM Metadata Block (kffb)
KFFD ASM Metadata Directory (kffd)
KFZ ASM Zecurity subsystem (kfz)
KFC ASM Cache (kfc)
KFR ASM Recovery (kfr)
KFE ASM attributes (kfe)
KFDP ASM PST (kfdp)
KFG ASM diskgroups (kfg)
DML DML Drivers (ins, del, upd)
Health_Monitor Health Monitor
DRA Data Repair Advisor
DIRACC Direct access to fixed tables (kqfd)
ASH Active Session History (kewa)
PART Partitioning (kkpo, qespc, qesma, kkpa, qergi)
PART_IntPart Interval Partitioning
PART_Dictionary Partitioning Dictionary (kkpod)
LOB_KDLW Lob kdlw (kdlw)
LOB_KDLX Lob xfm (kdlx)
LOB_KDLXDUP Lob dedup (kdlxdup)
LOB_KDLRCI Lob rci (kdlrci)
LOB_KDLA SecureFile Archive (kdla)
SQL_Manage SQL Manageability (kes)
SQL_Manage_Infra Other SQL Manageability Infrastructure (kesai, kesqs, kesatm, kesutl, kessi, keswat, keswts, keswsq)
SQL_Tune SQL Tuning Advisor (kest)
SQL_Tune_Auto SQL Tuning Advisor (auto-tune) (kestsa)
SQL_Tune_Index SQL Tuning Advisor (index-tune) (kestsi)
SQL_Tune_Plan SQL Tuning Advisor (plan node analysis) (kestsp)
SQL_Tune_Px SQL Tuning Advisor (parallel execution) (kestsa)
SQL_Tune_Fr SQL Tuning Advisor (fix regression) (kestsa)
SQL_Test_Exec SQL Test-Execute Service (kestse)
SQL_Perf SQL Performance Analyzer (kesp, keswpi)
SQL_Repair SQL Repair Advisor (kesds)
Auto_Tune_Opt Auto Tuning Optimizer (kkoat)
SQL_trace_parser SQL trace parser (kesstp)
SQL_Analyze SQL Analyze (qksan)
SQL_DS SQL Dynamic Sampling Services (qksds)
SQL_DDL SQL DDL (atb, ctc, dtb)
RAT_WCR Real Application Test: Workload Capture and Replay (kec)
Spatial Spatial (md)
Spatial_IND Spatial Indexing (mdr)
Spatial_GR Spatial GeoRaster (mdgr)
Text Text (dr)
rdbms_gc RDBMS Diagnostic Generic Configuration (dbkgc)
XS XS Fusion Security (kzx)
XSSESSION XS Session (kzxs)
XSPRINCIPAL XS Principal (kzxu)
XSSECCLASS XS Security Class (kzxc)
XSXDS XS Data Security (kzxd)
XSVPD XS VPD
XSXDB_DEFAULT XS XDB
XS_MIDTIER XS Midtier (kpuzxs)
AQ Streams Advanced Queuing (kwq, kkcn, kpon, kpoaq, kpce, kpcm, kpun, kpuaq)
AQ_DEQ Streams Advanced Queuing Dequeue (kwqid, kwqdl)
AQ_TM Streams Advanced Queuing Time Manager (kwqit, kwqmn)
KSFM Kernel Service File Mapping (ksfm)
KXD Exadata specific Kernel modules (kxd)
KXDAM Exadata Disk Auto Manage (kxdam)
KCFIS Exadata Predicate Push (kcfis)

Components in library GENERIC:
————————–
Generic_VOS Generic VOS
VOS_Heap_Manager VOS Heap Manager
VOS_Latches VOS Latches
VOS_GST VOS Generic Stack Trace (kgds)
XML XML (qmxt, qmxq)
Generic_XDB Generic XDB
XDB_Repository XDB Repository (qme)
XDB_Protocols XDB Protocols (qmh, qmf, qmn)
XDB_Query XDB Query (qmxtra, qerxp)
XDB_XMLIndex XDB XMLIndex (qmxtri, qmixq)
XDB_Schema XDB Schema (qmxtri, qmixq)
XDB_XOB XDB XOB (qmx)
XDB_CSX XDB CSX (qmcxe, qmcxd)
XDB_Default XDB Default
LOB LOB (koll, kola)
LOB_Refcount LOB Refcount (kolr)
LOB_Default LOB Default (kole, kokl, koxs, kokla, koklm, koklv)
KGH KGH Memory Allocator (kgh)
KGF ASM Automatic Storage Management (kgf)
LIBCACHE LIBCACHE (kgl, kql)
OBJECTS OBJECTS
OBJECTS_DDL OBJECTS DDL (kokt)
OBJECTS_Types OBJECTS Types (kot, ko, ort)
OBJECTS_Images OBJECTS Images (koke, kot, kad)
OBJECTS_Anydata OBJECTS Anydata (kokla, kolo, kot, kad)
OBJECTS_Streams OBJECTS Streams (koxs)
OBJECTS_Dict OBJECTS Dictionary (kkdo, qcdo)
OBJECTS_Semanal OBJECTS Semantic Analysis (koks, qcso, qcto)
OBJECTS_Default OBJECTS Default

Components in library CLIENT:
————————–
Client_KPU Client KPU
KPU_Memory KPU Memory
KPU_TTC KPU TTC
KPU_Relational KPU Relational
KPU_Objects KPU Objects
KPU_LOBS KPU LOBS
SQLLDR_Load SQLLDR Load (ul)
DPAPI_Load DPAPI Load (kpudp)

Components in library LIBCELL:
————————–
Client_Library Client Library
Disk_Layer Disk Layer
Network_Layer Network Layer
IPC_Layer IPC Layer

Components in library ORANET:
————————–
TNSLSNR OraNet Listener
NET_NSGR Network Service Generic Registration
NET_NSGI TNI Network Service Generic Listener User-defined class
CMAN OraNet Connection Manager
NET OraNet Services
NET_NI Network Interface Layer
NET_NS Network Session Layer
NET_NT Network Transport Layer
NET_NTM Network Transport Mailbox Layer
NET_NTP Network Transport IPC Layer
NET_NTT Network Transport TCP/IP Layer
NET_NTUS Network Transport Unix Domain Sockets Layer
NET_NL Network Library
NET_NA Network Authentication
NET_NZ Network Zecurity
NET_NTZ Network SSL
NET_NU Network Trace Route Utility
NET_NN Network Names

Components in library ADVCMP:
————————–
ADVCMP_MAIN Archive Compression (kdz)
ADVCMP_COMP Archive Compression: Compression (kdzc, kdzh, kdza)
ADVCMP_DECOMP Archive Compression: Decompression (kdzd, kdzs)

Combining this with oradebug doc event name trace:

SQL> oradebug doc event name trace

trace: Main event to control UTS tracing

Usage
——-
trace [ component <component_name> ]
   disk       < default | lowest | low | medium | high | highest | disable >,
   memory     < default | lowest | low | medium | high | highest | disable >,
   get_time   < disable | default | seq | highres | seq_highres >,
   get_stack  < disable | default | force >,
   operation  <string>,
   function   <string>,
   file       <string>,
   line       <ub4>

example:

alter session set events 'trace[RDBMS.SQL_Transform] [SQL: 548447mzsjars] disk=high';
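
And to stop that trace, the same component specification is simply switched off, like any other event:

alter session set events 'trace[RDBMS.SQL_Transform] off';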

Scripts: curheaps.sql

December 17, 2011 Internals, oracle

This script shows the size of the heaps used by each cursor.

--------------------------------------------------------------------------------

-- File name:   curheaps.sql
-- Purpose:     Show main cursor data block heap sizes and their contents
--              (heap0 and heap6)

-- Author:      Tanel Poder
-- Copyright:   (c) http://www.tanelpoder.com

-- Usage:       @curheaps <hash_value> <child#>

--              @curheaps 942515969 %   -- shows a summary of cursor heaps
--              @curheaps 942515969 0   -- shows detail for child cursor 0

-- Other:       "Child" cursor# 65535 is actually the parent cursor

--------------------------------------------------------------------------------

col curheaps_size0 heading SIZE0 for 9999999
col curheaps_size1 heading SIZE1 for 9999999
col curheaps_size2 heading SIZE2 for 9999999
col curheaps_size3 heading SIZE3 for 9999999
col curheaps_size4 heading SIZE4 for 9999999
col curheaps_size5 heading SIZE5 for 9999999
col curheaps_size6 heading SIZE6 for 9999999
col curheaps_size7 heading SIZE7 for 9999999

col KGLOBHD0 new_value v_curheaps_kglobhd0 print
col KGLOBHD1 new_value v_curheaps_kglobhd1 noprint
col KGLOBHD2 new_value v_curheaps_kglobhd2 noprint
col KGLOBHD3 new_value v_curheaps_kglobhd3 noprint
col KGLOBHD4 new_value v_curheaps_kglobhd4 print
col KGLOBHD5 new_value v_curheaps_kglobhd5 noprint
col KGLOBHD6 new_value v_curheaps_kglobhd6 print
col KGLOBHD7 new_value v_curheaps_kglobhd7 noprint

select
KGLNAHSH,
KGLHDPAR,
kglobt09 CHILD#,
KGLHDADR,
KGLOBHD0, KGLOBHS0 curheaps_size0,
KGLOBHD1, KGLOBHS1 curheaps_size1,
KGLOBHD2, KGLOBHS2 curheaps_size2,
KGLOBHD3, KGLOBHS3 curheaps_size3,
KGLOBHD4, KGLOBHS4 curheaps_size4,
KGLOBHD5, KGLOBHS5 curheaps_size5,
KGLOBHD6, KGLOBHS6 curheaps_size6,
KGLOBHD7, KGLOBHS7 curheaps_size7,
-- KGLOBT00 CTXSTAT,
KGLOBSTA STATUS
from
X$KGLOB
-- X$KGLCURSOR_CHILD
where
KGLNAHSH in (&1)
and KGLOBT09 like ('&2')
order by
KGLOBT09 ASC
/

-- Cursor data block summary
select
'HEAP0' heap
, ksmchcls class
, ksmchcom alloc_comment
, sum(ksmchsiz) bytes
, count(*) chunks
from
x$ksmhp
where
KSMCHDS = hextoraw('&v_curheaps_kglobhd0')
group by
'HEAP0'
, ksmchcls
, ksmchcom
order by
sum(ksmchsiz) desc
/

select
'HEAP4' heap
, ksmchcls class
, ksmchcom alloc_comment
, sum(ksmchsiz) bytes
, count(*) chunks
from
x$ksmhp
where
KSMCHDS = hextoraw('&v_curheaps_kglobhd4')
group by
'HEAP4'
, ksmchcls
, ksmchcom
order by
sum(ksmchsiz) desc
/

select
'HEAP6' heap
, ksmchcls class
, ksmchcom alloc_comment
, sum(ksmchsiz) bytes
, count(*) chunks
from
x$ksmhp
where
KSMCHDS = hextoraw('&v_curheaps_kglobhd6')
group by
'HEAP6'
, ksmchcls
, ksmchcom
order by
sum(ksmchsiz) desc
/

-- Cursor data block details

-- select * from x$ksmhp where KSMCHDS = hextoraw('&v_curheaps_kglobhd0');
-- select * from x$ksmhp where KSMCHDS = hextoraw('&v_curheaps_kglobhd6');

For example:

SQL> @cursor 2781999655 65535
old 20: KGLNAHSH in (&1)
new 20: KGLNAHSH in (2781999655)
old 21: and KGLOBT09 like ('&2')
new 21: and KGLOBT09 like ('65535')

KGLNAHSH KGLHDPAR CHILD# KGLHDADR KGLOBHD0 SIZE0 SIZE1 SIZE2
———- ——– ———- ——– ——– ——– ——– ——–
SIZE3 KGLOBHD4 SIZE4 SIZE5 KGLOBHD6 SIZE6 SIZE7 STATUS
——– ——– ——– ——– ——– ——– ——– ———-
2781999655 2C8E8408 65535 2C8E8408 2CAF987C 1781 0 0
0 00 0 0 00 0 0 1

old 10: KSMCHDS = hextoraw('&v_curheaps_kglobhd0')
new 10: KSMCHDS = hextoraw('2CAF987C')

HEAP CLASS ALLOC_COMMENT BYTES CHUNKS
—– ——– —————- ———- ———-
HEAP0 perm permanent memor 1608 2
HEAP0 free free memory 364 2
HEAP0 freeabl kgltbtab 76 1
HEAP0 freeabl kksfbc:hash1 40 2

old 10: KSMCHDS = hextoraw('&v_curheaps_kglobhd4')
new 10: KSMCHDS = hextoraw('00')

no rows selected

old 10: KSMCHDS = hextoraw('&v_curheaps_kglobhd6')
new 10: KSMCHDS = hextoraw('00')

no rows selected

Simulating recovery from an online redo log crash

December 7, 2011 Internals, oracle

[oracle@liu dbs]$ sqlplus '/as sysdba'

SQL*Plus: Release 10.2.0.1.0 – Production on Wed Dec 7 00:06:50 2011

Copyright (c) 1982, 2005, Oracle. All rights reserved.

Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 – Production
With the Partitioning, OLAP and Data Mining options

SQL> create table t as select * from dba_segments;   ----- create test table t

Table created.

SQL> select * from v$log;

GROUP# THREAD# SEQUENCE# BYTES MEMBERS ARC STATUS FIRST_CHANGE# FIRST_TIM
———- ———- ———- ———- ———- — —————- ————- ———
1 1 27 52428800 1 NO CURRENT 782969 06-DEC-11
2 1 24 52428800 1 YES INACTIVE 696233 02-DEC-11
3 1 25 52428800 1 YES INACTIVE 696534 04-DEC-11
10 1 26 52428800 1 YES INACTIVE 749727 06-DEC-11

[oracle@liu liu]$ echo "my sql" > redo01.log
[oracle@liu liu]$
[oracle@liu liu]$

SQL> shutdown abort;   ------------- shutdown abort directly to simulate the crash
ORACLE instance shut down.
SQL> startup
ORACLE instance started.

Total System Global Area 218103808 bytes
Fixed Size 1218580 bytes
Variable Size 75499500 bytes
Database Buffers 134217728 bytes
Redo Buffers 7168000 bytes
Database mounted.
ORA-00313: open failed for members of log group 1 of thread 1
ORA-00312: online log 1 thread 1: ‘/u01/app/oracle/oradata/liu/redo01.log’
ORA-27046: file size is not a multiple of logical block size
Additional information: 1

SQL> create pfile from spfile;

File created.

SQL> !

[oracle@liu dbs]$ vi initliu.ora

liu.__db_cache_size=134217728
liu.__java_pool_size=4194304
liu.__large_pool_size=4194304
liu.__shared_pool_size=67108864
liu.__streams_pool_size=0
*.audit_file_dest='/u01/app/oracle/admin/liu/adump'
*.background_dump_dest='/u01/app/oracle/admin/liu/bdump'
*.compatible='10.2.0.1.0'
*.control_files='/u01/app/oracle/oradata/liu/control01.ctl','/u01/app/oracle/oradata/liu/control02.ctl','/u01/app/oracle/oradata/liu/control03.ctl'
*.core_dump_dest='/u01/app/oracle/admin/liu/cdump'
*.db_block_size=8192
*.db_domain=''
*.db_file_multiblock_read_count=16
*.db_name='liu'
*.db_recovery_file_dest='/u01/app/oracle/flash_recovery_area'
*.db_recovery_file_dest_size=2147483648
*.db_unique_name='liu'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=liuXDB)'
*.fal_client='liu'
*.fal_server='standby'
*.job_queue_processes=10
*.log_archive_config='DG_CONFIG=(liu,standby)'
*.log_archive_dest_1='location=/u01/arch/ valid_for=(ALL_LOGFILES,ALL_ROLES) db_unique_name=liu'
*.log_archive_dest_2='service=standby lgwr async valid_for=(online_logfiles,primary_role) db_unique_name=standby'
*.log_archive_format='%t_%s_%r.dbf'
*.open_cursors=300
*.pga_aggregate_target=71303168
*.processes=300
*.remote_login_passwordfile='EXCLUSIVE'
*.sessions=335
*.sga_target=216006656
*.standby_file_management='auto'
*.undo_management='AUTO'
*.undo_tablespace='UNDOTBS1'
*.user_dump_dest='/u01/app/oracle/admin/liu/udump'
*._allow_resetlogs_corruption=TRUE
*._allow_error_simulation=TRUE

[oracle@liu dbs]$ sqlplus '/as sysdba'

SQL*Plus: Release 10.2.0.1.0 – Production on Tue Dec 6 23:57:41 2011

Copyright (c) 1982, 2005, Oracle. All rights reserved.

Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 – Production
With the Partitioning, OLAP and Data Mining options

SQL> shutdown abort;
ORACLE instance shut down.
SQL> create spfile from pfile;

File created.

SQL> startup nomount;
ORACLE instance started.

Total System Global Area 218103808 bytes
Fixed Size 1218580 bytes
Variable Size 75499500 bytes
Database Buffers 134217728 bytes
Redo Buffers 7168000 bytes
SQL> alter database mount;

Database altered.

SQL> show parameter all

NAME TYPE VALUE
———————————— ———– ——————————
_allow_error_simulation boolean TRUE
_allow_resetlogs_corruption boolean TRUE
fast_start_parallel_rollback string LOW
parallel_adaptive_multi_user boolean TRUE
parallel_automatic_tuning boolean FALSE
parallel_execution_message_size integer 2148
parallel_instance_group string
parallel_max_servers integer 40
parallel_min_percent integer 0
parallel_min_servers integer 0
parallel_server boolean FALSE

NAME TYPE VALUE
———————————— ———– ——————————
parallel_server_instances integer 1
parallel_threads_per_cpu integer 2
recovery_parallelism integer 0
SQL> recover database
ORA-00283: recovery session canceled due to errors
ORA-00313: open failed for members of log group 1 of thread 1
ORA-00312: online log 1 thread 1: ‘/u01/app/oracle/oradata/liu/redo01.log’
ORA-27046: file size is not a multiple of logical block size
Additional information: 1

SQL> recover database until cancel;
ORA-00279: change 782970 generated at 12/06/2011 22:05:16 needed for thread 1
ORA-00289: suggestion : /u01/arch/1_27_767998669.dbf
ORA-00280: change 782970 for thread 1 is in sequence #27

Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
auto
ORA-00308: cannot open archived log ‘/u01/arch/1_27_767998669.dbf’
ORA-27037: unable to obtain file status
Linux Error: 2: No such file or directory
Additional information: 3

ORA-00308: cannot open archived log ‘/u01/arch/1_27_767998669.dbf’
ORA-27037: unable to obtain file status
Linux Error: 2: No such file or directory
Additional information: 3

ORA-01547: warning: RECOVER succeeded but OPEN RESETLOGS would get error below
ORA-01194: file 1 needs more recovery to be consistent
ORA-01110: data file 1: ‘/u01/app/oracle/oradata/liu/system01.dbf’

SQL>
SQL> alter database open resetlogs;
alter database open resetlogs
*
ERROR at line 1:
ORA-01092: ORACLE instance terminated. Disconnection forced

SQL> shutdown immediate;
ORA-24324: service handle not initialized
ORA-01041: internal error. hostdef extension doesn’t exist
SQL>
SQL>
SQL> startup;
ORA-24324: service handle not initialized
ORA-01041: internal error. hostdef extension doesn’t exist
SQL> exit
Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 – Production
With the Partitioning, OLAP and Data Mining options

[oracle@liu dbs]$ ps -ef |grep ora_
oracle 10245 10041 0 00:01 pts/3 00:00:00 grep ora_
[oracle@liu dbs]$ sqlplus '/as sysdba'

SQL*Plus: Release 10.2.0.1.0 – Production on Wed Dec 7 00:01:43 2011

Copyright (c) 1982, 2005, Oracle. All rights reserved.

Connected to an idle instance.

SQL> startup;
ORACLE instance started.

Total System Global Area 218103808 bytes
Fixed Size 1218580 bytes
Variable Size 75499500 bytes
Database Buffers 134217728 bytes
Redo Buffers 7168000 bytes
Database mounted.
Database opened.
SQL>
SQL>
SQL>
SQL> set line 200
SQL> select * from v$log;

GROUP# THREAD# SEQUENCE# BYTES MEMBERS ARC STATUS FIRST_CHANGE# FIRST_TIM
———- ———- ———- ———- ———- — —————- ————- ———
1 1 1 52428800 1 YES INACTIVE 782971 07-DEC-11
2 1 2 52428800 1 YES INACTIVE 782972 07-DEC-11
3 1 3 52428800 1 NO CURRENT 802977 07-DEC-11
10 1 0 52428800 1 YES UNUSED 0

SQL> alter system switch logfile;

System altered.

SQL> /

System altered.

SQL> /

System altered.

SQL> /

System altered.

SQL> select * from v$log;

GROUP# THREAD# SEQUENCE# BYTES MEMBERS ARC STATUS FIRST_CHANGE# FIRST_TIM
———- ———- ———- ———- ———- — —————- ————- ———
1 1 5 52428800 1 YES INACTIVE 803877 07-DEC-11
2 1 6 52428800 1 YES INACTIVE 803879 07-DEC-11
3 1 7 52428800 1 NO CURRENT 803882 07-DEC-11
10 1 4 52428800 1 YES INACTIVE 803875 07-DEC-11

SQL> alter system switch logfile;

System altered.

SQL> select * from v$log;

GROUP# THREAD# SEQUENCE# BYTES MEMBERS ARC STATUS FIRST_CHANGE# FIRST_TIM
———- ———- ———- ———- ———- — —————- ————- ———
1 1 5 52428800 1 YES INACTIVE 803877 07-DEC-11
2 1 6 52428800 1 YES INACTIVE 803879 07-DEC-11
3 1 7 52428800 1 YES INACTIVE 803882 07-DEC-11
10 1 8 52428800 1 NO CURRENT 803887 07-DEC-11

SQL> alter system switch logfile;

System altered.

SQL> select * from v$log;

GROUP# THREAD# SEQUENCE# BYTES MEMBERS ARC STATUS FIRST_CHANGE# FIRST_TIM
———- ———- ———- ———- ———- — —————- ————- ———
1 1 9 52428800 1 NO CURRENT 803893 07-DEC-11
2 1 6 52428800 1 YES INACTIVE 803879 07-DEC-11
3 1 7 52428800 1 YES INACTIVE 803882 07-DEC-11
10 1 8 52428800 1 YES INACTIVE 803887 07-DEC-11

SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.

SQL> startup;
ORACLE instance started.

Total System Global Area 218103808 bytes
Fixed Size 1218580 bytes
Variable Size 75499500 bytes
Database Buffers 134217728 bytes
Redo Buffers 7168000 bytes
Database mounted.
Database opened.
SQL> desc t;
ERROR:
ORA-04043: object t does not exist

We can see that although the database was opened, the changes recorded in redo01.log are lost: the table t created before the crash no longer exists.

[Repost] DRM Issues in Oracle 10g RAC and How to Disable It

December 1, 2011 Internals, oracle, RAC

In a RAC environment, Oracle uses the GRD (Global Resource Directory) to track the resource information of each RAC node; it is managed by two services, GCS (Global Cache Service) and GES (Global Enqueue Service).
Because every node in a RAC has its own SGA and buffer cache, GCS and GES designate one instance to master each cache resource in order to guarantee cache coherency and improve performance; that node is then the Resource Master for the resource.
Before 10g, cache resources could not move between nodes unless an instance was restarted or a node was evicted from the cluster because of some other failure. DRM in 10g addresses this: it allows a cache resource to be remastered to the node that accesses that data most frequently, improving RAC performance. DRM stands for Dynamic Resource Mastering; Metalink Doc ID 390483.1 describes it in detail.

In theory, with this mechanism a non-master node that needs frequent access to a resource can be promoted to master of that resource, eliminating a large amount of subsequent cross-node resource access.
However, a well-designed RAC application should fundamentally do everything it can to avoid accessing the same resource from multiple nodes; if the same resource is never accessed from multiple nodes, the problem DRM is meant to solve does not exist in the first place. Furthermore, DRM itself consumes resources and has numerous bugs; on a poorly designed system, frequent DRM can trigger library cache lock waits and hang the instance.
Worse, on a 10.2.0.3 system there was a case involving a huge telecom database: during the off-peak batch window, RAC node 2 cached a 40 GB table, and once business hours began the OLTP workload on node 1 needed frequent access to that table. At that point the failure occurred: DRM kicked in and node 2 began shipping the 40 GB of cached data to node 1, the gigabit bandwidth of the interconnect was exhausted, and the RAC was effectively frozen for a full 40 minutes.
A post-mortem look at the network traffic graphs showed that during that period the private interconnect traffic stayed at a peak of around 90 MB/s.
Metalink confirmed that the problem was indeed caused by the DRM mechanism. The final solution was to disable the DRM feature using hidden parameters:
_gc_affinity_time=0
_gc_undo_affinity=FALSE

Changing these two parameters requires a database restart; the following two parameters can be used to turn DRM off instead:

_gc_affinity_limit=250  

_gc_affinity_minimum=10485760 
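
A sketch of how these settings would be applied (hidden parameters must be double-quoted; depending on the version they may only take effect after an instance restart, so scope=spfile is the safe choice):

alter system set "_gc_affinity_time"=0 scope=spfile sid='*';
alter system set "_gc_undo_affinity"=false scope=spfile sid='*';
alter system set "_gc_affinity_limit"=250 scope=spfile sid='*';
alter system set "_gc_affinity_minimum"=10485760 scope=spfile sid='*';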

Fundamentally, then, DRM is only a theoretical improvement and does not really pay off in large production applications. It is much like the automatic load balancing Oracle introduced for RAC: something that looks good on paper, and anyone who actually relies on it is asking for trouble. Perhaps it does not matter on a database under very light load, but I have never come across one; then again, with so little load, why run RAC at all?

reference: http://www.dbaleading.com/Article.asp?id=42

[Repost] Know More about Oracle PGA Memory

November 8, 2011 Internals, oracle

Question :

1. How can we know available PGA and temporary tablespace before we issue a huge operation?
2. Can we estimate PGA and temporary tablespace for a huge operation?
3. As we know, there is limitation for a user process. Can we set unlimited and how to do it?

We say:

PROPOSED SOLUTION(S)
====================
ANSWER
======
Note 223730.1 Title Automatic PGA Memory Management in 9i and 10g

This document briefly describes how Oracle 9i manages PGA work areas, how to
tune them, and some of the common issues and common misunderstandings.

Automatic PGA Memory Management
***Checked for relevance on 07-Jan-2011***

Automatic PGA Memory Management
-------------------------------

The Process Global Area, often known as the Program Global Area (PGA), resides in the
private memory of the server process. It contains global variables, data
structures, and control information for a server process. An example of such information
is the runtime area of a cursor. Each time a cursor is executed, a new runtime
area is created for that cursor in the PGA memory region of the server process
executing that cursor.

The performance of complex long-running queries, typical in a DSS environment,
depends to a large extent on the memory available in the Program Global Area (PGA),
which is also called the work area.

The size of a work area can be controlled and tuned. Generally, bigger work areas
can significantly improve the performance of a particular operator at the cost of
higher memory consumption. Ideally, the size of a work area is big enough that it
can accommodate the input data and auxiliary memory structures allocated by its
associated SQL operator. This is known as the optimal size of a work area (e.g.
a memory sort). When the size of the work area is smaller than optimal
(e.g. a disk sort), the response time increases, because an extra pass is performed
over part of the input data. This is known as the one-pass size of the work area.
Under the one-pass threshold, when the size of a work area is far too small compared
to the input data size, multiple passes over the input data are needed. This could
dramatically increase the response time of the operator. This is known as the multi-pass
size of the work area.

In Oracle8i administrators sized the PGA by carefully adjusting a number of
initialization parameters, such as, SORT_AREA_SIZE, HASH_AREA_SIZE,
BITMAP_MERGE_AREA_SIZE, and CREATE_BITMAP_AREA_SIZE, etc.

Starting with Oracle9i, an option is provided to completely automate the
management of PGA memory. Administrators merely need to specify the
maximum amount of PGA memory available to an instance using a newly
introduced initialization parameter PGA_AGGREGATE_TARGET.

The database server automatically distributes this memory among various
active queries in an intelligent manner so as to ensure maximum performance
benefits and the most efficient utilization of memory. Furthermore, Oracle9i
and newer releases can adapt itself to changing workload thus utilizing
resources efficiently regardless of the load on the system. The amount of
the PGA memory available to an instance can be changed dynamically by
altering the value of the PGA_AGGREGATE_TARGET parameter making it possible
to add to and remove PGA memory from an active instance online. Since the
database engine itself is better equipped to determine SQL execution memory
requirements, database administrators should use this feature and not try
to tune the PGA manually. This should translate to better throughput for
large number of users on the system as well as improved response time for
queries.

The automatic SQL execution memory management feature is enabled by setting the
parameter WORKAREA_SIZE_POLICY to AUTO and by specifying a size of
PGA_AGGREGATE_TARGET in the initialization file. These two parameters can also be
set dynamically using the ALTER SYSTEM command. In the absence of either of these
parameters, the database will revert to manual PGA management mode. In Oracle9i
Release 2, an advisory for PGA_AGGREGATE_TARGET was introduced. Just like in Buffer
Cache Advisory, the PGA Advisory will suggest the appropriate size for PGA memory
and thus make PGA tuning an even simpler task.

Version specific notes:

Until 9iR2, PGA_AGGREGATE_TARGET parameter controls the sizing of workareas for
all dedicated server connections, but it has no effect on shared servers (aka
MTS) connections and the *_AREA_SIZE parameters will take precedence in this
case.

In 10g, PGA_AGGREGATE_TARGET controls workareas allocated by both dedicated and
shared connections.

As of 11g, Automatic Memory Management (AMM) expands to managing both SGA and
PGA memory. Under memory pressure for PGA memory, SGA memory will be
re-allocated for use by a process to accommodate workarea needs. On the
flip-side, if PGA memory is under allocated, memory can be added to the
auto-tuned components in the SGA beyond the original SGA configuration.
NOTE: With AMM, setting an explicit value for PGA_AGGREGATE_TARGET will
act as a minimum setting that AMM will not shrink below. See note:443746.1
for more information.

How To Tune PGA_AGGREGATE_TARGET
———————————-

The first question we will have when we set this parameter is what is the best
value for it?

To determine the appropriate setting for PGA_AGGREGATE_TARGET parameter we
recommend to follow the following steps

1- Make a first estimate for PGA_AGGREGATE_TARGET based on the following rule

- For OLTP systems

PGA_AGGREGATE_TARGET = (<total physical memory> * 80%) * 20%

- For DSS systems

PGA_AGGREGATE_TARGET = (<total physical memory> * 80%) * 50%

So for example, if we have an Oracle instance configured on a system with 16 GB of
physical memory, then the suggested PGA_AGGREGATE_TARGET value we
should start with, in case we have an OLTP system, is (16 GB * 80%) * 20% ~= 2.5 GB, and
in case we have a DSS system, (16 GB * 80%) * 50% ~= 6.5 GB.

In the above equation, we assume that 20% of the memory will be used by the OS,
and in OLTP system 20% of the remaining memory will be used for
PGA_AGGREGATE_TARGET and the remaining memory is going for Oracle SGA
memory and non-oracle processes memory. So make sure that you have
enough memory for your SGA and also for non-oracle processes
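
Once a starting value is chosen, it can be put in place with something like the following (a sketch; 2500M is simply the OLTP estimate from the 16 GB example above):

alter system set workarea_size_policy = auto scope=both;
alter system set pga_aggregate_target = 2500M scope=both;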

2- A second step in tuning the PGA_AGGREGATE_TARGET is to monitor performance
using available PGA statistics and see if PGA_AGGREGATE_TARGET is under sized
or over sized. Several dynamic performance views are available for this
purpose:

– V$PGASTAT

This view provides instance-level statistics on the PGA memory usage and
the automatic PGA memory manager. For example:

SELECT * FROM V$PGASTAT;

NAME VALUE
——————————————————–
aggregate PGA target parameter 524288000 bytes
aggregate PGA auto target 463435776 bytes
global memory bound 25600 bytes
total PGA inuse 9353216 bytes
total PGA allocated 73516032 bytes
maximum PGA allocated 698371072 bytes
total PGA used for auto workareas 0 bytes
maximum PGA used for auto workareas 560744448 bytes
total PGA used for manual workareas 0 bytes
maximum PGA used for manual workareas 0 bytes
over allocation count 0 bytes
total bytes processed 4.0072E+10 bytes
total extra bytes read/written 3.1517E+10 bytes
cache hit percentage 55.97 percent

Main statistics to look at

(a) aggregate PGA auto target: This gives the amount of PGA memory Oracle can
use for work areas running in automatic mode. This part of memory represents the
tunable part of PGA memory, i.e. memory allocated for memory-intensive SQL operators
like sorts, hash joins, group-by, bitmap merge and bitmap index create. This memory
can be shrunk or expanded as a function of the system load. Other parts of
PGA memory are known as untunable, i.e. they require a size that can't be negotiated
(e.g. context information for each session, for each open/active cursor,
PL/SQL or Java memory).

So, the aggregate PGA auto target should not be small compared to the value of
PGA_AGGREGATE_TARGET. You must ensure that enough PGA memory is left for work areas
running in automatic mode.

(b) total PGA used for auto workareas: This gives the actual tunable PGA memory used by
the system. The 'maximum PGA used for auto workareas' gives the maximum reached
by the previous statistic since instance startup.

(c) total PGA in use: This gives the total PGA memory in use. The detail of this
value can be found in the PGA_USED_MEM column of the V$PROCESS view.
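
A sketch of a per-process breakdown using those V$PROCESS columns, ordered by current usage:

SELECT spid, program,
       round(pga_used_mem/1024/1024)  used_mb,
       round(pga_alloc_mem/1024/1024) alloc_mb,
       round(pga_max_mem/1024/1024)   max_mb
FROM v$process
ORDER BY pga_used_mem DESC;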

Oracle92, 10g, 11g:
(d) over allocation count: Over-allocating PGA memory can happen if the value of
PGA_AGGREGATE_TARGET is too small to accommodate the untunable PGA memory part plus
the minimum memory required to execute the work area workload. When this happens,
Oracle cannot honor the initialization parameter PGA_AGGREGATE_TARGET, and extra
PGA memory needs to be allocated. The over allocation count is the number of times the
system was detected in this state since database startup. This count should ideally be
equal to zero.

Oracle92,10g, 11g:
(e) cache hit percentage: This metric is computed by Oracle to reflect the
performance of the PGA memory component. It is cumulative from instance
start-up. A value of 100% means that all work areas executed by the system
since instance start-up have used an optimal amount of PGA memory. This is,
of course, ideal but rarely happens except maybe for pure OLTP systems. In
reality, some work areas run one-pass or even multi-pass, depending on the
overall size of the PGA memory. When a work area cannot run optimally, one or
more extra passes is performed over the input data. This reduces the cache
hit percentage in proportion to the size of the input data and the number of
extra passes performed. This value is computed from the "total bytes processed"
and "total extra bytes read/written" statistics available in the same view, using
the following formula:

PGA Cache Hit Ratio = (total bytes processed * 100) /
                      (total bytes processed + total extra bytes read/written)
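
The same ratio can be computed directly from V$PGASTAT; a sketch using the statistic names shown in the sample output above:

SELECT round(p.value * 100 / (p.value + e.value), 2) pga_cache_hit_pct
FROM (SELECT value FROM v$pgastat WHERE name = 'total bytes processed') p,
     (SELECT value FROM v$pgastat WHERE name = 'total extra bytes read/written') e;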

– V$SQL_WORKAREA_HISTOGRAM (Oracle92, 10g, 11g)

This view shows the number of work areas executed with optimal memory size, one-
pass memory size, and multi-pass memory size since instance start-up. Statistics
in this view are subdivided into buckets that are defined by the optimal memory
requirement of the work area. Each bucket is identified by a range of optimal
memory requirements specified by the values of the columns LOW_OPTIMAL_SIZE and
HIGH_OPTIMAL_SIZE.

Example :

The following query shows statistics for all nonempty buckets.

SELECT LOW_OPTIMAL_SIZE/1024 low_kb,(HIGH_OPTIMAL_SIZE+1)/1024 high_kb,
optimal_executions, onepass_executions, multipasses_executions
FROM v$sql_workarea_histogram
WHERE total_executions != 0;

The result of the query might look like the following:

LOW_KB HIGH_KB OPTIMAL_EXECUTIONS ONEPASS_EXECUTIONS MULTIPASSES_EXECUTIONS
—— ——- —————— —————— ———————-
8 16 156255 0 0
16 32 150 0 0
32 64 89 0 0
64 128 13 0 0
128 256 60 0 0
256 512 8 0 0
512 1024 657 0 0
1024 2048 551 16 0
2048 4096 538 26 0
4096 8192 243 28 0
8192 16384 137 35 0
16384 32768 45 107 0
32768 65536 0 153 0
65536 131072 0 73 0
131072 262144 0 44 0
262144 524288 0 22 0

The query result shows that, in the 1024 KB to 2048 KB bucket, 551 work areas used
an optimal amount of memory, while 16 ran in one-pass mode and none ran in
multi-pass mode. It also shows that all work areas under 1 MB were able to run in
optimal mode.

You can also use V$SQL_WORKAREA_HISTOGRAM to find the percentage of times work
areas were executed in optimal, one-pass, or multi-pass mode since start-up.

Example :

SELECT optimal_count, round(optimal_count*100/total, 2) optimal_perc,
onepass_count, round(onepass_count*100/total, 2) onepass_perc,
multipass_count, round(multipass_count*100/total, 2) multipass_perc
FROM
(SELECT decode(sum(total_executions), 0, 1, sum(total_executions)) total,
sum(OPTIMAL_EXECUTIONS) optimal_count,
sum(ONEPASS_EXECUTIONS) onepass_count,
sum(MULTIPASSES_EXECUTIONS) multipass_count
FROM v$sql_workarea_histogram
WHERE low_optimal_size > 64*1024);   -- for 64 K optimal size

– V$SQL_WORKAREA_ACTIVE

This view can be used to display the work areas that are active (or executing)
in the instance. Small active sorts (under 64 KB) are excluded from the view.
Use this view to precisely monitor the size of all active work areas and to
determine if these active work areas spill to a temporary segment.

Example :

SELECT to_number(decode(SID, 65535, NULL, SID)) sid,
operation_type OPERATION,trunc(EXPECTED_SIZE/1024) ESIZE,
trunc(ACTUAL_MEM_USED/1024) MEM, trunc(MAX_MEM_USED/1024) “MAX MEM”,
NUMBER_PASSES PASS, trunc(TEMPSEG_SIZE/1024) TSIZE
FROM V$SQL_WORKAREA_ACTIVE
ORDER BY 1,2;

SID OPERATION ESIZE MEM MAX MEM PASS TSIZE
— —————– ——— ——— ——— —– ——-
8 GROUP BY (SORT) 315 280 904 0
8 HASH-JOIN 2995 2377 2430 1 20000
9 GROUP BY (SORT) 34300 22688 22688 0
11 HASH-JOIN 18044 54482 54482 0
12 HASH-JOIN 18044 11406 21406 1 120000

This output shows that session 12 (column SID) is running a hash-join having its
work area running in one-pass mode (PASS column). This work area is currently
using 11406 KB of memory (MEM column) and has used, in the past, up to 21406 KB
of PGA memory (MAX MEM column). It has also spilled to a temporary segment of
size 120000 KB. Finally, the column ESIZE indicates the maximum amount of memory
that the PGA memory manager expects this hash-join to use. This maximum is dynamically
computed by the PGA memory manager according to workload.

When a work area is deallocated—that is, when the execution of its associated SQL
operator is complete—the work area is automatically removed from the
V$SQL_WORKAREA_ACTIVE view.

– note: have some other queries we use to monitor SQL execution memory

3- The Third and last step is tuning the PGA_AGGREGATE_TARGET. In Oracle 9i
Release 2 we have 2 new views that help us in this task

– V$PGA_TARGET_ADVICE
– V$PGA_TARGET_ADVICE_HISTOGRAM

By examining these two views, you will be able to determine how key PGA statistics
will be impacted if you change the value of PGA_AGGREGATE_TARGET.

To enable automatic generation of PGA advice performance views, make sure the
following parameters are set:

– PGA_AGGREGATE_TARGET
– STATISTICS_LEVEL. Set this to TYPICAL (the default) or ALL; setting this
parameter to BASIC turns off generation of PGA performance advice views.

The content of these PGA advice performance views is reset at instance start-up
or when PGA_AGGREGATE_TARGET is altered. NOTE: PGA_AGGREGATE_TARGET can change
automatically over time starting with 11g as part of the Automatic Memory
Management enhancements available in 11g. See note:443746.1 for more
details.

V$PGA_TARGET_ADVICE view predicts how the statistics cache hit percentage and
over allocation count in V$PGASTAT will be impacted if you change the value of
the initialization parameter PGA_AGGREGATE_TARGET.

The following select statement can be used to find this information

SELECT round(PGA_TARGET_FOR_ESTIMATE/1024/1024) target_mb,
ESTD_PGA_CACHE_HIT_PERCENTAGE cache_hit_perc,
ESTD_OVERALLOC_COUNT
FROM v$pga_target_advice;

The output of this query might look like the following:

TARGET_MB CACHE_HIT_PERC ESTD_OVERALLOC_COUNT
———- ————– ——————–
63 23 367
125 24 30
250 30 3
375 39 0
500 58 0
600 59 0
700 59 0
800 60 0
900 60 0
1000 61 0
1500 67 0
2000 76 0
3000 83 0
4000 85 0

From the above results we should set the PGA_AGGREGATE_TARGET parameter to a
value where we avoid any over allocation, so lowest PGA_AGGREGATE_TARGET value
we can set is 375 ( where ESTD_OVERALLOC_COUNT is 0)

After eliminating over-allocations, the goal is to maximize the PGA cache hit
percentage, based on your response-time requirement and memory constraints.

V$PGA_TARGET_ADVICE_HISTOGRAM view predicts how the statistics displayed
by the performance view V$SQL_WORKAREA_HISTOGRAM will be impacted if you
change the value of the initialization parameter PGA_AGGREGATE_TARGET. You can
use the dynamic view V$PGA_TARGET_ADVICE_HISTOGRAM to view detailed
information on the predicted number of optimal, one-pass and multi-pass work
area executions for the set of PGA_AGGREGATE_TARGET values you use for the
prediction.

Common issues
—————

1- When we set the PGA_AGGREGATE_TARGET and WORKAREA_SIZE_POLICY to auto,
the *_area_size parameters are automatically ignored and Oracle will
automatically use the computed values for these parameters.

2- In Oracle 8i and earlier, the PGA memory was static: once a process started
and began allocating memory for its PGA area, it would not release it
back to the OS unless the process exited or died. The OS, under heavy
memory pressure, could however decide to page out unused memory pages belonging to a process
PGA to the swap space.

In Oracle 9i, under automatic PGA memory management, Oracle is able
to deallocate memory from a process PGA that is no longer using it so
another process can use it; it can also adjust the sizes of the different work areas
to accommodate the current workload and the amount of memory that can be used.

3- Using the automatic PGA memory management feature helps limit the resources
used by Oracle processes, and uses memory more efficiently.

4- Using automatic PGA memory management will also help reduce the possibility
of getting ORA-4030 errors unless we hit an OS limit, because work area sizes
will be controlled and adjusted automatically based on the PGA_AGGREGATE_TARGET
parameter first and then the current workload.

5- If column ESTD_OVERALLOCATION_COUNT in the V$PGA_TARGET_ADVICE VIEW is nonzero,
It indicates that PGA_AGGREGATE_TARGET is too small to even meet the minimum
PGA memory needs. If PGA_AGGREGATE_TARGET is set within the over allocation
zone, the memory manager will over-allocate memory and actual PGA memory
consumed will be more than the limit you set. It is therefore meaningless to
set a value of PGA_AGGREGATE_TARGET in that zone.

6- Some customers reported that SQL*Loader in Oracle 9i is slower than SQL*Loader
in Oracle 8i; an example of this is bug: which was closed as not a
bug. Using PGA_AGGREGATE_TARGET alleviated the problem.

7- PGA_AGGREGATE_TARGET is not supported on VMS, for more information please refer
to note: “Oracle9i Release Notes Release 1 (9.0.1) for Alpha OpenVMS”.
ORA-3113 is returned on instance startup when init.ora PGA_AGGREGATE_TARGET is set.

8- Setting PGA_AGGREGATE_TARGET in 9.0.1 on HP-UX 11.0 may panic the OS. for
more information please refer to note: “ALERT HP-UX Patch Levels
Advised” and Bug:2122307.

Known Bugs
———–

– bug:
Details: Attempting to set pga_aggregate_target over 4000Gb should error with
ORA-4032 but no error is signalled.
Fixed-Releases: 9.2.0.1.0

– Bug:2122307 HP System crash when setting PGA_AGGREGATE_TARGET to 10M or more
in Oracle 9.0.1.

This is basically an OS Problem that cause the crash. The system call pattern
automatic PGA management is using causing HP/UX to try to extend fixed region
and leads to memory allocation failures.

To resolve the bug both this patch and PHKL_25188 (or later) must be installed.

As the DBA you need to get confirmation from your operating system administrator that the amount of memory reported as being in use by a process includes or does not include shared memory. If shared memory is included in the value displayed by the operating system utility, then the shared pool size must be deducted from that value to know how much private memory the process is actually using.

See note 174555.1 “UNIX Determining the Size of an Oracle Process”.

If an RDBMS user process is using more private memory than expected, then the DBA has three options:

– Do nothing
– Monitor the RDBMS user session to find out what SQL statements are being performed or were being performed by that RDBMS session. Using the SQL*Trace functionality of the database would normally be done if information cannot be obtained directly from the end user as to what they were doing when the memory usage was higher than expected, or what they are doing right now.
– Kill that RDBMS user session.

PGA_AGGREGATE_TARGET does not set a hard limit on PGA usage. It is only a target value used to dynamically size the process work areas. It also does not constrain other areas of the PGA, which are allowed to grow beyond this target.

There are certain areas of the PGA that cannot be controlled by initialization parameters. These include PL/SQL memory collections such as PL/SQL tables and varrays.

Depending on the code and the amount of data being handled, these areas can grow very large (up to a 20GB internal limit in 10g) and consume large amounts of memory. This growth can be controlled by good programming practices. As an example, use the LIMIT clause with BULK COLLECT, as sketched below.
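
Purely as an illustration (big_table is a hypothetical table name), a minimal PL/SQL sketch of BULK COLLECT with LIMIT might look like this:

DECLARE
  TYPE t_rows IS TABLE OF big_table%ROWTYPE;  -- big_table is hypothetical
  l_rows t_rows;
  CURSOR c IS SELECT * FROM big_table;
BEGIN
  OPEN c;
  LOOP
    -- Fetch at most 500 rows per round trip so the collection never
    -- holds the whole result set in PGA memory at once.
    FETCH c BULK COLLECT INTO l_rows LIMIT 500;
    EXIT WHEN l_rows.COUNT = 0;
    -- process l_rows here ...
  END LOOP;
  CLOSE c;
END;
/

Without the LIMIT clause, a single FETCH would materialize every row of the cursor in the collection, which is exactly the kind of uncontrolled PGA growth described above.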

Additionally, programming mistakes can also lead to excessive memory usage.

You can take steps to control the size of a process. However, from within the database framework you cannot place a hard limit on the size of a process by setting any initialization parameters or database configuration.

You can limit the size of a process from the OS side by setting kernel limits or user shell limits, but when such a limit is hit the process receives an ORA-4030 and the transaction is rolled back.

As noted in bug 7279150, “… this is not a hard limit and that we will exceed it when it is undersized and the workload increases rapidly, such as when they start the workload for their testing or when they spawn a new set of sessions from their application servers.”

During the execution of SQL statements, server processes consume memory for various operations. Part of this memory is assigned to work areas for operations such as sorts and hash joins. Ideally, a work area should be large enough to support the SQL operation being performed. This size is known as the optimal size of a work area. When a work area is smaller than its optimal size, an extra pass is performed over part of the input data. This is known as the one-pass size of the work area. When the size of a work area is far too small compared to the input data size, multiple passes over the input data are needed. This is known as the multi-pass size of the work area. Operations performed in one-pass and multi-pass work area sizes increase response times, sometimes dramatically in the case of the latter.

You can set the size of the various work areas with individual initialization parameters, but then the same amount of memory is allocated to every process. So if your parameters are set high enough to avoid any multi-pass operations, each process is assigned that much memory even for operations whose optimal size is much smaller.

With Oracle9i, you can use the PGA_AGGREGATE_TARGET initialization parameter to assign memory that is shared by the server processes active in the instance and is automatically reallocated to the processes that currently need it. In the initial release of Oracle9i, additional information was added to various dynamic performance tables to help you monitor whether you had set a sufficiently high value for the PGA target. However, these statistics do not provide a lot of detail nor give you any guidance for setting a more appropriate value when it is under- or over-sized.
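
For reference, the contrast between the two approaches can be expressed with the standard parameters; the values below are placeholders, not recommendations:

-- Manual (8i-style) work area sizing: every server process gets the same fixed allowances
alter session set workarea_size_policy = manual;
alter session set sort_area_size = 1048576;   -- 1 MB per sort work area
alter session set hash_area_size = 2097152;   -- 2 MB per hash work area

-- Automatic PGA memory management (9i and later)
alter system set pga_aggregate_target = 500M;
alter system set workarea_size_policy = auto;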

Additional statistics are available in Oracle9i Database Release 2 to help you monitor and tune the PGA_AGGREGATE_TARGET parameter. Some of these statistics are provided through new views and others through modified views. The rest of this section gives more details about managing your PGA memory with these views.

The new statistics that help you monitor the performance of the PGA memory component for a particular value of PGA_AGGREGATE_TARGET are based on two concepts: work area groups and a computed PGA cache hit percentage value.

By grouping work areas based on their optimal memory requirement, statistics can be collected for each group based on the number of optimal, one-pass, and multi-pass executions for each of these work area groups. With this finer granularity of work area statistics than previously available, you can more accurately predict how much memory is required across the instance to maximize the number of optimal executions.

The PGA cache hit percentage summarizes statistics on work area executions into a single measure of PGA performance for a given value of PGA_AGGREGATE_TARGET. It is derived from the number of work areas that run in optimal mode, the number of passes for the non-optimal work areas, and the sizes of the work areas.

The new view, v$sql_workarea_histogram, enables you to study the nature of the work area workload over a chosen time period. The work areas are split into 33 groups based on their optimal memory requirements. Each group is defined by the lower bound on its optimal memory requirement, displayed in the low_optimal_size column, and its upper bound, displayed in the high_optimal_size column. For each group, the view accumulates the number of executions using optimal, one-pass, and multi-pass memory since instance startup. These are displayed in the optimal_executions, onepass_executions, and multipasses_executions columns respectively. The total_executions column contains the sum of the other three execution counts.

To determine if you have set a good value for the PGA target size, query the v$sql_workarea_histogram view using a statement such as the following, which converts the low_optimal_size and high_optimal_size column values to kilobytes:

SQL> SELECT low_optimal_size/1024 AS low_kb,
2 (high_optimal_size+1)/1024 AS high_kb,
3 ROUND(100*optimal_executions/total_executions) AS optimal,
4 ROUND(100*onepass_executions/total_executions) AS onepass,
5 ROUND(100*multipasses_executions/total_executions) AS multipass
6 FROM v$sql_workarea_histogram
7 WHERE total_executions != 0
8 ORDER by low_kb;

Of course, as with any monitoring effort using dynamic views, you should issue the query at the beginning and at the end of a typical work period and use the differences between the two results to determine what activity occurred during that period.

Although it would be ideal for all work areas to execute in the optimal size, this goal is typically achieved only by over-allocating memory to the PGA. If you graph the results of your query as a histogram, you can quickly identify where in the graph the work area groups begin to use one-pass, or even multi-pass, sizes. In the example shown, the former occurs at a work area size of about 200KB and the latter at about 100MB. Based on your knowledge of the type of work being done and the current level of performance, you can decide whether this is acceptable or whether the PGA_AGGREGATE_TARGET parameter value should be changed.

If a work area runs in one-pass or multi-pass mode, extra bytes are processed because one or more extra passes over the input data are performed. A new statistic, the PGA cache hit percentage, condenses the relative performance of the PGA memory component into a single global metric. It is the percentage formed from the ratio of the number of bytes that would need to be processed if all work areas ran in optimal mode to the total bytes actually processed. The higher the PGA cache hit percentage, the better the performance of PGA memory and hence of the system.

The PGA cache hit percentage statistic is stored in the v$pgastat view and can be retrieved from the row whose name column contains the value cache hit percentage. A sample query is shown below.
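
A minimal form of that query (v$pgastat exposes name, value, and unit columns) is:

SQL> SELECT name, value
2 FROM v$pgastat
3 WHERE name = 'cache hit percentage';

Values close to 100 indicate that nearly all work areas ran in optimal mode.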

The v$sql_workarea and v$sql_workarea_active views have been modified in Oracle9i Database Release 2 to reflect information on the temporary segment an operator (work area) uses.
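
For example, the per-work-area statistics, including the new temporary segment columns, can be listed with something like:

SQL> SELECT operation_type, policy, last_memory_used,
2 last_tempseg_size, max_tempseg_size
3 FROM v$sql_workarea
4 ORDER BY max_tempseg_size DESC;

Rows with non-null temporary segment sizes identify the operators that spilled to disk.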

The v$sort_usage view is renamed to v$tempseg_usage to reflect that information about all temporary segments, not only sort segments, is recorded in this view. In the current release, the old view name is being maintained for backward compatibility.
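
A quick look at the current temporary segment consumers through the renamed view might be (the old name v$sort_usage would return the same rows):

SQL> SELECT username, tablespace, segtype, blocks
2 FROM v$tempseg_usage
3 ORDER BY blocks DESC;

The segtype column shows whether a segment backs a SORT, HASH, DATA, INDEX, or LOB operation, which is exactly the distinction the old view name obscured.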

If you monitor PGA space consumption with the various views provided for that purpose (v$sql_workarea_histogram, v$pgastat, v$sql_workarea, and v$sql_workarea_active), you may decide that you need to reset your PGA_AGGREGATE_TARGET initialization parameter value to make better use of your available memory. In some cases, you may want to reduce the memory allocated; in other cases, you may realize that you should increase the value. To help you determine by what factor you should change the parameter value, you can use two new views, introduced in Oracle9i Database Release 2, that provide PGA sizing advice. These views, v$pga_target_advice and v$pga_target_advice_histogram, accumulate advice statistics to help you tune the PGA_AGGREGATE_TARGET value. The views are only populated if PGA_AGGREGATE_TARGET is set to a nonzero value that enables automatic management of the PGA, and the initialization parameter STATISTICS_LEVEL must be set to TYPICAL or ALL. Further, the view contents are refreshed when you shut down the instance or when you change the value of PGA_AGGREGATE_TARGET dynamically.

Rows in these two views correspond to setting the PGA_AGGREGATE_TARGET value to 0.125, 0.25, 0.5, 0.75, 1, 1.2, 1.4, 1.6, 1.8, 2, 3, 4, 6, and 8 times its current value. However, if these values are either less than 10MB or more than 256GB, they will not be included in the output.

To begin monitoring and tuning the PGA target size, issue a query against the v$pga_target_advice view similar to the following:

SQL> SELECT
2 ROUND(pga_target_for_estimate/1024/1024) AS target_mb,
3 estd_pga_cache_hit_percentage AS cache_hit_percent,
4 estd_overalloc_count
5 FROM v$pga_target_advice
6 ORDER BY target_mb;

For output that is easier to read and use, this query converts the value of the pga_target_for_estimate column from bytes to megabytes. As mentioned earlier, you should issue this query at the beginning and at the end of a typical work period and use the differences between the two results to obtain advice related to the work performed during that period.

The output from the query on v$pga_target_advice contains up to 14 rows of estimated statistics, one for each multiple of the current PGA_AGGREGATE_TARGET value.

Assuming that your query produced the output shown above with the PGA_AGGREGATE_TARGET value set to 500MB, follow these steps to evaluate the results:

Step 1: Identify the first row with a value of zero in the estd_overalloc_count column. The rows above this one are for PGA_AGGREGATE_TARGET values (shown in the target_mb column) that are too small for the minimum PGA memory needs. In this case, this is the fifth row, which contains data for the current PGA_AGGREGATE_TARGET value, 500MB. Had the target_mb value in the first row with a zero estd_overalloc_count been larger than the current setting, you would need to increase the PGA_AGGREGATE_TARGET parameter to at least that size.

Step 2: Examine the rows with PGA values larger than the minimum required to find the first pair of adjacent rows with values in the cache_hit_percent column that are within a few percentage points of each other. These rows indicate where, were you to graph the values, you would see an inflection point (sometimes referred to as a “knee”) in the curve. The optimal value for the PGA_AGGREGATE_TARGET parameter is at this inflection point, represented by the target_mb column value in the first of these two rows. Based on the above results, you should set the PGA_AGGREGATE_TARGET to 3000MB if you have sufficient memory. If you have even more memory available, you should assign it to some other use, such as one of the SGA components, rather than increasing the PGA target size.

reference http://www.oracledatabase12g.com/archives/pga-and-temporary-tablespace-in-data-warehouse.html#comment-19948

data_object_id

October 24, 2011 Internals, oracle 1 comment

xxx_objects.data_object_id differs from object_id after

* truncate table (or alter table truncate partition), unless table/partition is empty
* alter table move (or alter table move partition)
* alter index rebuild (or alter index rebuild partition)
* alter table exchange partition

Alter table move, even without a TABLESPACE clause (or specifying the same tablespace), physically
moves the table, as can be seen from dba_extents.file_id and block_id. But truncate table does not move the
table. Think of data_object_id as an ID for the segment. If xxx_objects.data_object_id is null, it
must be an object not associated with a physical segment, such as a view, a sequence, a domain index,
or a partitioned table/index whose segments are in the individual partitions.

Alter table move lob also changes lob segment’s data_object_id.
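
A quick way to watch the two IDs diverge (T1 here is a hypothetical, non-empty table):

SQL> SELECT object_id, data_object_id
2 FROM dba_objects
3 WHERE owner = USER AND object_name = 'T1';

SQL> TRUNCATE TABLE t1;

SQL> SELECT object_id, data_object_id
2 FROM dba_objects
3 WHERE owner = USER AND object_name = 'T1';

After the truncate, object_id is unchanged while data_object_id has moved to a new value.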

* How much does data_object_id increment?

Except in case of partition-table exchange, data_object_id is brought up to

select max(data_object_id)+1 from dba_objects

If you just create a new table, it will be assigned an object_id and data_object_id of

select greatest(max(object_id), max(data_object_id)) from dba_objects

If the table has one index, truncate will increment data_object_id by 2 because its index takes the
number one below it. If there’s no index, truncate should increment data_object_id by 1 only.

Exchange of a partition with a table swaps their data_object_id’s. So don’t assume data_object_id’s
always go up.

If the table or its partition is already empty, truncating it does not increment data_object_id.
(Rebuilding an empty index or its partition still increments data_object_id, and in case of
online index rebuild, data_object_id could increment by 3 because of “transient” objects created
and dropped during the rebuild.)

* When to use which ID?

You use data_object_id in dbms_rowid, v$bh and x$bh, but object_id in most other cases, such as
v$locked_object. Note that v$segstat, v$segment_statistics and v$logmnr_contents have both.

reference:http://yong321.freeshell.org/oranotes/DataObjectId.txt

datafile Checkpoint cnt

October 24, 2011 Internals, oracle 2 comments

Dumping the datafile headers shows that the Checkpoint cnt differs from file to file.

The main reasons are roughly as follows:

1. The tablespaces were created at different times. As an example, we create a new tablespace liu with datafile liu01.dbf:
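
A statement along these lines would create it; the 1M size is inferred from the creation size of 128 blocks x 8192 bytes shown in the dump below:

create tablespace liu datafile '/u01/liu01.dbf' size 1m;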

DATA FILE #10:
(name #14) /u01/liu01.dbf
creation size=128 block size=8192 status=0xe head=14 tail=14 dup=1
tablespace 8, index=8 krfil=10 prev_file=0
unrecoverable scn: 0x0000.00000000 01/01/1988 00:00:00
Checkpoint cnt:2 scn: 0x0000.005339bb 10/24/2011 10:19:42
Stop scn: 0xffff.ffffffff 10/24/2011 10:19:42
Creation Checkpointed at scn: 0x0000.005339ba 10/24/2011 10:19:42
thread:1 rba:(0x37.c3a3.10)

DATA FILE #1:
(name #7) /u01/app/oracle/oradata/css/system01.dbf
creation size=0 block size=8192 status=0xe head=7 tail=7 dup=1
tablespace 0, index=1 krfil=1 prev_file=0
unrecoverable scn: 0x0000.00000000 01/01/1988 00:00:00
Checkpoint cnt:131 scn: 0x0000.0052d2cc 10/22/2011 09:58:13
Stop scn: 0xffff.ffffffff 10/21/2011 17:10:54

2. A checkpoint occurs while the tablespace is in hot backup mode or offline. We create another tablespace liu01 with datafile liu101.dbf:

DATA FILE #11:
(name #15) /u01/liu101.dbf
creation size=128 block size=8192 status=0xe head=15 tail=15 dup=1
tablespace 9, index=9 krfil=11 prev_file=0
unrecoverable scn: 0x0000.00000000 01/01/1988 00:00:00
Checkpoint cnt:2 scn: 0x0000.00533ffe 10/24/2011 10:25:22

alter tablespace liu begin backup;

DATA FILE #10:
(name #14) /u01/liu01.dbf
creation size=128 block size=8192 status=0xe head=14 tail=14 dup=1
tablespace 8, index=8 krfil=10 prev_file=0
unrecoverable scn: 0x0000.00000000 01/01/1988 00:00:00
Checkpoint cnt:8 scn: 0x0000.00534033 10/24/2011 10:26:56

DATA FILE #11:
(name #15) /u01/liu101.dbf
creation size=128 block size=8192 status=0xe head=15 tail=15 dup=1
tablespace 9, index=9 krfil=11 prev_file=0
unrecoverable scn: 0x0000.00000000 01/01/1988 00:00:00
Checkpoint cnt:7 scn: 0x0000.0053426f 10/24/2011 10:40:00

alter tablespace liu end backup;

DATA FILE #10:
(name #14) /u01/liu01.dbf
creation size=128 block size=8192 status=0xe head=14 tail=14 dup=1
tablespace 8, index=8 krfil=10 prev_file=0
unrecoverable scn: 0x0000.00000000 01/01/1988 00:00:00
Checkpoint cnt:10 scn: 0x0000.005342b8 10/24/2011 10:43:15

DATA FILE #11:
(name #15) /u01/liu101.dbf
creation size=128 block size=8192 status=0xe head=15 tail=15 dup=1
tablespace 9, index=9 krfil=11 prev_file=0
unrecoverable scn: 0x0000.00000000 01/01/1988 00:00:00
Checkpoint cnt:8 scn: 0x0000.005342b8 10/24/2011 10:43:15

alter tablespace liu offline;

DATA FILE #10:
(name #14) /u01/liu01.dbf
creation size=128 block size=8192 status=0x80 head=14 tail=14 dup=1
tablespace 8, index=8 krfil=10 prev_file=0
unrecoverable scn: 0x0000.00000000 01/01/1988 00:00:00
Checkpoint cnt:11 scn: 0x0000.005342e7 10/24/2011 10:45:15

DATA FILE #11:
(name #15) /u01/liu101.dbf
creation size=128 block size=8192 status=0xe head=15 tail=15 dup=1
tablespace 9, index=9 krfil=11 prev_file=0
unrecoverable scn: 0x0000.00000000 01/01/1988 00:00:00
Checkpoint cnt:9 scn: 0x0000.005342fd 10/24/2011 10:45:23

The checkpoint cnt can be inspected by setting an event to dump the datafile headers:

alter session set events = 'immediate trace name file_hdrs level 10';