Oracle9i Tuning Memory
Netg - tuning strategies
1. Tuning architecture and design
2. Tune application - SQL code.
3. Tune memory structures - SGA and
user process memory.
4. Tune I/O - distribute data files, segment types, and tablespaces; tune the interaction between data files and memory reads and writes. The interaction between files can be tuned by separating the most frequently accessed data into different tablespaces, separating tables from their related indexes, separating rollback segments from tables, and separating data dictionary objects from all other objects.
5. Reduce contention for resources in the database. Tuning is needed to minimize the time that processes have to wait until a resource is available. The resources include blocks, the Shared Pool, locks, and latches.
6. Tune the OS to satisfy Oracle's demands.
Tools for
memory tuning:
STATSPACK –
a user PERFSTAT is automatically created by the installation script and granted
read-only privileges on the V$ views. STATSPACK gives four different views of SQL
activity between the beginning and ending snapshots. The sections are SQL ordered
by gets, reads, executions, and parse calls.
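For example, assuming the default STATSPACK installation, a report interval can be captured with a pair of snapshots (the workload of interest runs between the two calls):
SQL> CONNECT perfstat/perfstat
SQL> EXECUTE statspack.snap;
-- run the workload here, then take the second snapshot
SQL> EXECUTE statspack.snap;
SQL> @?/rdbms/admin/spreport.sql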
EXPLAIN
PLAN – evaluates the performance of SQL and PL/SQL. You can use the utlxplan.sql
script to create PLAN_TABLE. The useful columns are OPERATION, OPTIONS and OBJECT_NAME.
An example:
SQL>
@D:\oracle\ora90\rdbms\admin\utlxplan.sql
SQL>
EXPLAIN PLAN FOR
UPDATE
hr.employees
SET SALARY
= salary * 1.10
WHERE
department_id = 30;
Explained
SQL>
@D:\oracle\ora90\rdbms\admin\utlxpls
Plan table
The plan
table can be queried directly or you can run either utlxpls.sql or utlxplp.sql
depending on whether Parallel Query was used.
SQL TRACE
and TKPROF (formatting of SQL TRACE files)
A particular sequence of steps is necessary to diagnose SQL statement performance
with SQL TRACE and TKPROF. The first step is to ensure the appropriate init
parameters; these can be set at the instance level, and some can also be set at
the session level. Second, SQL TRACE must be invoked at either the instance or
the session (better for performance reasons) level. Third, run the application or
SQL code you want to diagnose.
SQL>
ALTER SESSION
SET
SQL_TRACE = TRUE
Session
altered
SQL>
UPDATE hr.employees
SET SALARY
= salary * 1.10
WHERE
department_id = 30;
55 rows
updated
SQL>
ALTER SESSION
SET
SQL_TRACE = FALSE
Session
altered
Fourth, turn SQL TRACE off. Fifth, use
TKPROF to format the trace file generated during the trace session.
Finally, use the TKPROF results to diagnose the performance of the SQL statement.
Two parameters in the init.ora file control the size and the destination of the
output file from SQL TRACE. MAX_DUMP_FILE_SIZE is measured in bytes if K or M is
specified; otherwise the number represents OS blocks. The default MAX_DUMP_FILE_SIZE
is 10,000 OS blocks. USER_DUMP_DEST specifies the trace file destination.
To get timing information, TIMED_STATISTICS must be set to TRUE.
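As a sketch, both parameters can be set at the session level before tracing (the values shown are only illustrative):
SQL> ALTER SESSION SET timed_statistics = TRUE;
SQL> ALTER SESSION SET max_dump_file_size = 10000;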
There are 3
methods for enabling SQL TRACE at the session level:
SQL>
ALTER SESSION SET SQL_TRACE = TRUE;
SQL>
EXECUTE DBMS_SESSION.SET_SQL_TRACE (TRUE);
SQL>
EXECUTE DBMS_SYSTEM.SET_SQL_TRACE_IN_SESSION (8, 12, TRUE);
Use the
SYS=NO option of TKPROF to suppress recursive (SYS) SQL statements.
TKPROF
When you
specify the EXPLAIN parameter, TKPROF logs on to the database with the username
and password provided. It then works out the access path for each SQL statement
traced and includes that in the output.
C:\TKPROF
g:\oradata\orcl3\admin\udump\ora00044.trc EXPLAIN=hr/hr TABLE=plan_tab1
INSERT=STOREA.SQL SYS=NO SORT=(EXECCPU,FCHCPU)
TKPROF logs
on to the database and uses the information available at the time that TKPROF
is run, not the time when statistics were produced. TKPROF also reports Library
Cache misses. This indicates the number of times the statement was not found in
the Library Cache. You can type TKPROF at the MS-DOS prompt to get help on its use.
TKPROF collects several statistics. COUNT contains the number of times the
statement was parsed or executed and the number of fetch calls issued for the
statement. CPU contains the processing time for each phase, in seconds; if the
statement was found in the Shared Pool, the parse value is zero because it needed
no processing time. ELAPSED contains the elapsed time, in seconds. DISK
contains the physical data blocks read from the database files. QUERY contains
the logical buffers retrieved for consistent read. CURRENT contains the
logical buffers retrieved in current mode, usually for DML statements. ROWS
contains the rows processed by the outer statement. For SELECT statements this
is shown for the FETCH phase; for DML statements it is shown for the execute
stage. The sum of QUERY and CURRENT is the total number of logical buffers
accessed.
AUTOTRACE
AUTOTRACE
generates a report on the execution path used by the SQL Optimizer and the
statement execution statistics to tune the performance of DML statements. The
advantage is that AUTOTRACE does not require you to use TKPROF to format the
file. AUTOTRACE parses and executes the SQL, while EXPLAIN PLAN only parses the
SQL. Using AUTOTRACE begins with creating PLAN_TABLE by running
utlxplan.sql. Second, create the PLUSTRACE role by executing the plustrce.sql
script. This grants SELECT privileges on the V$ views to the role and grants the
role to the DBA role. Grant the PLUSTRACE role to users who do not have the DBA
role. Finally, use AUTOTRACE. The ON option produces the result set, the
explain plan, and the statistics. The TRACEONLY option displays the explain
plan and the statistics, but not the results. The ON EXPLAIN option displays
the result set and the explain plan without the statistics. The ON
STATISTICS option displays the result set and the statistics without the explain plan.
SQL> SET
AUTOTRACE TRACEONLY
SQL> SET
AUTOTRACE ON EXPLAIN
SQL> SET
AUTOTRACE ON STATISTICS
Oracle SQL
Analyze – shows explain plans based on different Optimizer modes.
Statistics – Collection Methods
The
DBMS_STATS package contains several procedures that allow an index, table,
schema or database to be analyzed. This package also allows you to gather most
of the statistics with a degree of parallelism. DBMS_STATS gathers statistics
only for cost-based optimization. The GATHER_DATABASE_STATS procedure collects
statistics for all objects in the database. The GATHER_INDEX_STATS collects
index statistics. The GATHER_SCHEMA_STATS collects stats for all objects in
schema and GATHER_SYSTEM_STATS collects CPU and I/O statistics for the system.
GATHER_TABLE_STATS collects table, column, and index statistics.
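An illustrative call (the schema name and parameter values are only examples):
SQL> EXECUTE DBMS_STATS.GATHER_SCHEMA_STATS ('HR', estimate_percent => 10, degree => 4, cascade => TRUE);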
You can
collect statistics on an object with the ANALYZE command. Although the
cost-based Optimizer is not sensitive to minor changes in volume or
selectivity, you may want to collect new statistics periodically on frequently
modified tables to ensure the optimizer uses recent, accurate information.
SQL>
ANALYZE TABLE hr.employees COMPUTE STATISTICS FOR COLUMNS salary SIZE 10;
Table
analyzed
Using
ANALYZE command to collect new statistics overwrites any existing statistics in
the data dictionary and flushes any related execution plans from the Shared
Pool.
You can
estimate statistics with the ESTIMATE option. You can delete statistics as well.
SQL>
ANALYZE TABLE hr.employees DELETE STATISTICS;
SQL>
ANALYZE TABLE hr.employees COMPUTE STATISTICS FOR ALL COLUMNS;
SQL>
ANALYZE TABLE hr.employees COMPUTE STATISTICS FOR ALL INDEXED COLUMNS;
SQL>
ANALYZE TABLE hr.employees COMPUTE STATISTICS FOR ALL INDEXES;
The SIZE
clause specifies the maximum number of histogram buckets (default 75, maximum 254).
Statistics
– Table, Index and Columns
Each table
maintains a high watermark in the segment header block. The high watermark
indicates the last block that was ever used for the table. When Oracle
performs a full table scan it reads all the blocks up to the high watermark. The
DBMS_SPACE.UNUSED_SPACE procedure can be used to find the high watermark and the
number of blocks above the HWM.
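A minimal sketch of calling DBMS_SPACE.UNUSED_SPACE for a table (the owner and table name are only examples; the OUT variables receive the totals and the unused space above the HWM):
SQL> SET SERVEROUTPUT ON
SQL> DECLARE
       tot_blocks    NUMBER; tot_bytes    NUMBER;
       unused_blocks NUMBER; unused_bytes NUMBER;
       lu_file_id NUMBER; lu_block_id NUMBER; last_block NUMBER;
     BEGIN
       DBMS_SPACE.UNUSED_SPACE ('HR', 'EMPLOYEES', 'TABLE',
         tot_blocks, tot_bytes, unused_blocks, unused_bytes,
         lu_file_id, lu_block_id, last_block);
       DBMS_OUTPUT.PUT_LINE ('Blocks below HWM: ' ||
         TO_CHAR (tot_blocks - unused_blocks));
     END;
     /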
An index
clustering factor is an important index statistic for the cost-based Optimizer
in estimating index scan costs. It is an indication of the number of logical
data block visits needed to retrieve all table rows by means of the index. If the
index entries follow the table order, the value approaches the number of data
blocks; each block is visited only once. On the other hand, if the index entries
randomly point at different data blocks, the clustering factor can approach the
number of rows. The USER_TAB_COL_STATISTICS view offers a relevant subset of
the columns displayed by USER_TAB_COLUMNS. You can also use
DBA_TAB_COL_STATISTICS; however, USER_TAB_COL_STATISTICS does not contain
the OWNER column. The NUM_BUCKETS column shows that regular column statistics
are treated as a histogram with one bucket. When using regular column
statistics, a minimum value and a maximum value are stored for each column. The
cost-based Optimizer uses these values to calculate predicate selectivity,
assuming an even distribution of the data between the two extreme values.
However, if your data is skewed, this assumption may lead to suboptimal plans.
You can use histograms to store more detailed information about the data
distribution within a column by partitioning the column values into a number of
buckets. This means additional data dictionary requirements. Buckets are
height-balanced, meaning that each bucket contains approximately the same number
of values.
SQL> ANALYZE
TABLE job_history COMPUTE STATISTICS FOR COLUMNS employee_id;
You
generate histograms by using the DBMS_STATS package. You can generate histograms for columns of a table or
partition.
SQL>
EXECUTE DBMS_STATS.GATHER_TABLE_STATS ('HR', 'EMPLOYEES', method_opt => 'FOR
COLUMNS SIZE 10');
SQL>
SELECT endpoint_number, endpoint_value
FROM
user_histograms
WHERE
table_name = 'EMPLOYEES' AND column_name = 'SALARY';
In this
example a 10-bucket histogram is created on the salary column. SIZE
specifies the maximum number of buckets for the histogram. If the number of
frequently occurring distinct values in a column is relatively small, then set
the number of buckets to be greater than that number.
Statistics
– Copying
To copy
statistics, first use DBMS_STATS.CREATE_STAT_TABLE to create a user-defined
statistics table. Second, use DBMS_STATS.EXPORT_SCHEMA_STATS in the production
database to copy statistics from the data dictionary to the user-defined
statistics table from step 1. Third, use the Export/Import utilities to transfer
the statistics to a corresponding user-defined statistics table in the test
database. Finally, use DBMS_STATS.IMPORT_SCHEMA_STATS to import the statistics
into the data dictionary in the test database. The DBMS_STATS package can be
used to back up statistics prior to analyzing objects. The backup can be used
to restore old statistics and to study changes in data characteristics over time.
SQL>
EXECUTE DBMS_STATS.IMPORT_SCHEMA_STATS ('HR', 'STAT_TAB');
To copy
statistics from the data dictionary to a user-defined statistics table, use
EXPORT_COLUMN_STATS, EXPORT_DATABASE_STATS, EXPORT_INDEX_STATS, and so on; the
corresponding IMPORT_ procedures copy statistics in the other direction.
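On the production side, the first two steps look like this (STAT_TAB is an example table name, matching the import example above):
SQL> EXECUTE DBMS_STATS.CREATE_STAT_TABLE ('HR', 'STAT_TAB');
SQL> EXECUTE DBMS_STATS.EXPORT_SCHEMA_STATS ('HR', 'STAT_TAB');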
Tuning the Shared Pool and Buffer
Cache
Shared Pool
– Contents
The Shared
Pool is a portion of the SGA that contains shared memory constructs such as
shared SQL areas. The Library Cache stores SQL and PL/SQL code; these are the
fully parsed or compiled representations of PL/SQL blocks and SQL statements.
The Data Dictionary Cache keeps information about dictionary objects. The UGA
keeps information about shared server connections; it is located in the Shared
Pool when using shared server and the Large Pool is not configured. You set the
size of the Shared Pool with the SHARED_POOL_SIZE parameter. It defaults to 8MB.
Shared Pool
– Tuning Parameters
A cache
miss on the Data Dictionary Cache or Library Cache is more expensive than a
miss on the Database Buffer Cache, so tuning the Shared Pool is a priority. When
you tune the Shared Pool, concentrate on the Library Cache, because the
algorithm tends to hold data dictionary data in memory longer than Library Cache
data. Therefore, tuning the Library Cache to an acceptable cache hit ratio
ensures that the Data Dictionary Cache hit ratio is also acceptable. If the
Shared Pool is too small, the server must dedicate resources to managing the
limited space available; this consumes CPU and causes contention. Statistics on
the reserved pool from the V$SHARED_POOL_RESERVED view can help you tune the
parameters. On systems with ample free memory to increase the SGA, the goal is
to have REQUEST_MISSES = 0, not to have any request failures, or at least to
prevent this value from increasing.
You use the
ABORTED_REQUEST_THRESHOLD procedure in the DBMS_SHARED_POOL package to limit
the amount of the shared pool to flush prior to reporting an ORA-4031 error and
the extent of a flush that could occur due to a large object. The reserved pool
is too small when the value for REQUEST_FAILURES is more than zero and
increasing. To resolve this, increase the value of SHARED_POOL_RESERVED_SIZE
and SHARED_POOL_SIZE. The settings you select for these depend on the SGA size
constraint of your system. Increasing the value of the
SHARED_POOL_RESERVED_SIZE parameter increases the amount of memory available on
the reserved list without having an effect on users who do not allocate memory
from the reserved list. The reserved list is used to maintain the large
contiguous chunks of memory in the reserved pool.
Too much
memory may have been allocated to the reserved list if REQUEST_MISSES = 0
and not increasing, or if free memory is greater than or equal to 50% of the
SHARED_POOL_RESERVED_SIZE minimum. If either of these is true, decrease the
value of SHARED_POOL_RESERVED_SIZE.
SQL>
SELECT request_misses FROM V$SHARED_POOL_RESERVED;
SQL>
SELECT * FROM v$sgastat
WHERE pool
= 'shared pool' AND name = 'free memory';
SQL>
SHOW PARAMETER shared_pool_reserved_size
The
V$SHARED_POOL_RESERVED fixed table can also indicate when the value for
SHARED_POOL_SIZE is too small. This may be the case if
REQUEST_FAILURES is greater than zero and increasing. If you have enabled the
reserved list, decrease the value of SHARED_POOL_RESERVED_SIZE. If you have
not enabled the reserved list, increase the value of the
SHARED_POOL_SIZE parameter.
Library
Cache – Tuning Goals
Maintaining
information about frequently executed SQL and PL/SQL statements in the Library
Cache improves the performance of the Oracle server. The goals of tuning
are reducing parsing and avoiding fragmentation. The Oracle server uses the
Library Cache to store SQL and PL/SQL, and an LRU algorithm is used to manage
the cache. To find whether a statement is already cached, the Oracle server
reduces the statement text to a numeric value and uses a hash function on this
number. The first goal for tuning the Library Cache is to keep parsing to a
minimum: ensure that statements that miss the criteria for matching existing SQL
statements are kept to a minimum. If an application makes a parse call for a
SQL statement and the parsed representation does not already exist in a
shared SQL area of the Library Cache, Oracle parses the statement and allocates
a shared SQL area. Ensure that SQL statements can share a shared SQL area
whenever possible, by using as much generic code as possible and bind variables
rather than constants. However, bind variables should only be used with OLTP,
not DSS. If, by the time of an execute call, the shared SQL area containing the
parsed representation of the statement has been removed from the Library Cache
to make room for another statement, the Oracle server implicitly reparses the
statement, allocates a new shared SQL area for it, and executes it. To increase
memory for the Shared Pool, adjust the SHARED_POOL_SIZE parameter. If a schema
object is referenced in a SQL statement and that object is modified in any way,
the SQL statement in memory is rendered invalid. The second goal for tuning the
Library Cache is to avoid fragmentation. You avoid fragmentation by ensuring the
availability of contiguous space for large memory requirements through the
allocation of reserved space in the Shared Pool. You can also avoid
fragmentation by pinning frequently required large objects such as SQL or
PL/SQL in memory, instead of letting them age out via LRU. Another way to avoid
fragmentation is to use small PL/SQL packaged functions instead of large
anonymous blocks. Memory fragmentation can also be avoided by configuring the
Large Pool for Oracle Shared Server connections, and by measuring the session
memory used by shared server processes.
Library
Cache – Diagnostic Views
V$LIBRARYCACHE
contains statistics on library cache management.
SQL>
SELECT gets, pins, reloads
FROM
V$LIBRARYCACHE
WHERE
namespace = 'SQL AREA';
V$SGASTAT
displays the sizes of all SGA structures. The contents of the Shared Pool are
not aged out as long as free memory is available in the Shared Pool. V$SQLAREA
contains full statistics about all shared cursors and the first 1,000 characters
of each SQL statement. V$SQLTEXT contains the full SQL text. V$DB_OBJECT_CACHE
contains the database objects cached, including packages, tables, and synonyms
referenced in SQL statements. The V$SQL_PLAN view contains columns derived from
the PLAN_TABLE.
Shared
Cursors – Parameters
The V$LIBRARYCACHE
view contains GETHITRATIO, which determines the percentage of parse calls that
find a cursor to share. This ratio should be in the high 90s in an OLTP
environment; if not, the code must be improved. A cursor is a handle or pointer
to a specific private SQL area. A cursor that is shared by multiple occurrences
of the same SQL statement is called a shared cursor. Oracle uses work areas to
execute SQL statements and store processing information. A cursor lets you name
a work area and access its stored information. To make efficient use of the
space allocated to the SGA, close cursors when you finish using them. The
OPEN_CURSORS parameter defines the number of cursors referencing private SQL
areas allocated to a user process. A private SQL area continues to exist
until the cursor is closed, and therefore still exists after the completion of
the statement. To take advantage of additional memory available for shared SQL
areas, you may need to increase the number of cursors permitted for a session.
Application developers must close unneeded open cursors to conserve system
memory. The default is 50.
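The current setting can be checked, and raised if sessions hit the limit (the value 300 is only an example; depending on release the change may need SCOPE=SPFILE and a restart):
SQL> SHOW PARAMETER open_cursors
SQL> ALTER SYSTEM SET open_cursors = 300;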
The
CURSOR_SPACE_FOR_TIME Boolean parameter defaults to FALSE. If you set it to
TRUE, you choose to use more space for cursors in order to save time: shared SQL
areas are not aged out until the cursors referencing them are closed. Therefore,
make sure that free memory is available and that there are no cache misses. Do
not set CURSOR_SPACE_FOR_TIME to TRUE unless the value of RELOADS in
V$LIBRARYCACHE is consistently 0. If your application uses Forms or any
dynamic SQL, leave the setting at FALSE.
The
SESSION_CACHED_CURSORS parameter helps in situations in which a user repeatedly
parses the same statements. This occurs in Forms applications when users switch
between forms, because all the SQL statements opened for a form are closed when
you switch to another one. The SESSION_CACHED_CURSORS parameter causes closed
cursors to be cached within the session, so any subsequent call to parse the
statement bypasses the parse phase. This is similar to HOLD_CURSORS in the
precompilers. To check whether your setting is effective, compare the 'session
cursor cache hits' and 'parse count (total)' session statistics in V$SESSTAT for
a typical user session. Remember that this increases overall demands on memory.
The default is 0, meaning no caching.
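For example, for a hypothetical session with SID 9, the two statistics can be compared like this:
SQL> SELECT n.name, s.value
     FROM v$sesstat s, v$statname n
     WHERE s.statistic# = n.statistic#
     AND s.sid = 9
     AND n.name IN ('session cursor cache hits', 'parse count (total)');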
Library
Cache – Reloads-to-Pins Ratio
The reloads-to-pins
ratio can be computed using two methods. The first method is to use
the V$LIBRARYCACHE view. V$LIBRARYCACHE shows whether statements that have
already been parsed have been aged out of the cache. The number of reloads must
not be more than 1% of the number of pins. Each row in V$LIBRARYCACHE
contains statistics for one type of item in the Library Cache. The item
described by each row is identified by the value of the NAMESPACE column. Rows
with NAMESPACE values SQL AREA, TABLE/PROCEDURE, BODY and TRIGGER
reflect Library Cache activity for SQL statements and PL/SQL blocks. Rows with
other values reflect Library Cache activity for object definitions that the
server uses for dependency maintenance. Three columns related to the namespace
are GETS, PINS, and RELOADS. GETS shows the total number of requests for
information on the corresponding item. PINS shows the number of executions of
SQL statements or procedures for each of these areas. RELOADS shows the number
of reloads counted for each of these namespaces. You can also use STATSPACK to
get the reloads-to-pins ratio; the Library Cache activity section contains the
summary for the instance.
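The ratio itself can be computed directly from the view, for example:
SQL> SELECT SUM(reloads) / SUM(pins) * 100 "Reloads-to-Pins %"
     FROM v$librarycache;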
Library
Cache – Sizing
There are two
possible reasons for the reloads-to-pins ratio being greater than 0. First,
although required by successive re-executions, shared parsed areas have been
aged out due to lack of space. Second, shared parsed areas have been invalidated.
To avoid frequent reloads, increase the SHARED_POOL_SIZE parameter. If a
schema object is referenced in a SQL statement and that object is later
modified, the shared SQL area becomes invalid. All dependent shared SQL areas
are invalidated when a table, sequence, synonym or view is re-created, altered,
or dropped, or a procedure or package specification is recompiled.
There are a
few steps for sizing the Library Cache. First, determine the global space
requirement for stored objects such as packages and views. Second, determine the
amount of memory used by SQL statements. Third, reserve space for large memory
requirements to avoid misses and fragmentation. Fourth, keep frequently used
objects. Finally, convert large anonymous PL/SQL blocks into small anonymous
blocks calling packaged functions. For an existing application you can set up a
test environment and use the dynamic views to find out how much memory is used.
Begin by setting the SHARED_POOL_SIZE parameter to a very large value, then run
the application. To compute the sharable memory used for stored objects such as
packages and views, and the sharable memory used for frequently issued SQL
statements, query V$DB_OBJECT_CACHE and V$SQLAREA after the application has been
running for a while, though this will not include dynamic SQL. You must also
allow about 250 bytes in the Shared Pool per user per open cursor. This can be
tested during peak times:
SQL> SELECT
SUM(250 * users_opening)
FROM
v$sqlarea;
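Standard forms of the two sharable-memory queries referenced above (the executions > 5 threshold is only an example):
SQL> SELECT SUM(sharable_mem)
     FROM v$db_object_cache;
SQL> SELECT SUM(sharable_mem)
     FROM v$sqlarea
     WHERE executions > 5;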
In a test
environment you can measure sharable memory by selecting the number of open
cursors for a test user. For example, determine the number of current cursors
for the session with SID 9 by querying V$SESSTAT and V$STATNAME:
SQL>
SELECT 250 * s.value
FROM
v$sesstat s, v$statname n
WHERE
s.statistic# = n.statistic#
AND n.name =
'open cursors current'
AND s.sid =
9;
Multiply
the resulting value by the total number of users. Ideally, your application
should have a Library Cache as large as the sum of the numbers above, plus a
small allowance for dynamic SQL.
SHARED_POOL_RESERVED
The DBA can
reserve memory within the Shared Pool to satisfy large allocations during
operations such as PL/SQL compilation and trigger compilation. Smaller objects
will not fragment the reserved list, helping to ensure that the reserved list
has large contiguous chunks of memory. Once the memory allocated from the
reserved list is freed, it returns to the reserved list. The size of the
reserved list, as well as the minimum size of the objects that can be allocated
from it, are controlled by the SHARED_POOL_RESERVED_SIZE parameter, which
controls the amount of SHARED_POOL_SIZE reserved for large allocations. Start by
setting SHARED_POOL_RESERVED_SIZE to 10% of SHARED_POOL_SIZE; if
SHARED_POOL_RESERVED_SIZE is more than 50% of SHARED_POOL_SIZE, the server
signals an error. V$SHARED_POOL_RESERVED helps in tuning the reserved pool
and space within the Shared Pool. The REQUEST_FAILURES column contains the
number of times that no memory was found to satisfy a request.
Keeping
Large Objects
Loading
large objects is the primary source of fragmentation. User response time is
affected because of the large number of small objects that need to be aged out
of the Shared Pool to make room. To prevent slow response times, keep large or
frequently required objects in the Shared Pool so that they are never aged out.
SQL>
SELECT name, namespace FROM v$db_object_cache
WHERE
sharable_mem > 10000
AND (type =
'PACKAGE' OR type = 'PACKAGE BODY'
OR type =
'FUNCTION' OR type = 'PROCEDURE')
AND kept =
'NO';
Objects to
keep include frequently required large procedural objects such as STANDARD and
DIUTIL packages, and those for which shareable memory exceeds a defined
threshold.
SQL>
EXECUTE DBMS_SHARED_POOL.KEEP('dbms_standard');
Additionally,
compiled triggers that are executed often on frequently used tables and
sequences are also kept because sequence numbers are lost when the sequence is
aged out of the Shared Pool.
Startup
time is the best time to keep these objects because that prevents further
fragmentation.
To create
the package used to keep large objects run the dbmspool.sql script. The
prvtpool.plb script is automatically executed at the end. These scripts are not
run by catproc.sql.
Eliminating
Anonymous PL/SQL Blocks
There are two
solutions for eliminating large anonymous PL/SQL blocks. First, find them and
convert them into small packaged functions. Second, if an anonymous PL/SQL
block cannot be turned into a package, it can be identified in V$SQLAREA
and marked KEPT.
An example of keeping an anonymous block in memory:
SQL>
DECLARE /* KEEP_ME */ x NUMBER;
BEGIN x := 5;
END;
/
PL/SQL
Procedure successfully completed
SQL>
SELECT address, hash_value
FROM
v$sqlarea
WHERE
command_type = 47
AND
sql_text LIKE '%KEEP_ME%';
ADDRESS HASH_VALUE
678822E4 4052616553
SQL>
EXECUTE DBMS_SHARED_POOL.KEEP ('678822E4,4052616553');
Data
dictionary Cache - V$ROWCACHE
SQL>
SELECT parameter, gets, getmisses
FROM
v$rowcache;
Data
Dictionary Cache - Tuning Goals
The ratio
of the sum of all GETMISSES to the sum of all GETS should ideally be less than
15% during normal running. If it is higher, consider increasing the
SHARED_POOL_SIZE parameter. If a STATSPACK report indicates a high GET_MISS to
GET_REQ ratio for a number of items, this indicates that SHARED_POOL_SIZE must
be increased.
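The overall miss ratio can be computed with, for example:
SQL> SELECT SUM(getmisses) / SUM(gets) * 100 "Dict Cache Miss %"
     FROM v$rowcache;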
Buffer
Cache - Managing
When a
server process needs a block, it follows several steps to read the block. First,
the server checks whether the required block is available in the Buffer Cache
by using a hash function. If the block is found, it is moved to another point in
the LRU list, away from the LRU end. This is a logical read because no actual
I/O took place. If the block is not found in the Buffer Cache, the server
process has to read it from the data file. Second, the server process searches
the LRU list for a free block before reading from the data file. If the
checkpoint queue exceeds its size threshold, the server signals DBWn to flush
dirty blocks from the Buffer Cache. Third, if the server cannot find a free
block within a search threshold, it signals DBWn to flush. Fourth, once a free
block is found, the server reads the block from the data file into the free
block in the Buffer Cache and moves it away from the LRU end of the LRU list.
If the block is not consistent, the server rebuilds an earlier version of the
block from the current rollback segments.
DBWn manages the Buffer Cache by writing dirty blocks to the data files to
ensure that there are free blocks for the server processes. DBWn responds to
different events in the instance. When a server process finds that the
checkpoint queue has exceeded its size threshold, it signals DBWn, which writes
out the blocks on the checkpoint queue. When a tablespace is taken offline
temporarily or its online backup is started, DBWn copies the dirty blocks for
that tablespace from the LRU list to the checkpoint queue and writes out the
blocks on the checkpoint queue. A server process that cannot find a free block
on the LRU list within the search threshold signals DBWn to flush dirty blocks;
DBWn writes out the dirty blocks from the LRU list. DBWn also wakes up every
three seconds to check the checkpoint queue for blocks to write. If there is no
update activity for extended periods of time, DBWn writes out all the dirty
blocks during these three-second timeouts. Finally, when Log Writer (LGWR)
signals that a checkpoint has occurred, DBWn copies dirty blocks from the LRU
list to the checkpoint queue.
Buffer
Cache Hit Ratio Tuning
The Buffer
Cache hit ratio is the ratio of the number of blocks found in memory to the
number of blocks accessed. When the Buffer Cache is too small, the system slows
down because it performs too many I/Os. The DBA monitors the Buffer Cache by
using V$SYSSTAT, the utlbstat.sql and utlestat.sql scripts, or the STATSPACK
utility. The V$DB_CACHE_ADVICE view can also be used to monitor the usage of
the Buffer Cache.
To improve
the Buffer Cache hit ratio, the DBA has four options:
1. Increase the size of the Buffer Cache.
2. Use multiple buffer pools to separate blocks by access characteristics.
3. Configure tables to be cached in memory when they are accessed by full
table scans.
4. Bypass the Buffer Cache for sort and parallel read operations.
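For example, option 3 can be applied to a small lookup table, and option 1 by enlarging the cache (the table and the 64M value are only examples; growing DB_CACHE_SIZE dynamically assumes SGA_MAX_SIZE leaves room):
SQL> ALTER TABLE hr.jobs CACHE;
SQL> ALTER SYSTEM SET db_cache_size = 64M;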
The DBA
determines the change in the hit ratio as buffers are added or removed. As a
general rule, increase the Buffer Cache size if the cache hit ratio is less than
90% and there is adequate memory for other processes without inducing additional
page faults. Increasing the size of the database Buffer Cache does not always
improve performance.
For
example, in a large data warehouse or DSS that uses many scans of large tables,
most of the data is read from disk. For such systems, tuning Buffer Cache is
less important, and tuning I/O is vital.
You need to
consider the impact of OS caching. For example, the Oracle server may show a
high rate of physical I/O that does not appear at the OS level. This could mean
that Oracle blocks aged out of the Buffer Cache are kept in the OS cache and
can be quickly accessed. However, it is generally better to bypass the OS cache:
otherwise more memory is required to maintain duplicate copies of blocks, one in
the OS cache and one in the Buffer Cache, and there is CPU overhead in copying
blocks from the OS cache to the Buffer Cache.
Buffer
Cache - Views
The
V$SYSSTAT lists system statistics and contains the statistics to calculate the
cache hit ratio for the system. This view contains number, name, class, and
value for each system stat.
The
V$SESSTAT view lists session statistics and contains the statistics used to
calculate the cache hit ratio for a session. This view shows the session
identifier (SID), the statistic number, and the value.
Buffer
Cache Hit ratio – Measuring
The
database Buffer Cache performance, as measured by the hit ratio, records the
percentage of times that a data block requested by a query was already in
memory. The server collects statistics on data access and stores them in the
dynamic view V$SYSSTAT. You measure the cache hit ratio by using several system
statistics.
Physical
reads contains the number of blocks read from the disk.
SQL>
SELECT name, value
FROM
v$sysstat
WHERE name
= 'physical reads' OR name = 'session logical reads';
To measure the cache hit ratio, first query the V$SYSSTAT view to get the values of physical reads, physical reads direct, physical reads direct (lob), and session logical reads. Second, calculate the hit ratio for the Buffer Cache with the formula:
Hit ratio = 1 - (physical reads - physical reads direct - physical reads direct (lob)) / session logical reads
Session logical reads gives the total number of read requests for data. This value includes requests satisfied by access to buffers in memory and requests that cause a physical I/O.
Finally, multiply the ratio by 100 to express it as a percentage. Query the statistics during normal business hours, not immediately after startup, when the Buffer Cache has not yet been populated.
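The two steps above can be combined into a single query. This is a sketch only, computing the hit ratio directly from V$SYSSTAT using the statistic names given above; the column alias is illustrative:
SQL> SELECT (1 - (phys.value - dir.value - lob.value) / lgr.value) * 100 "Hit ratio %"
FROM v$sysstat phys, v$sysstat dir, v$sysstat lob, v$sysstat lgr
WHERE phys.name = 'physical reads'
AND dir.name = 'physical reads direct'
AND lob.name = 'physical reads direct (lob)'
AND lgr.name = 'session logical reads';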
The hit ratio is affected by full table scans, data or application design, large tables with random access, badly written SQL, and uneven distribution of cache hits. If your Buffer Cache hit ratio is low, you can increase the number of buffers in the cache to improve performance. To make the Buffer Cache larger, allocate more memory to it by decreasing the size of other SGA components that have unused memory. If there is room for the SGA to grow, increase the value of DB_CACHE_SIZE, or increase DB_BLOCK_BUFFERS in earlier releases.
If your hit ratio is high, your cache is probably large enough to hold your most frequently accessed data. In this case you can reduce the cache size and still maintain good performance. Also, check your SQL statements for tuning. To make the Buffer Cache smaller, reduce the value of the DB_BLOCK_BUFFERS and DB_CACHE_SIZE parameters. The minimum value of DB_BLOCK_BUFFERS is 4. You can use any leftover memory for other Oracle memory structures. Do not continue increasing DB_BLOCK_BUFFERS and DB_CACHE_SIZE if the last increase made no significant difference in the cache hit ratio.
As a general rule, increase DB_BLOCK_BUFFERS and DB_CACHE_SIZE when the cache hit ratio is less than 90%, there is adequate memory for other processes as measured by the amount of page faulting, and the previous increase of DB_BLOCK_BUFFERS was effective.
Buffer Pools
The DBA may be able to improve the performance of the database Buffer Cache by creating multiple buffer pools. Objects are assigned to a buffer pool depending on how the objects are accessed.
1. The KEEP pool is used to retain in memory objects that are likely to be reused. Keeping these objects in memory reduces I/O operations.
2. The RECYCLE pool is used to eliminate from memory blocks that have little chance of being reused.
3. The DEFAULT pool always exists. It is equivalent to the single Buffer Cache.
You can define all three buffer pools. To segregate tables, use the STORAGE clause on the underlying tables.
SQL> ALTER TABLE emp
STORAGE (BUFFER_POOL KEEP);
There are three init parameters used to define multiple buffer pools. First, DB_CACHE_SIZE defines the size of the default pool in Oracle9i. This pool is individually configured, and the other pools do not become part of it. In Oracle8i, the DB_BLOCK_BUFFERS parameter defines the size of the Buffer Cache for the instance; each individual buffer pool is carved from this total amount, and the remainder is allocated to the default buffer pool. Second, DB_KEEP_CACHE_SIZE is used to define the buffer pool for blocks of data that you intend to retain in the cache. In Oracle9i, the memory for the KEEP pool is independent of DB_CACHE_SIZE, whereas in Oracle8i memory for the KEEP pool is allocated from that defined by DB_BLOCK_BUFFERS. Finally, DB_RECYCLE_CACHE_SIZE is used to define the buffer pool for blocks that need not be retained in memory for long. In Oracle8i, the RECYCLE pool is likewise allocated from DB_BLOCK_BUFFERS. The minimum number of buffers that must be allocated to each pool is 50 times the number of LRU latches.
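As an illustration, an Oracle9i init.ora might size the three pools independently; the figures here are arbitrary examples, not recommendations:
DB_CACHE_SIZE = 64M          # default pool
DB_KEEP_CACHE_SIZE = 16M     # KEEP pool, independent of DB_CACHE_SIZE in 9i
DB_RECYCLE_CACHE_SIZE = 8M   # RECYCLE pool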
Buffer Pools - Defining
SQL> ALTER TABLE hr.employees
STORAGE (BUFFER_POOL DEFAULT);
If a buffer pool is too small, it is possible for a block to age out of the cache before the transaction is completed. The DBA can monitor the number of buffer pool blocks per object by using V$CACHE, which is created by catparr.sql.
V$CACHE is intended for use with Real Application Clusters, and catparr.sql creates a number of other views that are useful only for RAC. V$CACHE maps extents in the data files to database objects; catparr.sql needs to be rerun after new objects have been created. You can determine the number of blocks required for objects in the RECYCLE pool. First, tune the Buffer Cache with the RECYCLE pool disabled. Second, run catparr.sql to set up and populate V$CACHE. Third, during peak running times, query to calculate how many blocks are used by each object:
SQL> SELECT owner#, name, COUNT(*) blocks
FROM v$cache
GROUP BY owner#, name;
Finally, sum the blocks for all objects that will be assigned to the RECYCLE buffer pool and divide by four to get the RECYCLE pool size. You divide by four because roughly one quarter of the blocks targeted for the RECYCLE pool are active at any time; the other three quarters are waiting to be aged out of the cache.
Also, V$SESS_IO provides I/O statistics by session.
Buffer Pools - Views
The V$BUFFER_POOL_STATISTICS view displays statistics such as physical writes, consistent gets, and free buffer waits for each of the multiple buffer caches, if allocated.
DBA_SEGMENTS and USER_SEGMENTS display information about segments in all tablespaces or in user-accessible tablespaces. USER_CLUSTERS describes clusters owned by the current user. USER_INDEXES describes indexes owned by the current user. USER_TABLES describes all relational tables owned by the current user, and USER_OBJECT_TABLES describes all object tables owned by the current user.
V$BUFFER_POOL describes the buffer pools allocated. When the server retrieves blocks by using a full table scan, the blocks go to the least recently used end of the LRU list. These buffers are reused the next time a free block is needed, which means they are quickly made available to other processes.
You can choose to alter this behavior by caching such tables at the most recently used (MRU) end of the list. You can do this by creating the table with the CACHE clause, by altering a table with the CACHE clause, or by coding the CACHE hint in the query.
The CACHE clause is indicated when you create small lookup tables used by many users. You may overcrowd the Buffer Cache if you have many cached tables.
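For example, the three caching methods might look as follows; the table and column names are illustrative only:
SQL> CREATE TABLE lookup_codes (code NUMBER, descr VARCHAR2(30)) CACHE;
SQL> ALTER TABLE lookup_codes CACHE;
SQL> SELECT /*+ CACHE(l) */ code, descr FROM lookup_codes l;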
In addition to the Buffer Cache hit ratio, you can use other diagnostics.
Consider increasing the Buffer Cache size if there are high or increasing values for free buffer inspected in V$BUFFER_POOL_STATISTICS. This is the number of buffers skipped in order to find a free buffer. Buffers are skipped because they are dirty or pinned.
You can find out whether there have been waits for buffers from V$SYSTEM_EVENT and V$SESSION_WAIT. There are three main events to look for: buffer busy waits (some buffers in the Buffer Cache are being accessed by multiple processes concurrently), free buffer inspected, and free buffer waits.
If there are no rows for an event, the event has not yet occurred. Query V$WAITSTAT for the wait statistics for each class of buffer. Consider using automatic segment space management or increasing freelists to avoid multiple processes attempting to insert into the same block.
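A sketch of these checks against the views named above, using the event and class names as documented for Oracle9i:
SQL> SELECT name, free_buffer_inspected
FROM v$buffer_pool_statistics;
SQL> SELECT event, total_waits
FROM v$system_event
WHERE event IN ('buffer busy waits', 'free buffer waits');
SQL> SELECT class, count
FROM v$waitstat
WHERE class = 'data block';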
The undo header class displays contention on rollback segment headers. If you are not using automatic undo management, add more rollback segments.
The undo block class displays contention on rollback segment blocks. If you are not using automatic undo management, consider making the rollback segments larger.
The free buffer inspected event is a measure of how many buffers on the LRU list are inspected by a process looking for a free buffer before triggering DBWn to flush buffers to disk. The free buffer waits event indicates that a server process was unable to find a free buffer and has posted DBWn to write dirty buffers to disk. To resolve this contention, DBWn has to make blocks available for reuse faster, so examine ways of speeding up the writing process. This event can also indicate that the Buffer Cache is too small. DBWn may be unable to keep up with writing dirty buffers in several situations. First, the I/O system is slow. Second, there are resources it is waiting for. Third, the Buffer Cache is so small that DBWn spends too much time cleaning out buffers for server processes. Finally, the cache is so big that one DBWn process is insufficient to free enough buffers to satisfy requests. If this event occurs frequently, examine the session waits for DBWn in V$SESSION_WAIT to see whether anything is delaying DBWn.
Freelist Contention - Resolving
A freelist is a list of data blocks that usually includes blocks existing in a number of different extents within the segment. When an insert occurs, the freelist is used to determine which blocks are available for inserts. Many server processes can contend for the same freelist if many inserts are occurring. This results in freelist contention, during which server processes incur waits.
Single-CPU systems do not benefit greatly from multiple freelists because the CPU manages one process at a time. Even in a single-CPU system, however, adding freelists may ensure that the processor is used more effectively, but care must be taken when adding them. The overall tuning goal for freelists is to ensure that there are enough of them to minimize contention. By using automatic segment space management, Oracle stores free space information in bitmaps, which are faster to update than freelists.
SQL> CREATE TABLESPACE auto_seg_ts DATAFILE 'file_seg.dbf' SIZE 1M
EXTENT MANAGEMENT LOCAL SEGMENT SPACE MANAGEMENT AUTO;
The V$SYSTEM_EVENT, V$WAITSTAT, and V$SESSION_WAIT dynamic performance views are used to diagnose freelist contention. The DBA_SEGMENTS data dictionary view is used to identify the objects that need to be modified to increase the number of freelists.
The FREELISTS keyword is used at the segment level. The number of freelists can be changed with a statement such as ALTER TABLE <table> STORAGE (FREELISTS 5);. Notice that you cannot alter the freelist storage parameter for segments in tablespaces that use automatic segment space management.
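For example, assuming an emp table in a manually managed tablespace, the freelist count could be raised, or the table could instead be moved into the auto-managed tablespace created earlier; the object names are illustrative:
SQL> ALTER TABLE emp STORAGE (FREELISTS 5);
SQL> ALTER TABLE emp MOVE TABLESPACE auto_seg_ts;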
You can query V$WAITSTAT and V$SYSTEM_EVENT to determine whether there is freelist contention. If high numbers are returned, you must identify the affected objects.
SQL>
SELECT * FROM v$waitstat
WHERE class
= 'segment header';
SQL>
SELECT event, total_waits
FROM
v$system_event
WHERE event
= 'buffer busy waits';
SQL>
SELECT * FROM v$waitstat;
SQL>
SELECT event, total_waits FROM v$system_event;
To reduce the value of buffer busy waits on data blocks, change PCTFREE and/or PCTUSED, check for right-hand indexes, increase INITRANS, and reduce the number of rows per block.
To reduce buffer waits on segment headers, use freelists, increase the number of freelists, or use freelist groups. To resolve freelist contention you must first identify the object. First, determine the file, block, and ID for which freelist contention is occurring by querying V$SESSION_WAIT:
SQL> SELECT p1, p2, p3 FROM v$session_wait;
Second, identify the segment and determine the number of freelists that currently exist for it by querying DBA_SEGMENTS.
SQL> SELECT s.segment_name, s.segment_type, s.freelists,
w.wait_time, w.seconds_in_wait, w.state
FROM dba_segments s, v$session_wait w
WHERE w.event = 'buffer busy waits'
AND w.p1 = s.header_file
AND w.p2 = s.header_block;
Finally, you have two options. You can alter the object to increase the number of freelists, using the ALTER TABLE command with the FREELISTS keyword in Oracle9i. Alternatively, you can move the object to a tablespace that uses automatic segment space management.
With Oracle9i, free space can be managed automatically inside database segments. The in-segment free and used space is tracked by using bitmaps as opposed to freelists.
Dynamic SGA and Other SGA Structures
Redo Entries
The redo log buffer of the SGA stores redo entries that log changes made to the database. The redo entries stored in the redo log buffer are written to an online redo log file, which is used in case of recovery. The Oracle server copies redo entries from the user memory space to the redo log buffer for each DML or DDL statement. The redo entries contain the information necessary to reconstruct, or redo, changes made to the database. The LGWR process writes the redo log buffer to the active online redo log group and its members. It writes all redo entries that have been copied into the buffer since the last time it wrote.
Larger values of LOG_BUFFER reduce redo log file I/O, particularly if transactions are long or numerous. Frequent commits clear out the buffer, suggesting a smaller buffer size. The default value of LOG_BUFFER is OS dependent.
Diagnosing Disk I/O Contention
An overly active I/O system can be evidenced by disk queue lengths greater than two, or disk service times that are over 20-30 milliseconds. On machines with fast processors and relatively slow disks, the processors may be filling the rest of the redo log buffer in the time it takes the LGWR process to move a portion of the buffer out to disk. For this reason, a larger buffer makes it less likely that new entries will collide with the part of the buffer still being written. Server processes may request space in the redo log buffer to write new entries and not find any; they then wait for LGWR to flush the buffer to disk.
V$SESSION_WAIT indicates, through the log buffer space event, whether there are any waits for space in the log buffer because sessions are writing data into the log buffer faster than LGWR can write it out.
SQL> SELECT sid, event, seconds_in_wait, state
FROM v$session_wait
WHERE event LIKE '%log buffer space%';
The value of the redo buffer allocation retries statistic should be near 0 and should not be greater than 1% of the redo entries. If this number increments consistently, processes have had to wait for space in the buffer.
The wait may be caused by the redo log buffer being too small. In this case, increase the size of the redo log buffer by changing the value of LOG_BUFFER; alternatively, improve the checkpointing or archiving process.
The redo log buffer is normally small, and a modest increase can greatly improve throughput. The value of SECONDS_IN_WAIT for the log buffer space event indicates the time spent waiting for space in the redo log buffer while no log switch is occurring. This is an indication that the buffer is being filled faster than LGWR is writing. It may also indicate disk I/O contention on the redo log files.
The redo buffer allocation retries statistic in the V$SYSSTAT view reflects the number of times a user process waits for space in the redo log buffer to copy new entries over entries that have already been written to disk. LGWR normally writes fast enough to ensure that space is always available in the buffer for new entries, even when access to the redo log is heavy.
SQL> SELECT name, value FROM v$sysstat
WHERE name = 'redo buffer allocation retries';
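The 1% guideline above can be checked in one query. This is a sketch comparing the two V$SYSSTAT statistics; the column alias is illustrative:
SQL> SELECT r.value / e.value * 100 "Retry %"
FROM v$sysstat r, v$sysstat e
WHERE r.name = 'redo buffer allocation retries'
AND e.name = 'redo entries';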
The V$SYSSTAT view displays another statistic, redo log space requests. This statistic indicates that the active log file is full and that the Oracle server is waiting for disk space to be freed by a log switch. If, in V$SESSION_WAIT, the value for the log buffer space event indicates time spent waiting for space in the redo log buffer, consider making the log buffer bigger if it is small, or moving the log files to faster disks such as striped disks.
Investigate the possible reasons why LGWR is slow in freeing buffers. If there is disk I/O contention on the redo log files, check that the redo log files are stored on separate, fast devices.
In the V$SYSTEM_EVENT view, check the number of occurrences of the log file switch completion event, which identifies the number of log switch waits. If waits occur, increase the size of the redo log files.
SQL>
SELECT event, total_waits, time_waited, average_wait
FROM
v$system_event
WHERE event
like 'log file switch completion';
If DBWn has not completed checkpointing a file when LGWR needs the file again, LGWR has to wait. In the alert.log file, check for the message CHECKPOINT NOT COMPLETE.
In the V$SYSTEM_EVENT view, check the number of occurrences of the log file switch (checkpoint incomplete) event. These occurrences identify log file switch waits caused by incomplete checkpoints.
SQL>
SELECT event, total_waits, time_waited, average_wait
FROM
v$system_event
WHERE event
like 'log file switch%';
When the archiver process ARCn cannot write to the archived redo log destination or cannot archive fast enough, it prevents LGWR from writing. Confirm that the archive device is not full, then add redo log groups.
The LGWR process starts a new ARCn process whenever the current number of ARCn processes is insufficient. If you anticipate a heavy archiving workload, you can specify multiple archiver processes with the LOG_ARCHIVE_MAX_PROCESSES parameter.
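For example, to raise the number of archiver processes dynamically; the value shown is illustrative:
SQL> ALTER SYSTEM SET LOG_ARCHIVE_MAX_PROCESSES = 4;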
LGWR is also slower in freeing buffers when the DB_BLOCK_CHECKSUM parameter is set to TRUE, because checksumming adds performance overhead.
Avoid logging bulk operations in the redo log by using SQL*Loader direct path loads and the NOLOGGING mode.
Sizing the Java Pool Memory
The JAVA_SOFT_SESSIONSPACE_LIMIT parameter allows you to specify a soft limit on Java memory usage in a session. When a user's session-duration Java state exceeds this size, a warning is written to an RDBMS trace file. The default is 1MB.
When a user's session-duration Java state attempts to exceed the size set by the JAVA_MAX_SESSIONSPACE_SIZE parameter, the session is killed with an out-of-memory failure. The default is 4GB. This limit is purposely set very high so that it is not normally visible.
The JAVA_MAX_SESSIONSPACE_SIZE parameter specifies the maximum amount of session space made available to a Java program executing in the server. If a user-invoked Java program is not self-limiting in its memory usage, setting this parameter places a hard limit on the memory available to it. When the value of JAVA_MAX_SESSIONSPACE_SIZE is exceeded, Oracle9i displays the ORA-29554 message.
Sizing the SGA for Java
The Java engine uses about 8KB per loaded class. The Shared Pool is also used temporarily by the class loader while loading and resolving classes into the database. Loading and resolving particularly large JAR files can use 50MB of Shared Pool memory.
The UGA is allocated in the Large Pool when LARGE_POOL_SIZE is set in the init.ora file. The Java Pool is a structure in the SGA that is used for all session-specific Java code and data within the Java engine. During instance startup, the Java Pool is allocated a fixed amount of memory equal to the JAVA_POOL_SIZE init parameter. Generally, JAVA_POOL_SIZE should be set to 50MB or higher for large applications (the default is 20MB).
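As an illustration, the Java-related init.ora settings described above might be combined as follows; the figures are examples, not recommendations, and the two session-space limits are specified in bytes:
JAVA_POOL_SIZE = 50M
JAVA_SOFT_SESSIONSPACE_LIMIT = 1048576     # 1MB soft limit, warning only
JAVA_MAX_SESSIONSPACE_SIZE = 104857600     # 100MB hard limit, ORA-29554 beyond this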
Sizing the UGA (User Global Area)
If you are using the Oracle Shared Server and the Large Pool is not configured, user session and cursor state information is stored in the Shared Pool instead of in private user memory. Sort areas and private SQL areas are included in the session information. This is because shared servers work on a per-statement basis, so any server may need access to any user's information. This part of the Shared Pool is called the UGA. The total memory used with shared servers will be less than with dedicated servers. You may need to increase SHARED_POOL_SIZE, but your private user memory requirement is lower. For Oracle Shared Server connections, you need to compute the amount of space required for all shared server users to keep their session memory in the Shared Pool. This example displays the computation of the UGA space used by your connection.
SQL> SELECT SUM(value) || ' bytes' "Total session memory"
FROM v$mystat, v$statname
WHERE name = 'session uga memory'
AND v$mystat.statistic# = v$statname.statistic#;
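To estimate the requirement for all users rather than just your own connection, a similar sketch can sum the same statistic across all sessions by using V$SESSTAT instead of V$MYSTAT:
SQL> SELECT SUM(s.value) || ' bytes' "Total UGA memory"
FROM v$sesstat s, v$statname n
WHERE n.name = 'session uga memory'
AND s.statistic# = n.statistic#;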
Large Pool
The Large Pool is sized by the LARGE_POOL_SIZE parameter, which specifies the size of the large pool allocation heap. The large pool allocation heap is used by shared server systems for session memory, by parallel execution for message buffers, and by backup processes for disk I/O buffers.
The Large Pool must be explicitly configured. Its memory does not come out of the Shared Pool, but directly out of the SGA, adding to the total amount of shared memory. The Large Pool provides large memory allocations for session memory, I/O server processes, and backup and restore operations.
The memory for Oracle backup and restore operations and for I/O server processes is allocated in buffers of a few hundred kilobytes. The Large Pool is better able to satisfy such requests than the Shared Pool.
By allocating session memory from the Large Pool for the shared server, the server can use the Shared Pool primarily for caching shared SQL and avoid the performance overhead of storing that extra information in the Shared Pool.
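For example, a shared server system might configure the Large Pool in init.ora as follows; the size is illustrative only:
LARGE_POOL_SIZE = 32M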
Dynamic SGA - Size Parameters
A dynamic SGA implementation allows the SGA configuration to change while the instance is running. Dynamic SGA allows the sizes of the Buffer Cache and Shared Pool to be changed without shutting down the instance. Conceivably, the Buffer Cache and Shared Pool could be initially under-configured and would grow and shrink depending upon their respective workloads. You should configure instances to start with less than the maximum amount of memory the OS makes available, and allow the structures to change size as needed.
For optimal performance in most systems, the entire SGA should fit into real memory. If it does not, and virtual memory is used to store parts of it, overall database performance can decrease dramatically because portions of the SGA are paged to and from disk by the OS. The amount of memory dedicated to all shared areas in the SGA also has a performance impact. The size of the SGA is determined by several init parameters: DB_CACHE_SIZE, LOG_BUFFER, SHARED_POOL_SIZE, and LARGE_POOL_SIZE.
Shared Pool
SQL> ALTER SYSTEM SET SHARED_POOL_SIZE = 72M;
Dynamic resizing requires that the new size be an integer multiple of the granule size, and the total SGA size cannot exceed the value of SGA_MAX_SIZE.
Buffer Cache - Parameters
The Buffer Cache parameters DB_BLOCK_BUFFERS, BUFFER_POOL_KEEP, and BUFFER_POOL_RECYCLE are deprecated but have been maintained for backward compatibility. The Buffer Cache consists of independent subcaches for the buffer pools and for multiple block sizes. The DB_BLOCK_SIZE parameter determines the primary block size, which is used for the SYSTEM tablespace.
In Oracle9i, the value of DB_CACHE_SIZE refers only to the size of the default buffer pool. The Buffer Cache advisory enables and disables the gathering of statistics used to predict behavior at different cache sizes.
SQL> ALTER SYSTEM SET DB_CACHE_ADVICE = ON;
The Buffer Cache advisory information is displayed in V$DB_CACHE_ADVICE.
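Once the advisory has been gathering statistics, its predictions can be read with a query along these lines, using the column names documented for V$DB_CACHE_ADVICE:
SQL> SELECT size_for_estimate, estd_physical_reads
FROM v$db_cache_advice
WHERE name = 'DEFAULT';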
Granule
A granule is a unit of contiguous virtual memory allocation; it is the basic unit of SGA allocation. (The basic unit of work in parallelism is also called a granule, and the number and size of those granules correlates with the Degree of Parallelism (DOP), but that is a separate concept from SGA granules.) The size of an SGA granule depends on the estimated total SGA size given by the SGA_MAX_SIZE parameter. If SGA_MAX_SIZE is less than 128MB, the granule size is 4MB; if it is greater than 128MB, the granule size is 16MB. The Buffer Cache and Shared Pool are allowed to grow and shrink on granule boundaries. For each component that owns granules, the number of granules allocated to the component, any pending transactions against the component, and the target size in granules are tracked and can be examined in V$BUFFER_POOL. At instance startup, the Oracle server allocates granule entries, one for each granule, to support SGA_MAX_SIZE bytes of address space. As startup continues, each component acquires as many granules as it requires.
The minimum SGA configuration requires at least three granules: one for the fixed SGA area including the redo buffers, one for the Buffer Cache, and one for the Shared Pool. The V$BUFFER_POOL view shows all buffer pools available for the instance.
I/O Slaves Mechanism
Many platforms that support asynchronous I/O for disk devices do not support it for tape devices. In this case, I/O slaves can be used to perform nonblocking I/O to tape devices.
The DBWR_IO_SLAVES and BACKUP_TAPE_IO_SLAVES init parameters control I/O slave deployment. You can turn asynchronous I/O on and off with the DISK_ASYNCH_IO parameter. It may be necessary to turn off the asynchronous I/O facility provided by the OS, for example, if the platform's asynchronous I/O code has bugs or is inefficient for a device type. Usually, the parameter should be left at its default of TRUE.
I/O slaves can be deployed by the DBW0 process. I/O slaves for DBW0 are allocated immediately after database open, when the first I/O request is made. The DBW0 process looks for an idle I/O slave. If one is found, it gets the post. If there are no idle slaves, the I/O issuer spawns one. If the allowed number of slaves has already been spawned, the issuer waits and tries again to find an idle slave. DBW0 continues to do all the DBW0-related work, such as gathering dirty buffers into a batch; the DBW0 I/O slaves simply do the I/O on behalf of DBW0. That is, the writing of the batch is parallelized between the I/O slaves. This is beneficial in write-intensive environments because the CPU time associated with issuing I/O can be divided among the I/O slaves.
Multiple DBWR Processes
Multiple database writer processes can be specified with the DB_WRITER_PROCESSES parameter (DBW0 to DBW9).
SQL> SELECT total_waits FROM v$system_event
WHERE event = 'free buffer waits';
Consider increasing the number of DBWR processes if you see a high number of free buffer waits after querying V$SYSTEM_EVENT.
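For example, in init.ora (DB_WRITER_PROCESSES is a static parameter, so the instance must be restarted; the value shown is illustrative):
DB_WRITER_PROCESSES = 2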