OCP - Oracle9i Database Administrator Fundamentals 1 Exam Prep - Module 9: Storage Structure and Relationships

Practice: List the segment types stored in the database

 

This practice covers the various segment types stored in an Oracle database.

 

Instructions

 

1.      Identify the different types of segments stored in your database. Note: this list may vary from database to database depending on the installation.

 

 

SQL> SELECT DISTINCT segment_type FROM dba_segments;

 

SEGMENT_TYPE

------------------

CACHE

CLUSTER

INDEX

INDEX PARTITION

LOBINDEX

LOBSEGMENT

NESTED TABLE

ROLLBACK

TABLE

TABLE PARTITION

TYPE2 UNDO

 

11 rows selected.

 

SQL>

 

 

2.      Log in as the user SYSTEM and identify the different types of segments used in the schema. Note: this list may vary from database to database depending on the installation.

 

 

SQL> connect system/manager

Connected.

SQL> SELECT DISTINCT segment_type FROM user_segments;

 

SEGMENT_TYPE

------------------

INDEX

INDEX PARTITION

LOBINDEX

LOBSEGMENT

TABLE

TABLE PARTITION

 

6 rows selected.

 

 

Practice: Oracle data block space usage

 

This practice covers the parameters that determine how the space in a data block is used.

 

Instructions

 

1.      Create a locally managed tablespace that uses the Oracle9i automatic segment space management feature.

 

 

SQL> CREATE TABLESPACE auto_space_mgmt

  2  DATAFILE '$HOME/ORADATA/u03/auto_sp01.dbf' SIZE 5M

  3  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 64K

  4  SEGMENT SPACE MANAGEMENT AUTO;

Tablespace created.

 

 

2.      Create a test table TEST222 in the AUTO_SPACE_MGMT tablespace without specifying the INITRANS, MAXTRANS, PCTFREE, and PCTUSED parameters.

 

 

SQL> create table test222

  2  (c1 char) tablespace auto_space_mgmt;

 

Table created.

 

 

3.      Create a test table TEST223 in the AUTO_SPACE_MGMT tablespace specifying the INITRANS, MAXTRANS, PCTFREE, and PCTUSED parameters.

 

 

SQL> create table test223

  2  (c1 char) tablespace auto_space_mgmt

  3  pctfree 25 pctused 50 initrans 4 maxtrans 4;

 

Table created.

 

 

4.      Create a test table TEST224 in the DATA01 tablespace specifying the INITRANS, MAXTRANS, PCTFREE, and PCTUSED parameters. Use the same values as in step 3 above.

 

 

SQL> create table test224

  2  (c1 char) tablespace data01

  3  pctfree 25 pctused 50 initrans 4 maxtrans 4;

 

Table created.

 

 

5.      Determine the settings for pctfree, pctused, initrans and maxtrans for the three tables that you just created.  Were all of the parameters that you specified for each table used?  Can you determine why?

 

 

SQL> select table_name, pct_free, pct_used, ini_trans, max_trans

  2  from user_tables where table_name like 'TEST22%';

 

TABLE_NAME        PCT_FREE   PCT_USED  INI_TRANS  MAX_TRANS

--------------- ---------- ---------- ---------- ----------

TEST222                 10                     1        255

TEST223                 25                     4          4

TEST224                 25         50          4          4

Note: PCT_USED is null for TEST222 and TEST223 because the AUTO_SPACE_MGMT tablespace uses automatic segment space management, in which the PCTUSED parameter is ignored (block space is tracked with bitmaps instead of free lists). TEST224 resides in DATA01, a manually managed tablespace, so all of the parameters specified for it were retained.

 

Practice: Obtain information about storage structures from the data dictionary

 

This practice covers obtaining details about space usage within the database.

 

Instructions

 

1.      Create a new table in the DATA01 tablespace and specify 1M for the initial and 1M for the next extent size.

 

 

SQL> create table test232

  2  (c1 char) tablespace data01

  3   storage(initial 1m next 1m);

 

Table created.

 

 

2.      List segments that may generate errors because of lack of space when they try to allocate an additional extent. Note: your listing might vary from the output shown here, but the TEST232 table should be listed.

 

 

SQL> SELECT s.segment_name, s.segment_type,

  2   s.tablespace_name, s.next_extent

  3  FROM   dba_segments s

  4  WHERE  NOT EXISTS

  5   ( SELECT 1

  6      FROM dba_free_space f

  7      WHERE s.tablespace_name=f.tablespace_name

  8      HAVING  max(f.bytes) > s.next_extent ) ;

 

SEGMENT_NAME   SEGMENT_TYPE    TABLESPACE_NAME    NEXT_EXTENT

-------------- --------------- ------------------ -----------

KOTAD$         TABLE           SYSTEM                  253952

TESTRO         TABLE           READONLY

TEST232        TABLE           DATA01                 1048576

 

 

 

3.      Which files have space allocated for the TEST232 table?  Are the files autoextensible?

 

 

SQL> SELECT DISTINCT f.file_name, f.autoextensible

  2      FROM   dba_extents e,dba_data_files f

  3      WHERE  e.segment_name='TEST232'

  4      AND    e.file_id=f.file_id;

 

FILE_NAME                              AUT

-------------------------------------- ---

E:\ORANT901\ORADATA\D901\DATA01.DBF    NO

 

 

 

4.      List the free space available by tablespace. The query should display the number of fragments, the total free space, and the largest free extent in each tablespace.

 

 

SQL> SELECT tablespace_name,COUNT(*) AS fragments,

  2        SUM(bytes) AS total,

  3        MAX(bytes) AS largest

  4      FROM     dba_free_space

  5      GROUP BY tablespace_name;

 

TABLESPACE_NAME                 FRAGMENTS      TOTAL    LARGEST

------------------------------ ---------- ---------- ----------

AUTO_SPACE_MGMT                         1    5046272    5046272

CWMLITE                                 1   14680064   14680064

DATA01                                  1    1028096    1028096

DATA02                                  1    1966080    1966080

DATA03                                  1    1966080    1966080

DEMO_STORAGE                            1    1835008    1835008

DRSYS                                   1   12845056   12845056

EXAMPLE                                 1     196608     196608

INDEX01                                 1    2031616    2031616

INDX                                    1   26148864   26148864

SYSTEM                                  1   40124416   40124416

TOOLS                                   1   10420224   10420224

UNDO1                                   1   40566784   40566784

UNDOTBS                                 3  208338944  208011264

UNDOTEST                                3    2097152     983040

USERS                                   1    1245184    1245184

 

 

  NetG Thompson Oracle 9i Training Notes

 

1. Storage Structures - Admin Manage Storage Structure

 

Any storage parameter specified at the segment level overrides the corresponding parameter set at the tablespace level, except MINIMUM EXTENT and UNIFORM SIZE.

 

The OPTIMAL parameter cannot be specified at the tablespace level; OPTIMAL is specified only at the segment level.

 

 

The MINIMUM EXTENT parameter specified at the tablespace level applies to all extents allocated subsequently to the tablespace.

 

Extents are allocated when the segment is:

created,

extended,

altered.

Extents are deallocated when the segment is:

dropped,

altered,

truncated,

automatically resized.

 

Contiguous free extents are coalesced into one extent when:

SMON merges adjacent free extents,

the Oracle server needs to allocate an extent that requires space from more than one adjacent free extent,

the DBA uses the ALTER TABLESPACE ... COALESCE command.

The DBA_FREE_SPACE_COALESCED view provides information about coalesced extents and blocks.
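
For example, a DBA can trigger coalescing manually and then check the results (the tablespace name is illustrative):

ALTER TABLESPACE data01 COALESCE;

SELECT tablespace_name, percent_extents_coalesced
FROM   dba_free_space_coalesced;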

 

To use nonstandard block sizes, you configure subcaches within the buffer cache area of the SGA. You replace n with 2, 4, 8, 16, or 32 in DB_nK_CACHE_SIZE to specify a nonstandard database block size.
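
As a sketch (the cache size, tablespace name, and file name are illustrative), a 16K subcache must exist before a 16K-block tablespace can be created:

ALTER SYSTEM SET db_16k_cache_size = 8M;

CREATE TABLESPACE data_16k
DATAFILE '/u01/oradata/data_16k01.dbf' SIZE 50M
BLOCKSIZE 16K;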

 

A data block consists of a header, free space, and data space. The block header contains the data block address, directories, and transaction slots. The free space can be used by either new rows or additional transaction entries. The data space contains the column values.

 

The parameters that control concurrency are INITRANS (the initial number of transaction slots in the block; default 1 for tables, 2 for indexes) and MAXTRANS (the maximum number of transaction slots; default 255). The transaction slots store information about the transactions changing the block.

PCTFREE specifies the percentage of space in each data block that is reserved for growth resulting from updates to rows in that data block (default 10%). For example, PCTFREE 20 in a CREATE TABLE statement means that inserts into the block stop as soon as free space drops to 20% or less; the reserved free space is then used for updates only.

PCTUSED represents the minimum percentage of used space that is maintained for each data block (default 40%). When a data block is filled to the limit determined by PCTFREE, the Oracle server considers the block unavailable for inserts. The block remains unavailable for inserts until the percentage of space used by the data falls below PCTUSED; until then, the Oracle server uses the free space only for updates. For example, when PCTUSED is 40, the block becomes available for inserts again as soon as utilization drops below 40%. Insertions then continue until utilization reaches 80% (with PCTFREE 20), and the cycle repeats.

 

BMBs (Bitmapped Blocks)

The BMBs in a segment form a tree hierarchy with a maximum of three levels:

leaves,

intermediate,

root.

The leaves of this hierarchy point to ranges of data blocks that are potential candidates for an insert.

 

Automatic management of free space - Benefits

1. Efficient space utilization

2. Improved handling of concurrent inserts

3. Better performance in RAC environments

 

Views modified to support auto-management of segment free space:

DBA_TABLESPACES (stores segment information)

USER_TABLESPACES

DBA_TABLES (stores relational information)

DBA_SEGMENTS

 

DBMS_SPACE package

 

Analyzes growth and space requirements in segments. DBMS_SPACE runs with SYS privileges, execute permission on it is granted to PUBLIC, and its subprograms run under the caller's security, so the user must have the ANALYZE privilege on the object.

Procedures of DBMS_SPACE:

UNUSED_SPACE - returns information about the space available, including the position of the high-water mark (HWM).

FREE_BLOCKS - returns the number of blocks available for storing data (used on objects without automatic free space management).

SPACE_USAGE - shows the space usage of blocks below the segment HWM in auto-managed segments.
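
A minimal PL/SQL sketch of UNUSED_SPACE (the HR.EMPLOYEES table is illustrative):

SET SERVEROUTPUT ON

DECLARE
  v_total_blocks  NUMBER;
  v_total_bytes   NUMBER;
  v_unused_blocks NUMBER;
  v_unused_bytes  NUMBER;
  v_file_id       NUMBER;
  v_block_id      NUMBER;
  v_last_block    NUMBER;
BEGIN
  -- Returns allocated and unused space for the segment, including the HWM position
  DBMS_SPACE.UNUSED_SPACE('HR', 'EMPLOYEES', 'TABLE',
      v_total_blocks, v_total_bytes, v_unused_blocks, v_unused_bytes,
      v_file_id, v_block_id, v_last_block);
  DBMS_OUTPUT.PUT_LINE('Total blocks:  ' || v_total_blocks);
  DBMS_OUTPUT.PUT_LINE('Unused blocks: ' || v_unused_blocks);
END;
/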

 

DBMS_REPAIR package

Corrects data corruption; one approach is dropping and re-creating an object after corruption is detected.

DUMP_ORPHAN_KEYS - reports index entries that point to rows in corrupt data blocks

SEGMENT_FIX_STATUS - allows the user to correct a corrupted bitmap entry.

REBUILD_FREELISTS - recreates the free lists for a specified object

CHECK_OBJECT - validates specified objects and populates the repair table

ADMIN_TABLES - provides create, purge and drop functions

FIX_CORRUPT_BLOCKS - remedies corrupt data blocks in specified objects

SKIP_CORRUPT_BLOCKS - enables or disables the skipping of corrupt blocks during index and table scans
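
A hedged sketch of detecting corrupt blocks (assumes the default REPAIR_TABLE was created earlier with ADMIN_TABLES; schema and table names are illustrative):

SET SERVEROUTPUT ON

DECLARE
  v_corrupt_count BINARY_INTEGER := 0;
BEGIN
  -- Populates REPAIR_TABLE with any corrupt blocks found in HR.EMPLOYEES
  DBMS_REPAIR.CHECK_OBJECT(
      schema_name   => 'HR',
      object_name   => 'EMPLOYEES',
      corrupt_count => v_corrupt_count);
  DBMS_OUTPUT.PUT_LINE('Corrupt blocks found: ' || v_corrupt_count);
END;
/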

 

 

 

2. Storage Structures - Rollback Segments

 

A transaction uses only one undo segment to store all of the undo records. However, multiple transactions can write to a single undo segment.

The header of an undo segment contains a transaction table. The table contains information about the current transactions that are using the undo segment.

Undo segments are used for different purposes. One use of undo segments is transaction rollback. When a transaction changes any data, the old image of this data is stored in an undo segment. If the transaction is rolled back, the value is restored.

Undo segments are also used for transaction recovery. If an Oracle instance fails while transactions are in progress, the Oracle server ensures transaction recovery by rolling back the uncommitted changes after the database is reopened. This is possible because the undo segments are also protected by the redo log files.

Undo segments are used to provide read consistency for a given statement. Read consistency means that only the changes committed prior to the time when a statement starts execution are visible to the database users.

 

Automatic undo management

 

In previous releases of the database, undo data was managed manually by using rollback segments. In Oracle9i, rollback segments are replaced by undo segments that are managed by the Oracle server.

Every Oracle database maintains undo data by using a certain method. The Oracle server performs undo or rollback operations for the changes to the database based on the undo records. The undo records store the result of transactions before they are committed.

Undo records provide read consistency. If a user accesses data that is being updated by another user, the server provides the before image of the data through undo records.

In Oracle 9i you can use the System Managed Undo (SMU) mode - automatic. When you specify the SMU mode, the server allocates all of your undo space in a single undo tablespace. There may be multiple undo tablespaces in the database. However, only one tablespace is active at a time.

The SMU mode eliminates the complexities of managing rollback segment space. In the SMU mode the Oracle server automatically manages the creation, allocation and tuning of undo segments.

Dynamic transfer of extents

The SMU mode supports dynamic transfer of extents between undo segments in an undo tablespace. When a transaction needs undo space, the Oracle server first tries to allocate space from the current undo segment, extending it if necessary. If these steps still leave insufficient space, the Oracle server steals extents from other undo segments.

The dynamic transfer of extent space prevents the users from receiving the ORA-30036 error – unable to extend due to insufficient space in the undo tablespace.

Dynamic transfer of extents allows undo segment space to be reused efficiently. By using the SMU mode, you can control the duration for which undo records are retained before they are overwritten. You specify time for the UNDO_RETENTION system parameter.

Undo segments – read consistency

To construct a read consistent image of data the Oracle server uses the information in an undo segment and the information in the table being queried.

The changes to the data are permanent only when the Oracle server completes processing and commits the transaction. This means that any statement that starts before a transaction is committed is not able to read the uncommitted changes.

Oracle server ensures read consistency for a long query

The Oracle server uses the following process to ensure read consistency. A block may have changes that are not committed when the server begins the query process. If a long query is executed while several changes are being made, the Oracle server constructs a read-consistent image of the changed rows in the block.

The read-consistent image of the block is built through a series of four steps. When processing a SELECT statement, the Oracle server first determines the current SCN. The SCN enables the server to ensure that any changes that were not committed before the SCN are not processed. The Oracle server reads the necessary data blocks based on the SCN value. If a block has not been modified after the query was initiated, the Oracle server checks it for suitable rows.

If the data blocks contain changes that were not committed, the Oracle server retrieves the before image of the uncommitted changes from the undo segment.

The Oracle server constructs a read-consistent copy of the block by applying the before image from the undo segment to the affected rows in the block.

The Oracle server always provides read consistency at the statement level. You can explicitly request read consistency for an entire transaction by issuing SET TRANSACTION READ ONLY at the beginning of a long-running transaction.

Alternatively, you can request read consistency for a transaction that is performing DML statements by using SET TRANSACTION ISOLATION LEVEL SERIALIZABLE. The Oracle server ensures read consistency by using undo segment data.
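
For example:

SET TRANSACTION READ ONLY;

SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;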

 

Types of undo segments

The Oracle server stores undo data in SYSTEM, non-SYSTEM, and deferred undo segments. In the manual undo management mode, the non-SYSTEM undo segments are classified as private and public. In the automatic undo management mode, the non-SYSTEM undo segments are contained in an undo tablespace.

The Oracle server stores the changes made to the objects in the SYSTEM tablespace in the SYSTEM undo segment. This undo segment is created when the database is created, and it exists and works in the same way in both undo management modes.

The Oracle server needs a non-SYSTEM undo segment when a database has multiple tablespaces. In the manual undo management mode, a DBA creates at least one non-SYSTEM undo segment; in contrast, the DBA creates at least one undo tablespace in the automatic undo management mode. In the manual mode, the Oracle server stores the before images of changes to objects in a non-SYSTEM tablespace in a non-SYSTEM undo segment. A non-SYSTEM undo segment can be classified as a private undo segment or a public undo segment.

Private undo segments are segments that are brought online automatically by an instance because they are listed in the parameter file. You specify the private undo segments by using the ROLLBACK_SEGMENTS init parameter. You can also explicitly bring a private undo segment online by issuing the ALTER ROLLBACK SEGMENT command.

The DBA can use the ALTER ROLLBACK SEGMENT command with the ONLINE clause to acquire private undo segments manually; you specify the name of the undo segment in the ALTER ROLLBACK SEGMENT command. In a RAC environment, private undo segments allow each instance to acquire specific undo segments. Private undo segments are specified in the init file.

Public undo segments form a pool of undo segments that are available in the database. An instance can acquire an available public undo segment by using the values of the TRANSACTIONS and TRANSACTIONS_PER_ROLLBACK_SEGMENT init parameters.

Public undo segments can be used in the RAC environment. An instance may acquire a public undo segment from the ones that are defined in the database. However, a public undo segment can be used by only one instance at a time.

In the automatic undo management mode, the non-SYSTEM undo segments constitute an undo tablespace. A DBA can specify the undo tablespace that an instance acquires in the parameter file by using the UNDO_TABLESPACE parameter.

You can switch to another undo tablespace after the instance has started by using ALTER SYSTEM SET UNDO_TABLESPACE. When a user rolls back a transaction involving data in a tablespace that is offline, a new undo segment is created in the SYSTEM tablespace and the undo for the transaction is transferred to it. This undo segment is deferred and remains deferred until the tablespace comes online.

When the tablespace is brought online, the undo entries in the deferred undo segment are used to restore the old values to the rows. The deferred undo segment is dropped after the old values have been restored. The Oracle server creates, maintains, and drops deferred undo segments automatically.

Undo retention control: Features

The SMU mode provides a method for explicitly controlling when the Oracle server can reuse undo space. This means that you can use this feature to control the duration for which undo information is retained.

Long running transactions

Normally, when the undo space is overwritten by a newer transaction, the committed undo information is lost. However, for consistent-read purposes, long-running transactions may need old undo information to undo the changes and produce older images of data blocks.

The snapshot too old error occurs when the undo data that is needed for read consistency is overwritten by other transactions. Undo retention control helps provide read-consistent data for long-running transactions by specifying the duration for which undo data is retained in the database.

You can specify a retention period by using the UNDO_RETENTION parameter. The DBA explicitly specifies the duration of time for which to retain the information. Undo retention minimizes the chances of the snapshot too old error, and it ensures that undo data is retained across a restart of the instance. The default value of UNDO_RETENTION is 900 seconds.

When you set the UNDO_RETENTION parameter, ensure that the amount of undo data that is retained is not very close to the size of the undo tablespace; keeping a margin of about 20% is recommended.

When you use the ALTER SYSTEM command to change the value of the UNDO_RETENTION parameter, the effect on the database is immediate. However, retention is possible only when the current undo tablespace has enough space for the undo information generated by active transactions.

When an active transaction needs undo space and the undo tablespace does not have sufficient space, the system starts reusing unexpired undo space. Such an action can cause some queries to fail with the snapshot too old error. You can ensure that undo data is retained for a specific period by using the UNDO_RETENTION parameter.

Formula to estimate the undo space:

Undo space (bytes) = retention period in seconds * undo blocks written per second * block size + overhead (metadata)

Example: with a retention period of 2 hours (2 * 3600 seconds), 200 undo blocks written per second, and a 4 KB block size:

Undo space = 2 * 3600 * 200 * 4096 = approximately 5.8 GB
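
The undo-blocks-per-second rate can be measured from V$UNDOSTAT; a minimal sketch:

SELECT SUM(undoblks) / SUM((end_time - begin_time) * 86400)
       AS undo_blocks_per_second
FROM   v$undostat;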

 

Undo management modes

To set UNDO_MANAGEMENT to AUTO, change it in the init.ora file; it cannot be changed after the instance has started. You create an undo tablespace to store the undo data that is generated by transactions. When the instance starts, the server automatically selects the first available undo tablespace for use. If there is no undo tablespace, the server uses the SYSTEM undo segment.

Another rule when you configure the SMU mode is to create at least one undo tablespace. Using the SYSTEM undo segment to store undo data is not recommended; when the database is using the SYSTEM undo segment, the server writes a warning to the alert log.

When you create a database, you can create an undo tablespace by using the UNDO TABLESPACE clause of the CREATE DATABASE command. In addition, you can use CREATE UNDO TABLESPACE.

Example:

CREATE DATABASE db01
UNDO TABLESPACE undo01
DATAFILE 'undo1db01.dbf' SIZE 20M
AUTOEXTEND ON;

OR

CREATE UNDO TABLESPACE undo01
DATAFILE 'undo1db01.dbf' SIZE 20M;

 

A common rule while configuring the SMU mode is to specify the undo tablespace for an instance by using the UNDO_TABLESPACE init parameter. In a RAC environment, the UNDO_TABLESPACE parameter can be used to assign a specific undo tablespace to an instance.

Specifying the UNDO_TABLESPACE parameter is optional if only one undo tablespace exists in the database and UNDO_MANAGEMENT is set to AUTO; the server then automatically selects that undo tablespace.

If you have not already created the undo tablespace, the instance fails to start when you set the UNDO_TABLESPACE parameter. This parameter can be altered dynamically by using ALTER SYSTEM SET UNDO_TABLESPACE = <tablespace_name>;

The server automatically creates an undo tablespace under two conditions: first, UNDO_MANAGEMENT is set to AUTO; second, you omit the UNDO TABLESPACE clause in the CREATE DATABASE command.

The server then automatically creates the SYS_UNDOTBS undo tablespace. The default data file is named dbu followed by the SID or instance name, and it is stored in the dbs folder below the Oracle home. The size is OS dependent, and AUTOEXTEND is set to ON.

Switching undo tablespaces: Features

In the SMU mode a single undo tablespace manages the undo data. You can create multiple undo tablespaces for the database. However, the instance uses only one undo tablespace at a time. You can switch them.

The instance uses the undo tablespace that you specify in the UNDO_TABLESPACE initialization parameter. You can also use ALTER SYSTEM SET UNDO_TABLESPACE = undotbs_02;

While switching between undo tablespaces, the server reports an error if the target tablespace does not exist, is not an undo tablespace, or is already in use by another instance.

The Oracle server can switch undo tablespaces while the database is online. This feature enables users to run transactions while the server processes the tablespace switch operation. Transactions that start after the switch completes are assigned to the new undo tablespace.

The switch operation does not wait for the transactions in the original tablespace to commit. Instead, the old undo tablespace enters the PENDING OFFLINE mode while pending transactions still access it; new transactions are stored in the new undo tablespace. A feature of SMU is that you cannot assign a PENDING OFFLINE undo tablespace to another instance; in addition, you cannot drop a PENDING OFFLINE undo tablespace.

When an undo tablespace is offline, you can assign the undo tablespace to other instances.

You can switch out the current undo tablespace without switching to another tablespace by setting the UNDO_TABLESPACE parameter to an empty string, as shown below.
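
For example:

ALTER SYSTEM SET UNDO_TABLESPACE = '';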

Planning rollback segments

The factors that influence the planning of rollback segments are the application environment, the type of transactions performed, and the volume of data.

Transaction environment

The OLTP, batch processing, and hybrid environments influence the number and size of rollback segments.

In a transaction environment, you can use rollback segments to manage rollback records. Each rollback segment contains a header. The header contains the transaction table entries that define the state of each transaction when writing to the rollback segment.

Each transaction that uses a rollback segment updates the transaction table frequently. This can cause contention on the segment header, especially in an OLTP environment.

An OLTP environment is characterized by short transactions; therefore, many small rollback segments are recommended for this environment. If possible, create one rollback segment for every four concurrent transactions.

A batch environment is characterized by transactions that are large in volume. Therefore, these transactions work better when associated with large rollback segments.

The size of a rollback segment is the expected maximum number of bytes of rollback information for each transaction, multiplied by the expected number of concurrent transactions that will use the rollback segment. The size of the rollback information depends on the type of transaction, such as an insert or a delete. For example, inserting a record into a table generates less rollback information than deleting one, because the insert stores only the ROWID while the delete stores the actual data.

Another factor is the volume of data processed.

Steps before deciding the rollback segment size

Execute the longest expected transaction and check the size of the rollback segment. To minimize dynamic extension, create rollback segments with a large number of extents, for example MINEXTENTS 20.

Rollback segment creation guidelines

While creating a rollback segment, you must specify the type because this property cannot be changed later. The type of the rollback segment specifies whether it is accessible to any instance or just to the current instance.

Create with a large number of initial extents – MINEXTENTS 20.

Do not set PCTINCREASE to a value other than 0.

Set an optimal number of extents for each rollback segment. Set the OPTIMAL to a high value to minimize the snapshot too old error for long running queries.

Set a small OPTIMAL value for short queries.

Use INITIAL=NEXT for rollback segments to ensure that all extents are of the same size.

Avoid setting MAXEXTENTS to unlimited.

Place rollback segments in a separate exclusive non-SYSTEM uniform extent size tablespace.

The UNDO_MANAGEMENT init parameter is MANUAL by default; to use rollback segments, choose MANUAL in init.ora.

For an instance to be able to acquire rollback segments at startup, they have to be specified in init.ora: ROLLBACK_SEGMENTS=(rb_01,rb_02).

Creating rollback segments

CREATE [PUBLIC] ROLLBACK SEGMENT rollback_segment
TABLESPACE tablespace [STORAGE (...)];

CREATE ROLLBACK SEGMENT "RB_11A"
TABLESPACE "USERS"
STORAGE (INITIAL 100K NEXT 100K);

Setting the created rollback segment online:

ALTER ROLLBACK SEGMENT rollback_segment ONLINE;

The maximum number of online rollback segments is specified by the MAX_ROLLBACK_SEGMENTS parameter.

Rollback segments – acquisition

When the instance opens a database, the process of acquiring rollback segments involves a series of steps.

First, the instance acquires all the private rollback segments named in ROLLBACK_SEGMENTS. Then Oracle computes the number of rollback segments that the instance needs, using the values of TRANSACTIONS and TRANSACTIONS_PER_ROLLBACK_SEGMENT: the number of rollback segments that an instance needs is determined by dividing the value of the TRANSACTIONS parameter by TRANSACTIONS_PER_ROLLBACK_SEGMENT.

In the third step, the Oracle instance verifies that the number of non-SYSTEM rollback segments is at least as high as the estimated number of rollback segments.

The fourth step involves a decision. If the number of rollback segments needed is greater than the number of non-SYSTEM rollback segments already available, Oracle acquires additional public rollback segments. The public rollback segments must already have been created and be available.
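
As a worked example (parameter values are illustrative): with TRANSACTIONS = 100 and TRANSACTIONS_PER_ROLLBACK_SEGMENT = 20, the instance estimates 100 / 20 = 5 rollback segments. If only three non-SYSTEM rollback segments are online, the instance attempts to acquire two additional public rollback segments.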

Transaction use of rollback segments

The process by which rollback segments are used by transactions involves a series of steps. When a transaction starts, a rollback segment has to be assigned to that transaction. The Oracle server verifies whether a request has been made for a named rollback segment; if the request is for a named rollback segment, that segment is allocated to the transaction: SET TRANSACTION USE ROLLBACK SEGMENT roll_01;

When a request for a rollback segment is not made, Oracle allocates the rollback segment with the fewest transactions. Rollback information is generated during transaction processing, and the server writes it to the segment. A pointer specifies the location where the entries will be written. When an extent is full, the pointer must move to the next available extent. If the next extent is free or inactive, it is used. If the next extent is active, the transaction cannot use it, and it cannot skip over an extent; instead, an additional extent is allocated. This allocation of extents is called extending, and the rollback segment will extend until MAXEXTENTS is reached.

Rollback segments – shrinkage

The Oracle server can shrink a rollback segment until its size is less than or equal to OPTIMAL. OPTIMAL can be specified during CREATE ROLLBACK SEGMENT or ALTER ROLLBACK SEGMENT, and the OPTIMAL size should be based on the space required by an average transaction.

Shrinkage of a rollback segment

The server shrinks a rollback segment when the pointer of the rollback segment moves from one extent to another. During the shrinking process, the server releases inactive extents until an active extent is found, or until the size of the rollback segment is reduced to the optimal size.

Two conditions are required for the shrinking of a rollback segment: first, the current size of the rollback segment exceeds the OPTIMAL value; second, there are contiguous inactive extents in the rollback segment.

Changing storage parameters

You redefine the parameters when transactions generate more rollback data than was originally estimated. You increase the OPTIMAL value to prevent frequent allocation and deallocation of extents. You use ALTER ROLLBACK SEGMENT to redefine the MAXEXTENTS and OPTIMAL parameters of a rollback segment. In contrast, the INITIAL parameter can be defined only at the time the rollback segment is created.

You can also use ALTER ROLLBACK SEGMENT <> STORAGE(OPTIMAL 500K);

You can use ALTER ROLLBACK SEGMENT <> SHRINK [TO 500K]; note that an active extent cannot be deallocated.

Taking rollback segments offline: ALTER ROLLBACK SEGMENT rbs OFFLINE;

If you take a rollback segment offline while active transactions are using it, its status changes to PENDING OFFLINE until those transactions complete.

 

Troubleshooting rollback segments

A transaction can write to only one rollback segment and may fail if there is insufficient space in the rollback segment.

Insufficient space for a transaction in a rollback segment generates the ORA-01562 error. The error can be caused by two factors: insufficient space in the tablespace, or the MAXEXTENTS value for the rollback segment has been reached.

One possible cause of the ORA-01562 error is the ORA-01650 error, generated when there is insufficient space in the tablespace for the rollback segment to extend. As a solution, you can increase the space available in the tablespace by extending or adding data files. Alternatively, you can allow the data files to autoextend. Another possible cause of the ORA-01562 error is the ORA-01628 error. The server returns the ORA-01628 error when the limit imposed by MAXEXTENTS has been reached; in this case, no additional extents can be allocated to the rollback segment. You can re-create the rollback segment, or alter it with a higher MAXEXTENTS value. To re-create rollback segments, you first have to drop them.

Test

4. Select factors to be considered while planning rollback segments:

Number of concurrent transactions

Type of operation being performed

Read consistency errors

The Oracle server guarantees that a statement processes only the data that is committed prior to when the statement started. That means that statements

that are committed after the statement starts executing will not be seen by the statement.

If the Oracle server cannot construct a read consistent imnage of the data the ORA-01555 SNAPSHOOT TOO OLD error is generated. A read consistency error can occur when the transaction that made the change has been committed and the data rquired fi=or the read consistent image has been

overwritten by other transactions. There are 2 probable causes of the read consistency errors. First, the transaction slot in the rollback segment has been reused. Second, the before image of the rollback segment may have been overwritten by another transaction. Read consistency errors can be avoided with

creating rollback segments with higher MINEXTENTS, larger extent values or higher optimal parameters. Hopwever, read consistency errors can notbe avoided by increeeeasing MAXEXTENTS values.

Blocking session

A blocking session can cause unlimited growth of a rollback segment. When a transaction writes to an extent in a rollback segment and the extent runs out of space, the Oracle server attempts to reuse the next extent in the rollback segment.

If the extent contains even one rollback entry generated by an active transaction, the extent cannot be used. In addition, the rollback segment pointer cannot skip an extent and continue writing the transaction to a subsequent inactive extent; additional extents have to be allocated.

A blocking session has an active transaction that is writing to the extent that the server is trying to access.

When the blocking session is idle for a long time, the DBA needs to cancel the transaction. Once you have determined which idle transaction caused the blocking, you can kill the session. To check for any blocking transaction, you can use the V$SESSION, V$TRANSACTION, and V$ROLLSTAT views.

SELECT s.sid, s.serial#, t.start_time, t.xidusn, s.username

FROM V$SESSION s, V$TRANSACTION t, V$ROLLSTAT r

WHERE s.saddr = t.ses_addr AND t.xidusn = r.usn

AND ((r.curext = t.start_uext -1)

OR ((r.curext = r.extents - 1) AND t.start_uext = 0));

Error in taking a tablespace offline

A tablespace with one or more active rollback segments cannot be taken offline.

First, if you cannot take a tablespace offline because there are online rollback segments, you perform a query on DBA_ROLLBACK_SEGS to ascertain which rollback segments are online. Second, you take all the online rollback segments in the tablespace offline by using ALTER ROLLBACK SEGMENT segment_name OFFLINE. Third, you locate the transactions that are currently using the rollback segments by checking the V$ROLLSTAT view for segments in the PENDING OFFLINE mode. Fourth, you identify the sessions that initiated the transactions using a rollback segment that has the PENDING OFFLINE status; you use a join of the V$SESSION and V$TRANSACTION views to get the session information. Fifth, you terminate those sessions. Finally, you take the tablespace offline.

Undo statistics - querying

The V$UNDOSTAT view stores statistical data that indicates how the system is working. Each row records the statistics collected in the instance over a 10-minute interval. You use V$UNDOSTAT to estimate the amount of undo space needed for the current load. This view is available in both the SMU and RBU modes, but the statistics are collected only in the SMU mode.

The V$UNDOSTAT view contains 15 columns. UNDOBLKS, TXNCOUNT, BEGIN_TIME, END_TIME, and MAXCONCURRENCY are some of the columns in the V$UNDOSTAT view. The UNDOBLKS column stores the total number of undo blocks used by the transactions. The TXNCOUNT column returns the total number of transactions executed in the period. The MAXCONCURRENCY column returns the highest number of transactions executed concurrently in the period. The UNDOTSN column returns the ID of the last active undo tablespace. MAXQUERYLEN identifies the length of the longest query in seconds. NOSPACEERRCNT returns the number of out-of-space errors.
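
For example, a minimal query against the recent statistics:

SELECT begin_time, end_time, undoblks, txncount, maxconcurrency
FROM   v$undostat;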

General rollback segment information

Database performance deteriorates if insufficient rollback segments are available for transactions. To improve this situation, you identify any offline rollback segments that can be brought online before you create new ones.

You can view information about rollback segments in DBA_ROLLBACK_SEGS. This information is not available in the dynamic performance views, because those views show only the rollback segments that are currently in use by the instance. DBA_ROLLBACK_SEGS shows information about rollback segments, including ID, location, type, and status. A rollback segment can be identified by SEGMENT_ID, SEGMENT_NAME, and TABLESPACE_NAME. The type of a rollback segment is shown as PUBLIC or SYS.
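
A minimal query:

SELECT segment_name, tablespace_name, owner, status
FROM   dba_rollback_segs;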

Rollback segment statistics

When tuning the database, you can monitor the actual size of rollback segments against their optimal size. This can be done by querying V$ROLLNAME and V$ROLLSTAT. The columns in V$ROLLNAME are USN (the number of a rollback segment) and NAME (the name of a rollback segment). In V$ROLLSTAT, EXTENTS is the number of extents in the rollback segment, RSSIZE is the current size of the rollback segment in bytes, XACTS is the number of transactions using the rollback segment, and OPTSIZE is the optimal size.

To verify that the optimal size has been specified for a rollback segment, query V$ROLLSTAT and V$ROLLNAME.

SELECT n.name, s.extents, s.rssize, s.optsize

FROM V$ROLLNAME n, V$ROLLSTAT s

WHERE n.usn = s.usn;

Current rollback segment activity

While performing certain maintenance operations, you may want to find out which sessions are currently using rollback segments. You can retrieve this information from the V$SESSION and V$TRANSACTION views. Some of the columns in V$SESSION are SADDR (session address), USERNAME, SID, and SERIAL#.

You query XIDUSN in V$TRANSACTION to identify the rollback segment. UBAFIL, UBABLK, UBASQN, and UBAREC give the current location where the transaction is writing. USED_UBLK is the number of blocks of rollback information generated; START_UEXT is the rollback segment extent in which the transaction started writing; START_UBAFIL is the rollback segment file number in which the transaction started writing.

To query the name of the user, the rollback segment number, and the number of blocks of rollback information generated by each transaction:

SELECT s.username, t.xidusn, t.used_ublk

FROM V$SESSION s, V$TRANSACTION t

WHERE s.saddr = t.ses_addr;

 

Test

5. A query that retrieves statistics about rollback segments from the V$SESSION and V$TRANSACTION views:

SELECT c.username, d.xidusn, d.ubafil

FROM V$SESSION c, V$TRANSACTION d

WHERE c.saddr = d.ses_addr;

 

Identify the features of undo retention control

 

Can be altered dynamically by using the ALTER SYSTEM command

Affects undo settings immediately

Minimizes the occurrence of the snapshot too old error

Retains undo data across restarting the instance

Specified by using the UNDO_RETENTION parameter

 

The sequence of events in which rollback segments are used by transactions:

1. The server verifies if a rollback segment has been requested

2. The server allocates the rollback segment

3. The transaction writes to the current extent

4. The transaction finds the next free extent when the current extent is full

5. The server allocates an extent when the next extent is active

 

Identify the characteristics of a rollback segment that avoids read consistency errors

1. High MINEXTENTS

2. High OPTIMAL

3. Large EXTENT size

 

Identify the features of switching undo tablespaces

A DBA can change the UNDO_TABLESPACE parameter

A transaction can continue to run during a switch operation

A switch operation can execute on an online database

A switch operation requires an undo tablespace that is not in use

A pending offline tablespace can not be used by another instance

 

Select the features of automatic undo management

Ensure read consistency by using undo data

Simplify undo management by using undo segments

Control the retention period of undo data

Store undo data in a single undo tablespace

Prevent the ORA-30036 error by dynamic transfer of extents

 

Select guidelines for creating rollback segments

Ensure that the OPTIMAL value is based on the volume of transactions

Ensure that the rollback segment is in a user-managed tablespace

Ensure that you do not set PCTINCREASE

Ensure that the rollback segments are created in dedicated tablespaces

Ensure that the MAXEXTENTS value is other than UNLIMITED

 

Sequence of steps that the Oracle server uses to ensure read consistency when a transaction occurs

The current SCN is determined

The data is read

The before image of uncommitted changes is retrieved

The before images are applied from the undo segment to a copy of the data block

 

Sequence the steps to resolve an error in taking a tablespace offline

Query to find the online rollback segments in the tablespace

Take all the rollback segments in the tablespace offline

Locate transactions that are using the rollback segment

Obtain information about the user sessions and related transactions

Terminate sessions that have inactive transactions

Take the tablespace offline

 

 

 

Temporary segments

The amount of sort space necessary is determined by the value of the SORT_AREA_SIZE parameter. If the volume of data being sorted exceeds this value, the data is sorted using several sort operations, and the intermediate results of sorting are stored on disk.

Examples of statements that need temporary segments:

CREATE INDEX

SELECT with ORDER BY

SELECT with DISTINCT

SELECT with UNION

SELECT with GROUP BY

The server creates temporary segments in different tablespaces: either a permanent or a temporary tablespace.

Temp segments in permanent tablespaces

An instance can have more than one temporary segment in a permanent tablespace.

Temporary segments are created in a permanent tablespace only when executing statements need more sort space and the user has been assigned a permanent tablespace for sorting.

Using a temporary tablespace for sorting instead decreases server overhead.

 

Temporary segments in permanent tablespaces

SMON cleans up temporary segments that are no longer needed.

When a permanent tablespace is used for sorting, it may become highly fragmented. Therefore, it should be used exclusively for sorts.

 

Temporary segments in a temporary tablespace

An Oracle server creates only a single temporary segment for each temporary tablespace. There is no fixed limit to the number of extents that can be allocated to a temporary segment created in a temporary tablespace; the number of extents is limited by the tablespace's DEFAULT STORAGE clause.

A temporary segment in a temporary tablespace is created when the first disk sort occurs after startup. The server creates one sort segment for every instance that performs a sort operation; therefore, in RAC the server may have more than one sort segment in a temporary tablespace.

The details of the temporary segment are maintained in the SGA, in the Sort Extent Pool. Based on the information in the Sort Extent Pool, transactions that need more space for sorting reuse the extents in the temporary segment.

A temporary segment in a temporary tablespace is released when an instance shuts down.

Temporary segments - guidelines

You must set up different temporary tablespaces based on your sorting needs. Temporary tablespaces improve concurrency and decrease the frequent allocation and deallocation of extents.

Temporary segments use the default storage settings specified for their tablespace. You must define identical values for the INITIAL and NEXT parameters, and you must specify a PCTINCREASE of zero.

You must specify MAXEXTENTS for permanent tablespaces in which the server creates temporary segments; MAXEXTENTS does not affect temporary segments in temporary tablespaces.

Create tablespaces with different default storage clauses and assign them based on user sorting requirements, as in the sketch below.
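
A minimal sketch (the tablespace name, file name, user, and sizes are illustrative):

CREATE TEMPORARY TABLESPACE temp_batch
TEMPFILE '/u01/oradata/temp_batch01.dbf' SIZE 100M
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M;

ALTER USER batch_user TEMPORARY TABLESPACE temp_batch;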

 

Sort segment statistics

The views that are used to retrieve statistics for temporary segments are DBA_SEGMENTS, V$SORT_USAGE, and V$SORT_SEGMENT. The DBA_SEGMENTS view is used to retrieve information on temporary segments. The V$SORT_USAGE view is used to retrieve information on active sorts. The V$SORT_SEGMENT view describes the status of the Sort Extent Pool; this view is updated only when the tablespace is of the temporary type.

To estimate the size of the temporary tablespaces needed to store sort segments, you need to know the number of blocks and extents used by the largest sort operation. This information can be retrieved from the MAX_SORT_SIZE and MAX_SORT_BLOCKS columns.

SELECT max_sort_size, max_sort_blocks

FROM V$SORT_SEGMENT;

 

Temporary segment activity

You can tune a SQL statement based on the sort data from the V$SORT_USAGE view.

SELECT username, contents, extents

FROM V$SORT_USAGE;

 

Test

 

Identify guidelines for using temporary segments

 

Create temporary tablespace according to sort operations

Create and assign independently configured tablespaces

Specify the MAXEXTENTS for a permanent tablespace

Specify storage parameters for temporary tablespace

 

Select the features of the various types of segments

 

One temporary segment is created for every instance

Reclaimed by SMON after a SQL statement completed

Created in a temporary tablespace for the first sort task of an instance

Stores details in the Sort Extent Pool in a temporary tablespace

Fragments a permanent tablespace

 

 

 

 

 

Managing Indexes

 

When an index runs out of storage space you can modify the index storage parameters to increase the space available to the index. The storage available to an index includes the transaction and extent area allocated to the index. You can increase the space available to an index by increasing the maximum number of extents allocated to the index.

ALTER INDEX <> [STORAGE CLAUSE] [INITRANS INTEGER] [MAXTRANS INTEGER];

ALTER INDEX hr.emp_name

MAXTRANS 255

STORAGE (MAXEXTENTS 255);

 

Index - extents allocation

ALTER INDEX hr.emp_ix ALLOCATE EXTENT (SIZE 200K);

 

Index - extents deallocation

Index space is deallocated when the table is truncated. You can manually deallocate the extent:

ALTER INDEX hr.emp_ix DEALLOCATE EXTENT UNUSED;

 

Index information

DBA_INDEXES and DBA_IND_COLUMNS provide indexes information.

SELECT index_name, tablespace_name, index_type
FROM dba_indexes
WHERE owner = 'HR';

You can use DBA_IND_COLUMNS to display the column names and order in the index.

SELECT column_name, column_position,
column_length, index_name
FROM dba_ind_columns
WHERE index_owner = 'HR'
ORDER BY index_name;

 

Index space usage – monitoring

 

When you insert, update, or delete key values in an index frequently, the index can waste its acquired space over time. You can use the INDEX_STATS view to analyze the structure of an index.

ANALYZE INDEX hr.emp_ix VALIDATE STRUCTURE;

When you run ANALYZE INDEX, the INDEX_STATS view is populated with index data. The data from this view enables you to develop a history of the space used by an index. The percentage of space used by an index varies according to the frequency with which the server inserts, updates, or deletes the index keys. This percentage value is stored in the PCT_USED column of the INDEX_STATS view. You retrieve the PCT_USED data by querying the INDEX_STATS view.

When the index space being used drops below the average, you can condense the index space. You shrink the index by dropping and rebuilding it or by coalescing it, as sketched below.
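
A minimal sketch of the two options, reusing the index from the earlier examples:

ALTER INDEX hr.emp_ix REBUILD;

ALTER INDEX hr.emp_ix COALESCE;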

SELECT name, blocks, lf_rows, del_lf_rows

FROM index_stats;

SELECT name, blocks, pct_used

FROM index_stats;

Unused indexes – identifying

If an index is not being used, you can drop the index and improve the performance of the Oracle server.

You monitor index usage by using the ALTER INDEX command with the MONITORING USAGE or NOMONITORING USAGE clause. When you use this command, the Oracle server populates V$OBJECT_USAGE with statistics on the index being monitored. You can view information in V$OBJECT_USAGE only for indexes you own. When you specify the MONITORING USAGE clause in the ALTER INDEX command, the V$OBJECT_USAGE entry is reset for the specified index; the server clears the previous information.
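
For example, using the EMP_NAME index queried below:

ALTER INDEX emp_name MONITORING USAGE;

ALTER INDEX emp_name NOMONITORING USAGE;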

SELECT index_name, monitoring, used
FROM v$object_usage
WHERE index_name = 'EMP_NAME';

 

 


3. Managing an Oracle Instance

 

System Global Area

 

The Oracle server is made up of the Oracle instance and the Oracle database. The Oracle instance consists of a memory space called the SGA, the background processes, and the database-related files. The SGA stores the data and control information of the Oracle server. Every instance has its own SGA, and the Oracle server allocates memory to the SGA when an instance starts; this memory space is released when the instance shuts down. The memory region allocated to the SGA is shared. The SGA memory space must be non-swapped and non-paged: because the purpose of the SGA is to store data in memory for fast access, the SGA must be within main memory. If pages of the SGA are swapped to disk, its data is no longer quickly accessible.

The SGA consists of a group of memory structures. These structures are allocated space during instance startup. The three main memory structures are the shared pool, database buffer cache, and redo log buffer. There is also a large pool and a Java pool. You can view the SGA memory allocations by using SHOW SGA command.

The shared pool is used to store information such as the most recently executed SQL statements; the most recently used data dictionary data is also stored in the shared pool.

The database buffer cache stores the most recently used user data. This cache contains the write list and the least recently used (LRU) list. The write list holds buffers that contain data that has been modified but not yet written to disk. The LRU list holds free buffers, dirty buffers, and the pinned buffers that are currently in use. The redo log buffer is used to record the changes made to the database.

In releases prior to Oracle9i, the SGA was a static memory space. In Oracle9i, the SGA configuration can be changed without shutting down the instance. This means that the SGA can grow and shrink while the instance is running.

The dynamic SGA allows you to set the size to which the SGA can grow by using the SGA_MAX_SIZE parameter.
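
For example (the size is illustrative), a dynamic component such as the buffer cache can be resized online as long as the total SGA stays below SGA_MAX_SIZE:

ALTER SYSTEM SET db_cache_size = 64M;

SHOW SGA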

Background processes

When an instance is started the Oracle database creates a set of background processes – OS processes.

Mandatory background processes:

DBWn – database writer, LGWR – log writer, SMON – system monitor, PMON – process monitor, CKPT – checkpoint.

The DBWn process writes the changed data from the database buffer cache to the data files. The Oracle server processes all database writes in memory before the DBWn process writes the data to the data files. When a transaction modifies data blocks, the before image of the data is written to the rollback segment; the rollback segment to which the before image is written is present in the buffer cache. The committed changes are written to the data files. That means that the data files store the data dictionary and the user objects, which contain the last committed changes.

The LGWR process writes the changes that are registered in the redo log buffer to the redo log files. When data blocks are modified in the database buffer cache, the changes are recorded as entries in the redo log buffer, and the LGWR process writes those entries to the redo log files. The redo log files are used to record all the changes made to the data. The entries in the redo log buffer are also referred to as redo log data.

The SMON process performs an instance recovery by rolling back the pending transactions and recovering data that is recorded in the buffer, but not recorded in the database. The SMON cleans up the temporary segments that are no longer in use. The temporary segments are used to store data temporarily during transactions. The SMON process recovers the dead transactions skipped during a system failure. The dead transactions are the transactions that were active during the system failure. The SMON also puts together the fragmented free space in the data files.

The PMON process performs process recovery when a user process fails. The PMON cleans up the cache and unlocks the system resources held by the user process.

The CKPT process updates the database status information, such as the database file headers. This updating occurs at the checkpoint event. During a checkpoint, the server records the changes in the database buffer cache permanently to the data files. For example, when a redo log file is full, a log switch occurs and triggers a checkpoint.

In addition to the mandatory processes, there are optional background processes such as RECO, ARCn, and Dnnn.

 

Database files

An Oracle database is a collection of data. The server stores the data in the database and retrieves the information from the database. One of the responsibilities of the DBA is to manage the files. There are 3 types of files – data files, redo log files and control files.

In an Oracle database there can be one or more data files. A data file stores the data dictionary and user objects; the data files also store the last committed changes to the data.

The server reads the data from the data files and stores the information in the database buffer cache. The server does not immediately write modified or new data to the data files; the data is first stored in the database buffer cache.

Another type of database file is the redo log file. Every Oracle database has at least two redo log groups, where each group has at least one redo log file. The redo log files are used to record all the changes made to the data; this storage helps ensure recovery of data in case of instance failure. Oracle supports multiplexed redo logs to protect the redo log files against disk failures. This means that you can maintain two or more copies of the redo log files on different disks. The redo log file copies maintained on different disks are called mirrored log files. Each member in a log group is a mirrored log file of the same size.

The control files store the information that is needed to maintain and verify database integrity. These files store the database structure, the database name, the time stamp of creation, and the names and locations of the data files and redo log files. The information stored in the control files is used to identify the data files and redo log files. The control files must already be open when you open the database. If the DBA changes the structure of the database, the Oracle server automatically modifies the control file. When there is a system failure, the control files are used along with the redo log files for database recovery. Each Oracle database must have at least one control file.

Database related files

The parameter file is used to define the characteristics of an Oracle instance. When the Oracle server starts an instance, the server uses the parameter file to get the instance configuration. The instance configuration is specified by the initialization parameters in the parameter file. The parameter file specifies the name of the database, the amount of memory to allocate, the names of the control files, and so on.

The password file is another type of database-related file; it is used to authenticate or validate the privileged database users.

The archived redo log file is another type of non-database file. This file stores offline copies of the redo log files, which are used during recovery. Oracle creates archived redo log files by archiving the redo log files that are full; the online redo log files are reused only after they have been archived. If automatic archiving is enabled, the redo log files are archived by the ARCn process. In ARCHIVELOG mode, the database can be completely recovered from both instance and media failure.

 

Connecting to an Oracle server

There are 3 types of connections that a database user can establish with the Oracle server: local, two-tiered and multi-tiered.

When a user connects to a machine that runs the Oracle server to access a database, two processes are involved – the user process and the server process. A user process is created when a user starts a tool such as Oracle Worksheet or Oracle Forms. A user process is the mechanism that executes the code of an application or an Oracle tool such as OEM. The server process is created when a user connects to the Oracle server by specifying a username, a password, and a database name. The server process is created on the same machine where the Oracle server is running. A server process is the mechanism of interaction between the Oracle server and the user process; it receives the operations requested by the user process and performs them on the Oracle server. A communication pathway between a user process and the Oracle server is called a connection. When the user starts to interact with the Oracle server, a connection is established. The user can run a tool on the same machine as the Oracle server; in this case, the communication pathway uses the inter-process communication mechanism that is available on the server.

When the user runs a tool or an application on a client machine, the network software is used to communicate between the user and the Oracle server.

A specific connection of a user is called a session. A session starts when the Oracle server validates the user. The session ends when the user logs off or if there is an abnormal termination of the user process.

User processes

A user process is also called the client. In a client-server environment, this process runs on the machine to which the user directly logged on. The user process starts when a user starts a tool and ends when the user disconnects or there is a system failure. The user process contains the UPI (User Program Interface). The UPI is a mechanism used by a user process to communicate with a server process; it is the standard method of communication between any client tool or application and the Oracle server. The UPI generates a call to the Oracle server when a user makes a request to the server. Once Oracle processes the request, the result is sent to the user.

Server process

The user requests are submitted by the user process to the server process, and the server process handles these requests. The server process runs on the same machine as the Oracle server, while the user process can run on a separate client. The server process interacts closely with the Oracle server and can be dedicated or shared.

On a dedicated Oracle server, a server process handles a single user process. The server process is created when a user requests a connection and is terminated when the user disconnects from the server. Every server process uses an area of memory called the PGA, which is created when the server process is started. The server process includes the OPI (Oracle Program Interface). The OPI is used to communicate with the Oracle server at the request of a user process. The server process returns the status information of the user request and the results to the user process.

In a client-server environment there are separate user and server processes. In the MTS configuration several user processes share a single server process. This is in contrast to the dedicated server configuration, where each user process has a separate server process to connect to.

 

Oracle security types

The Oracle server offers multiple layers of security to manage the data and user information.

Security of data – The Oracle 9i database provides data security through enhancements to the Virtual Private Database (VPD) and selective data encryption capabilities. VPD offers fine-grained access control where users can only access the rows of data pertaining to them. Selective data encryption hides very sensitive data in the database from super-privileged users who might misuse their privileges. The data encryption also protects the data from malicious users attempting to read data files from the OS. Oracle 9i also provides security through new technologies such as Oracle Label Security and fine-grained auditing. Fine-grained auditing keeps track of all database activities, including the statements and their results. This auditing deters users from misusing their privileges because it tracks illegal actions. Oracle 9i attaches access control directly to the data through Oracle Label Security. Because the access control travels with the data, the security cannot be bypassed. The technology is ideal for application service providers who host data from multiple companies in the same database and therefore need to separate the data securely.

Security for users

Oracle 9i offers a number of security features that scale to Internet size, support millions of users and are practical to administer. These features include enhancements to database proxy authentication, public key infrastructure (PKI) support and the new Web single sign-on feature. Proxy authentication creates multiple, scalable database sessions that carry the identity of the Web user. This feature allows enterprises to identify users through all the tiers of the network. The single sign-on feature enables users to access Web applications throughout the enterprise using a single account. The DBAs use a single Lightweight Directory Access Protocol (LDAP) directory to manage all of the access control information, which means better security and lower cost. Oracle 9i provides PKI integration for easier deployment and management of PKI in the enterprise. PKI-based authentication systems issue digital certificates to users, which authenticate the users directly to servers in the enterprise without direct involvement of an authentication server. A certificate is a data structure that stores the user name, identification information, and public key. The public key is the key the PKI system uses to authenticate the user. A certificate is created when a user's public key is signed by a certificate authority; the certificate ensures that the public key actually belongs to that user.

Security for companies

The principal security challenge for a company is to store the data from different user communities separately. The simplest way of separating data is to create physically separate systems for each community, but this is costly. The Oracle 9i database reduces the cost for a hosting provider by allowing multiple user communities to share a single hardware and software instance while keeping the data of each user community separate. Oracle 9i secures the data by using the Oracle 9i Virtual Private Database and Oracle Label Security technologies.

 

Stages in processing a query

When a user issues a query, the user process submits it to the server process. The server process receives the query and processes it.

PARSE

  1. The server process receives the query from the user process.
  2. The server validates the syntax of the statement and the security privileges of the user, using the shared pool.
  3. The server builds a parse tree.
  4. The server process returns the status information to the user.

EXECUTE

  1. If the user request is an UPDATE or DELETE command, the server process locks all the rows that would be affected by the statement.
  2. The statement is executed.

FETCH

  1. The final stage in processing a query is the fetch. In this stage the server process retrieves the rows and returns them to the user.
  2. The server may need to perform one or more fetches to transfer the results to the user process.

 

Shared pool

The shared pool is a part of the SGA. In Oracle9i the shared pool can be dynamically resized to grow or shrink by using the ALTER SYSTEM SET SHARED_POOL_SIZE command. The shared pool size needs to be an integer multiple of the granule size, and the total SGA size cannot exceed the SGA_MAX_SIZE value. The granule is a new unit of allocation that represents a contiguous region of virtual memory.
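
For example, assuming the resulting total SGA still fits under SGA_MAX_SIZE, the shared pool could be resized as follows (the 64M value is purely illustrative):

SQL> ALTER SYSTEM SET SHARED_POOL_SIZE = 64M;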

The shared memory components of the shared pool are the library cache and the data dictionary cache. The library cache stores information about the most recently used SQL statements: the execution plan, parse tree, and text of the SQL statement. The execution plan contains the steps determined by the optimizer to run the SQL statement. The information associated with a particular SQL statement is stored in the library cache; if the statement is sent again by any user process before the execution plan is aged out, the execution plan and the parse tree can be reused. In addition to the parse tree, the library cache stores control structures such as locks and library cache handles. These handles store the location of the library cache data and the data on which the locks are applied. The data dictionary cache contains information about the objects in the database, the structure of the database, and the column definitions. The data dictionary cache stores the valid user names and their passwords and the user privileges for the database. The data dictionary cache is also known as the row cache.

 

Database buffer cache

When a query is processed, the server process searches the database buffer cache, which is a part of the SGA, for the data blocks needed for processing. You can set the size of the buffer cache. The database buffer cache holds the blocks of data that are read from the data files. If the server process is unable to find a particular block of data in the database buffer cache, the server reads the data from the data file and places a copy in the database buffer cache. The buffer cache is used to store the most recently and frequently used data blocks. The database buffer cache is a set of database buffers; the size of each buffer is equal to the size of a data block in the database. In Oracle9i you can change the buffer cache size dynamically by using the ALTER SYSTEM command. The allocation size has to be less than the SGA_MAX_SIZE value, which specifies the maximum size of the SGA. The total size of the database buffer cache equals the number of buffers multiplied by the data block size; in Oracle9i the DB_CACHE_SIZE parameter specifies this size directly in bytes:

Buffer cache size = number of buffers * DB_BLOCK_SIZE = DB_CACHE_SIZE

Two new parameters, DB_KEEP_CACHE_SIZE and DB_RECYCLE_CACHE_SIZE, are introduced in Oracle 9i. The DB_KEEP_CACHE_SIZE parameter sizes the KEEP pool, which retains data blocks in memory. The DB_RECYCLE_CACHE_SIZE parameter sizes the RECYCLE pool, which eliminates data blocks from memory when they are not in use. The buffer cache contains both modified and unmodified data blocks. The Oracle server uses an LRU algorithm to phase out the buffers that have not been accessed recently, which also makes space for new blocks to be accommodated in the database buffer cache.

 

Buffer cache advisory

In Oracle9i, you can dynamically resize the buffer cache to grow or shrink, within certain limits on the allocation size. The buffer cache advisory helps predict the behavior of various buffer sizes. You can dynamically resize the cache by using the ALTER SYSTEM command. The buffer cache advisory feature can be enabled or disabled to control the collection of statistics; you use these statistics to predict the behavior with different cache sizes.

ALTER SYSTEM SET DB_CACHE_ADVICE=ON|OFF|READY

When DB_CACHE_ADVICE is set to ON, the advisory is on and both CPU and memory overhead is incurred. If you attempt to set the value from OFF to ON, an ORA-4031 error can occur if the server is unable to allocate shared pool memory for the advisory. Setting the value to ON from READY does not incur this error because the memory is already allocated. When the advisory is set to READY, the advisory is off but the memory for the advisory remains allocated.

A new view, V$DB_CACHE_ADVICE, is introduced in Oracle 9i. This view is used to collect and display the buffer cache advisory information. This dynamic view contains the buffer cache statistics that are gathered. V$DB_CACHE_ADVICE contains rows that predict the estimated number of physical reads for different cache sizes. The rows also compute a physical read factor, which is the ratio of the estimated number of physical reads at a given cache size to the actual number of reads.
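
For example, once the advisory has been collecting statistics for a while, you can compare the estimated physical reads across candidate cache sizes (the block_size predicate here assumes an 8K standard block):

SQL> SELECT size_for_estimate, estd_physical_reads, estd_physical_read_factor
  2  FROM v$db_cache_advice
  3  WHERE name = 'DEFAULT' AND block_size = 8192;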

 

PGA (Program Global Area)

When a server process is started, the Oracle server allocates a PGA. The PGA is a memory buffer that contains the data and control information for a server process. The PGA is a writable memory region that cannot be shared. The PGA is allocated when a server process starts and is deallocated when the server process terminates. The PGA memory is classified to differentiate between tunable and untunable memory; this division determines the part of the PGA memory that is affected by automatic tuning. The tunable area is the area used by the SQL work areas. The untunable area comprises the rest of the PGA memory.

Contents of the PGA

  1. Sort area – used for storing intermediate results of sorting
  2. Session information – privileges for the session
  3. Cursor state – indicates a stage in the processing of different cursors that are being used by the session. It is a handle or pointer in memory that is associated with a specific SQL statement.
  4. Stack space – stores the variables and arrays of the session.

 

Execute phase of a DML statement

DML statements need two phases - parse and execute. In the PARSE phase a statement is passed from the user process to the server process, and the server process validates the statement for syntax errors. A parsed representation of the statement is loaded into a shared SQL area. Errors in the SQL syntax, together with errors due to user privilege issues, can be trapped during the parse phase. If the parse phase is successful, the next phase is the execute phase, which consists of several steps. An example of the execute phase:

  1. The server process reads the database blocks and the rollback blocks from the buffer cache.
  2. If the data is not found in the buffer cache, the server process reads it from the data files, placing a copy in the buffer cache.
  3. The server process places locks on the data that is going to be changed.
  4. The before-image is stored in the rollback segment.
  5. Finally, the server process records the changes in the buffer cache. The changes are also recorded in the redo log buffer, therefore protecting the after-image.
  6. The changed blocks in the buffer cache are marked as dirty buffers.

DELETE and INSERT statements use similar steps. The before-image for a DELETE contains the column values in the deleted row; an INSERT only needs the row location.

 

Rollback segments

When you issue a DML statement to change data, the server first saves the old values in a rollback segment. Rollback segments are used to reverse the changes when a transaction is rolled back. Rollback segments also ensure that other transactions do not see uncommitted changes. The Oracle server automatically assigns a transaction to the next available rollback segment; the assignment occurs for the first DML statement in the transaction. Transactions that contain only queries can read any rollback segment that may contain old values. The number of transactions that one rollback segment can handle is set when the rollback segment is created; the maximum is OS specific. If needed, rollback segments are used to reverse the changes made to the data. Rollback segments are also used for read consistency and to recover the database to a consistent state in case of failures.

 

Redo log buffer

The redo log buffer records all of the changes to the data during DML statement processing. The redo log buffer is a part of the SGA. You can set the size of the redo log buffer in bytes by using the LOG_BUFFER parameter. The redo log buffer stores redo entries. A redo entry records the block that is changed, the location of the change, and the new value. The redo entry does not identify the type of block where a change is made: it does not distinguish between a data block, an index block or a rollback segment block. The redo log buffer is used sequentially, and the changes made by one transaction are interleaved with changes made by other transactions. The redo log buffer is a circular buffer that is released for reuse after it fills up; however, the buffer is released only after all the existing redo entries are recorded in the online redo log files. The size of the redo log buffer is fixed, and the buffer is created at the time of instance startup.

 

Database writer

The DBWn writes data from the database buffer cache to the disk when:

1. The database writer writes to data files when the number of dirty buffers reaches a threshold value. The threshold is determined as a percentage ratio of free list waits to requests. Performance is better when the number of buffers is high.

2. DBWn writes to data files when a server process is unable to locate a free reusable buffer after scanning the threshold number of buffers.

3. The DBWn writes to data files when a timeout occurs (every three seconds).

4. The DBWn writes when a DBWn checkpoint occurs. The checkpoint is the means of synchronizing the database buffer cache with the data files.

 

Log writer

The LGWR writes redo entries from the redo log buffer to the redo log files. The LGWR writes only when one of four events occurs:

1. When the redo log buffer is one-third full

2. When a timeout occurs (every 3 seconds)

3. Before DBWn writes modified blocks to the data files

4. When a transaction commits

 

System change number

An SCN (system change number) is assigned on every transaction commit and uniquely identifies the transaction. When a transaction is committed, the commit is recorded in the rollback segment used by the transaction: the SCN is assigned to the transaction and recorded there. The SCN is also recorded in the redo log file. The SCNs assigned to transactions are unique for every transaction in the database.

An SCN is used as a time stamp by the Oracle database to synchronize data and to provide read consistency when data is retrieved from the data files. The highest SCN depicts the latest data. The Oracle database uses different schemes to create SCNs, such as the lock SCN scheme (incremental, used in a dedicated environment) and the Lamport SCN scheme (used in parallel server mode, based on the transactions committed in different instances).

An SCN is also used during recovery of the database.

 

Commit processing: Steps

A transaction is permanently recorded by issuing a commit.

1. A server process places a commit record and its SCN in the redo log buffer.

2. The LGWR writes all the redo log buffer entries up to and including the commit record to the redo log files in one contiguous write. After this write, the committed changes are not lost even in case of failure.

3. The server records the information that indicates the completion of the transaction. This step allows the resource locks held on tables and rows to be released.

4. The user is notified that the commit is complete.

These are the fast commit stages - only the redo log is written to. The DBWn independently flushes the dirty buffers to the data files, before or after the commit. Only one single write is needed per commit.

If multiple users issue commits simultaneously, Oracle piggybacks the commits into one write. This batching helps achieve less than one I/O per commit on a busy system.

Fast commit - Advantages

1. Performs sequential writes to the redo log files

2. Writes only the information needed to record the changes to the redo log file

3. Commits multiple transactions simultaneously into a single write

4. Needs only one synchronous write per transaction; the size of the transaction does not matter

 

Managing an Oracle instance

 

Privileged accounts

The SYSDBA privilege includes all system privileges, such as creating a database, starting up and shutting down the database, and altering the database. A SYSDBA can also change the database character set. With SYSOPER you can start up and shut down the database and alter some of the database properties, such as mount/open.

 

Setting up OS authentication

Special privileges are granted to DBA users for performing administrative tasks. Certain activities are done at the OS level, without logging on to the database, such as shutting down the database.

There are 2 methods available to authenticate valid DBA users: OS authentication and password file authentication. The selection of the method depends on whether the database is administered locally or remotely. If the database is managed locally, you can use either OS authentication or password file authentication. DBAs can also remotely administer a group of databases from a single remote client. If you have a secure connection to the database, either method can be used. If you do not have a secure connection, you must use password file authentication; it is the most secure method for remote database administration in that case.

Steps in setting up a user on NT with no password file – OS authentication (a connection example follows the steps):

  1. Create a group
  2. Add the DBA user to the group
  3. Set REMOTE_LOGIN_PASSWORDFILE = NONE in init.ora. This means that privileged connections are not allowed over non-secure connections.
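
With the OS group in place, a member of the group can then connect without supplying a database password (a minimal sketch):

SQL> CONNECT / AS SYSDBA
Connected.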

 

REMOTE_LOGIN_PASSWORDFILE

To start up a database instance from a remote machine, you need an initialization file on that machine. You set the value of the password file parameter in the init.ora file to access the remote database. REMOTE_LOGIN_PASSWORDFILE can be set to NONE (the password file does not exist and no privileged connections are allowed over non-secure connections; this is the default), EXCLUSIVE (used by only one database; the password file can contain users other than SYS, with privileges granted to individual users), or SHARED (only SYSDBA and SYSOPER are recognized). The greatest level of security is achieved by setting REMOTE_LOGIN_PASSWORDFILE to EXCLUSIVE.

 

Passwordfile

When the password file is used, the SYS user can grant DBA privileges to individual users. The password file is usually in the ORACLE_HOME\database directory on NT and in ORACLE_HOME/dbs on UNIX.

 

PARAMETER FILE

The parameter file is stored in the ORACLE_HOME/dbs directory on UNIX or the ORACLE_HOME\database directory on NT. The parameter file is only read during instance startup. The parameters are specified in the parameter=value format. Literals must be enclosed in double quotation marks. Some parameters, such as CONTROL_FILES, can take multiple values; these have to be in parentheses and separated by commas. Comment lines begin with #. The IFILE parameter is used to include additional files in the parameter file. If the OS is case-sensitive, filenames are also case-sensitive. A sample file illustrating the essential parameters follows the list below.

Essential parameters

DB_NAME

DB_CACHE_SIZE

DB_BLOCK_SIZE

COMPATIBLE

CONTROL_FILES

SHARED_POOL_SIZE

BACKGROUND_DUMP_DEST

USER_DUMP_DEST
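
A minimal init.ora fragment covering these essentials might look as follows (all names, paths and sizes here are illustrative only):

# sample init.ora fragment - values are examples, not recommendations
db_name = db01
db_block_size = 8192
db_cache_size = 48M
shared_pool_size = 56M
compatible = 9.0.0
control_files = (/u01/oradata/db01/ctrl01.ctl, /u02/oradata/db01/ctrl02.ctl)
background_dump_dest = /u01/admin/db01/bdump
user_dump_dest = /u01/admin/db01/udump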

 

Undo mode – initialization parameters

The Oracle server manages the rollback information by using rollback segments or undo tablespaces. You specify the undo management mode at instance startup by using the UNDO_MANAGEMENT and UNDO_TABLESPACE parameters. To use system-managed undo (SMU) instead of the rollback-managed (RBU) scheme, set UNDO_MANAGEMENT to AUTO. Setting UNDO_TABLESPACE selects the undo tablespace. When you omit UNDO_TABLESPACE, the server selects the first available undo tablespace in the database; when there is no undo tablespace available, the instance uses the SYSTEM rollback segment. The UNDO_RETENTION parameter specifies the retention time of undo information. The default is 300 seconds (5 minutes).

 

When the instance runs in SMU mode, the UNDO_SUPPRESS_ERRORS parameter controls whether error messages raised by RBU operations are suppressed or displayed. When UNDO_SUPPRESS_ERRORS=TRUE the error messages are suppressed. When the parameter file contains parameters that relate to RBU, they are ignored in SMU mode.

Example:

UNDO_MANAGEMENT=AUTO

UNDO_TABLESPACE=UNDOTBS

UNDO_RETENTION=300

UNDO_SUPPRESS_ERRORS=TRUE

 

To run in RBU mode, set UNDO_MANAGEMENT=MANUAL. When you do not specify UNDO_MANAGEMENT, the instance starts in RBU mode. When the instance starts up, the server brings a number of rollback segments online. The number is determined by the values of the ROLLBACK_SEGMENTS, TRANSACTIONS and TRANSACTIONS_PER_ROLLBACK_SEGMENT parameters. The ROLLBACK_SEGMENTS parameter allocates rollback segments to the current instance. When you set this parameter, the instance acquires all of the rollback segments you specify, even if the number exceeds the minimum number needed by the instance. The parameter is static.

The TRANSACTIONS static parameter specifies the maximum number of concurrent transactions. The size of the SGA and the number of rollback segments that are allocated increase with the parameter value. You specify the number of concurrent transactions that each rollback segment can handle by using the TRANSACTIONS_PER_ROLLBACK_SEGMENT parameter. You determine the minimum number of rollback segments acquired at startup by dividing the TRANSACTIONS value by the TRANSACTIONS_PER_ROLLBACK_SEGMENT value. The example below yields 51/10, rounded up to 6.

UNDO_MANAGEMENT=MANUAL

ROLLBACK_SEGMENTS=(rbs_01,rbs_02,rbs_03)

TRANSACTIONS=51

TRANSACTIONS_PER_ROLLBACK_SEGMENT=10

MAX_ROLLBACK_SEGMENTS=5

 

License parameters

A user site can have a concurrent-user or a named-user license. When your site is licensed for concurrent use, you can track and limit the number of sessions that are concurrently connected to the database. You can use a named-user license to limit the number of users. When you have a license, you must control the licensing facilities. You can enable licensing facilities and set the appropriate limits by setting the LICENSE_MAX_SESSIONS (when the number of sessions is exceeded, only restricted-session users can connect; the default is zero, which means unlimited), LICENSE_MAX_USERS and LICENSE_SESSIONS_WARNING (additional users can connect, but the server writes a message to the alert file for every new connection) parameters.

 

Commonly modified parameters

The most commonly modified parameters are IFILE, LOG_BUFFER, MAX_DUMP_FILE_SIZE, PROCESSES, SQL_TRACE and TIMED_STATISTICS.

 

SPFILE

SPFILE stands for Server-side Parameter File. You use an SPFILE in RAC, where multiple Oracle instances access a single database. The parameters that relate to all instances are termed global parameters. You create an SPFILE from an init.ora:

CREATE SPFILE='$ORACLE_HOME/dbs/spfiledb01.ora' FROM PFILE='$ORACLE_HOME/dbs/initdb01.ora';

Only a SYSDBA can create an SPFILE. The CREATE SPFILE statement can be executed before or after the database is opened.

In RAC multiple instances access the same SPFILE. In the RAC environment a method called Cache Fusion transfers the latest data blocks between instances: the instance that holds the latest data blocks transfers them to the instance that wants to perform an operation on the data. The transfer of the data blocks is performed through the buffer caches. During the transfer of data from one instance to another, the system keeps track by using a past image (PI) of the data: every instance maintains an image of the data that was transferred to another instance, and each PI is assigned an incremental SCN. You can configure multiple SPFILEs in RAC, but only one can be used at a time. You create a single SPFILE in RAC by combining the PFILEs from the various instances into a text file and changing the parameters to the SID.parameter=value format. If you want a parameter to apply to all instances, you use the *. prefix, for example *.UNDO_RETENTION=10. You then create the SPFILE from the text file.
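
A fragment of such a combined text file might look like this before it is converted to an SPFILE (the instance names db011 and db012 are illustrative; the *. prefix applies a value to all instances):

# combined parameter file for a two-instance RAC database (example names)
*.undo_management = AUTO
*.undo_retention = 10
db011.instance_number = 1
db012.instance_number = 2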

 

Instance startup stages

NOMOUNT – only the instance is started. You use this mode when you create the database or re-create control files. When the instance is started, the server uses the parameter file to allocate the SGA and start the background processes. In the NOMOUNT mode the trace and alert files are also opened. The database name must be specified either in the DB_NAME parameter or in the STARTUP command.

MOUNT – used for renaming data files, enabling and disabling redo logs, and performing full database recovery. During this stage the Oracle server locates the control files. Once the control files are opened, they are read to obtain the names and status of the data files and redo log files. However, at this point no verification checks are performed.

OPEN – all users can connect. The server verifies that all of the data files and redo log files can be opened, and then opens the online data files and redo log files. At the time of opening, if any of the data files are not present, the server returns an error and does not open the database. In this stage the server synchronizes the data files. Finally, SMON performs instance recovery, if needed.
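
The three stages can also be stepped through explicitly from SQL*Plus (assuming a SYSDBA connection):

SQL> STARTUP NOMOUNT
SQL> ALTER DATABASE MOUNT;
SQL> ALTER DATABASE OPEN;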

 

ALTER SYSTEM QUIESCE RESTRICTED – this command waits until all non-DBA transactions are complete. In addition, inactive sessions are not allowed to become active. The command cannot be used for backup. To return the database to normal operation, use ALTER SYSTEM UNQUIESCE.

 

Shutting down the database

  1. First, Oracle writes all the buffer cache changes to the data files. The contents of the redo log buffer are also written to the online redo log files.
  2. The online data files and redo log files are closed.
  3. The database is closed but still mounted, and the control file is open.
  4. The database is dismounted – the control file is closed, but the instance is still running.
  5. The instance is shut down – the trace and alert files are closed.
  6. The SGA is deallocated from memory and the background processes are terminated.

 

Dynamic performance tables

The SYS user owns the dynamic performance tables. Most of the Oracle users are not allowed to use these tables.

V$SPPARAMETER – new to Oracle 9i. Shows the contents of the SPFILE. The V$SPPARAMETER view returns NULL values when a PFILE was used for startup. The columns are SID, NAME, VALUE, ISSPECIFIED, ORDINAL and UPDATE_COMMENT.

Most V$ performance views are accessible only after the database is mounted. However, the V$ views that access data from memory are accessible even in NOMOUNT mode (an example follows the list below).

ACCESSIBLE IN NOMOUNT

V$PARAMETER – initialization parameters.

V$SGA – summary information on the SGA. Used in tuning the shared server.

V$OPTION – options installed

V$PROCESS – currently active processes – SMON, PMON, etc

V$SESSION – current session information

V$VERSION – database and OEM versions

V$INSTANCE – the state of the current instance.
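
For example, with the instance started but the database not yet mounted, you can confirm the instance state (the STATUS column returns values such as STARTED, MOUNTED or OPEN):

SQL> SELECT instance_name, status FROM v$instance;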

 

ACCESSIBLE IN MOUNT

V$THREAD – redo log groups, thread # status of the thread

V$CONTROLFILE – names of the control files

V$DATABASE – db name and creation date

V$DATAFILE – data file name, status and other control file info

V$DATAFILE_HEADER – data file header info

V$LOGFILE – online redo log files

 

Dynamic buffer cache – parameters

The buffer cache in Oracle9i can be dynamically resized to grow or shrink, and it consists of independent subcaches for the different buffer pools. The DB_BLOCK_SIZE parameter determines the primary block size, which is used for the SYSTEM tablespace. You use ALTER SYSTEM to dynamically resize the buffer cache – for example, ALTER SYSTEM SET db_cache_size=50M. Three new parameters define the sizes of the buffer pools for the primary block size, and all of them can be changed dynamically:

DB_CACHE_SIZE – default 48M; the size of the default buffer pool for buffers with the primary block size. It has to be at least one granule in size. A value of zero is illegal, because the default pool for the standard block size (the block size of the SYSTEM tablespace) cannot be eliminated.

DB_KEEP_CACHE_SIZE – the size of the KEEP buffer pool. The KEEP buffer pool retains the blocks of schema objects in memory. The size of the buffers in the KEEP buffer pool is the primary block size.

DB_RECYCLE_CACHE_SIZE – the RECYCLE buffer pool eliminates data blocks from memory as soon as they are no longer needed. The size of the buffers in this pool is the primary block size. No default.
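
Taken together, the buffer pools might be configured like this in the parameter file (the sizes are arbitrary examples and must respect the granule size and SGA_MAX_SIZE limits):

db_block_size = 8192
db_cache_size = 48M
db_keep_cache_size = 16M
db_recycle_cache_size = 8M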

 

Dynamic initialization parameters

Some parameters can be changed while the instance is running. You can set a dynamic initialization parameter with either ALTER SESSION SET parameter=value, which applies only to the current session, or ALTER SYSTEM SET parameter=value, a global change that remains valid until the instance is shut down or the value is changed again. ALTER SYSTEM SET ... DEFERRED modifies the parameter value only for sessions that connect in the future; sessions that are already connected do not see the changed value. To display information on all modified parameters, query V$PARAMETER or V$SYSTEM_PARAMETER (the latter is independent of the session). They have the following columns (an example follows the list):

ISSES_MODIFIABLE – parameter can be modified by the ALTER SESSION

ISSYS_MODIFIABLE – can be modified by ALTER SYSTEM

ISMODIFIED – parameter already modified

ISADJUSTED – if the RDBMS adjusted the input for more appropriate value
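
For example (the parameter choices are illustrative; the final query lists parameters whose values have been changed since startup):

SQL> ALTER SESSION SET NLS_DATE_FORMAT = 'DD-MON-YYYY';
SQL> ALTER SYSTEM SET TIMED_STATISTICS = TRUE;
SQL> SELECT name, value FROM v$parameter WHERE ismodified <> 'FALSE';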

 

Diagnostic files – difference

Trace files are used to track internal errors. They provide information for tuning applications and instances, contain information about internal errors detected by background processes, and can also be generated by server processes.

Alert files contain a log of messages in chronological order, including messages for errors that occur while the Oracle instance is running.

The BACKGROUND_DUMP_DEST is usually ORACLE_HOME\RDBMS\TRACE on NT or $ORACLE_HOME/rdbms/log on UNIX.

Trace file parameters

BACKGROUND_DUMP_DEST

USER_DUMP_DEST

MAX_DUMP_FILE_SIZE

 

 

 

 

Globalization Support

 

 

Oracle globalization support enables you to interact with a database in your native language and run applications in different language environments. Different countries may use different data formats; with globalization support you can display data in the format needed by different regions or territories.

There are certain restrictions on the use of globalization support, for example:

  1. All keywords are displayed in English.
  2. You cannot use a fixed-width multi-byte character set as the database character set.
  3. Variable-width character sets use space more efficiently than fixed-width character sets: they use only the space that is needed to store the data, whereas fixed-width character sets allocate the same amount of memory irrespective of the actual data size.

Character encoding schemes

You can use character encoding schemes to interpret character data as meaningful symbols when it passes between a workstation and a host machine. Different classes of character encoding schemes are available, such as single-byte character sets, varying-width multi-byte character sets, fixed-width multi-byte character sets and Unicode character sets.

The single-byte character set is used for European and Middle Eastern languages. In this character set each character occupies one byte. The two types of single-byte character sets are single-byte 7-bit and single-byte 8-bit. In the single-byte 7-bit character set, 7 bits are used to define up to 128 characters. This character set supports the 26 letters of the Latin alphabet, A to Z. The single-byte 7-bit character set normally supports only one language.

In the single-byte 8-bit character set, eight bits are used to define up to 256 characters. This character set supports a group of languages, such as a group of Western European languages. Some single-byte character sets are ASCII 7-bit American, ISO 8859 West European, and EBCDIC code page 500 8-bit Western European.

Another character-encoding scheme is the varying-width multi-byte character set. This encoding scheme represents each character with one or more bytes and supports Asian languages. Some examples of varying-width multi-byte character sets are Japanese Extended UNIX Code and Chinese GB2312. Some varying-width multi-byte character sets use the value of the most significant bit to differentiate between a single-byte and a double-byte character; others use control codes to differentiate between single-byte and double-byte characters.

Fixed-width multi-byte character sets provide support similar to varying-width character sets, except that the format is a fixed number of bytes for each character.

Some examples of fixed-width multi-byte character sets are 16-bit Japanese (JA16EUC), 16-bit Japanese (JA16SJIS) and the Korean Microsoft Windows code page 929. The traditional Chinese Microsoft Windows code page 950 also belongs to the fixed-width multi-byte character sets.

The Unicode character set is another type of character encoding scheme. The Unicode character set represents all of the characters needed for computer operations, including technical symbols and characters used in publishing. The Unicode character set version 2.0 can represent a total of 38,885 characters. The Unicode character set offers different character formats. One format is the Universal Character Set two-byte form, UCS2. Another is the Universal Character Set Transformation Format, UTF8. UCS2 is a double-byte fixed-width format, and UTF8 is a varying-width multi-byte format.

Character set features

Preferably, you select a character set based on the corresponding character encoding scheme that is supported by the OS platform; the character sets, however, do not have to be the same. The Oracle server provides database character sets and national character sets. The database and national character sets are defined at the time of database creation and cannot be changed afterwards; re-creating the database is the only way to change the character sets. The database character sets define the storage of data columns of the data types CHAR, VARCHAR2, CLOB and LONG.

The national character sets define the storage of data columns of the types NCHAR, NVARCHAR2 and NCLOB. The national character set does not support the LONG data type. Both the database and the national character sets can store varying-width multi-byte character sets. Fixed-width multi-byte character sets can only be stored by the national character sets.

Guidelines on selecting the database and national character sets

Select a closely related database and national character set

Consider performance for string operations

Consider the shortcomings of converting between character sets

 

Globalization parameters – server side

There are 3 methods for specifying the Globalization support parameters:

  1. Initialization parameters
  2. Environment variables
  3. ALTER SESSION parameters

You can use initialization parameters to specify the language dependent behavior default settings for a server. The default settings for the server do not affect settings for the client. You can use the environment variables to specify the language dependent behavior defaults for a session on the client machine. These default settings override the default settings for the server. You can use alter session parameters to change language-dependent behavior settings for a session. These parameters override the default values for a session or for a server. The initialization parameters that specify the default language dependent behavior are NLS_LANGUAGE and NLS_TERRITORY.

You can use NLS_LANGUAGE to specify the value for the language dependent session characteristics such as error messages. The server messages are stored in binary format files. Multiple versions of these files exist, one for each supported language. The files use specific naming conventions. For example, the name of the file storing server messages in French is ORAF.MSB. Another session characteristic is the language used for the display of day and month names and their abbreviations. For example, if the default language is French, the day and month are also in French. The language dependent session characteristics also include symbols such as AM, PM, BC, etc. The language dependent session characteristics also include a default sorting sequence for character data. The sorting sequence for character data occurs when the ORDER BY clause is used.

The NLS_TERRITORY parameter specifies the conventions for the default date and numeric formatting characteristics such as date format, local currency format, and ISO currency symbol. Some European territories have dual currency support for the EURO currency. Some of the other default conventions that are specified by the NLS_TERRITORY are the decimal character and group separator, the list separator, the week start day, the credit and debit symbols and the ISO week number calculation.

The NLS_LANGUAGE and the NLS_TERRITORY parameters can be specified either as initialization parameters or as ALTER SESSION parameters, but they cannot be specified as environment variables.

Derived Globalization Support Parameters

To override the default values of NLS_LANGUAGE or NLS_TERRITORY you can set the respective derived parameters.

The derived parameters of NLS_LANGUAGE are NLS_DATE_LANGUAGE and NLS_SORT. The NLS_DATE_LANGUAGE parameter specifies the language for the day and month names. This parameter overrides the value specified by the NLS_LANGUAGE parameter, and it has the same syntax as NLS_LANGUAGE. The derived parameter NLS_SORT specifies the type of linguistic sort sequence for character data. The sort sequence is either binary or the name of a linguistic sort sequence. NLS_SORT overrides the value defined by NLS_LANGUAGE. The derived parameters of NLS_TERRITORY are NLS_CURRENCY, NLS_ISO_CURRENCY, NLS_DATE_FORMAT and NLS_NUMERIC_CHARACTERS. The NLS_CURRENCY parameter specifies a new local currency symbol, which is the character string returned by the number format mask L. The NLS_CURRENCY value overrides the value defined by NLS_TERRITORY.

The NLS_ISO_CURRENCY parameter specifies the character string returned by the number format mask C. This is the ISO currency symbol. You can use the NLS_ISO_CURRENCY parameter to eliminate the ambiguity produced by the local currency symbols. For example a local currency symbol $ can be either US dollar or Australian dollar. Unique international currency symbols for specific territories or currencies are defined by the ISO specifications. According to the ISO specification, the ISO currency symbol is USD for the US dollar and AUD for the Australian dollar.

The NLS_DATE_FORMAT parameter defines a new default date format to be used with the TO_CHAR or TO_DATE functions. The value of this parameter can be any valid date format mask, but the value must be enclosed in double quotation marks. The NLS_NUMERIC_CHARACTERS parameter specifies a new decimal character and group separator. You can use any single-byte character to define the decimal or group separator, but the character used for the decimal separator must be different from the character used for the group separator. For example, the format mask 999G999D99 displays 555,555.55. Certain characters cannot be used for either the decimal or the group separator: +, -, <, > and the numeric symbols.

Overriding the default behavior

The value of the NLS_LANG environment variable overrides the values of the globalization support parameters. The syntax is NLS_LANG=<language>_<territory>.<charset>

The language component in the syntax specifies conventions such as the language used for the server messages, the day names, and the month names. The value of the language component overrides the value of NLS_LANGUAGE, and the language component supports the same features as NLS_LANGUAGE.
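
For example, a French client in France using a Western European character set could set (an illustrative value):

NLS_LANG=FRENCH_FRANCE.WE8ISO8859P1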

 

Overriding server settings for a session

When NLS_LANG is defined for a client, an implicit ALTER SESSION command is executed for any session. ALTER SESSION SET NLS_LANGUAGE=<language> NLS_TERRITORY=<territory>;

Globalization and sorting

The conventional sorting mechanism is binary sorting. By setting the NLS_SORT parameter, you specify the type of sort for character data and override the value defined by NLS_LANGUAGE.

The linguistic sorting of multi-byte character sets is not supported. Therefore, if the database character set is multi-byte, the result of any sort follows the binary sort sequence.

ALTER SESSION SET NLS_SORT=german;

NLS_COMP controls the method used by the comparison operators such as <> and =. If the parameter is set to ANSI the comparison is done by the linguistic sorting sequence.

There are certain parameters that are not supported in the SQL functions - NLS_LANGUAGE, NLS_TERRITORY and NLS_DATE_FORMAT.

In Oracle 9i you can create linguistic indexes by using the NLSSORT SQL function. CREATE INDEX nls_ename ON employees (NLSSORT(first_name, 'NLS_SORT=SPANISH'));

 

Globalization support view

 

NLS_INSTANCE_PARAMETERS - what is set in the parameter file.

SELECT * FROM nls_instance_parameters;

 

NLS_SESSION_PARAMETERS - session NLS parameters.

SELECT * FROM NLS_SESSION_PARAMETERS;

 

V$NLS_VALID_VALUES - dynamic performance view. Used for retrieving the valid values for LANGUAGE, TERRITORY, etc.

 

V$NLS_PARAMETERS - dynamic. current values for globalization support parameters.

 

Practice:

 

1. Use ALTER SESSION to change globalization support language to Japanese and country to Japan.

SQL> ALTER SESSION SET nls_language=japanese nls_territory=japan;

 

2. Retrieve information about the characterset

SELECT parameter, value FROM nls_database_parameters

WHERE parameter LIKE '%CHARACTERSET%';

 

 

CREATING A DATABASE

 

OFA Guidelines

The OFA standard provides guidelines for organizing the OS directory structure. To create an OFA-compliant directory structure, you must create a different directory for each database under the Oracle data directory.

You place the control files, redo log files, and data files under the specific database directory. Another guideline for creating an OFA-compliant directory structure is to store groups of objects with different fragmentation characteristics in different tablespaces.

You must place objects that contend for disk resources in different tablespaces. This means that you separate the data that participates in disk contention onto different physical disks. You place tables that need daily or monthly backups in different tablespaces. You also keep at least 2 different copies of the control file on different physical disks, and every online redo log group must have at least 2 online redo log members on 2 different disks.

 

Oracle software directory structure

ORACLE_HOME - main Oracle software directory.

BIN - binary files - .dll and .exe files.

DBS - message files and scripts for all databases.

ORADATA - contains a subdirectory for every database, including control files, redo log files and data files.

ORAINST - installation files.

RDBMS - server files, scripts and libraries. Also contains trace files and readme files.

PLSQL - message files and SQL scripts and demo files for PL/SQL.

NETWORK - message files for Oracle Net.

 

Database creation requirements

Distribution of OS files: OMF to easily maintain your files.

Size of data files, database blocks (standard plus four non-standard), tables and indexes.

Characterset of the database.

Use of undo tablespaces

 

Database creation - prerequisites

Oracle software must be installed

The directories for software and database files must be established

The DBA must have a privileged account authenticated by the OS or a password file.

The server has to have enough memory to start an instance (at least 6.5M for small databases)

Sufficient space for database files, control files and redo log files.

 

Physical database design - guidelines

2 control files on different disks

Multiplexed online redo log files on different disks

Separate application, temporary and index data files on different disks

 

Database: manual creation

Unique instance and database name

Specify character set - US7ASCII or WE8MSWIN1252.

Set the OS variables - register instance, database character set, etc.

Create parameter file - SGA size, name of the database, control files,size of the blocks, etc.

Set Oracle services

Create password file

Start an instance

Create the database

Execute scripts generating the data dictionary views

 

Environmental variables

The parameters reside in the HKEY_LOCAL_MACHINE\SOFTWARE\ORACLE subkey. In the ORA_NLS33 registry variable, enter the default character set, for example US7ASCII.

 

Creating password files

When you grant SYSDBA and SYSOPER privileges to a user the user name and privilege information are added to the password file.

D:\> ORAPWD FILE=d:\oracle\ora9i\orclpwd.ora PASSWORD=eagle ENTRIES=5

 

Parameter file: types

PFILE is a text file; it does not record changes to dynamic parameters. After database creation you can use either a PFILE or an SPFILE.

SPFILE is recommended for RAC. It is also recommended for performance management because changes to it persist across shutdowns.

Only SYSDBA and SYSOPER can create SPFILE.

 

SQL> STARTUP NOMOUNT PFILE=e:\ora9i\oradata\ora9i\initora9i.ora

 

SQL> CREATE DATABASE ora9i
DATAFILE 'e:\ora9i\oradata\ora9i\system1.dbf' SIZE 250M REUSE
LOGFILE 'e:\ora9i\oradata\ora9i\log1.log' SIZE 5M REUSE,
        'e:\ora9i\oradata\ora9i\log2.log' SIZE 5M REUSE,
        'e:\ora9i\oradata\ora9i\log3.log' SIZE 5M REUSE
CONTROLFILE REUSE
MAXLOGFILES 32
MAXDATAFILES 30
CHARACTER SET WE8MSWIN1252
DEFAULT TEMPORARY TABLESPACE def_temp
  TEMPFILE 'e:\ora9i\oradata\ora9i\def_temp.tmp' SIZE 10M REUSE
UNDO TABLESPACE undo_tbs
  DATAFILE 'e:\ora9i\oradata\ora9i\undo1.dbf' SIZE 50M REUSE;

 

Database creation errors

Syntax errors

Files already exist

OS errors

Insufficient space for files

 

To recreate after the failure:

Shutdown instance

Delete all OS files

Restart the instance in NOMOUNT

Recreate with REUSE

 

 

 

 

Data dictionary and standard packages

 

The data dictionary is the central source of information for an associated database. The data dictionary records, verifies and provides the latest information about the associated database. The data dictionary describes the database and its objects:

Logical structure of the database

Physical database structure

Names, definitions of schema objects and space allocated

Integrity constraints

Database users and privileges

Auditing information

 

Data dictionary components

The data dictionary consists of base tables (the first objects, created by the Oracle server during the SQL.BSQ run) and data dictionary views, created during database creation with CATALOG.SQL in ORACLE_HOME\rdbms\admin\.

CATALOG.SQL is automatically run by the Oracle server

CATALOG.SQL is run manually during migration or upgrading to a new Oracle release

Needs a SYS connection

 

Types of dictionary views

DBA_ -info from all the schemas

ALL_ - info about all objects accessible to the user

USER_ - info about objects owned by the user
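
The DICTIONARY view itself can be used to find the views relevant to a topic (the LIKE pattern here is an arbitrary example):

SQL> SELECT table_name, comments FROM dictionary
  2  WHERE table_name LIKE 'DBA_SEG%';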

 

Views

DICTIONARY, DICT_COLUMNS – general overview

DBA_TABLES, DBA_OBJECTS, DBA_LOBS, DBA_TAB_COLUMNS, DBA_CONSTRAINTS – information about objects (tables, constraints, LOBs and columns)

DBA_USERS, DBA_SYS_PRIVS, DBA_ROLES – user privileges and roles

DBA_EXTENTS, DBA_FREE_SPACE, DBA_SEGMENTS – space allocation for database objects

DBA_ROLLBACK_SEGS, DBA_DATA_FILES, DBA_TABLESPACES – general database structures

DBA_AUDIT_TRAIL, DBA_AUDIT_OBJECTS, DBA_AUDIT_OBJ_OPTS – auditing information

 

Dynamic performance tables

 

The dynamic performance tables are called virtual tables because they exist in memory only as long as the instance is running, and they contain real-time information. The dynamic tables contain information about the availability of objects – locks, session status, etc. The underlying tables are named X$ and are owned by SYS; they cannot be updated. The dynamic performance views built on them are identified by the V_$ prefix, and their public synonyms are prefixed V$. The V$FIXED_TABLE view lists all of the dynamic performance tables and views.
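
For example, V$FIXED_TABLE can be queried to browse the available dynamic performance objects (the LIKE pattern is an arbitrary example):

SQL> SELECT name FROM v$fixed_table WHERE name LIKE 'V$SES%';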

 

Creating PL/SQL functionality

The CATPROC.SQL script is run automatically when you create a database. It is in the ORACLE_HOME\rdbms\admin directory.

 

Administrative scripts

 

UTL*.SQL – creates additional views and tables for database utilities – UTLSAMPL.SQL creates and populates sample tables, the UTLDTREE.SQL enables metadata management by creating objects and views that show object dependencies.

 

CAT*.SQL – create data dictionary tables and views – CATALOG.SQL, RMAN is created with CATRMAN.SQL.

 

DBMS*.SQL – creates database package specifications, DBMSPOOL.SQL enables DBAs to lock PL/SQL packages, SQL statements and triggers into the shared pool.

 

PRVT*.PLB – creates package bodies for the packages created by the DBMS*.SQL scripts. The PL/SQL source code in these packages is in encrypted format.

 

NO scripts – remove the data dictionary objects for various optional services or components that were installed with the Oracle server, such as queuing, partitioned tables and indexes.

 

Migration scripts – for migration, upgrades and downgrades.

 

Java scripts – only useful if you install Jserver with your Oracle database.

 

 

Stored procedures: Properties

Stored procedures are stored in the data dictionary of the database. They do not need recompilation before each use, can be used by many users at the same time, and reduce memory requirements.

 

Packages

A package is a group of stored procedures, functions, variables, and data types that are stored in the database as a single unit. A package consists of a specification and a body, which are stored separately in the database. The package specification is an interface to the application and declares all public constructs of the package. The public constructs of a package are the data types, variables, constants, exceptions, cursors and subprograms that can be directly accessed by the users of the package. The package body defines the public constructs declared in the specification. The package body also declares and defines private constructs. The private constructs of a package are variables, cursors and procedures that are hidden from the users of the package and can be accessed only by the procedures within the package.

Another property of a package is that when it is called for the first time the entire package is loaded into the memory in one operation. Finally, the contents of a package can be shared by multiple applications and users at the same time. A package is stored in memory after it is written and compiled. This property allows multiple users to access a package simultaneously.

 

Stored program units – benefits

 

You can use stored PL/SQL program units to access and manipulate the data. They are stored as schema objects. Procedures, functions and packages are examples of stored program units. These units reduce disk retrieval because, after they are called for the first time, they are kept in the shared pool. Another benefit of stored units is that they enforce data security: database operations can be restricted by allowing users to access data only through procedures and functions. Finally, multiple users can share a single copy of the program unit.

 

Packages supplied by Oracle

 

DBMS_LOB – provides routines for performing operations on LOBs

DBMS_SESSION – provides access to ALTER SESSION and SET ROLE statements.

DBMS_UTILITY – provides utility routines to analyze and compile schema and resolve names.

DBMS_SPACE – provides information about availability of segment space.

DBMS_ROWID – creates and obtains information about ROWIDs

DBMS_SHARED_POOL – enables DBAs to keep objects in the shared pool and protect them from being aged out.
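
For example, assuming DBMSPOOL.SQL has been run to install the package, DBMS_SHARED_POOL can pin a frequently used package in the shared pool so that it is not aged out:

SQL> EXECUTE DBMS_SHARED_POOL.KEEP('SYS.STANDARD');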

 

Invalid objects – specifications

Every schema object has a valid or invalid status (a view based on a table that has been dropped). You can retrieve information about the status of the objects by querying DBA_OBJECTS:

SELECT object_type, object_name
FROM dba_objects
WHERE object_type = 'PACKAGE BODY'
AND status = 'INVALID';

 

Invalid objects – troubleshooting

Tables, synonyms and sequences are always valid. Usually, objects become invalid after a RENAME or DROP is performed on an object they depend on. A dependent object may also be INVALID if dependent views and stored procedures are loaded with the Import utility; the change of status occurs because Import may not be able to create a dependent object after creating the referenced object.
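
Invalid PL/SQL objects are recompiled automatically on their next use, but they can also be recompiled manually (app_pkg and app_view are hypothetical object names):

SQL> ALTER PACKAGE app_pkg COMPILE BODY;
SQL> ALTER VIEW app_view COMPILE;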

 

 

 

Maintaining redo logs and control files

 

If the control file becomes unavailable, the database can not function properly.

The control file stores:

DB_NAME

Names and locations of the data files and online redo log files.

Tablespace names

Timestamp of database creation

Current log sequence number

Checkpoint information

Log history

Backup information

Archive log information

 

Querying control file information

You can use V$PARAMETER, V$CONTROLFILE, V$CONTROLFILE_RECORD_SECTION.

 

V$CONTROLFILE contains the name column

SQL> SELECT name FROM V$CONTROLFILE;

 

V$PARAMETER –

SQL> SHOW PARAMETER CONTROL_FILES

 

V$CONTROLFILE_RECORD_SECTION – describes the different sections of the control file – DATABASE, REDO LOG, DATAFILE, FILENAME, etc. It also contains the size in bytes, the number of records allocated to the section, the index positions of the first and last records, and the last record ID.

SQL> SELECT type, records_total
FROM V$CONTROLFILE_RECORD_SECTION
WHERE type = 'DATAFILE';

 

Multiplexing control files

 

You should keep at least two identical copies of the control file and store them on separate disks – multiplex them. You can specify up to eight control files.
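
To multiplex with an init.ora parameter file: shut the database down, copy the current control file to the additional locations with OS commands, list all copies in CONTROL_FILES, and restart the instance. A minimal sketch; the file names and locations are assumptions:

CONTROL_FILES = (/u01/oradata/db01/ctrl01.ctl,
                 /u02/oradata/db01/ctrl02.ctl)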

 

Online redo log files

 

An Oracle database needs a minimum of two online redo log groups.

MAXLOGFILES –default 32, maximum 255

MAXLOGMEMBERS – OS dependent

 

Log switches

 

The LGWR process writes the redo log buffer entries to the current online redo log group. LGWR writes:

  1. Before DBWn writes modified blocks in the database buffer cache to the data files
  2. When a transaction commits
  3. When the redo log buffer is one-third full
  4. When an LGWR timeout occurs (every 3 seconds)

 

The LGWR writes to the redo log files in a cyclic fashion. Each redo log file group is identified by a log sequence number. LGWR writes sequentially to the online redo log files. The log switch initiates a checkpoint, which causes the DBWR to write the dirty buffers to data files.

The log switch occurs when:

  1. The current online redo log file is full
  2. The DBA forces a log switch
  3. A specified amount of time has elapsed, regardless of whether the current online redo log file is full. This is called time-based thread advance.

To force a switch:

ALTER SYSTEM SWITCH LOGFILE;

 

Checkpoints

During a checkpoint, all the dirty database buffers are written to the data files by DBWn. The CKPT process updates the headers of all the data files and control files to record that the checkpoint has completed successfully.

Checkpoints occur when:

  1. A log switch occurs
  2. An instance is shut down
  3. The FAST_START_MTTR_TARGET setting signals a checkpoint
  4. The DBA forces a checkpoint
  5. A tablespace is taken offline or made read only

 

Information about each checkpoint is written to the alert file if LOG_CHECKPOINTS_TO_ALERT is set to TRUE (the default is FALSE). You can influence checkpoint frequency by setting FAST_START_MTTR_TARGET to the number of seconds allowed for crash recovery.
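
For example, a DBA can force a checkpoint immediately, or bound crash recovery time (and thereby influence checkpoint frequency) with FAST_START_MTTR_TARGET; the 300-second target below is purely illustrative:

SQL> ALTER SYSTEM CHECKPOINT;

SQL> ALTER SYSTEM SET FAST_START_MTTR_TARGET = 300;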

 

Checkpoint intervals – parameters

During normal database operations, the initiation of checkpoints depends on the size of the online redo log files. When the database has large online redo log files, you can control database checkpoints by setting the LOG_CHECKPOINT_INTERVAL and LOG_CHECKPOINT_TIMEOUT init parameters. LOG_CHECKPOINT_INTERVAL controls the frequency of checkpoints by specifying a number of redo blocks: a checkpoint is initiated as soon as the LGWR process writes the number of OS blocks specified by this parameter. Regardless of the LOG_CHECKPOINT_INTERVAL value, a checkpoint occurs when switching from one redo log file to another, so if the value of LOG_CHECKPOINT_INTERVAL exceeds the actual online redo log file size, checkpoints occur only at log switches. Specifying 0 for the interval may initiate frequent checkpoints, because a new checkpoint request is started even if a single redo log buffer is written after the last request; specifying 0 is therefore not recommended for performance reasons. The value of LOG_CHECKPOINT_TIMEOUT specifies the maximum duration, in seconds, before another checkpoint occurs. A value of 0 disables time-based checkpoints.
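
A sketch of the corresponding init.ora settings; the values are illustrative only:

# checkpoint after every 100000 OS blocks of redo written
LOG_CHECKPOINT_INTERVAL = 100000
# at most 1800 seconds between checkpoints; 0 disables time-based checkpoints
LOG_CHECKPOINT_TIMEOUT = 1800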

 

Archive log mode

SELECT name, log_mode FROM V$DATABASE;

SELECT instance_name, archiver FROM V$INSTANCE;

 

Dynamic performance views for Online redo Log Files

 

V$THREAD contains thread information from the control file. It provides the total number of online redo log groups, the current log group, and the current log sequence number.

SELECT groups, current_group#, sequence# FROM V$THREAD;

 

V$LOGFILE contains information about the online redo log files: the group number, the type, the status, and the file names of all the members of each group.

SELECT * FROM V$LOGFILE;

Status:

INVALID – inaccessible (usually newly added redo logs)

STALE – redo log was switched before log was full

DELETED – no longer used

Blank – in use

 

V$LOG contains the number of online redo log groups, and the status, sequence number, and number of members of each group.

SELECT group#, sequence#, bytes, members, status FROM V$LOG;

Status:

UNUSED – never been written to (usually newly added)

CURRENT – currently used

ACTIVE – not the current group, but still needed for instance recovery

INACTIVE – no longer needed for instance recovery

CLEARING – has been recreated as a new log file with the CLEAR LOGFILE command. Once the log is cleared, it is changed to UNUSED.

CLEARING_CURRENT – being cleared of a closed thread.

 

Number of online redo log files

When the number of transactions is limited, a database instance may need only two groups. When the number of transactions is large, a database instance may need additional online redo log groups to guarantee that a recycled online redo log group is always available to LGWR. An easy way to determine whether the current online redo log configuration is satisfactory is to examine the checkpoint messages in the LGWR trace file and the alert log. If the messages indicate that LGWR frequently has to wait for an online redo log group to become available, add an online redo log group. LGWR may have to wait for a redo log group because either a checkpoint has not completed or an online redo log group has not been archived. The final factors are MAXLOGFILES and MAXLOGMEMBERS. Always build a symmetric configuration, with the same number of members per group and the same size for all groups.

Location of online redo log files

Should be stored on different disks. They should be separated from archived log files. They should be stored separately from data files.

 

Sizing online redo log files

The minimum size of a redo log file is 50 KB and the maximum is OS specific. The sizing of online redo log files depends on the number of log switches and checkpoints: if the online redo log files are small, a large number of log switches will occur, which reduces database efficiency. Another factor that influences sizing is the number of redo entries; if the number of redo entries is high, the online redo log files fill faster, which leads to a large number of log switches. The space available on the storage medium is another factor. Members of different groups can have different sizes; however, you should avoid having groups of different sizes.

 

Adding online redo log groups

If the LGWR has to wait the DBWn also has to wait and the system slows down. To avoid this, additional online redo log groups may be required.

ALTER DATABASE ADD LOGFILE GROUP 4 ('E:\ora\oradata\log4.log') SIZE 100M;

 

Adding online redo log members

ALTER DATABASE ADD LOGFILE MEMBER 'd:\ora\redolog11.log' TO GROUP 1;

 

Relocating online redo log file

  1. Shut down the database
  2. Copy the online redo log file to the new location
  3. Mount the database
  4. Rename the file: ALTER DATABASE RENAME FILE 'old_filename' TO 'new_filename';
  5. Open the database

 

Dropping online redo log groups

ALTER DATABASE DROP LOGFILE GROUP 7;

At least two redo log groups are needed

Active online redo log groups cannot be dropped

Unarchived redo log groups cannot be dropped

Dropped redo log files must be deleted manually at the OS level

 

Dropping online redo log members

ALTER DATABASE DROP LOGFILE MEMBER 'filename';

The last valid member of an online redo log group cannot be dropped

A log switch must be forced before dropping a member of an active online redo log group

An unarchived online redo log member cannot be dropped

 

Clearing Online Redo Log Files

ALTER DATABASE CLEAR [UNARCHIVED] LOGFILE GROUP n;

Or

ALTER DATABASE CLEAR [UNARCHIVED] LOGFILE 'filename';

This command can be used even if there are only two groups with one file each left.

 

The logminer

 

LogMiner processes redo log files and translates their contents into SQL statements. By using LogMiner you can access redo log information without restoring archived redo log files. You can determine when a logical corruption in the database may have begun, pinpointing the time or SCN to which you can perform incomplete recovery. You can also use LogMiner to undo changes to the database. It works with redo logs from version 8.0 and up.

  1. Translates index cluster-associated DML
  2. Provides support for chained and migrated rows
  3. Provides support for direct path inserts

 

LogMiner dynamic performance views

V$LOGMNR_CONTENTS – changes made to user and table information

V$LOGMNR_DICTIONARY – info about LogMiner dictionary

V$LOGMNR_LOGS – log id, file name, oldest changes

V$LOGMNR_PARAMETERS – info about optional LogMiner parameters

 

Logminer session – steps

Before you can use the LogMiner utility to perform database recovery, you have to perform a few steps; a sketch of a complete session follows this list.

Extracting a dictionary file – DBMS_LOGMNR_D.BUILD

Specifying redo log files for analysis – DBMS_LOGMNR.ADD_LOGFILE

Starting LogMiner

Analyzing the output of the V$LOGMNR_CONTENTS view

Performing object level recovery

Ending LogMiner session – DBMS_LOGMNR.END_LOGMNR
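
A minimal sketch of such a session; the dictionary and archived log file names are assumptions, and the dictionary location must be writable through UTL_FILE_DIR:

SQL> EXECUTE DBMS_LOGMNR_D.BUILD('dict.ora', '/oracle/logmnr');
SQL> EXECUTE DBMS_LOGMNR.ADD_LOGFILE('/oracle/arch/arch_25.arc', DBMS_LOGMNR.NEW);
SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR(DICTFILENAME => '/oracle/logmnr/dict.ora');
SQL> SELECT sql_redo, sql_undo FROM V$LOGMNR_CONTENTS;
SQL> EXECUTE DBMS_LOGMNR.END_LOGMNR;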

 

LGWR errors and effects

 

When one member of an online redo log group becomes unavailable but the group has other available members, LGWR continues writing to the remaining members and the database keeps functioning.

If all the members of the redo log group are not available to LGWR at a log switch because of media failure, the instance shuts down.

The same occurs if the current online redo log group becomes unavailable – the instance shuts down.

 

LGWR: Troubleshooting

One error is that one member of a two-member online redo log group is unavailable while the group is not active. In this case, dropping the damaged member and adding a new one solves the problem. However, if the online redo log group is active, you must first force a log switch.

If all the members of the next group are inaccessible to LGWR at a log switch and the group is not active, dropping and adding a new group solves the problem, although the database might need media recovery. If all the members of the current redo log group suddenly become inaccessible to LGWR, the database needs media recovery.

 

 

 

 

 

 

 

 

Oracle 9i Data Storage

 

In Oracle 9i a stored row has two major components – the row header and the row data. The row header contains the number of columns, the lock status and chaining information. For every column, the server stores the length followed by the column value. If the size of the column data is 250 bytes or less, the server needs only 1 byte to store the column length; otherwise it needs 3 bytes. The server stores the value of the column after the column length bytes in the row data component. A NULL column uses only one length byte. The server does not need any storage space for trailing NULL columns.
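
You can observe column storage sizes with the built-in VSIZE function, which returns the number of bytes in a value's internal representation. A small illustration; the HR.employees table and last_name column are assumptions based on the sample schema:

SQL> SELECT last_name, VSIZE(last_name) AS bytes
FROM HR.employees
WHERE ROWNUM <= 3;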

 

Data types

 

The built-in data types are divided into three categories. These are the scalar, relationship and collection data types.

Scalar data types

 

Character  -

 

CHAR – fixed-length alphanumeric strings; default size 1, maximum 2000 bytes

 

VARCHAR2 – variable-length character strings, up to 4000 bytes; a maximum size must be specified

 

NCHAR – fixed-length character data, up to 2000 bytes, with NLS support for languages other than English

 

NVARCHAR2 – variable-length character data, up to 4000 bytes; a maximum size must be specified; NLS support for languages other than English

 

Number – up to 38 digits of precision. Stored in variable length. Oracle does not store leading and trailing zeroes.

 

Date – includes time and can store any date between 4712 BC and 9999 AD. Date values require 7 bytes of storage.

 

RAW – stores small binary data, maximum 2000 bytes. RAW columns are not converted by character set translation, even if you move tables with RAW columns between databases with different character sets.

 

ROWID – 10 bytes of binary data representing row addresses.

 

The LONG and LONG RAW data types, internal LOBs and external LOBs are used for storing large objects such as long text strings, images, clips or waveforms.

 

LONG and LONG RAW were used in previous software versions for unstructured data, such as binary images, documents and geographic information; they are retained for backward compatibility. Use the LONG data type to store variable-width character data and the LONG RAW data type to store unstructured data. LONG and LONG RAW can store up to 2 GB of data in each row. Only one LONG or LONG RAW column is allowed in each table.

LOBs, or large objects, store large unstructured data in the database or as OS files. Internal LOBs can be binary LOBs (BLOB – raw unstructured binary data, 4 GB maximum), character LOBs (CLOB – single-byte characters, up to 4 GB) and national character LOBs (NCLOB – up to 4 GB of fixed- or variable-width national character set data).

External LOBs are of type BFILE, which stores a pointer inside the table to an OS file.

 

REF – the REF data type requires the Objects option. Access to REFs is only possible when an OCI application has been initialized in object mode. When REFs are retrieved from the server, they are stored in the client-side object cache. REFs provide the ability to capture an association between two row objects.

 

Collection type

VARRAY is an ordered set of elements that can store a predefined maximum number of elements in each row.

 

New Oracle9i data types

TIMESTAMP – is an extension of the DATE data type. It stores the year, month and day of DATE plus hour, minute and second values. You specify a TIMESTAMP literal as TIMESTAMP '2001-01-31 09:26:50.124'. It supports fractional seconds but does not support time zone data.

TIMESTAMP(fractional_seconds_precision) WITH TIME ZONE supports all values of TIMESTAMP as well as a time zone displacement value, which indicates the hours and minutes before or after UTC. The datatype also supports fractional seconds.

TIMESTAMP(fractional_seconds_precision) WITH LOCAL TIME ZONE supports all values of TIMESTAMP but normalizes the data to the database time zone; the time zone displacement is not stored as part of the column data. When users retrieve the data, Oracle returns it in the user's local time zone. For example, a user in LA sees the time adjusted to LA time.
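
A minimal sketch of the datetime types in use; the table and column names are hypothetical:

SQL> CREATE TABLE event_log
(event_id NUMBER,
created_at TIMESTAMP(3) WITH TIME ZONE);

SQL> INSERT INTO event_log
VALUES (1, TIMESTAMP '2001-01-31 09:26:50.124 -08:00');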

Apart from datetime data types, Oracle 9i provides two new datatypes to handle XML data.

The XMLType datatype is an opaque datatype that supports the storage of XML with NCLOB storage. You can insert an XML document into this column. Without indexes, this XML type can store any XML document. If the XML datatype is a LOB column, you can import, export and load the column using the same methods as for regular LOB columns. When you define a column of the XML type, you can use the XML type with structured storage. In Oracle 9i, an XMLIndex index type automatically indexes the XML schema associated with each XML document. You can use the XMLIndex type to constrain the list of schemas to which the document can conform.

 

Types of ROWID

The ROWID can be of two types – physical ROWID and logical ROWID. Physical ROWIDs store the addresses of rows in ordinary tables, clustered tables, table partitions and subpartitions. A physical ROWID can be stored in one of two formats – the extended and the restricted ROWID.

The extended ROWID format supports tablespace-relative data block addresses. In addition, the format efficiently identifies rows in partitioned tables and indexes as well as non-partitioned tables and indexes. Tables and indexes created by Oracle8i or higher always have extended ROWIDs. Oracle8i and higher also use the restricted ROWID, which uses the same format as Oracle7.

Logical ROWIDs store the addresses of rows in index-organized tables. Each logical ROWID used in a secondary index can include a physical guess. A physical guess identifies the block location of the row in the IOT at the time the guess was made. Guesses are made when the secondary index is created or rebuilt. A single datatype called the universal ROWID, or UROWID, supports both logical and physical ROWIDs. UROWID also supports ROWIDs of foreign tables, such as non-Oracle tables. You must set the value of the COMPATIBLE parameter to 8.1 or higher to use UROWID.

The ROWID format consists of four components:

Data object number – 32 bits of storage, unique within the database

Tablespace-relative data file number – uniquely identifies every file in a tablespace and occupies 10 bits of storage.

Block number – uniquely identifies block in database, requires 22 bits of storage.

Row number – position of the row directory slot in the block header that contains the row. Requires 16 bits of space.

A ROWID is displayed using a base-64 encoding scheme.

AAAArs – data object number

BBD – tablespace relative data file number

AAAAUa – block number

EEE – row number

 

Collection data types – comparison

VARRAYS

 

Store ordered sets of static data elements

Store small volumes of data

Store elements that each have an index (their position in the array)

Can contain a variable number of elements but the maximum number is specified during declaration

Store elements of the same datatype

Nested Tables

 

Store unordered sets of records

Can be a large data set

Store pointers to the rows stored in separate table

Do not have a predetermined size

Store rows with the same structure
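
A minimal sketch of declaring both collection types and using them in a table; all type and table names are hypothetical:

SQL> CREATE TYPE phone_varray AS VARRAY(5) OF VARCHAR2(20);
/
SQL> CREATE TYPE task_t AS OBJECT (task VARCHAR2(30), done CHAR(1));
/
SQL> CREATE TYPE task_table AS TABLE OF task_t;
/
SQL> CREATE TABLE projects
(proj_id NUMBER,
phones phone_varray,
tasks task_table)
NESTED TABLE tasks STORE AS projects_tasks_nt;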

 

 

Unicode data

 

The limitations of existing character set encodings can be overcome by using a universal character set. This global character set is called Unicode. It works on any platform, program or language. Unicode is a prerequisite for modern standards such as XML, Java, LDAP, etc.

The two ways to encode Unicode are UTF-16 and UTF-8. You can store Unicode characters in an Oracle database in two ways: you can create a Unicode database, or you can use the Unicode datatypes (NCHAR, NVARCHAR2, NCLOB) to store Unicode characters.
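
A minimal sketch of the second approach, using the Unicode datatypes (the table name is hypothetical); in Oracle9i the national character set for these columns is AL16UTF16 or UTF8:

SQL> CREATE TABLE translations
(msg_id NUMBER,
msg NVARCHAR2(200));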

 

Data storage structures

 

You can use 4 structures to store data – tables, partitioned tables, IOTs and clustered tables.

SQL> CREATE TABLE HR.EMP
(ECODE VARCHAR2(3) NOT NULL,
DEPT_ID NUMBER(4) NOT NULL)
STORAGE (INITIAL 200K NEXT 200K PCTINCREASE 0)
TABLESPACE "EXAMPLE";

 

Setting PCTFREE

 

Each block in an Oracle 9i database is associated with two parameters – PCTFREE and PCTUSED. The PCTFREE parameter defines the minimum percentage of the data block to be reserved as free space. This free space is reserved for possible updates to rows that already exist in the block.

The PCTUSED value is the minimum percentage of a block that the server can use for row data and overhead before new rows are added to the block. When a data block is filled to the limit determined by PCTFREE, the database considers the block unavailable for the insertion of new rows until the percentage of used space in the block falls below PCTUSED.

You can set appropriate values for PCTFREE and PCTUSED by using the formulas displayed below.

 

PCTFREE = (average row size - initial row size) * 100 / average row size
        = (90 - 75) * 100 / 90 ≈ 17

PCTUSED = 100 - PCTFREE - (average row size * 100 / available data space)
        = 100 - 17 - (90 * 100 / 1948) ≈ 78

 

A higher PCTFREE decreases row migration. There are two common situations in which you need to set a higher PCTFREE value. One is when rows in a table contain little or no data when inserted but will be updated later. The other is when columns are likely to increase in size after an update. Note that a higher PCTFREE results in lower space utilization, because more of each block is reserved as free space.

 

The cost based optimizer

 

There are different ways to execute a SQL statement; for example, by varying the order in which tables or indexes are accessed. The procedure used by the Oracle server to execute a statement can affect how quickly the statement completes. The optimizer uses internal rules or costing methods to determine the most efficient way of producing the result of the query. The output of the optimizer is an execution plan that describes the optimal method of execution. The Oracle server provides two methods of optimization – cost-based and rule-based optimization. The CBO determines the most efficient execution plan by considering the available access paths and factoring in statistics for the schema objects, such as tables or indexes, accessed by the SQL statement. The CBO also considers hints. Partitioned tables and indexes use the CBO, as do index-organized tables, reverse key indexes, bitmap indexes and index skip scans.

You set the optimizer mode with the OPTIMIZER_MODE init parameter. This parameter can take the values CHOOSE, ALL_ROWS (cost), FIRST_ROWS_n, FIRST_ROWS or RULE.

The CBO contains 3 components – query transformer, the estimator, the plan generator. When executing a query the CBO first generates a set of potential plans for the SQL statement. These plans are based on the available paths for the SQL statement. The CBO then estimates the cost of every plan based on the statistics in the data dictionary. The statistics are for the data distribution and storage characteristics of the tables, indexes and partitions. Cost is an estimated value proportional to the expected resource usage needed to execute the statement with a particular plan. Finally, the optimizer compares the costs of the plans and chooses the plan with the lowest cost.
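
For example, the optimizer mode can be set per session, and the CBO needs dictionary statistics to cost its plans; the HR.employees table is assumed from the sample schema:

SQL> ALTER SESSION SET OPTIMIZER_MODE = FIRST_ROWS_10;

SQL> ANALYZE TABLE HR.employees COMPUTE STATISTICS;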

 

Creating temporary tables

 

The temporary tables can be transaction or session specific.

SQL> CREATE GLOBAL TEMPORARY TABLE <schema>.<table>
AS <SELECT statement>;

SQL> CREATE GLOBAL TEMPORARY TABLE HR.employees_temp
AS SELECT * FROM HR.employees;

Multiple sessions or transactions can use the same temporary table. A TRUNCATE statement issued on a session-specific temporary table truncates data only in its own session; it does not truncate the data of other sessions that are using the same table. When the session or transaction ends, the rows inserted in that session are deleted. Temporary tables are useful if you want to buffer a result set or construct a result set by running multiple DML operations. DML statements on temporary tables do not generate redo for the data changes. However, Oracle generates undo for the data, and redo for that undo. Session termination occurs when the user logs off or when the session terminates abnormally.

There are 2 clauses used to control the duration of the rows in temp tables. The ON COMMIT DELETE ROWS clause explicitly specifies that the data is transaction specific.

SQL> CREATE GLOBAL TEMPORARY TABLE HR.employees_temp
ON COMMIT DELETE ROWS AS SELECT * FROM HR.employees;

The ON COMMIT PRESERVE ROWS clause specifies that the rows are visible for the entire session. The server creates a transaction-specific table by default. You can create indexes on temporary tables; such indexes are also temporary. You can also create views and triggers on temporary tables. You can use export and import on the definitions of temporary tables; however, the stored data is not exported. Once the definition has been imported, the temporary table is visible to all sessions.

 

Storage parameters

 

NEXT – when you change the value of the NEXT parameter this change affects future extents in the table. The existing extents are not affected.

 

PCTINCREASE – usually 0 is adequate, applies to future sizes, kept in data dictionary.

 

MINEXTENTS – does not change immediately, unless you truncate the table. Specifies the minimum number of extents that must be present in a table at any given time. You can change the value of MINEXTENTS to any value that is less than or equal to the current number of extents in the table.

 

The MAXEXTENTS parameter specifies the maximum number of extents that a table can contain. You can change the value of MAXEXTENTS to any value that is greater than or equal to the current number of extents in the table.

 

Block utilization parameters

 

PCTFREE – affects future inserts; the new value applies only to those blocks that are placed on the free list

 

PCTUSED – affects all blocks

 

INITRANS – only new blocks are affected.

 

MAXTRANS – all blocks are affected

 

When modified, MINEXTENTS and MAXEXTENTS affect the growth of a table
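
A sketch of changing the block utilization parameters on an existing table; the table name and values are illustrative:

SQL> ALTER TABLE HR.employees
PCTFREE 20 PCTUSED 40
INITRANS 2 MAXTRANS 255;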

 

 

 

A temporary table is transaction specific unless explicitly specified as session specific.

 

You can modify the parameter values according to your requirements. You can do this only if the UNDO_MANAGEMENT parameter in the init.ora file is set to MANUAL.

 

ALTER TABLE hr.employees MOVE TABLESPACE data1;

 

Manually allocating extents

 

Extents are allocated to tables automatically when needed. To manually allocate:

SQL> ALTER TABLE <schema>.<table>
ALLOCATE EXTENT (SIZE 100K
DATAFILE 'd:\ora9iServer\oradata\Oracle9i\Example01.dbf');

 

Characteristics of the High Water Mark

 

A high water mark is a pointer that indicates the last block that was ever used to store data in the segment. Using this mark, you can identify the number of blocks that have never been used in the table. The HWM shifts position as data is inserted. When data is deleted from the table, the HWM is not reset; it is reset only upon truncation. The HWM is stored in the segment header of the table. You can retrieve the HWM by using the DBMS_SPACE package, as in the sketch below. The HWM also matters in full table scans: while performing a full table scan, Oracle reads all the blocks up to the HWM, including blocks whose rows have been deleted.
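
A minimal sketch of retrieving HWM-related figures with DBMS_SPACE.UNUSED_SPACE; the owner and segment name are assumptions:

SQL> SET SERVEROUTPUT ON
SQL> DECLARE
  total_blocks NUMBER; total_bytes NUMBER;
  unused_blocks NUMBER; unused_bytes NUMBER;
  last_file_id NUMBER; last_block_id NUMBER; last_block NUMBER;
BEGIN
  -- returns the space that has never been used, i.e. the blocks above the HWM
  DBMS_SPACE.UNUSED_SPACE('HR', 'EMPLOYEES', 'TABLE',
    total_blocks, total_bytes, unused_blocks, unused_bytes,
    last_file_id, last_block_id, last_block);
  DBMS_OUTPUT.PUT_LINE('Blocks above the HWM: ' || unused_blocks);
END;
/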

 

Deallocating unused space

 

If the HWM is in an extent below the value of MINEXTENTS for the table, the Oracle server releases only the extents above MINEXTENTS. If you want to release the unused space below MINEXTENTS as well, specify KEEP 0 in the ALTER TABLE command.

SQL>  ALTER TABLE HR.loc

DEALLOCATE UNUSED KEEP 4K;

To avoid fragmentation, set the MINEXTENTS parameter appropriately.

 

Truncating tables

 

TRUNCATE TABLE HR.emp4
[DROP STORAGE | REUSE STORAGE];

Truncation resets the NEXT_EXTENT parameter for the table

Truncation does not fire the table's DELETE triggers

 

Dropping tables

 

The drop table command drops the table and releases the extents used by the table.

SQL> DROP TABLE <>.<>

CASCADE CONSTRAINTS;

 

Set unused clause

 

Removes any reference to the column

The column appears to be deleted, but the space is not released

SQL> ALTER TABLE <>.<> SET UNUSED COLUMN <>;

The DROP UNUSED COLUMNS option actually removes the data from the table. The space can be recovered using ALTER TABLE <table> DROP UNUSED COLUMNS;

There are some restrictions on dropping columns. You cannot drop columns from object tables, nested tables, or tables owned by SYS. Primary key columns in IOTs, partitioning key columns and parent key columns also cannot be dropped, and you cannot drop all of the columns in a table. When using the SET UNUSED option, remember that if a LONG or LONG RAW column is set as unused but not dropped, you cannot add a new LONG or LONG RAW column to the table.

 

Retrieving table information

 

You can retrieve info about tables from DBA_OBJECTS, DBA_TABLES, and DBA_SEGMENTS.

 

SQL> SELECT t.table_name, o.data_object_id,
s.header_file, s.header_block
FROM DBA_TABLES t, DBA_OBJECTS o, DBA_SEGMENTS s
WHERE t.owner = o.owner
AND t.table_name = o.object_name
AND t.owner = s.owner
AND t.table_name = s.segment_name
AND t.owner = 'HR';

 

Retrieving extent information

 

SQL> SELECT file_id, COUNT(*) AS extents,
SUM(blocks) AS blocks
FROM dba_extents
WHERE owner = 'HR'
AND segment_name = 'LOCATIONS'
GROUP BY file_id;

 

Obtaining physical row location

 

The DBMS_ROWID package provides several functions to extract individual ROWIDs.

SQL> SELECT location_id, ROWID,
DBMS_ROWID.ROWID_OBJECT(ROWID) AS object,
DBMS_ROWID.ROWID_RELATIVE_FNO(ROWID) AS "RELATIVE FILE",
DBMS_ROWID.ROWID_BLOCK_NUMBER(ROWID) AS block
FROM HR.locations;

 

IOTs and regular tables

 

Regular table

 

The ROWID uniquely identifies a row

The physical ROWID facilitates the building of secondary indexes

Allows access based on ROWID

A sequential scan returns all rows

The unique constraint is allowed

Can be stored in a cluster

Can contain LONG column

Support replication

IOT

 

The primary key is used to uniquely identify a row

The logical ROWID facilitates building of secondary indexes

Allows access based on the primary key

A full-index  scan returns all rows

A unique constraint is not allowed

Cannot be stored in a cluster

Cannot contain LONG columns

Do not support replication

 

Creating IOT

 

SQL> CREATE TABLE iot_num
(i NUMBER,
j NUMBER,
k NUMBER,
l NUMBER,
PRIMARY KEY (i, j, k))
ORGANIZATION INDEX
COMPRESS;

 

SQL> CREATE TABLE HR.emp_new
(empno NUMBER(4) NOT NULL,
loc_id NUMBER(4) NOT NULL,
ename VARCHAR2(10),
CONSTRAINT emp_pk PRIMARY KEY (loc_id, empno))
ORGANIZATION INDEX
PCTTHRESHOLD 20
OVERFLOW TABLESPACE example
INCLUDING ename NOCOMPRESS;

 

SQL> ALTER TABLE docindex
INITRANS 4
OVERFLOW INITRANS 6;

 

ALTER TABLE docindex

PCTTHRESHOLD 15

INCLUDING doc_id;

 

ALTER TABLE docindex

ADD OVERFLOW

TABLESPACE ovf_tbs;

 

ALTER TABLE HR.emp_new

INITRANS 5 OVERFLOW INITRANS 2;

 

ANALYZE TABLE docindex COMPUTE STATISTICS;

 

SELECT * FROM DBA_INDEXES

WHERE INDEX_NAME = 'PK_DOCINDEX';

 

SELECT * FROM DBA_TABLES

WHERE IOT_TYPE = 'IOT_OVERFLOW'
AND IOT_NAME = 'DOCINDEX';

 

IOTs are useful in information retrieval, spatial and OLAP applications.


NetG Thompson Oracle 9i Training Notes

 

1. Storage Structures - Admin Manage Storage Structure

 

Any storage parameter specified at the segment level overrides the corresponding parameter set at the tablespace level, except MINIMUM EXTENT and UNIFORM SIZE.

 

The OPTIMAL parameter cannot be specified at the tablespace level; OPTIMAL is specified only at the segment level.

 

The MINIMUM EXTENT parameter specified at the tablespace level applies to all extents allocated subsequently to the tablespace.

 

Extents are allocated when the segment is:

created,

extended, or

altered.

Extents are deallocated when the segment is:

dropped,

altered,

truncated, or

automatically resized.

 

Contiguous free extents are coalesced into one extent when:

SMON merges adjacent free extents,

the Oracle server needs to allocate an extent that requires space from more than one adjacent free extent, or

the DBA issues the ALTER TABLESPACE ... COALESCE command.

The DBA_FREE_SPACE_COALESCED view provides information about coalesced extents and blocks.
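
For example (the tablespace name is an assumption):

SQL> ALTER TABLESPACE data01 COALESCE;

SQL> SELECT tablespace_name, percent_blocks_coalesced
FROM DBA_FREE_SPACE_COALESCED;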

 

To use nonstandard block sizes, you configure subcaches within the buffer cache area of the SGA. You replace n with 2, 4, 8, 16 or 32 in DB_nK_CACHE_SIZE to specify nonstandard database block sizes.
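
A sketch, assuming a 16K nonstandard block size and hypothetical names; the subcache must be configured before the tablespace is created:

# init.ora
DB_16K_CACHE_SIZE = 16M

SQL> CREATE TABLESPACE hist_data
DATAFILE '/u03/oradata/hist01.dbf' SIZE 100M
BLOCKSIZE 16K;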

 

A data block consists of header, free space and data space. Block header - contains data block address, directories, and transaction slots. Free space - can be used by either new rows or additional transaction entries. data space - contains column values.

 

The parameters that control concurrency are INITRANS (the initial number of transaction slots on the block; default 1 for tables, 2 for indexes) and MAXTRANS (the maximum number of transaction slots; default 255).

The transaction slots store information about the transactions that are modifying the block.

PCTFREE specifies the percentage of space in each data block that is reserved for growth resulting from updates of rows in that data block; the default is 10%.

For example, a PCTFREE of 20% in a CREATE TABLE means that inserts into the block stop as soon as free space drops to 20% or less. The free space is then used for updates only.

PCTUSED represents the minimum percentage of space maintained for each data block (default 40%). When a data block is filled to the limit determined by PCTFREE, the Oracle server considers the block unavailable for inserts.

The block remains unavailable for inserts until the percentage of space used by the data falls below PCTUSED. Until then, the Oracle server uses the free space only for updates.

For example, when PCTUSED is 40%, the block becomes available for inserts again as soon as utilization drops below 40%. Insertions then continue until utilization reaches 80% (100 - PCTFREE), and the cycle repeats.

 

BMB (Bit Mapped Blocks)

The BMBs in a segment have a tree hierarchy. There can be a maximum of 3 levels of this hierarchy:

leaves,

intermediate,

root.

The leaves of this hierarchy point to a range of data blocks that are potential candidates for an insert.

 

Auto management of free space - Benefits

1. Efficient space utilization

2. Improved handling of concurrent access to data

3. Better performance on RAC environment

 

views Modified to Support Auto-Management of Segment Free Space

DBA_TABLESPACES (stores segment information)

USER_TABLESPACES

DBA_TABLES (stores relational information)

DBA_SEGMENTS

 

DBMS_SPACE package

 

Analyzes growth and space requirements of segments. DBMS_SPACE runs with SYS privileges; execution is granted to PUBLIC, subprograms run under the caller's security, and the user must have the ANALYZE privilege on the object.

Procedures of DBMS_SPACE:

UNUSED_SPACE - returns information about unused space in a segment, including the HWM.

FREE_BLOCKS - returns the number of blocks available for storing data (used on objects that are not under automatic segment space management).

SPACE_USAGE - shows the space usage of automatically managed blocks under the segment HWM.

 

DBMS_REPAIR package

Corrects data corruption, typically by dropping and re-creating an object after the corruption is detected.

DUMP_ORPHAN_KEYS - reports index entries that point to rows in corrupt data blocks

SEGMENT_FIX_STATUS - allows the user to correct a corrupted bitmap entry.

REBUILD_FREELISTS - recreates the free lists for a specified object

CHECK_OBJECT - validates specified objects and populates the repair table

ADMIN_TABLES - provides create, purge and drop functions

FIX_CORRUPT_BLOCKS - remedies corrupt data blocks in specified objects

SKIP_CORRUPT_BLOCKS - enables or disables the skipping of corrupt blocks during table and index scans
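
A minimal sketch of checking a table with DBMS_REPAIR, assuming a repair table created in the USERS tablespace and the HR.EMPLOYEES table; parameters not shown take their defaults:

SQL> DECLARE
  corrupt_count BINARY_INTEGER;
BEGIN
  DBMS_REPAIR.ADMIN_TABLES('REPAIR_TABLE', DBMS_REPAIR.REPAIR_TABLE,
    DBMS_REPAIR.CREATE_ACTION, 'USERS');
  DBMS_REPAIR.CHECK_OBJECT(schema_name => 'HR', object_name => 'EMPLOYEES',
    corrupt_count => corrupt_count);
  DBMS_OUTPUT.PUT_LINE('Corrupt blocks found: ' || corrupt_count);
END;
/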

 

2. Storage Structures - Rollback Segments

 

A transaction uses only one undo segment to store all of the undo records. However, multiple transactions can write to a single undo segment.

The header of an undo segment contains a transaction table. The table contains information about the current transactions that are using the undo segment.

Undo segments are used for different purposes. One use of undo segments is transaction rollback. When a transaction changes any data, the old image of this data is stored in an undo segment. If the transaction is rolled back, the value is restored.

Undo segments are also used for transaction recovery. If an Oracle instance fails while transactions are in progress, the Oracle server ensures transaction recovery by rolling back the uncommitted changes after the database is reopened. This is possible because the undo segments are also protected by the redo log files.

Undo segments are used to provide read consistency for a given statement. Read consistency means that only the changes committed prior to the time when a statement starts execution are visible to the database users.

 

Automatic undo management

 

In previous releases of the database, undo data was managed manually by using rollback segments. In Oracle9i, rollback segments are replaced by undo segments that are managed by the Oracle server.

Every Oracle database maintains undo data. The Oracle server performs undo or rollback operations for changes to the database based on undo records. The undo records store the values of the data as they were before the changes were committed.

Undo records provide read consistency. If a user accesses data that is being updated by another user, the server provides the before image of the data through undo records.

In Oracle 9i you can use the System Managed Undo (SMU) mode - automatic. When you specify the SMU mode, the server allocates all of your undo space in a single undo tablespace. There may be multiple undo tablespaces in the database. However, only one tablespace is active at a time.

The SMU mode eliminates the complexities of managing rollback segment space. In the SMU mode the Oracle server automatically manages the creation, allocation and tuning of undo segments.

Dynamic transfer of extents

The SMU mode supports dynamic transfer of extents between undo segments in an undo tablespace. When a transaction needs undo space, the Oracle server first uses free space in the current undo segment or extends it. If these steps do not yield sufficient space, the Oracle server steals extents from other undo segments.

The dynamic transfer of extent space prevents the users from receiving the ORA-30036 error – unable to extend due to insufficient space in the undo tablespace.

Dynamic transfer of extents allows undo segment space to be reused efficiently. By using the SMU mode, you can control the duration for which undo records are retained before they are overwritten. You specify time for the UNDO_RETENTION system parameter.

Undo segments – read consistency

To construct a read consistent image of data the Oracle server uses the information in an undo segment and the information in the table being queried.

The changes to the data are permanent only when the Oracle server completes processing and commits the transaction. This means that any statement that starts before a transaction is committed is not able to read the uncommitted changes.

Oracle server ensures read consistency for a long query

The Oracle server uses the following process to ensure read consistency. A block may have changes that are not committed when the server begins the query process. If a long query executed when several changes are being made, the Oracle server constructs a read consistent image of the changed rows in the block.

The read-consistent image of the block is built through a series of four steps. When processing a SELECT statement, the Oracle server first determines the SCN. The SCN enables the server to ensure that any changes that were not committed before the SCN are not processed. The Oracle server then reads the necessary data blocks, based on the SCN value. If a block has not been modified after the query was initiated, the Oracle server checks it for suitable rows.

If the data blocks contain changes that were not committed, the Oracle server retrieves the before image of the uncommitted changes from the undo segment.

The Oracle server constructs a read-consistent copy of the block by applying the before image from the undo segment to the affected rows in the block.

The Oracle server always provides read consistency at the statement level. You can explicitly request read consistency for a whole transaction by issuing SET TRANSACTION READ ONLY at the beginning of a long-running transaction.

Alternatively, you can request read consistency for a transaction that performs DML by using SET TRANSACTION ISOLATION LEVEL SERIALIZABLE. The Oracle server ensures read consistency by using undo segment data.

 

Types of undo segments

The Oracle server stores undo data in SYSTEM, non-SYSTEM and deferred undo segments. In the manual undo management mode, the non-SYSTEM undo segments are classified as private and public. In the automatic undo management mode, the non-SYSTEM undo segments are contained in an undo tablespace.

The Oracle server stores the changes made to objects in the SYSTEM tablespace in the SYSTEM undo segment. This undo segment is created when the database is created, and it exists and works in the same way in both modes.

The Oracle server needs a non-SYSTEM undo segment when a database has multiple tablespaces. In the manual undo management mode, a DBA creates at least one non-SYSTEM undo segment. In contrast, the DBA creates at least one undo tablespace in the automatic undo management mode. In the manual mode, the Oracle server stores the before-images of changes to objects in a non-SYSTEM tablespace in a non-SYSTEM undo segment. A non-SYSTEM undo segment can be classified as a private undo segment or a public undo segment.

Private undo segments are segments that are brought online automatically by an instance because they are listed in the parameter file. You specify the private undo segments by using the ROLLBACK_SEGMENTS init parameter. You can also explicitly bring a private undo segment online by issuing ALTER ROLLBACK SEGMENT.

The DBA can use the ALTER ROLLBACK SEGMENT command with the ONLINE clause to acquire private undo segments manually. You specify the name of the undo segment in the ALTER ROLLBACK SEGMENT command. In a RAC environment, private undo segments allow each instance to acquire specific undo segments. Private undo segments are specified in the init file.

Public undo segments form a pool of undo segments that are available in the database. An instance can acquire an available public undo segment based on the values of the TRANSACTIONS and TRANSACTIONS_PER_ROLLBACK_SEGMENT init parameters.

Public undo segments can be used in the RAC environment. An instance may acquire a public undo segment from the ones that are defined in the database. However, a public undo segment can be used by only one instance at a time.

In the automatic undo management mode, the non-SYSTEM undo segments constitute an undo tablespace. A DBA can specify the undo tablespace that an instance acquires in the parameter file by using the UNDO_TABLESPACE parameter.

You can switch to another undo tablespace after the instance has started by using ALTER SYSTEM SET UNDO_TABLESPACE. When a user rolls back a transaction involving data in a tablespace that is offline, a new undo segment is created in the SYSTEM tablespace and the transaction's undo is transferred to it. This undo segment is deferred and remains deferred until the tablespace comes back online.

When the tablespace is brought online, the undo entries in the deferred undo segment are used to restore the old values to the rows. The deferred undo segment is dropped after the old values have been restored. The Oracle server creates, maintains and drops deferred undo segments automatically.

Undo retention control: Features

The SMU mode provides a method for explicitly controlling when the Oracle server can reuse undo space. This means that you can control the duration for which undo information is retained.

Long running transactions

Normally, when the undo space is overwritten by a newer transaction, the committed undo information is lost. However, for consistent-read purposes, long-running transactions may need old undo information to undo the changes and produce older images of data blocks.

The snapshot too old error occurs when the undo data that is needed for read consistency is overwritten by the other transactions. Undo retention control helps in providing read-consistent data for long-running transactions by specifying the duration for which undo data is retained in the database.

You can specify a retention period by using the UNDO_RETENTION parameter. The DBA explicitly specifies the duration for which undo information must be retained. Undo retention minimizes the chance of the snapshot too old error and ensures that undo data is retained across a restart of the instance. The default value of UNDO_RETENTION is 900 seconds.

When you set the UNDO_RETENTION parameter, ensure that the amount of undo data retained is not very close to the size of the undo tablespace; a margin of about 20% is recommended.

When you use the ALTER SYSTEM command to change the value of the UNDO_RETENTION parameter, the effect on the database is immediate. However, retention is possible only when the current undo tablespace has enough space for the undo information generated by active transactions.

When an active transaction needs undo space and the undo tablespace does not have sufficient space, the system starts reusing unexpired undo space. Such an action can cause some queries to fail with the snapshot too old error. You can ensure that the undo data is retained for a specific period by using the UNDO_RETENTION parameter.
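
For example, to ask the server to retain undo for two hours (matching the sizing example that follows):

SQL> ALTER SYSTEM SET UNDO_RETENTION = 7200;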

Formula to estimate the undo space:

Undo space = (retention period in seconds) * (undo blocks per second) * (block size) + overhead (metadata)

Example, with a retention period of 2 hours (2 * 3600 seconds), 200 undo blocks generated per second and a 4K block size:

Undo space = (2 * 3600) * 200 * 4000 bytes ≈ 5.8 GB

 

Undo management modes

To set UNDO_MANAGEMENT to AUTO, change it in init.ora; it cannot be changed after the instance has started. You create an undo tablespace to store the undo data that is generated by transactions. When the instance starts, the server automatically selects the first available undo tablespace for use. If there is no undo tablespace, it uses the SYSTEM undo segment.

Another rule when you configure the SMU mode is to create at least one undo tablespace. Using the SYSTEM undo segment to store undo data is not recommended; when the database is using the SYSTEM undo segment, the server writes a warning to the alert file.

When you create a database you can create an undo tablespace by using the CREATE DATABASE parameter. In addition you can use CREATE UNDO TABLESPACE.

Example:

CREATE DATABASE db01
UNDO TABLESPACE undo01
DATAFILE 'undo1db01.dbf' SIZE 20M
AUTOEXTEND ON;

or

CREATE UNDO TABLESPACE undo01
DATAFILE 'undo1db01.dbf' SIZE 20M;

 

A common rule while configuring the SMU mode is to specify the undo tablespace for an instance by using the UNDO_TABLESPACE init parameter. In a RAC environment, the UNDO_TABLESPACE parameter can be used to assign a specific undo tablespace to an instance.

Specifying the UNDO_TABLESPACE parameter is optional only if one undo tablespace exists in the database and the UNDO_MANAGEMENT is set to auto. Then the server automatically selects the undo tablespace.

If you have not already created the undo tablespace, the instance fails to start when you set the UNDO_TABLESPACE parameter. The parameter can be altered dynamically by using ALTER SYSTEM SET UNDO_TABLESPACE = <undo_tablespace>;

The server automatically creates an undo tablespace under two conditions: first, if UNDO_MANAGEMENT is set to AUTO, and second, if you omit the UNDO TABLESPACE clause in CREATE DATABASE.

In that case the server automatically creates the SYS_UNDOTBS undo tablespace. The default data file is named dbu followed by the SID or instance name, and it is stored in the dbs folder below the Oracle home. The size is OS dependent and AUTOEXTEND is set to ON.

Switching undo tablespaces: Features

In the SMU mode a single undo tablespace manages the undo data. You can create multiple undo tablespaces for the database. However, the instance uses only one undo tablespace at a time. You can switch them.

The instance uses the undo tablespace that you specify in the UNDO_TABLESPACE initialization parameter. You can also use ALTER SYSTEM SET UNDO_TABLESPACE = undotbs_02;

While switching between undo tablespaces, the server reports an error if the target tablespace does not exist, is not an undo tablespace, or is already in use by another instance.

The Oracle server can switch undo tablespaces while the database is online. This feature enables users to run transactions while the server processes the tablespace switch operation. If the switch completes successfully, all transactions that start afterwards are assigned to the new undo tablespace.

The switch operation does not wait for the transactions in the original undo tablespace to commit. Instead, the old undo tablespace enters the PENDING OFFLINE mode while pending transactions still access it; new transactions are stored in the new undo tablespace. A feature of SMU is that you cannot assign a PENDING OFFLINE undo tablespace to another instance. In addition, you cannot drop a PENDING OFFLINE undo tablespace.

When an undo tablespace is offline, you can assign the undo tablespace to other instances.

You can switch out the current undo tablespace without switching to another tablespace by setting UNDO_TABLESPACE = ''.

Planning rollback segments

The factors that influence the planning of rollback segments are the application environment, the type of transactions performed, and the volume of data.

Transaction environment

The OLTP, batch processing, and hybrid environment influence the number and size of rollback segments.

In a transaction environment, you can use rollback segments to manage rollback records. Each rollback segment contains a header. The header contains the transaction table entries that define the state of each transaction when writing to the rollback segment.

Each transaction that uses a rollback segment updates the transaction table frequently. This can cause contention on the segment header, especially in OLTP environments.

An OLTP environment is characterized by short transactions. Therefore, for this environment many small rollback segments are recommended. If possible, create one rollback segment for every 4 concurrent transactions.

A batch environment is characterized by transactions that are large in volume. Therefore, these transactions work better when associated with large rollback segments.

The size of a rollback segment is the expected maximum number of bytes of rollback information for each transaction, multiplied by the expected number of concurrent transactions that will use the rollback segment. The size of the rollback information depends on the type of transaction – insert or delete. For example, inserting a record into a table generates less rollback information than a delete, because the insert only stores the ROWID, while the delete stores the actual row data.

Another factor is the volume of data processed.

Steps before deciding the rollback segment size

Execute the longest expected transaction and check the size of the rollback segment. To minimize dynamic extension, create rollback segments with a large number of extents, for example MINEXTENTS 20.

Rollback segment creation guidelines

While creating a rollback segment you must specify the type because this property can not be changed. The type of the rollback segment specifies whether it is accessible to any instance or just this instance.

Create with a large number of initial extents – MINEXTENTS 20.

Do not set PCTINCREASE other than 0.

Set an optimal number of extents for each rollback segment. Set the OPTIMAL to a high value to minimize the snapshot too old error for long running queries.

Set a small OPTIMAL value for short queries.

Use INITIAL=NEXT for rollback segments to ensure that all extents are of the same size.

Avoid setting MAXEXTENTS to unlimited.

Place rollback segments in a separate exclusive non-SYSTEM uniform extent size tablespace.

The UNDO_MANAGEMENT init parameter is MANUAL by default; leave it set to MANUAL in init.ora to use rollback segments.

For the instance to acquire rollback segments at startup, they have to be specified in init.ora: ROLLBACK_SEGMENTS = (rb_01, rb_02).

Creating rollback segments

CREATE [PUBLIC] ROLLBACK SEGMENT <segment_name>
TABLESPACE <tablespace_name> [<storage clause>]

CREATE ROLLBACK SEGMENT "RB_11A"
TABLESPACE "USERS" STORAGE
(INITIAL 100K NEXT 100K);

Setting the created rollback segment online:

ALTER ROLLBACK SEGMENT rollback_segment ONLINE;

The maximum number of online rollback segments is specified by the MAX_ROLLBACK_SEGMENTS parameter.

Rollback segments – acquisition

When the instance opens the database, the process of acquiring rollback segments involves a series of steps.

First, the instance acquires all the private rollback segments named in ROLLBACK_SEGMENTS. Then Oracle computes the number of rollback segments that the instance needs. The values of TRANSACTIONS and TRANSACTIONS_PER_ROLLBACK_SEGMENT are used for this computation: the number of rollback segments needed is determined by dividing the value of TRANSACTIONS by TRANSACTIONS_PER_ROLLBACK_SEGMENT.

In the third step, the Oracle instance verifies that the number of non-SYSTEM rollback segments is at least as high as the estimated number of rollback segments.

The fourth step involves a decision: if the number of rollback segments needed is greater than the number of non-SYSTEM rollback segments already available, Oracle acquires additional public rollback segments, which must already have been created and be available.

Transaction use of rollback segments

The process by which rollback segments are used by transactions involves a series of steps. When a transaction starts, a rollback segment has to be assigned to that transaction. The Oracle server verifies whether a request was made for a named rollback segment. If the request is for a named rollback segment, it is allocated to that transaction: SET TRANSACTION USE ROLLBACK SEGMENT roll_01;

When a request for a specific rollback segment is not made, Oracle allocates the rollback segment with the fewest transactions. Rollback information is generated during transaction processing, and the server writes the rollback entries to the segment. A pointer specifies the location where the entries will be written. When an extent is full, the pointer must move to the next available extent. When the next extent is free or inactive, it is used. If the extent is active, the transaction cannot use it; in addition, the pointer cannot skip over an extent, so an additional extent is allocated. This allocation of extents is called extending. The rollback segment will extend until MAXEXTENTS is reached.

Rollback segments – shrinkage

The Oracle server can shrink a rollback segment until it is less or equal of the OPTIMAL. The OPTIMAL can be specified during CREATE ROLLBACK SEGMENT or ALTER ROLLBACK SEGMENT. The OPTIMAL size should be based on the space required by an average transaction.

Shrinkage of a rollback segment

The server shrinks a rollback segment when the pointer of the rollback segment moves from one extent to another. During the shrinking process, the server releases inactive extents until an active extent is found or the size of the rollback segment is reduced to the OPTIMAL size.

There are two conditions required for the shrinking of a rollback segment: first, the current size of the rollback segment exceeds OPTIMAL; second, there are contiguous inactive extents in the rollback segment.

Changing storage parameters

You redefine the parameters when transactions generate more rollback data than was originally estimated. You increase the OPTIMAL parameter value to prevent frequent allocation and deallocation of extents. You use the ALTER ROLLBACK SEGMENT command to redefine the MAXEXTENTS and OPTIMAL parameters of a rollback segment. In contrast, the INITIAL parameter is defined only at the time of creation of the rollback segment.

You can also use ALTER ROLLBACK SEGMENT <> STORAGE(OPTIMAL 500K);

You can use ALTER ROLLBACK SEGMENT <> SHRINK [TO 500K]; An active extent can not be deallocated.

Taking rollback segments offline: ALTER ROLLBACK SEGMENT rbs OFFLINE;

If you take a rollback segment offline, the status changes to PENDING OFFLINE.

 

Troubleshooting rollback segments

A transaction can write to only one rollback segment and may fail if there is insufficient space in the rollback segment.

Insufficient space for a transaction in a rollback segment generates the error ORA-01562. The error can be caused by two factors: insufficient space in the tablespace, or the MAXEXTENTS value for the rollback segment has been reached.

 


One possible cause of the ORA-01562 error is the ORA-01650 error. When there is insufficient space in the tablespace for the rollback segment to extend, ORA-01650 is generated.

As a solution, you can increase the space available in the tablespace by extending a data file or adding a data file to the tablespace. Alternatively, you can allow the data files to autoextend. The other possible cause of the ORA-01562 error is the ORA-01628 error. The server returns ORA-01628 when the limit imposed by MAXEXTENTS has been reached; in this case, no additional extents can be allocated to the rollback segment. You can alter the rollback segment with a higher MAXEXTENTS value or re-create it. To re-create a rollback segment, you first have to drop it.

Read consistency errors

The Oracle server guarantees that a statement processes only the data that was committed before the statement started. Changes that are committed after the statement starts executing are not seen by the statement.

If the Oracle server cannot construct a read-consistent image of the data, the ORA-01555 SNAPSHOT TOO OLD error is generated. A read consistency error can occur when the transaction that made the change has been committed and the data required for the read-consistent image has been overwritten by other transactions. There are two probable causes of read consistency errors: the transaction slot in the rollback segment has been reused, or the before image in the rollback segment has been overwritten by another transaction. Read consistency errors can be avoided by creating rollback segments with higher MINEXTENTS, larger extent sizes or higher OPTIMAL values. However, read consistency errors cannot be avoided by increasing MAXEXTENTS.

Blocking session

A blocking session can cause unlimited growth of a rollback segment. When a transaction writes to an extent in a rollback segment and the extent runs out of space, the Oracle server attempts to reuse the next extent in the rollback segment.

If the extent contains even one rollback entry generated by an active transaction, the extent cannot be reused. In addition, the rollback segment pointer cannot skip an extent and continue writing the transaction to a subsequent inactive extent, so additional extents have to be allocated.

A blocking session has an active transaction that is writing to the extent that the server is trying to access.

When the blocking session is idle for a long time, the DBA needs to cancel the transaction. Once you have determined which idle transaction caused the blocking, you can kill the session. To check for any blocking transaction, you can use the V$SESSION, V$TRANSACTION, and V$ROLLSTAT views.

SELECT s.sid, s.serial#, t.start_time, t.xidusn, s.username

FROM V$SESSION s, V$TRANSACTION t, V$ROLLSTAT r

WHERE s.saddr = t.ses_addr AND t.xidusn = r.usn

AND ((r.curext = t.start_uext -1)

OR ((r.curext = r.extents - 1) AND t.start_uext = 0));
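Once the SID and SERIAL# of the blocking session are known from the query above, you can terminate it; the values here are placeholders:

SQL> ALTER SYSTEM KILL SESSION '9,203';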

Error in taking a tablespace offline

A tablespace with one or more active rollback segments can not be taken offline.

If you cannot take a tablespace offline because it contains online rollback segments, proceed as follows. First, query DBA_ROLLBACK_SEGS to identify the rollback segments in the tablespace that are online. Second, take all the online rollback segments in the tablespace offline by using ALTER ROLLBACK SEGMENT segment_name OFFLINE. Third, locate the transactions that are currently using the rollback segments by checking the V$ROLLSTAT view for the PENDING OFFLINE status. Fourth, identify the sessions that initiated those transactions by joining the V$SESSION and V$TRANSACTION views. Fifth, terminate the sessions. Finally, take the tablespace offline.
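A sketch of the command sequence, assuming a tablespace named RBS that contains one online rollback segment RBS01:

SQL> SELECT segment_name FROM dba_rollback_segs
  2  WHERE tablespace_name = 'RBS' AND status = 'ONLINE';

SQL> ALTER ROLLBACK SEGMENT rbs01 OFFLINE;

SQL> ALTER TABLESPACE rbs OFFLINE;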

Undo statistics - querying

The V$UNDOSTAT view stores statistical data that indicates how the system is working. Each row records the statistics collected in the instance over a 10-minute interval. You use V$UNDOSTAT to estimate the amount of undo space needed for the current load.

This view is available in both SMU and RBU modes, but the statistics are collected only in SMU mode.

The V$UNDOSTAT view contains 15 columns. UNDOBLKS, TXNCOUNT, BEGIN_TIME, END_TIME, and MAXCONCURRENCY are some of the columns in the V$UNDOSTAT view. The UNDOBLKS column stores the total number of undo blocks used by transactions. The TXNCOUNT column returns the total number of transactions executed in the period.

The MAXCONCURRENCY column returns the highest number of transactions executed concurrently in the period.

The UNDOTSN column returns the ID of the last active undo tablespace. MAXQUERYLEN identifies the length of the longest query in seconds. NOSPACEERRCNT returns the number of out-of-space errors.
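For example, to review the collected statistics (columns as described above):

SQL> SELECT begin_time, end_time, undoblks,
  2  txncount, maxconcurrency
  3  FROM V$UNDOSTAT;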

General rollback segment information

Database performance deteriorates if insufficient rollback segments are available for transactions.

To improve this situation, you identify any offline rollback segments that can be brought online before you create new ones.

You can view information about all rollback segments in DBA_ROLLBACK_SEGS. This information is not available in the dynamic views because they show only the rollback segments that are currently in use by the instance. DBA_ROLLBACK_SEGS shows information about rollback segments, including ID, location, type, and status.

A rollback segment is identified by SEGMENT_ID, SEGMENT_NAME, and TABLESPACE_NAME. The type of a rollback segment is specified in the OWNER column as PUBLIC or SYS.
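For example:

SQL> SELECT segment_name, owner, tablespace_name, status
  2  FROM dba_rollback_segs;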

Rollback segment statistics

When tuning the database you can monitor the actual size of rollback segments against their optimal size. This can be done by querying

V$ROLLNAME and V$ROLLSTAT. In V$ROLLNAME, USN is the number of the rollback segment and NAME is the name of the rollback segment.

In V$ROLLSTAT, EXTENTS is the number of extents in the rollback segment, RSSIZE is the current size of the rollback segment in bytes, XACTS is the number of active transactions using the rollback segment, and OPTSIZE is the optimal size.

To verify that the optimal size has been specified for the rollback segment, query V$ROLLSTAT and V$ROLLNAME.

SELECT n.name, s.extents, s.rssize, s.optsize

FROM V$ROLLNAME n, V$ROLLSTAT s

WHERE n.usn = s.usn;

Current rollback segment activity

While performing certain maintenance operations you may want to find out the sessions that are currently using rollback segments. You can retrieve this information

from the V$SESSION and V$TRANSACTION views. Some of the columns in the V$SESSION are: SADDR - session address, USERNAME, SID, SERIAL#.

You query XIDUSN in V$TRANSACTION to identify the rollback segment used by a transaction. UBAFIL, UBABLK, UBASQN, and UBAREC give the current location where the transaction is writing.

USED_UBLK is the number of blocks of rollback information generated by the transaction, START_UEXT is the rollback segment extent at which the transaction started writing,

and START_UBAFIL is the rollback segment file number at which the transaction started writing.

To query the user name, the rollback segment number, and the number of blocks of rollback information generated by each transaction:

SELECT s.username, t.xidusn, t.used_ublk

FROM V$SESSION s, V$TRANSACTION t

WHERE s.saddr = t.ses_addr;

 

Test

5. Retrieve the rows containing statistics about rollback segments from the V$SESSION and V$TRANSACTION views:

SELECT c.username, d.xidusn, d.ubafil

FROM V$SESSION c, V$TRANSACTION d

WHERE c.saddr = d.ses_addr;

 

Identify the features of undo retention control

 

Can be altered dynamically by using the ALTER SYSTEM command

Affects UNDO settings immediately

Minimizes the occurrence of the snapshot too old error

Retains undo data across restarting the instance

Specified by using the UNDO_RETENTION parameter
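For example, a sketch of changing the retention period dynamically (the value is illustrative):

SQL> ALTER SYSTEM SET UNDO_RETENTION = 900;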

 

The sequence of events in which rollback segments are used by transactions

1. The server verifies if the rollback segment has been requested

2. The server allocates the rollback segment

3. The transaction writes to the current extent

4. The transaction finds the next free extent when the current extent is full

5. The server allocates an extent when the next extent is active

 

Identify the characteristics of a rollback segment that avoids read consistency errors

1. High MINEXTENTS

2. High OPTIMAL

3. Large EXTENT size

 

Identify the features of switching undo tablespaces

A DBA can change the UNDO_TABLESPACE parameter

A transaction can continue to run during a switch operation

A switch operation can execute on an online database

A switch operation requires an undo tablespace that is not in use

A pending offline tablespace can not be used by another instance

 

Select the features of automatic undo management

Ensure read consistency by using undo data

Simplify undo management by using undo segments

Control the retention period of undo data

Store undo data in a single undo tablespace

Prevent the ORA-30036 error by dynamic transfer of extents

 

Select guidelines for creating rollback segments

Ensure that the OPTIMAL value is based on the volume of transactions

Ensure that the rollback segment is in user managed tablespace

Ensure that you do not set the PCTINCREASE

Ensure that the rollback segments are created in dedicated tablespaces

Ensure that the MAXEXTENTS value is other than UNLIMITED

 

Sequence of steps that the Oracle server uses to ensure read consistency when a transaction occurs

The current SCN is determined

The data is read

The before image of uncommitted changes is retrieved

The changes are applied from the undo segment to a copy of the data block

 

Sequence the steps to resolve an error in taking a tablespace offline

Query to find the online rollback segments in the tablespace

Take all the rollback segments in the tablespace offline

Locate transactions that are using the rollback segment

Obtain information about the user sessions and related transactions

Terminate sessions that have inactive transactions

Take the tablespace offline

 

 

 

Temporary segments

The amount of sort space necessary is determined by the value of the SORT_AREA_SIZE parameter. If the volume of data being sorted exceeds this value, the data is sorted in several sort runs, and the intermediate results of sorting are stored on disk.

Examples of statements that may need temporary segments:

CREATE INDEX

SELECT with ORDER BY

SELECT with DISTINCT

SELECT with UNION

SELECT with GROUP BY

The server creates temporary segments in either a permanent tablespace or a temporary tablespace.

Temp segments in permanent tablespaces

An instance can have more than one temporary segment in a permanent tablespace.

Temporary segments are created in a permanent tablespace only when the executed statements need more sort space and the user has been assigned a permanent tablespace for sorting.

Server overhead decreases.

 

Temporary segments in permanent tablespaces -

SMON cleans up temporary segments that are no longer needed.

When a permanent tablespace is used for sorting, it may become highly fragmented. Therefore, such a tablespace should be used exclusively for sorts.

 

Temporary segments in a temporary tablespace

The Oracle server creates only a single temporary segment in each temporary tablespace. There is no limit to the number of extents that can be allocated to a temporary segment created in a temporary tablespace; in a permanent tablespace, the number of extents is limited by the tablespace's DEFAULT STORAGE clause.

A temporary segment in a temporary tablespace is created when the first disk sort occurs after startup. The server creates one sort segment for every instance that performs a sort operation. Therefore, in RAC the server may have more than one sort segment in a temp tablespace.

The details of the temporary segment are maintained in the SGA in the Sort Extent Pool. Based on the information in the Sort Extent Pool, transactions that need more space for sorting reuse the extents in the temporary segment.

A temporary segment in a temporary tablespace is released when an instance shuts down.

Temporary segments - guidelines

You should set up different temporary tablespaces based on your sorting needs. Temporary tablespaces improve concurrency and reduce the frequent allocation and deallocation of extents.

Temporary segments use the default storage settings specified for their tablespace. You must define identical values for the INITIAL and NEXT parameters, and you must specify a PCTINCREASE value of zero.

You must specify MAXEXTENTS for permanent tablespaces in which the server creates temporary segments; MAXEXTENTS does not affect temporary segments in temporary tablespaces.

Create tablespaces with different default storage clauses and assign them based on user sorting requirements.
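A minimal sketch following these guidelines (the tablespace name, file name, and sizes are hypothetical):

SQL> CREATE TABLESPACE sort01
  2  DATAFILE '$HOME/ORADATA/u04/sort01.dbf' SIZE 50M
  3  DEFAULT STORAGE (INITIAL 1M NEXT 1M
  4  PCTINCREASE 0 MAXEXTENTS 100)
  5  TEMPORARY;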

 

Sort segment statistics

The views that are used to retrieve statistics for temporary segments are DBA_SEGMENTS, V$SORT_USAGE, and V$SORT_SEGMENT.

The DBA_SEGMENTS view is used to retrieve information on temporary segments. The V$SORT_USAGE view is used to retrieve information on active sorts.

The V$SORT_SEGMENT view describes the status of the Sort Extent Pool. This view is updated only when the tablespace is of the temporary type.

To estimate the size of the temporary tablespaces that are needed to store sort segments, you need to know the number of blocks and extents used by the largest sort operation. This information can be retrieved from the MAX_SORT_SIZE and MAX_SORT_BLOCKS columns.

SELECT max_sort_size, max_sort_blocks

FROM V$SORT_SEGMENT;

 

Temporary segment activity

You can tune a SQL statement based on the sort data from the V$SORT_USAGE view.

SELECT username, contents, extents

FROM V$SORT_USAGE;

 

Test

 

Identify guidelines for using temporary segments

 

Create temporary tablespace according to sort operations

Create and assign independently configured tablespaces

Specify the MAXEXTENTS for a permanent tablespace

Specify storage parameters for temporary tablespace

 

Select the features of the various types of segments

 

One temporary segment is created for every instance

Reclaimed by SMON after a SQL statement completes

Created in a temporary tablespace for the first sort task of an instance

Stores details in the Sort Extent Pool in a temporary tablespace

Fragments a permanent tablespace

 

 

 

 

 

Managing Indexes

 

When an index runs out of storage space you can modify the index storage parameters to increase the space available to the index. The storage available to an index includes the transaction and extent area allocated to the index. You can increase the space available to an index by increasing the maximum number of extents allocated to the index.

ALTER INDEX <> [STORAGE CLAUSE] [INITRANS INTEGER] [MAXTRANS INTEGER];

ALTER INDEX hr.emp_name

MAXTRANS 255

STORAGE (MAXEXTENTS 255);

 

Index - extents allocation

ALTER INDEX hr.emp_ix ALLOCATE EXTENT (SIZE 200K);

 

Index - extents deallocation

Index space is deallocated when the table is truncated. You can manually deallocate the extent:

ALTER INDEX hr.emp_ix DEALLOCATE EXTENT UNUSED;

 

Index information

The DBA_INDEXES and DBA_IND_COLUMNS views provide index information.

SELECT index_name, tablespace_name, index_type

FROM dba_indexes

WHERE owner='HR';

You can use DBA_IND_COLUMNS to display the column names and order in the index.

SELECT column_name, column_position,

column_length, index_name

FROM dba_ind_columns

WHERE index_owner='HR'

ORDER BY index_name;

 

Index space usage – monitoring

 

When you insert, update, or delete key values in an index frequently, the index can waste the acquired space over time. You can use the INDEX_STATS view to analyze the structure of an index.

ANALYZE INDEX hr.emp_ix VALIDATE STRUCTURE;

When you run ANALYZE INDEX, the INDEX_STATS view is populated with index data. The data from this view enables you to develop a history of the space used by an index. The percentage of space used by an index varies according to the frequency with which the server inserts, updates, or deletes the index keys. This percentage value is stored in the PCT_USED column of the INDEX_STATS view. You retrieve the PCT_USED data by querying the INDEX_STATS view.

When the index space being used drops below the average, you can condense the index space. You shrink the index by dropping and rebuilding it or by coalescing it.
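For example, using the hr.emp_ix index from the earlier examples:

SQL> ALTER INDEX hr.emp_ix REBUILD;

SQL> ALTER INDEX hr.emp_ix COALESCE;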

SELECT name, blocks, lf_rows, del_lf_rows

FROM index_stats;

SELECT name, blocks, pct_used

FROM index_stats;

Unused indexes – identifying

If an index is not being used, you can drop the index and improve the performance of the Oracle server.

You monitor index usage by using the ALTER INDEX command with the MONITORING USAGE or NOMONITORING USAGE clause. When you use the ALTER INDEX command, the Oracle server populates V$OBJECT_USAGE with statistics on the index being monitored. You can only view information in V$OBJECT_USAGE for indexes you own. When you specify the MONITORING USAGE clause in the ALTER INDEX command, V$OBJECT_USAGE is reset for the specified index. When the view is reset, the server clears the previous information.
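For example, to start monitoring the EMP_NAME index referenced in the query below:

SQL> ALTER INDEX emp_name MONITORING USAGE;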

SELECT index_name, monitoring, used

FROM v$object_usage

WHERE index_name='EMP_NAME';

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

3. Managing an Oracle Instance

 

System Global Area

 

The Oracle server is made up of the Oracle instance and the Oracle database. The Oracle instance consists of a memory space called the SGA and the background processes. The SGA stores the data and control information of the Oracle server. Every instance has its own SGA, and the Oracle server allocates memory to the SGA when an instance starts. This memory space is released when the instance shuts down. The memory region allocated to the SGA is shared. The SGA memory must be neither swapped nor paged. Because the purpose of the SGA is to store data in memory for fast access, the SGA must be within main memory. If pages of the SGA are swapped to disk, its data is no longer quickly accessible.

The SGA consists of a group of memory structures. These structures are allocated space during instance startup. The three main memory structures are the shared pool, the database buffer cache, and the redo log buffer. There is also a large pool and a Java pool. You can view the SGA memory allocations by using the SHOW SGA command.

The shared pool is used to store information such as the most recently executed SQL statements; the most recently used data dictionary data is also stored in the shared pool.

The database buffer cache stores the most recently used user data. This cache contains the write list and the least recently used (LRU) list. The write list holds buffers that contain data that has been modified but not yet written to disk. The LRU list holds free buffers, dirty buffers, and the pinned buffers that are currently in use. The redo log buffer is used to record the changes to the database.

In releases prior to Oracle9i, the SGA was a static memory space. In Oracle9i the SGA configuration can be changed without shutting down the instance. This means that the SGA can grow and shrink while the instance is running.

The dynamic SGA allows you to set the size to which the SGA can grow by using the SGA_MAX_SIZE parameter.
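A sketch of the idea (the sizes are illustrative): SGA_MAX_SIZE is set in the parameter file, and a component such as the buffer cache can then be resized within that limit while the instance is running.

SGA_MAX_SIZE=128M

SQL> ALTER SYSTEM SET DB_CACHE_SIZE = 64M;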

Background processes

When an instance is started the Oracle database creates a set of background processes – OS processes.

Mandatory background processes:

DBWn – database writer, LGWR – log writer, SMON – system monitor, PMON – process monitor, CKPT – checkpoint.

The DBWn process writes the changed data from the database buffer cache to the data files. The Oracle server processes all database writes in memory before the DBWn process writes the data to the data files. When a transaction modifies data blocks, the before image of the data is written to the rollback segment. The rollback segment to which the before image is written is present in the buffer cache. The committed changes are written to the data files. This means that the data files store the data dictionary and the user objects, which contain the last committed changes.

The LGWR process records the changes that are registered in the redo log buffer to the redo log files. When data blocks are modified in the database buffer cache, the changes are recorded in the redo log buffer. The entries in the redo log buffer are written to the redo log files by the LGWR process. The redo log files are used to record all the changes made to the data. The entries in the redo log buffer are also referred to as redo log data.

The SMON process performs instance recovery by rolling back pending transactions and recovering data that is recorded in the redo log files but not yet recorded in the data files. SMON cleans up the temporary segments that are no longer in use. The temporary segments are used to store data temporarily during transactions. The SMON process recovers the dead transactions skipped during a system failure. The dead transactions are the transactions that were active at the time of the system failure. SMON also coalesces the fragmented free space in the data files.

The PMON process performs process recovery when a user process fails. The PMON cleans up the cache and unlocks the system resources held by the user process.

The CKPT process updates the database status information, such as the database file headers. The update occurs at the checkpoint event. During a checkpoint, the server records the changes in the database buffer cache permanently to the data files. For example, when a redo log file is full, a log switch occurs and a checkpoint is triggered.

In addition to the mandatory processes, there are optional processes – RECO, ARCn, Dnnn, etc.

 

Database files

An Oracle database is a collection of data. The server stores the data in the database and retrieves the information from the database. One of the responsibilities of the DBA is to manage the files. There are 3 types of files – data files, redo log files and control files.

In an Oracle database there can be one or more data files. A data file stores the data dictionary and user objects. The data files also store the last committed changes to the data.

The server reads the data from the data file and stores the information in the database buffer cache. The server does not immediately write modified or new data to the data files; the data is first held in the database buffer cache.

Another type of database file is the redo log file. Every Oracle database has at least two redo log groups, where each group has at least one redo log file. The redo log files are used to record all the changes made to the data. This storage helps ensure recovery of data in case of instance failure. Oracle supports multiplexed redo logs to protect the redo log files against disk failures. This means that you can maintain two or more copies of the redo log files on different disks. The redo log file copies that are maintained on different disks are called mirrored log files. Each member in a log group is a mirrored log file of the same size.

The control files store the information that is needed to maintain and verify database integrity. These files store the database structure, database name, time stamp of creation, and the names and locations of data files and redo log files. The information stored in the control files is used to identify the data files and redo log files. The control files must already be open when you open the database. If the DBA changes the structure of the database, the Oracle server automatically modifies the control file. When there is a system failure, the control files are used along with the redo log files for database recovery. Each Oracle database must have at least one control file.

Database related files

The parameter file is used to define the characteristics of an Oracle instance. When the Oracle server starts an instance, the server uses the parameter file to get the instance configuration. The instance configuration is specified by the initialization parameters in the parameter file. The parameter file specifies the name of the database, the amount of memory to allocate, the names of the control files, and so on.

The password file is another type of database-related file; it is used to authenticate or validate the privileged database users.

The archived redo log file is another type of non-database file. This file stores offline copies of the redo log files, which can be used during recovery. Oracle creates archived redo log files by archiving the redo log files that are full. The online redo log files are reused only after they have been archived. If automatic archiving is enabled, the redo log files are archived by the ARC0 process. In ARCHIVELOG mode the database can be completely recovered from both instance and media failure.

 

Connecting to an Oracle server

There are three types of connections that a database user can establish with the Oracle server: local, two-tiered, and multi-tiered.

When a user connects to a machine that runs the Oracle server to access a database, two processes are involved – the user process and the server process. A user process is created when a user starts a tool such as Oracle Worksheet or Oracle Forms. A user process is the mechanism used to execute the code of an application or an Oracle tool such as OEM. The server process is created when a user connects to the Oracle server by specifying a username, a password, and a database name. The server process is created on the same machine where the Oracle server is running. A server process is the mechanism of interaction between the Oracle server and the user process. The server process handles the operations requested by the user process and performs them on the Oracle server. A communication pathway between a user process and the Oracle server is called a connection. When the user starts to interact with the Oracle server, a connection is established. The user can run a tool on the same machine as the Oracle server. In this case, the communication pathway uses the inter-process communication mechanism that is available on the server.

When the user runs a tool or an application on a client machine, the network software is used to communicate between the user and the Oracle server.

A specific connection of a user is called a session. A session starts when the Oracle server validates the user. The session ends when the user logs off or if there is an abnormal termination of the user process.

User processes

A user process is also called the client. In a client-server environment, this process runs on the machine to which the user directly logged on. The user process starts when a user starts a tool. It ends when the user disconnects or there is a system failure. The user process contains the User Program Interface (UPI). The UPI is a mechanism used by a user process to communicate with a server process. The UPI is a method of standard communication between any client tool or application and the Oracle server. The UPI generates a call to the Oracle server when a user makes a request to the server. Once Oracle processes the request, the result is sent to the user.

Server process

The user requests are submitted by the user process to the server process. The server process then handles these user requests. The server process runs on the same machine as the Oracle server. The user process can run on a separate client. However, the server process interacts closely with the Oracle server. This server can be dedicated or shared.

On a dedicated Oracle server, a server process handles a single user process. The server process is created when a user requests a connection and is terminated when the user disconnects from the server. Every server process uses an area of memory called the PGA. A PGA is created when a server process is started. The server process includes the Oracle Program Interface (OPI). The OPI is used to communicate with the Oracle server at the request of a user process. The server process returns the status information and the results of the user request to the user process.

In a client-server environment there are separate user and server processes. In the MTS configuration, several user processes share a single server process. This is in contrast to the dedicated server configuration, where each user process has a separate server process to connect to.

 

Oracle security types

The Oracle server offers multiple layers of security to manage the data and user information.

Security of data – The Oracle9i database provides data security through enhancements to the Virtual Private Database (VPD) and selective data encryption capabilities. VPD offers fine-grained access control where users can only access the rows of data pertaining to them. Selective data encryption hides very sensitive data in the database from super-privileged users who may misuse their privileges. The data encryption also protects the data from malicious users attempting to read data files from the OS. Oracle9i also provides security through new technologies such as Oracle Label Security and fine-grained auditing. Fine-grained auditing keeps track of all of the database activities, including the statements and their results. This auditing discourages users from misusing their privileges because auditing tracks illegal actions. Oracle9i attaches access control directly to the data through Oracle Label Security. By using this type of data access, the security cannot be bypassed. The technology is ideal for application service providers who host data from multiple companies in the same database and therefore need to separate the data securely.

Security for user

Oracle9i offers a number of security features that scale to Internet size, support millions of users, and are practical to administer. These features include enhancements to database proxy authentication, public key infrastructure (PKI) support, and the new web single sign-on feature. Proxy authentication creates multiple and scalable database sessions. These sessions carry the identity of the Web user. This feature allows enterprises to identify users through all the tiers of the network. The single sign-on feature enables users to access web applications throughout the enterprise using a single account. The DBAs use a single Lightweight Directory Access Protocol (LDAP) directory to manage all of the access control information. This means better security and lower cost. Oracle9i provides PKI integration for easier deployment and management of PKI in the enterprise. PKI-based authentication systems issue digital certificates to users, which authenticate the users directly to servers in the enterprise without direct involvement of an authentication server. A certificate is a data structure that stores the user name, identification information, and public key. The public key is a key the PKI system uses to authenticate the user. A certificate is created when a user's public key is signed by a certificate authority. The certificate ensures that the public key actually belongs to that user.

Security for companies

The principal security challenge for a company is to store the data of different user communities separately. The simplest way of separating data is to create physically separate systems for each community, but this is costly. The Oracle9i database reduces the cost for a hosting provider. The cost-reducing mechanism allows multiple user communities to share a single hardware and software instance while keeping the data of each user community separate. Oracle9i obtains the security of data by using the Oracle9i Virtual Private Database and Oracle Label Security technologies.

 

Stages in processing a query

When a user issues a query, the user process submits the query to the server process. The server process receives the query and processes it.

PARSE

  1. The server receives the query from the user process.
  2. The server validates the syntax of the statement and the security privileges of the user in the shared pool.
  3. The server builds a parse tree.
  4. The server process returns the status information to the user process.

EXECUTE

  1. If the user request is an UPDATE or DELETE command, the server process locks all the rows that would be affected by the statement.
  2. The statement is executed

FETCH

  1. The final stage in processing a query is the fetch. In this stage the server process retrieves the rows and returns them to the user.
  2. The server may need to perform one or more fetches to transfer the results to the user process.

 

Shared pool

The shared pool is a part of the SGA. In Oracle9i the shared pool can be dynamically resized to grow or shrink by using the ALTER SYSTEM SET SHARED_POOL_SIZE command. The shared pool size needs to be an integer multiple of the granule size, and the total SGA size cannot exceed the SGA_MAX_SIZE value. The granule is a new unit of allocation that represents a contiguous area of virtual memory.
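For example, a sketch of resizing the shared pool (the size is illustrative, and the total SGA must remain under SGA_MAX_SIZE):

SQL> ALTER SYSTEM SET SHARED_POOL_SIZE = 64M;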

The shared memory components of the shared pool are the library cache and the data dictionary cache. The library cache stores information about the most recently used SQL statements. This cache stores the execution plan, parse tree, and text of the SQL statement. The execution plan contains the steps determined by the optimizer to run the SQL statement. The information associated with a particular SQL statement is stored in the library cache. If a statement is sent again by any user process before the execution plan is aged out, the execution plan and the parse tree can be reused. In addition to the parse tree, the library cache stores control structures, such as locks and library cache handles. These handles store the location of the library cache and the data on which the locks are applied. The data dictionary cache contains information about the objects in the database, the structure of the database, and the column definitions. The data dictionary cache stores the valid user names and their passwords and the user privileges for the database. The data dictionary cache is also known as the row cache.

 

Database buffer cache

When a query is processed, the server process searches the database buffer cache, which is a part of the SGA, for the data blocks needed for processing. You can set the size of the buffer cache. The database buffer cache holds the blocks of data that are read from the data files. If the server process is unable to find a particular block of data in the database buffer cache, the server reads the data from the data file and places a copy in the database buffer cache. The buffer cache is used to store the most recently and frequently used data blocks. The database buffer cache is a set of database buffers. The size of each buffer in the database buffer cache is equal to the size of a data block in the database. In Oracle9i you can change the buffer cache size dynamically. You can resize the buffer cache by using the ALTER SYSTEM command. The allocation size has to be less than the SGA_MAX_SIZE value. The SGA_MAX_SIZE value specifies the maximum size of the SGA. The size of the database buffer cache is equal to the product of the number of buffers in the database buffer cache and the database block size.

Buffer cache size = number of buffers * DB_BLOCK_SIZE; in Oracle9i, the DB_CACHE_SIZE parameter specifies this size directly.

Two new parameters, DB_KEEP_CACHE_SIZE and DB_RECYCLE_CACHE_SIZE, are introduced in Oracle9i. The DB_KEEP_CACHE_SIZE parameter sizes the KEEP buffer pool, which is used to retain data blocks in memory. The DB_RECYCLE_CACHE_SIZE parameter sizes the RECYCLE buffer pool, which is used to eliminate data blocks from memory when they are no longer in use. The buffer cache contains both modified and unmodified data blocks. The Oracle server uses an LRU algorithm to phase out the buffers that have not been accessed recently. In addition, the algorithm helps make space for new blocks to be accommodated in the database buffer cache.

 

Buffer cache advisory

In Oracle9i, you can dynamically resize the buffer cache to grow or shrink. The allocation size of the buffer cache has certain limitations. The buffer cache advisory helps predict the behavior of various buffer cache sizes. You can dynamically resize the cache by using the ALTER SYSTEM command. The buffer cache advisory feature enables and disables the collection of statistics. You use these statistics to predict the behavior of the cache with different cache sizes.

ALTER SYSTEM SET DB_CACHE_ADVICE = ON|OFF|READY

When DB_CACHE_ADVICE is set to ON, the advisory is on and both CPU and memory overhead is incurred. If you attempt to set the value from OFF to ON, the ORA-4031 error can occur if the server is unable to allocate shared pool memory. The value can be set to ON from READY without this error because the memory is already allocated. When the advisory is set to READY, the advisory is off, but the memory for the advisory is still allocated.

A new view, V$DB_CACHE_ADVICE, is introduced in Oracle9i. This view is used to collect and display the buffer cache advisory information. This dynamic view contains the buffer cache statistics that are gathered. V$DB_CACHE_ADVICE contains rows that predict the estimated number of physical reads for different cache sizes. The rows also compute a physical read factor, which is the ratio of the estimated number of physical reads to the actual number of reads.
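A sketch of reading the advisory data for the default pool:

SQL> SELECT size_for_estimate, estd_physical_read_factor,
  2  estd_physical_reads
  3  FROM V$DB_CACHE_ADVICE
  4  WHERE name = 'DEFAULT';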

 

PGA (Program Global Area)

When a server process is started the Oracle server allocates a PGA. The PGA is a memory buffer that contains the data and control information for a server process. The PGA is a writable memory region that can not be shared. The PGA is allocated when a server process starts and is deallocated when the server process terminates. The PGA memory is classified to differentiate between the tunable and untunable memory. This division is used to determine the part of the PGA memory that is affected by automatic tuning. The tunable area is the area that is used by the SQL working areas. The untunable area comprises the rest of the PGA memory.

Contents of the PGA

  1. Sort area – used for storing intermediate results of sorting
  2. Session information – privileges for the session
  3. Cursor state – indicates a stage in the processing of different cursors that are being used by the session. It is a handle or pointer in memory that is associated with a specific SQL statement.
  4. Stack space – stores the variables and arrays of the session.

 

Execute phase of a DML statement

DML statements need two phases – parse and execute. In the parse phase, a statement is passed from the user process to the server process, and the server process validates the statement for syntax errors. A parsed representation of the statement is loaded into a shared SQL area. Errors in the SQL syntax, together with errors due to user privilege issues, can be trapped during the parse phase. If this phase completes successfully, the next phase is the execute phase. The execute phase consists of several steps. An example of the execute phase:

  1. The server process reads the database blocks and the rollback blocks from the buffer cache.
  2. If the data is not found in the buffer cache, the server process reads it from the data files, placing a copy in the buffer cache.
  3. The server process places locks on the data that is going to be changed.
  4. The before image is stored in the rollback segment.
  5. Finally, the server process records the changes in the buffer cache. The changes are also recorded in the redo log buffer, thereby protecting the after image.
  6. The changed blocks in the buffer cache are marked as dirty buffers.

DELETE and INSERT statements use similar steps. The before image for a DELETE statement contains the column values in the deleted row. In contrast, an INSERT statement only needs the row location.

 

Rollback segments

When you issue a DML statement to change data, the server first saves the old values in a rollback segment. Rollback segments are used to reverse the changes of a rolled-back transaction. Rollback segments also ensure that other transactions do not see uncommitted changes. The Oracle server automatically assigns a transaction to the next rollback segment; the assignment occurs for the first DML statement in the transaction. Transactions that contain only queries can read any rollback segment that may contain old values. The number of transactions that one rollback segment can handle is set when the rollback segment is created; the maximum is OS specific. If needed, rollback segments are used to reverse the changes made to the data. Rollback segments are also used for read consistency and to recover the database to a consistent state in case of failures.
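Although the assignment is automatic, a transaction can request a specific rollback segment; a sketch, assuming a segment named rbs01 exists:

SQL> SET TRANSACTION USE ROLLBACK SEGMENT rbs01;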

 

Redo log buffer

The redo log buffer records all of the changes to the data during DML statement processing. The redo log buffer is a part of the SGA. You can set the size of the redo log buffer in bytes by using the LOG_BUFFER parameter. The redo log buffer stores redo entries. A redo entry records the blocks that are changed, the location of the change, and the new value. The redo entry does not identify the type of block where a change is made; this means that a redo log entry does not distinguish between a data block, an index block, or a rollback segment block. The redo log buffer is used sequentially. The changes made by one transaction are interleaved with changes made by other transactions. The redo log buffer is a circular buffer that is reused after it fills up. However, the buffer is reused only after all the existing redo entries are recorded in the online redo log files. The size of the redo log buffer is fixed, and the buffer is created at instance startup.

 

Database writer

The DBWn writes data from the database buffer cache to the disk when:

1. The DBWn process writes to the data files when the number of dirty buffers reaches a threshold value. The threshold is calculated as a percentage ratio of free list waits to requests. Performance is better when the number of free buffers is high.

2. The DBWn process writes to the data files when a server process is unable to locate a free reusable buffer after scanning the threshold number of buffers.

3. The DBWn process writes to the data files when a timeout occurs (every three seconds).

4. The DBWn process writes when a DBWn checkpoint occurs. The checkpoint is the means of synchronizing the database buffer cache with the data files.

 

Log writer

The LGWR process writes redo entries from the redo log buffer to the redo log files. The LGWR writes when any one of four events occurs:

1. When the redo log buffer is one-third full

2. When a timeout occurs (every 3 seconds)

3. Before DBWn writes modified blocks to the data files

4. When a transaction commits

 

System change number

An SCN is assigned at every transaction commit and uniquely identifies the transaction. When a transaction is committed, the commit is recorded in the rollback segment used by the transaction; the SCN is assigned to the transaction and recorded in the rollback segment.

The SCN is also recorded in the redo log file. The SCNs assigned to transactions are unique for every transaction in the database.

An SCN is used as a time stamp by the Oracle database to synchronize data. The SCN provides read consistency when data is retrieved from the data files. The highest SCN indicates the latest data. The Oracle database uses different schemes to generate SCNs, such as the lock SCN scheme (incremental, used in a dedicated environment) and the Lamport SCN scheme (used in parallel server mode, based on the transactions committed in different instances).

An SCN is also used during recovery of the database.

 

Commit processing: Steps

A transaction is permanently recorded by issuing a commit.

1. A server process places a commit record and its SCN in the redo log buffer.

2. The LGWR writes all the redo log buffer entries up to and including the commit record to the redo log files, so that the changes are not lost in case of failure.

3. The server records the information that indicates the completion of the transaction. This step allows the resource locks that are held on tables and rows to be released.

4. User is notified that commit is complete.

These are the fast-commit stages – only the redo log is written to at commit time. The DBWn independently flushes the dirty buffers to the data files, before or after the commit. Only a single write is needed per commit.

If multiple users issue commits simultaneously, Oracle piggybacks the commits into one write. This batching helps achieve less than one I/O per commit on a busy system.

Fast commit - Advantages

1. Performs sequential writes to the redo log files

2. Writes only the information needed to record the changes to the redo log file

3. Commits multiple transactions simultaneously into a single write

4. Needs only one synchronous write per transaction; the size of the transaction does not matter

 

Managing an Oracle instance

 

Privileged accounts

The SYSDBA privilege includes all system privileges, such as creating a database, starting up and shutting down the database, and altering the database. A SYSDBA can also change the database character set. With SYSOPER you can start up and shut down the database. You can also alter some of the database properties, such as mount/open.

 

Setting up OS authentication

Special privileges are granted to the DBA users for performing administrative tasks. Certain activities, such as shutting down the database, are done at the OS level without logging on to the database.

There are two methods available to authenticate valid IDs: OS authentication and password file authentication. The selection of the method depends on whether the database is administered locally or remotely. If the database is managed locally, you can use either OS authentication or password file authentication. DBAs can also remotely administer a group of databases from a single remote client. If you have a secure database connection, then either the OS authentication method or the password file authentication method can be used. If you do not have a secure database connection, then you use password file authentication, which is the most secure method for remote database administration in that case.

Steps in setting up a user on NT with no password file – OS authentication:

  1. Create a group.
  2. Add the DBA user to the group.
  3. Set REMOTE_LOGIN_PASSWORDFILE = NONE in init.ora. This means that privileged connections are not allowed over non-secure connections.

 

REMOTE_LOGIN_PASSWORDFILE

To start up a database instance from a remote machine, you need an initialization file on that machine. You set the value for the password file parameter in the init.ora file to access the remote database. The REMOTE_LOGIN_PASSWORDFILE parameter can be set to NONE (the password file does not exist and no privileged connections are allowed over non-secure connections; this is the default), EXCLUSIVE (the password file is used by only one database and can contain users other than SYS, so the privileges can be granted to individual users), or SHARED (only SYSDBA and SYSOPER are recognized). The greatest level of security is to set REMOTE_LOGIN_PASSWORDFILE to EXCLUSIVE.

 

Passwordfile

When the password file is used, the SYS user can grant DBA privileges to individual users. The password file is usually in the ORACLE_HOME\database directory on NT and in ORACLE_HOME/dbs on UNIX.
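A sketch of creating a password file with the orapwd utility and then granting a privileged role (the file name, password, and user are hypothetical):

$ orapwd file=$ORACLE_HOME/dbs/orapwDB01 password=secret entries=5

SQL> GRANT SYSDBA TO hr;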

 

PARAMETER FILE

The parameter file is stored in the ORACLE_HOME/dbs directory on UNIX or the ORACLE_HOME\database directory on NT. The parameter file is read only during instance startup. The parameters are specified in the parameter=value format. Character literals must be enclosed in double quotation marks. Some parameters, such as CONTROL_FILES, can take multiple values; these must be in parentheses and separated by commas. Comment lines begin with #. The IFILE parameter is used to include additional parameter files in the parameter file. If the OS is case-sensitive, filenames in the parameter file are also case-sensitive.

Essential parameters

DB_NAME

DB_CACHE_SIZE

DB_BLOCK_SIZE

COMPATIBLE

CONTROL_FILES

SHARED_POOL_SIZE

BACKGROUND_DUMP_DEST

USER_DUMP_DEST

 

Undo mode – initialization parameters

The Oracle server manages rollback information by using either rollback segments or undo tablespaces. You must specify the undo management mode at instance startup by using the UNDO_MANAGEMENT and UNDO_TABLESPACE parameters. When you use the system-managed undo (SMU) scheme instead of the rollback-managed undo (RBU) scheme, set UNDO_MANAGEMENT to AUTO. Setting UNDO_TABLESPACE selects the undo tablespace. When you omit UNDO_TABLESPACE, the server selects the first available undo tablespace in the database. When there is no undo tablespace available, the instance uses the SYSTEM rollback segment. The UNDO_RETENTION parameter contains the retention time for undo data. The default is 300 seconds (5 minutes).

 

When the instance runs in SMU mode, the UNDO_SUPPRESS_ERRORS parameter determines whether error messages raised by RBU operations are suppressed or displayed. When UNDO_SUPPRESS_ERRORS=TRUE, the error messages are suppressed. When the parameter file contains parameters that relate only to RBU, those parameters are ignored.

Example:

UNDO_MANAGEMENT=AUTO

UNDO_TABLESPACE=UNDOTBS

UNDO_RETENTION=300

UNDO_SUPPRESS_ERRORS=TRUE

 

In RBU mode, UNDO_MANAGEMENT=MANUAL. When you do not specify UNDO_MANAGEMENT, the instance starts in RBU mode. When the instance starts up, the server brings a number of rollback segments online. The number is determined by the values of the ROLLBACK_SEGMENTS, TRANSACTIONS, and TRANSACTIONS_PER_ROLLBACK_SEGMENT parameters. The ROLLBACK_SEGMENTS parameter allocates rollback segments to the current instance. When you set this parameter, the instance acquires all of the rollback segments you specify, even if the number exceeds the minimum number needed by the instance. The parameter is static.

The TRANSACTIONS static parameter specifies the maximum number of concurrent transactions. The size of the SGA and the number of rollback segments that are allocated increase with the parameter value. You specify the number of concurrent transactions that each rollback segment can handle by using the TRANSACTIONS_PER_ROLLBACK_SEGMENT parameter. You determine the minimum number of rollback segments acquired at startup by dividing the TRANSACTIONS parameter value by the value of TRANSACTIONS_PER_ROLLBACK_SEGMENT and rounding up. The example below yields 51/10, or approximately 6.

UNDO_MANAGEMENT=MANUAL

ROLLBACK_SEGMENTS=(rbs_01,rbs_02,rbs_03)

TRANSACTIONS=51

TRANSACTIONS_PER_ROLLBACK_SEGMENT=10

MAX_ROLLBACK_SEGMENTS=5

 

License parameters

A user site can have a concurrent-user or a named-user license. When your site is licensed for concurrent use, you can track and limit the number of sessions that are concurrently connected to the database. You can use named-user licensing to limit the number of users. When you have a license, you must control the licensing facilities. You can enable the licensing facilities and set the appropriate limits by setting the LICENSE_MAX_SESSIONS (when the number of sessions is exceeded, only RESTRICTED SESSION users can connect; the default is zero, which means unlimited), LICENSE_MAX_USERS, and LICENSE_SESSIONS_WARNING (additional users can connect, but the server writes a message in the alert file for every new connection) parameters.
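A hypothetical init.ora fragment (the limits are illustrative):

LICENSE_MAX_SESSIONS=100

LICENSE_SESSIONS_WARNING=80

LICENSE_MAX_USERS=200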

 

Commonly modified parameters

The most commonly modified parameters are IFILE, LOG_BUFFER, MAX_DUMP_FILE_SIZE, PROCESSES, SQL_TRACE and TIMED_STATISTICS.

 

SPFILE

SPFILE is the server-side parameter file. You use an SPFILE in RAC, where multiple Oracle instances access a single database. The parameters that relate to all instances are termed global parameters. You create an SPFILE from an init.ora file:

CREATE SPFILE=’$ORACLE_HOME/dbs/spfiledb01.ora’ FROM PFILE=’$ORACLE_HOME/dbs/initdb01.ora’;

Only the SYSDBA can create SPFILE. The CREATE SPFILE can be executed before or after the database is opened.

In RAC, multiple instances access the same SPFILE. In the RAC environment, a method called Cache Fusion is used to transfer the latest data blocks between instances. During Cache Fusion, the instance that holds the latest data blocks transfers them to the instance that wants to perform an operation on the data. The transfer of the data blocks is performed through the buffer cache. During the transfer of data from one instance to another, the system keeps track of the data by using a past image (PI): every instance maintains an image of the data that was transferred to another instance, and each PI is assigned an incremental SCN. You can configure multiple SPFILEs in RAC, but only one can be used at a time. You create a single SPFILE in RAC by combining the PFILEs from the various instances into a text file, and you then change the parameters to the SID.parameter=value format. If you want a parameter to apply to all instances, you use the form *.parameter=value, for example *.UNDO_RETENTION=10. You then create the SPFILE from the text file.
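A sketch of the combined text file format, with hypothetical instance SIDs DB011 and DB012:

*.UNDO_MANAGEMENT=AUTO

DB011.UNDO_TABLESPACE=UNDOTBS1

DB012.UNDO_TABLESPACE=UNDOTBS2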

 

Instance startup stages

NOMOUNT – only the instance is started. You use this mode when you create the database or re-create control files. When the instance is started, the server uses the parameter file to allocate the SGA and start the background processes. In NOMOUNT mode, the trace files and alert file are also opened. The database name must be specified either in the DB_NAME parameter or in the STARTUP command.

MOUNT – used for renaming data files, enabling and disabling redo logs, and performing full database recovery. During this stage the Oracle server locates the control files. Once the control files are opened, they are read to obtain the names and status of the data files and redo log files. However, at this point no verification checks are performed.

OPEN – all users can connect. The server verifies that all of the data files and redo log files can be opened. Next, the server opens the online data files and redo log files. At the time of opening, if any of the data files are not present, the server returns an error and does not open the database. In this stage the server synchronizes the data files. Finally, SMON performs instance recovery, if needed.
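A sketch of stepping through the three stages manually:

SQL> STARTUP NOMOUNT

SQL> ALTER DATABASE MOUNT;

SQL> ALTER DATABASE OPEN;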

 

ALTER SYSTEM QUIESCE RESTRICTED – this command waits until all non-DBA transactions are complete. In addition, inactive sessions are not allowed to become active. It cannot be used for backup. To restore normal operation, use ALTER SYSTEM UNQUIESCE.

 

Shutting down the database

  1. First, Oracle writes all the buffer cache changes to the data files. The contents of the redo log buffer are also written to the online redo log files.
  2. The online data files and redo log files are closed.
  3. The database is closed, but it is still mounted and the control file is open.
  4. The database is dismounted – the control file is closed, but the instance is still running.
  5. The instance is shut down – the trace files and alert file are closed.
  6. The SGA is deallocated from memory and the background processes are terminated.

 

Dynamic performance tables

The SYS user owns the dynamic performance tables. Most of the Oracle users are not allowed to use these tables.

V$SPPARAMETER – new in Oracle9i. Shows the contents of the SPFILE. The V$SPPARAMETER view returns NULL values when a PFILE was used for startup. The columns are SID, NAME, VALUE, ISSPECIFIED, ORDINAL, and UPDATE_COMMENT.

Most V$ performance views are accessible only after the database is mounted. However, the V$ views that read their data from memory are accessible even in NOMOUNT mode.

ACCESSIBLE IN NOMOUNT

V$PARAMETER – initialization parameters.

V$SGA – summary information on the SGA. Used in tuning the shared server.

V$OPTION – options installed

V$PROCESS – currently active processes – SMON, PMON, etc

V$SESSION – current session information

V$VERSION – database and OEM versions

V$INSTANCE – the state of the current instance.

 

ACCESSIBLE IN MOUNT

V$THREAD – redo log groups, thread # status of the thread

V$CONTROLFILE – names of the control files

V$DATABASE – db name and creation date

V$DATAFILE – data file name, status and other control file info

V$DATAFILE_HEADER – data file header info

V$LOGFILE – online redo log files

 

Dynamic buffer cache – parameters

The buffer cache in Oracle9i can be dynamically resized to grow or shrink. There are three parameters that store values for sizing the buffer cache. The buffer cache consists of independent subcaches for buffer pools. The DB_BLOCK_SIZE parameter determines the primary block size, which is used for the SYSTEM tablespace. You use ALTER SYSTEM to dynamically resize the buffer cache – for example, ALTER SYSTEM SET db_cache_size=50M. The three parameters that define the sizes of the buffer caches for the primary block size, all of which can be changed dynamically, are:

DB_CACHE_SIZE – default 48M; the size of the DEFAULT buffer pool for buffers with the primary block size. It must be at least one granule in size. A value of zero is illegal because this pool caches blocks of the standard block size, which is the block size of the SYSTEM tablespace.

DB_KEEP_CACHE_SIZE – the size of the KEEP buffer pool. The KEEP buffer pool retains the blocks of schema objects in memory. The size of the buffers in the KEEP buffer pool is the primary block size.

DB_RECYCLE_CACHE_SIZE – the size of the RECYCLE buffer pool. The RECYCLE buffer pool eliminates data blocks from memory as soon as they are no longer needed. The size of the buffers in this pool is the primary block size. There is no default.

 

Dynamic initialization parameters

Some parameters can be changed while the instance is running. You can set a dynamic initialization parameter by using either ALTER SESSION SET parameter = value, which applies only to the current session, or ALTER SYSTEM SET parameter = value, which is a global change valid until the instance is shut down or the parameter is changed again. ALTER SYSTEM SET ... DEFERRED modifies the parameter value only for sessions that connect in the future; currently connected sessions do not see the changed value. To display information on all modified parameters, you query V$PARAMETER or V$SYSTEM_PARAMETER (which is independent of the session). They have the following columns:

ISSES_MODIFIABLE – parameter can be modified by the ALTER SESSION

ISSYS_MODIFIABLE – can be modified by ALTER SYSTEM

ISMODIFIED – parameter already modified

ISADJUSTED – if the RDBMS adjusted the input for more appropriate value
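For example (SQL_TRACE is session-modifiable; the query is a sketch):

SQL> ALTER SESSION SET SQL_TRACE = TRUE;

SQL> SELECT name, value, isses_modifiable, issys_modifiable
  2  FROM V$PARAMETER
  3  WHERE name = 'sql_trace';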

 

Diagnostic files – difference

Trace files are used to track internal errors. Provide information for tuning applications and instances

Contain information about internal errors detected by background processes

Can be generated by server processes

Alert files contain a log of messages in chronological order

Contain messages for errors that occur while the Oracle instance is running

The BACKGROUND_DUMP_DEST usually points to ORACLE_HOME\RDBMS\TRACE on Windows or $ORACLE_HOME/rdbms/log on UNIX.

Trace file parameters

BACKGROUND_DUMP_DEST

USER_DUMP_DEST

MAX_DUMP_FILE_SIZE
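To check the current settings of these parameters, for example:

SQL> SHOW PARAMETER DUMP_DEST
SQL> SHOW PARAMETER MAX_DUMP_FILE_SIZE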

 

 

 

 

Globalization Support

 

 

Oracle globalization support enables you to interact with a database in your native language and run applications in different language environments. Different countries may use different data formats. With globalization support you can display data in the format needed by different regions or territories.

There are certain restrictions on the use of globalization support – for example:

  1. All keywords are displayed in English
  2. You cannot use a fixed-width multi-byte character set as the database character set.
  3. Variable-width character sets use space more efficiently than fixed-width character sets: they use only the space needed to store the data, whereas fixed-width character sets allocate the full width regardless of the actual data size.

Character encoding schemes

You can use character encoding schemes to interpret character data into meaningful symbols between a workstation and a host machine. Different classes of character encoding schemes are available, such as single-byte character sets, varying-width multi-byte character sets, fixed-width multi-byte character sets, and Unicode character sets.

Single-byte character sets are used for European and Middle Eastern languages. In these character sets each character occupies one byte. The two types of single-byte character sets are single-byte 7-bit and single-byte 8-bit. In a single-byte 7-bit character set, 7 bits define up to 128 characters. This supports the 26 letters of the Latin alphabet, A to Z, and normally supports only one language.

In a single-byte 8-bit character set, eight bits define up to 256 characters. This supports a group of languages, such as a group of Western European languages. Examples of single-byte character sets are ASCII 7-bit American, ISO 8859 Western European, and EBCDIC Code Page 500 8-bit Western European.

Another character encoding scheme is the varying-width multi-byte character set, which represents each character with one or more bytes. These character sets support Asian languages. Examples are Japanese Extended UNIX Code and Chinese GB2312. Some varying-width multi-byte character sets use the value of the most significant bit to differentiate between single-byte and double-byte characters; others use control codes to make the distinction.

Fixed-width multi-byte character sets provide support similar to varying-width character sets; the only difference is that each character occupies a fixed number of bytes.

Examples of fixed-width multi-byte character sets are 16-bit Japanese (JA16EUC), 16-bit Japanese (JA16SJIS), and Korean Microsoft Windows Code Page 949. The Traditional Chinese Microsoft Windows Code Page 950 also belongs to the fixed-width multi-byte character sets.

The Unicode character set is another type of character encoding scheme. It represents all of the characters needed for computer operations, including technical symbols and characters used in publishing. Unicode version 2.0 can represent a total of 38,885 characters. Unicode offers different character formats: one is the Universal Character Set two-byte form, UCS2; another is the Universal Character Set Transformation Format, UTF8. UCS2 is a double-byte fixed-width format, and UTF8 is a varying-width multi-byte format.

Character set features

You should preferably select a character set based on a character encoding scheme supported by the OS platform, although the character set does not have to be the same as the OS character set. The Oracle server provides database character sets and national character sets; both are defined at database creation. You cannot change the character sets after database creation – re-creating the database is the only way to change them. The database character set defines the storage of data columns of type CHAR, VARCHAR2, CLOB, and LONG.

The national character set defines the storage of data columns of type NCHAR, NVARCHAR2, and NCLOB. The national character set does not support the LONG data type. Both the database and the national character sets can store varying-width multi-byte character sets. Fixed-width multi-byte character sets can be stored only by the national character set.

Guidelines on selecting the database and national character sets

Select a closely related database and national character set

Consider performance for string operations

Consider the shortcomings of converting between character sets

 

Globalization parameters – server side

There are 3 methods for specifying the Globalization support parameters:

  1. Initialization parameters
  2. Environment variables
  3. ALTER SESSION parameters

You can use initialization parameters to specify the default language-dependent behavior for the server. The server defaults do not affect the settings for the client. You can use environment variables to specify language-dependent behavior defaults for a session on the client machine; these override the server defaults. You can use ALTER SESSION parameters to change language-dependent behavior for a session; these override the defaults for the session or the server. The initialization parameters that specify the default language-dependent behavior are NLS_LANGUAGE and NLS_TERRITORY.

You can use NLS_LANGUAGE to specify the language-dependent session characteristics, such as the language of error messages. The server messages are stored in binary-format files; multiple versions of these files exist, one for each supported language, following specific naming conventions. For example, the file storing server messages in French is ORAF.MSB. Another session characteristic is the language used for the display of day and month names and their abbreviations: if the default language is French, the day and month names are also in French. The language-dependent session characteristics also include symbols such as AM, PM, and BC, and a default sorting sequence for character data, which is used when the ORDER BY clause is specified.

The NLS_TERRITORY parameter specifies the conventions for the default date and numeric formatting characteristics such as date format, local currency format, and ISO currency symbol. Some European territories have dual currency support for the EURO currency. Some of the other default conventions that are specified by the NLS_TERRITORY are the decimal character and group separator, the list separator, the week start day, the credit and debit symbols and the ISO week number calculation.

The NLS_LANGUAGE and the NLS_TERRITORY parameters can be specified either as initialization parameters or as ALTER SESSION parameters, but they cannot be specified as environment variables.

Derived Globalization Support Parameters

To override the default values of NLS_LANGUAGE or NLS_TERRITORY you can set the respective derived parameters.

The derived parameters of NLS_LANGUAGE are NLS_DATE_LANGUAGE and NLS_SORT. NLS_DATE_LANGUAGE specifies the language for day and month names, overriding the value set by NLS_LANGUAGE; it has the same syntax as NLS_LANGUAGE. NLS_SORT specifies the linguistic sort sequence for character data – either BINARY or the name of a linguistic sort sequence – and overrides the value defined by NLS_LANGUAGE. The derived parameters of NLS_TERRITORY are NLS_CURRENCY, NLS_ISO_CURRENCY, NLS_DATE_FORMAT, and NLS_NUMERIC_CHARACTERS. NLS_CURRENCY specifies a new local currency symbol – the character string returned by the number format mask L – and overrides the value defined by NLS_TERRITORY.

The NLS_ISO_CURRENCY parameter specifies the character string returned by the number format mask C – the ISO currency symbol. You can use NLS_ISO_CURRENCY to eliminate the ambiguity produced by local currency symbols: for example, the local currency symbol $ can mean either the US dollar or the Australian dollar. Unique international currency symbols for specific territories or currencies are defined by the ISO specifications; the ISO currency symbol is USD for the US dollar and AUD for the Australian dollar.

The NLS_DATE_FORMAT parameter defines a new default date format to be used with the TO_CHAR and TO_DATE functions. The value can be any valid date format mask, enclosed in double quotation marks. The NLS_NUMERIC_CHARACTERS parameter specifies a new decimal character and group separator. You can use any single-byte character for either, but the decimal character must be different from the group separator. For example, the mask G999G999D99 renders 555555.55 as 555,555.55. Certain characters cannot be used as decimal or group separators: plus, minus, angle brackets, and the numeric digits.
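A small illustration of the separators and the format mask mentioned above (session-level only):

SQL> ALTER SESSION SET NLS_NUMERIC_CHARACTERS = '.,';
SQL> SELECT TO_CHAR(555555.55, 'G999G999D99') FROM dual;

With these settings the query returns 555,555.55.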

Overriding the default behavior

The value of the NLS_LANG environment variable overrides the values of the globalization support parameters. The syntax is NLS_LANG=<language>_<territory>.<charset>

The language component specifies conventions such as the language used for server messages and for day and month names. It overrides the value of NLS_LANGUAGE and supports the same features as NLS_LANGUAGE.
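For example, a French client might set (values illustrative):

NLS_LANG=FRENCH_FRANCE.WE8ISO8859P1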

 

Overriding server settings for a session

When NLS_LANG is defined for a client, an implicit ALTER SESSION command is executed for any session: ALTER SESSION SET NLS_LANGUAGE=<language> NLS_TERRITORY=<territory>;

Globalization and sorting

The conventional sorting mechanism is binary sorting. By setting the NLS_SORT parameter you specify the type of sort for character data, overriding the value defined by NLS_LANGUAGE.

Linguistic sorting of multi-byte character sets is not supported. Therefore, if the database character set is multi-byte, the result of any sort follows the binary sort sequence.

ALTER SESSION SET NLS_SORT=german;

NLS_COMP controls the method used by comparison operators such as <, >, and =. If the parameter is set to ANSI, comparisons are done by the linguistic sort sequence.

Certain parameters are not supported in SQL functions: NLS_LANGUAGE, NLS_TERRITORY, and NLS_DATE_FORMAT.

In Oracle9i you can create linguistic indexes by using the NLSSORT SQL function:

CREATE INDEX nls_ename ON employees (NLSSORT(first_name, 'NLS_SORT=SPANISH'));

 

Globalization support view

 

NLS_INSTANCE_PARAMETERS - what is set in the parameter file.

SELECT * FROM nls_instance_parameters;

 

NLS_SESSION_PARAMETERS - session NLS parameters.

SELECT * FROM NLS_SESSION_PARAMETERS;

 

V$NLS_VALID_VALUES - dynamic performance view. Used for retrieving the valid values for LANGUAGE, TERRITORY, etc.

 

V$NLS_PARAMETERS - dynamic performance view. Current values of the globalization support parameters.

 

Practice:

 

1. Use ALTER SESSION to change globalization support language to Japanese and country to Japan.

SQL> ALTER SESSION SET nls_language=japanese nls_territory=japan;

 

2. Retrieve information about the characterset

SELECT parameter, value FROM nls_database_parameters

WHERE parameter LIKE '%CHARACTERSET%';

 

 

CREATING A DATABASE

 

OFA Guidelines

The OFA standard provides guidelines for organizing the OS directory structure. To create an OFA-compliant directory structure you create a separate directory for each database under the Oracle data directory.

You place the control files, redo log files, and data files under the specific database directory. Another guideline for an OFA-compliant directory structure is to store groups of objects with different fragmentation characteristics in different tablespaces.

You place objects that contend for disk resources in different tablespaces; that is, you separate data that participates in disk contention onto different physical disks. You place tables that need daily or monthly backups in different tablespaces. You also keep at least 2 copies of the control file on different physical disks, and every online redo log group should have at least 2 members on 2 different disks.

 

Oracle software directory structure

ORACLE_HOME - main Oracle software directory.

BIN - binary files - .dll and .exe files.

DBS - message files and scripts for all databases.

ORADATA - contains a subdirectory for every database, including control files, redo log files and data files.

ORAINST - installation files.

RDBMS - server files, scripts and libraries. Also contains trace files and readme files.

PLSQL - message files and SQL scripts and demo files for PL/SQL.

NETWORK - message files for Oracle Net.

 

Database creation requirements

Distribution of OS files: use OMF to maintain your files easily.

Size of data files, database blocks (the standard block size plus up to four non-standard sizes), tables and indexes.

Character set of the database.

Use of undo tablespaces

 

Database creation - prerequisites

Oracle software must be installed

The directories for software and database files must be established

The DBA must have a privileged account authenticated by the OS or a password file.

The server has to have enough memory to start an instance (at least 6.5M for small databases)

Sufficient space for database files, control files and redo log files.

 

Physical database design - guidelines

2 control files on different disks

Multiplexed online redo log files on different disks

Separate application, temporary and index data files on different disks

 

Database: manual creation

Unique instance and database name

Specify character set - US7ASCII or WE8MSWIN1252.

Set the OS variables - register instance, database character set, etc.

Create parameter file - SGA size, name of the database, control files, block sizes, etc.

Set Oracle services

Create password file

Start an instance

Create the database

Execute scripts generating the data dictionary views

 

Environmental variables

The parameters reside in the HKEY_LOCAL_MACHINE\SOFTWARE\ORACLE subkey. Set the ORA_NLS33 registry variable when using a database character set other than the default US7ASCII.

 

Creating password files

When you grant SYSDBA and SYSOPER privileges to a user the user name and privilege information are added to the password file.

D:\> ORAPWD FILE=d:\oracle\ora9i\orclpwd.ora PASSWORD=eagle ENTRIES=5

 

Parameter file: types

PFILE - a text file; it does not record changes to dynamic parameters. After database creation you can use either a PFILE or an SPFILE.

SPFILE is recommended for RAC, and for performance management, because changes persist across shutdowns.

Only users with SYSDBA or SYSOPER privileges can create an SPFILE.
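For example, to create an SPFILE from an existing PFILE (the path is illustrative):

SQL> CREATE SPFILE FROM PFILE='e:\ora9i\database\initora9i.ora';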

 

SQL> STARTUP NOMOUNT PFILE=e:\ora9i\oradata\ora9i\initora9i.ora

 

SQL> CREATE DATABASE ora9i
DATAFILE 'e:\ora9i\oradata\ora9i\system1.dbf' SIZE 250M REUSE
LOGFILE 'e:\ora9i\oradata\ora9i\log1.log' SIZE 5M REUSE,
        'e:\ora9i\oradata\ora9i\log2.log' SIZE 5M REUSE,
        'e:\ora9i\oradata\ora9i\log3.log' SIZE 5M REUSE
CONTROLFILE REUSE
MAXLOGFILES 32
MAXDATAFILES 30
CHARACTER SET WE8MSWIN1252
DEFAULT TEMPORARY TABLESPACE def_temp
TEMPFILE 'e:\ora9i\oradata\ora9i\def_temp.tmp' SIZE 10M REUSE
UNDO TABLESPACE undo_tbs
DATAFILE 'e:\ora9i\oradata\ora9i\undo1.dbf' SIZE 50M REUSE;

 

Database creation errors

Syntax errors

Files already exist

OS errors

Insufficient space for files

 

To recreate after the failure:

Shutdown instance

Delete all OS files

Restart the instance in NOMOUNT

Recreate with REUSE

 

 

 

 

Data dictionary and standard packages

 

The data dictionary is the central source of information for an associated database. It records, verifies, and provides the latest information about that database, describing:

Logical structure of the database

Physical database structure

Names, definitions of schema objects and space allocated

Integrity constraints

Database users and privileges

Auditing information

 

Data dictionary components

Consists of base tables (the first objects created by the Oracle database, during the SQL.BSQ run) and data dictionary views created during database creation with CATALOG.SQL in ORACLE_HOME\rdbms\admin\.

CATALOG.SQL is automatically run by the Oracle server

CATALOG.SQL is run manually during migration or when upgrading to a new Oracle release

Needs a SYS connection

 

Types of dictionary views

DBA_ -info from all the schemas

ALL_ - info about all objects accessible to the user

USER_ - info about objects owned by the user

 

Views

View                    Description
----------------------  -----------------------------------------
DICTIONARY              General overview
DICT_COLUMNS

DBA_TABLES              Information about objects – tables,
DBA_OBJECTS             constraints, LOBs and columns
DBA_LOBS
DBA_TAB_COLUMNS
DBA_CONSTRAINTS

DBA_USERS               User privileges and roles
DBA_SYS_PRIVS
DBA_ROLES

DBA_EXTENTS             Space allocation for database objects
DBA_FREE_SPACE
DBA_SEGMENTS

DBA_ROLLBACK_SEGS       General database structures
DBA_DATA_FILES
DBA_TABLESPACES

DBA_AUDIT_TRAIL         Auditing information
DBA_AUDIT_OBJECTS
DBA_AUDIT_OBJ_OPTS

 

Dynamic performance tables

 

The dynamic performance tables are called virtual tables because they exist in memory only as long as the instance is running, and they contain real-time information. They hold data about the availability of objects – locks, session status, etc. The underlying tables are named X$ and are owned by SYS; they cannot be updated. The views built on them are identified by the prefix V_$, and their public synonyms use V$. V$FIXED_TABLE contains information about all the dynamic performance tables and views.
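For example, to list the fixed views whose names begin with V$LOG (an illustrative query):

SQL> SELECT name FROM V$FIXED_TABLE WHERE name LIKE 'V$LOG%';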

 

Creating PL/SQL functionality

The CATPROC.SQL script is run automatically when you create a database. It is in the ORACLE_HOME\rdbms\admin directory.

 

Administrative scripts

 

UTL*.SQL – create additional views and tables for database utilities – UTLSAMPL.SQL creates and populates sample tables; UTLDTREE.SQL enables metadata management by creating objects and views that show object dependencies.

 

CAT*.SQL – create data dictionary tables and views – for example CATALOG.SQL; the RMAN recovery catalog is created with CATRMAN.SQL.

 

DBMS*.SQL – create database package specifications; DBMSPOOL.SQL enables DBAs to lock PL/SQL packages, SQL statements, and triggers into the shared pool.

 

PRVT*.PLB – create package bodies for the packages created by the DBMS*.SQL scripts. The PL/SQL source code in these scripts is in encrypted (wrapped) format.

 

NO scripts – remove data dictionary objects for various optional services or components installed with the Oracle server, such as queuing and partitioned tables and indexes.

 

Migration scripts – for migration, upgrades and downgrades.

 

Java scripts – only useful if you install Jserver with your Oracle database.

 

 

Stored procedures: Properties

Stored in the data dictionary of the database. Do not need recompilation before execution. Can be used by many users at the same time. Reduce memory requirements.

 

Packages

A package is a group of stored procedures, functions, variables, and data types stored in the database as a single unit. It consists of a specification and a body, which are stored separately in the database. The package specification is the interface to the application and declares all public constructs of the package – the data types, variables, constants, exceptions, cursors, and subprograms that can be directly accessed by the users of the package. The package body defines the public constructs declared in the specification. The package body also declares and defines private constructs – variables, cursors, and procedures that are hidden from the users of the package and can be accessed only by the procedures within the package.

Another property of a package is that when it is called for the first time, the entire package is loaded into memory in one operation. After it is loaded and compiled, a package remains cached in memory, so its contents can be shared by multiple applications and users at the same time.

 

Stored program units – benefits

 

You can use stored PL/SQL program units to access and manipulate data. They are stored as schema objects. Procedures, functions, and packages are examples of stored program units. These units reduce disk retrieval because after they are called for the first time they are cached in the shared pool. Another benefit is that they enforce data security: database operations can be restricted by allowing users to access data only through procedures and functions. Finally, multiple users can share a single copy of the program unit.

 

Packages supplied by Oracle

 

DBMS_LOB – provides routines for performing operations on LOBs

DBMS_SESSION – provides access to ALTER SESSION and SET ROLE statements.

DBMS_UTILITY – provides utility routines to analyze and compile schema and resolve names.

DBMS_SPACE – provides information about availability of segment space.

DBMS_ROWID – creates and obtains information about ROWIDs

DBMS_SHARED_POOL – enables DBAs to keep objects in the shared pool and protect them from being aged out.
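A minimal sketch of pinning a package with DBMS_SHARED_POOL (the package name is only an example):

SQL> EXECUTE DBMS_SHARED_POOL.KEEP('SYS.DBMS_UTILITY');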

 

Invalid objects – specifications

Every schema object has a valid or invalid status (for example, a view based on a table that has been dropped becomes invalid). You can retrieve the status of objects by querying DBA_OBJECTS:

SELECT object_type, object_name

FROM DBA_OBJECTS

WHERE object_type = 'PACKAGE BODY'

AND status = 'INVALID';

 

Invalid objects – troubleshooting

Tables, synonyms, and sequences are always valid. Usually, objects become invalid after a RENAME or DROP is performed. A dependent object may also be INVALID if dependent views and stored procedures are loaded with Import. The change of status occurs because Import may not be able to create a dependent object after creating the referenced object.
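Invalid objects are normally recompiled automatically the next time they are referenced; to recompile one manually, for example (the object name is hypothetical):

SQL> ALTER PACKAGE hr.emp_mgmt COMPILE BODY;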

 

 

 

Maintaining redo logs and control files

 

If the control file becomes unavailable, the database cannot function properly.

The control file stores:

DB_NAME

Names and locations of the data files and online redo log files.

Tablespace names

Timestamp of database creation

Current log sequence number

Checkpoint information

Log history

Backup information

Archive log information

 

Querying control file information

You can use V$PARAMETER, V$CONTROLFILE, V$CONTROLFILE_RECORD_SECTION.

 

V$CONTROLFILE contains the name column

SQL> SELECT name FROM V$CONTROLFILE;

 

V$PARAMETER –

SQL> SHOW PARAMETER CONTROL_FILES

 

V$CONTROLFILE_RECORD_SECTION – the different sections of the control file – DATABASE, REDO LOG, DATAFILE, FILENAME, etc. It also contains the size in bytes, the number of records allocated to each section, the index positions of the first and last records, and the last record ID.

SQL> SELECT type, records_total

FROM V$CONTROLFILE_RECORD_SECTION

WHERE type='DATAFILE';

 

Multiplexing control files

 

You should have at least 2 identical copies of the control file stored on separate disks – multiplex them. You can specify up to 8 control files.
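A sketch of adding a second control file when using an SPFILE (the paths are illustrative): change the parameter, shut down, copy the existing control file to the new location at the OS level, and restart.

SQL> ALTER SYSTEM SET control_files =
'e:\ora9i\oradata\ora9i\ctrl01.ctl',
'f:\ora9i\oradata\ora9i\ctrl02.ctl' SCOPE=SPFILE;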

 

Online redo log files

 

An Oracle database needs a minimum of two online redo log groups.

MAXLOGFILES –default 32, maximum 255

MAXLOGMEMBERS – OS dependent

 

Log switches

 

The LGWR process writes the redo log buffer entries to the current online redo log group. LGWR writes:

  1. Before the DBWR writes modified blocks in the database buffer cache to the data files.
  2. When a commit occurs
  3. When the redo log buffer is 1/3 full
  4. When a LGWR timeout occurs – 3 seconds

 

The LGWR writes to the redo log files in a cyclic fashion. Each redo log file group is identified by a log sequence number. LGWR writes sequentially to the online redo log files. The log switch initiates a checkpoint, which causes the DBWR to write the dirty buffers to data files.

The log switch occurs when:

  1. The current online redo log file is full
  2. The DBA forces a log switch
  3. A specified amount of time has elapsed, regardless of whether the current online redo log file is full – this is called time-based thread advance

To force a switch:

ALTER SYSTEM SWITCH LOGFILE;

 

Checkpoints

During a checkpoint all the dirty database buffers are written to the data files by DBWn. CKPT then updates the headers of all the data files and control files, which records that the checkpoint has successfully completed.

Checkpoints occur when:

  1. At log switches
  2. When an instance shuts down
  3. When FAST_START_MTTR_TARGET signals a checkpoint
  4. When forced by the DBA
  5. When a tablespace is taken offline or made read-only

 

Information about each checkpoint is written to the alert file if LOG_CHECKPOINTS_TO_ALERT is set to TRUE (the default is FALSE). You can influence checkpointing by setting FAST_START_MTTR_TARGET to the number of seconds crash recovery should be allowed to take.
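To force a checkpoint manually (item 4 in the list above):

SQL> ALTER SYSTEM CHECKPOINT;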

 

Checkpoint intervals – parameters

During normal database operations the initiation of a checkpoint depends on the size of the online redo log files. When the database has large online redo log files, you can control checkpoints by setting the LOG_CHECKPOINT_INTERVAL and LOG_CHECKPOINT_TIMEOUT initialization parameters. LOG_CHECKPOINT_INTERVAL controls the frequency of checkpoints by specifying a number of redo blocks: a checkpoint is initiated as soon as LGWR has written that number of OS blocks. Regardless of the LOG_CHECKPOINT_INTERVAL value, a checkpoint always occurs when switching from one redo log file to another, so if the value exceeds the actual online redo log file size, checkpoints occur only at log switches. Specifying 0 for the interval may initiate very frequent checkpoints, because a new request can be started even if only a single redo log buffer has been written since the last request; 0 is therefore not recommended for performance reasons. LOG_CHECKPOINT_TIMEOUT specifies the maximum duration, in seconds, before another checkpoint occurs; a value of 0 disables time-based checkpoints.
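For example, to request a checkpoint at least every 30 minutes (the value is illustrative):

SQL> ALTER SYSTEM SET log_checkpoint_timeout = 1800;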

 

Archive log mode

SELECT name, log_mode FROM V$DATABASE;

SELECT instance_name, archiver FROM V$INSTANCE;

 

Dynamic performance views for Online redo Log Files

 

V$THREAD contains thread information from the control file: the total number of online redo log groups, the current log group, and the current log sequence number.

SELECT groups, current_group#, sequence# FROM V$THREAD;

 

V$LOGFILE – information about the online redo log files: the group number, type, status, and file name of each member of the group.

SELECT * FROM V$LOGFILE;

Status:

INVALID – inaccessible (usually newly added redo logs)

STALE – redo log was switched before log was full

DELETED – no longer used

Blank – in use

 

V$LOG – the number of online redo log groups, and the status, sequence number, size, and number of members of each group.

SELECT group#, sequence#, bytes, members, status FROM V$LOG;

Status:

UNUSED – never been written to (usually newly added)

CURRENT – currently used

ACTIVE – active, but not currently used

INACTIVE – no longer needed for instance recovery

CLEARING – being re-created as a new log file after a CLEAR LOGFILE command; once the log is cleared, the status changes to UNUSED.

CLEARING_CURRENT – being cleared of a closed thread.

 

Number of online redo log files

When the number of transactions is limited, a database instance may need only 2 groups. When the number of transactions is large, additional online redo log groups may be needed to guarantee that a recycled group is always available to LGWR. An easy way to determine whether the current online redo log configuration is satisfactory is to examine the checkpoint messages in the LGWR trace file and the alert log. If the messages indicate that LGWR frequently has to wait for an online redo log group to become available, you must add an online redo log group. LGWR may have to wait because either a checkpoint has not completed or a group has not yet been archived. The final factors are MAXLOGFILES and MAXLOGMEMBERS. Always build a symmetric configuration: the same number of members in each group, and members of equal size.

Location of online redo log files

Online redo log files should be stored on different disks, separated from the archived log files and from the data files.

 

Sizing online redo log files

The minimum size of a redo log file is 50KB; the maximum is OS specific. Sizing depends on the number of log switches and checkpoints: if the online redo log files are small, a large number of log switches will occur, which reduces database efficiency. Another factor is the number of redo entries; if it is high, the online redo log files fill faster, again generating many log switches. The space available on the medium is another factor. Members of different groups can have different sizes, but you should avoid having groups of different sizes.

 

Adding online redo log groups

If LGWR has to wait, DBWn also has to wait and the system slows down. To avoid this, additional online redo log groups may be required.

ALTER DATABASE ADD LOGFILE GROUP 4 ('e:\ora\oradata\log4.log') SIZE 100M;

 

Adding online redo log members

ALTER DATABASE ADD LOGFILE MEMBER 'd:\ora\redolog11.log' TO GROUP 1;

 

Relocating online redo log file

  1. Shut down the database
  2. Copy the online redo log file to the new location
  3. Mount the database
  4. Rename the file: ALTER DATABASE RENAME FILE 'old_filename' TO 'new_filename';
  5. Open the database

 

Dropping online redo log groups

ALTER DATABASE DROP LOGFILE GROUP 7;

At least 2 redo log groups are needed

Active online redo log groups cannot be dropped

Unarchived redo log groups cannot be dropped

The OS files of dropped redo log groups must be deleted manually

 

Dropping online redo log members

ALTER DATABASE DROP LOGFILE MEMBER 'filename';

The last valid online redo log member of a group cannot be dropped

A log switch must be forced before dropping an active online redo log member

Unarchived online redo log members cannot be dropped

 

Clearing Online Redo Log Files

ALTER DATABASE CLEAR [UNARCHIVED] LOGFILE GROUP n;

or

ALTER DATABASE CLEAR [UNARCHIVED] LOGFILE 'filename';

This command can be used even if there are only two groups with one file each left.

 

The logminer

 

LogMiner processes redo log files and translates their contents into SQL statements. By using LogMiner you can access redo log information without restoring archived redo log files, and you can determine when a logical corruption in the database may have begun, pinpointing the time or SCN to which you can perform incomplete recovery. You can also use LogMiner to undo changes to the database. It works with redo logs from version 8.0 and up.

  1. Translates index cluster-associated DML
  2. Provides support for chained and migrated rows
  3. Provides support for direct path inserts

 

LogMiner dynamic performance views

V$LOGMNR_CONTENTS – changes made to user and table information

V$LOGMNR_DICTIONARY – info about LogMiner dictionary

V$LOGMNR_LOGS – log id, file name, oldest changes

V$LOGMNR_PARAMETERS – info about optional LogMiner parameters

 

Logminer session – steps

Before you can use the LogMiner utility to perform database recovery you have to perform a few steps (a sketch follows the list):

Extracting a dictionary file – DBMS_LOGMNR_D.BUILD

Specifying redo log files for analysis – DBMS_LOGMNR.ADD_LOGFILE

Starting LogMiner

Analyzing the output of the V$LOGMNR_CONTENTS view

Performing object level recovery

Ending LogMiner session – DBMS_LOGMNR.END_LOGMNR
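A minimal sketch of such a session using the packages named above (the file names and paths are hypothetical, and UTL_FILE_DIR must include the dictionary directory):

SQL> EXECUTE DBMS_LOGMNR_D.BUILD('dict.ora', 'e:\logmnr');
SQL> EXECUTE DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => 'e:\ora9i\oradata\ora9i\log1.log', OPTIONS => DBMS_LOGMNR.NEW);
SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR(DICTFILENAME => 'e:\logmnr\dict.ora');
SQL> SELECT username, sql_redo, sql_undo FROM V$LOGMNR_CONTENTS;
SQL> EXECUTE DBMS_LOGMNR.END_LOGMNR;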

 

LGWR errors and effects

 

When one member of an online redo log group becomes unavailable but the group has other available members, LGWR continues to write normally and records the error in its trace file and the alert log.

If all the members of a redo log group are unavailable to LGWR at a log switch because of media failure, the instance shuts down.

The same occurs if the current online redo log group becomes unavailable – a shutdown.

 

LGWR: Troubleshooting

One scenario: one member of a two-member online redo log group is unavailable and the group is not active. In this case, dropping the damaged member and adding a new one solves the problem. However, if the online redo log group is active, you must first force a log switch.

If all the members of the next group are inaccessible to LGWR at a log switch and the group is not active, dropping and adding a new group solves the problem, although the database might need media recovery. If all the members of the current redo log group suddenly become inaccessible to LGWR, the database needs media recovery.

 

 

 

 

 

 

 

 

Oracle 9i Data Storage

 

In Oracle9i a row has 2 major components – the row header and the row data. The row header contains the number of columns, the lock status, and chaining information. For every column the server stores the column length followed by the column value. If the column size is 250 bytes or less, the server needs only 1 byte to store the column length; otherwise it needs 3 bytes. The column value is stored after the length byte(s) in the row data component. A NULL column uses only one length byte, and the server needs no storage space at all for trailing NULL columns.

 

Data types

 

The built-in data types are divided into 3 categories: the scalar, relationship, and collection data types.

Scalar data types

 

Character  -

 

CHAR – fixed length alpha numeric strings, default 1, maximum 2000

 

VARCHAR2 – variable-length character data, up to 4000 bytes; specifying a maximum length is required

 

NCHAR – fixed-length alphanumeric data, up to 2000 bytes, stored in the national character set (NLS support for languages other than English)

 

NVARCHAR2 – variable-length character data, up to 4000 bytes; specifying a maximum length is required; stored in the national character set

 

Number – up to 38 digits of precision. Stored in variable length. Oracle does not store leading and trailing zeroes.

 

Date – includes time and can store any date between 4712 BC and 9999 AD. Date values require 7 bytes of storage.

 

RAW – stores small binary data maximum 2000 bytes. This column is not converted even if you move the tables with RAW columns between databases with different character sets

 

ROWID – 10 bytes of binary data representing row addresses.

 

The LONG and LONG RAW data types, internal LOBs, and external LOBs are used for storing large objects such as long text strings, images, clips, or waveforms.

 

LONG and LONG RAW were used in previous software versions for unstructured data such as binary images, documents, and geographic information, and are retained for backward compatibility. LONG stores variable-width character data; LONG RAW stores unstructured binary data. Each can store up to 2GB of data per row, and only one LONG or LONG RAW column is allowed per table.

LOBs, or large objects, are large unstructured data stored in the database or as OS files. Internal LOBs can be a binary LOB (BLOB – raw unstructured binary data, 4GB maximum), a character LOB (CLOB – single-byte characters, up to 4GB), or a national character LOB (NCLOB – up to 4GB of fixed- or variable-width national character set data).

External LOBs are represented by BFILE, which stores a pointer inside the table to an OS file.

 

REF – the use of the REF data type requires the Objects option. Access to REFs is only possible when an OCI application has been initialized in object mode. When REFs are retrieved from the server they are stored in the client-side object cache. REFs provide the ability to capture an association between two row objects.

 

Collection type

VARRAY is an ordered set of elements that can store a predefined maximum number of elements in each row.

 

New Oracle9i data types

TIMESTAMP – an extension of the DATE data type. Stores the year, month, and day of DATE plus hour, minute, and second values. You specify a TIMESTAMP as TIMESTAMP '2001-01-31 09:26:50.124'. Supports fractional seconds but does not support time zone data.

TIMESTAMP(fractional_seconds_precision) WITH TIME ZONE supports all values of TIMESTAMP plus a time zone displacement, which indicates the hours and minutes before or after UTC. The datatype also supports fractional seconds.

TIMESTAMP(fractional_seconds_precision) WITH LOCAL TIME ZONE supports all values of TIMESTAMP but normalizes the data to the database time zone; the time zone displacement is not stored as part of the column data. When users retrieve the data, Oracle returns it in the user's local time zone. For example, a user in LA sees the time adjusted to LA.

Apart from the datetime data types, Oracle9i provides new data types to handle XML data.

The XMLType datatype is an opaque datatype that supports the storage of XML using CLOB storage. You can insert an XML document into such a column, and without indexes it can store any XML document. If the XMLType data is stored as a LOB column, you can import, export, and load the column using the same methods as for regular LOB columns. When you define a column of XMLType you can also use structured storage. An index type can automatically index the XML schema associated with each XML document, and you can use it to constrain the list of schemas to which documents can conform.

 

Types of ROWID

The ROWID can be of 2 types – physical and logical. Physical ROWIDs store the addresses of rows in ordinary tables, clustered tables, table partitions, and subpartitions. A physical ROWID can have one of two formats – extended or restricted.

The extended rowid format supports tablespace-relative data block addresses. In addition, the format efficiently identifies rows in partitioned tables and indexes as well as non-partitioned tables and indexes. Tables and indexes created by Oracle8i or higher always have extended rowids. Oracle8i and higher also support the restricted rowid, which uses the same format as Oracle7.

Logical rowids store the addresses of rows in index-organized tables. Each logical rowid used in a secondary index can include a physical guess, which identifies the block location of the row in the IOT at the time the guess was made. Guesses are made when the secondary index is created or rebuilt. A single datatype called the universal rowid, UROWID, supports both logical and physical rowids, as well as the rowids of foreign (non-Oracle) tables. You must set the COMPATIBLE parameter to 8.1 or higher to use UROWID.

The ROWID format consists of four components:

Data object number – 32 bits of storage, unique within the database

Tablespace-relative data file number – uniquely identifies every file in a tablespace and occupies 10 bits of storage.

Block number – uniquely identifies the block within its data file; requires 22 bits of storage.

Row number – position of the row directory slot in the block header that contains the row. Requires 16 bits of space.

A ROWID is displayed using a base-64 encoding scheme; in the example rowid AAAArsBBDAAAAUaEEE the components are:

AAAArs – data object number

BBD – tablespace relative data file number

AAAAUa – block number

EEE – row number

 

Collection data types – comparison

VARRAYs                                       Nested tables
--------------------------------------------  --------------------------------------------
Store ordered sets of static data elements    Store unordered sets of records
Store small volumes of data                   Can store large data sets
Store elements that have an index (position)  Store pointers to rows stored in a
                                              separate table
Maximum number of elements is specified       Do not have a predetermined size
at declaration
Store elements of the same datatype           Store rows with the same structure

 

 

Unicode data

 

The limitations of the existing character set encodings can be overcome by using a universal character set. This global character set is called Unicode. It works on any platform, program, or language, and Unicode is a prerequisite for all modern standards such as XML, Java, LDAP, etc.

The 2 ways to encode Unicode are UTF-16 and UTF-8. You can store Unicode characters in an Oracle database in two ways: you can create a Unicode database, or you can use Unicode datatypes to store the Unicode characters.

 

Data storage structures

 

You can use 4 structures to store data – regular tables, partitioned tables, index-organized tables (IOTs), and clustered tables.

SQL> CREATE TABLE HR.EMP

(ECODE VARCHAR2(3) NOT NULL,

DEPT_ID NUMBER(4) NOT NULL)

STORAGE (INITIAL 200K NEXT 200K PCTINCREASE 0)

TABLESPACE "EXAMPLE";

 

Setting PCTFREE

 

Each block in an Oracle9i database is associated with 2 parameters – PCTFREE and PCTUSED. The PCTFREE parameter defines the minimum percentage of a data block to be reserved as free space for possible updates to rows that already exist in the block.

PCTUSED is the minimum percentage of a block that the server can use for row data and overhead before new rows are added to the block. When a data block is filled to the limit determined by PCTFREE, the database considers the block unavailable for the insertion of new rows until the percentage of the block in use falls below PCTUSED.

You can set appropriate values for PCTFREE and PCTUSED by using the formulas displayed below.

 

PCTFREE = (Average row size - Initial row size) x 100 / Average row size
        = (90 - 75) x 100 / 90 = 17

PCTUSED = 100 - PCTFREE - (Average row size x 100 / Available data space)
        = 100 - 17 - (90 x 100 / 1948) = 78
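Applying the computed values when creating a table, for example (the table definition is illustrative):

SQL> CREATE TABLE hr.orders
(order_id NUMBER(10),
 note VARCHAR2(200))
PCTFREE 17 PCTUSED 78
TABLESPACE example;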

 

A higher PCTFREE decreases row migration. There are 2 common situations in which you need to set a higher PCTFREE value: when rows in a table contain little or no data when inserted but will be updated later, and when columns are likely to increase in size after an update. Note that a higher PCTFREE leads to lower space utilization, because more of every block is held in reserve.

 

The cost based optimizer

 

There are different ways to execute a SQL statement, for example by varying the order in which tables or indexes are accessed. The procedure the Oracle server uses to execute a statement can affect how quickly it runs. The optimizer uses internal rules or costing methods to determine the most efficient way of producing the result of the query; its output is an execution plan that describes an optimal method of execution. The Oracle server provides 2 methods of optimization – cost-based and rule-based. The CBO determines the most efficient execution plan by considering the available access paths, factoring in statistics for the schema objects (such as tables or indexes) accessed by the SQL statement, and also considering hints. Partitioned tables and indexes use the CBO; index-organized tables, reverse key indexes, bitmap indexes, and index skip scans also use the CBO.

You set the optimizer mode with the OPTIMIZER_MODE init parameter. This parameter can take the values CHOOSE, ALL_ROWS (cost), FIRST_ROWS_n, FIRST_ROWS or RULE.

The CBO contains 3 components – the query transformer, the estimator, and the plan generator. When executing a query the CBO first generates a set of potential plans for the SQL statement, based on the available access paths. It then estimates the cost of every plan using the statistics in the data dictionary on the data distribution and storage characteristics of the tables, indexes, and partitions. Cost is an estimated value proportional to the expected resource usage needed to execute the statement with a particular plan. Finally, the optimizer compares the costs of the plans and chooses the one with the lowest cost.

 

Creating temporary tables

 

Temporary tables can be transaction specific or session specific.

SQL> CREATE GLOBAL TEMPORARY TABLE <>.<>

AS <SELECT statement>;

SQL> CREATE GLOBAL TEMPORARY TABLE HR.employees_temp

AS SELECT * FROM HR.employees;

Multiple sessions or transactions can use the same temporary table. A TRUNCATE statement issued on a session-specific temporary table truncates data only in its own session; it does not truncate the data of other sessions that are using the same table. When the session or transaction ends, the rows inserted in that session or transaction are deleted. Temporary tables are useful when you want to buffer a result set or construct a result set by running multiple DML operations. DML statements on temporary tables do not generate redo for the data changes; however, Oracle generates undo for the data, and redo for that undo. Session termination occurs when the user logs off or when the session terminates abnormally.

There are 2 clauses used to control the duration of the rows in temp tables. The ON COMMIT DELETE ROWS clause explicitly specifies that the data is transaction specific.

SQL> CREATE GLOBAL TEMPORARY TABLE HR.employees_temp

ON COMMIT DELETE ROWS AS SELECT * FROM HR.employees;

The ON COMMIT PRESERVE ROWS clause specifies that the rows are visible for the entire session. The server creates a transaction-specific table by default. You can create indexes on temporary tables; such indexes are also temporary. You can also create views and triggers on temporary tables, and you can export and import the definition of a temporary table. However, the stored data is not exported. Once the definition is imported, the table is visible to all sessions.

 

Storage parameters

 

NEXT – when you change the value of the NEXT parameter this change affects future extents in the table. The existing extents are not affected.

 

PCTINCREASE – usually 0 is adequate, applies to future sizes, kept in data dictionary.

 

MINEXTENTS – does not change immediately, unless you truncate the table. Specifies the minimum number of extents that must be present in a table at any given time. You can change the value of MINEXTENTS to any value that is less or equal to the current number of extents in the table.

 

MAXEXTENTS – specifies the maximum number of extents that a table can contain. You can change the value of MAXEXTENTS to any value that is greater than or equal to the current number of extents in the table.
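For example, to change the future extent sizing of an existing table (values illustrative; relevant to dictionary-managed tablespaces):

SQL> ALTER TABLE hr.employees
STORAGE (NEXT 2M PCTINCREASE 0 MAXEXTENTS UNLIMITED);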

 

Block utilization parameters

 

PCTFREE – affects future inserts; the new value applies only to blocks that are placed on the free list

 

PCTUSED – affects all blocks

 

INITRANS – only new blocks are affected.

 

MAXTRANS – all blocks are affected

 

When modified, MINEXTENTS and MAXEXTENTS affect the growth of a table

 

 

 

The table is transaction specific unless explicitly specified as session specific.

 

You can modify the parameter values according to your requirements, but only if the UNDO_MANAGEMENT parameter in the init.ora file is set to MANUAL.

 

ALTER TABLE hr.employees MOVE TABLESPACE data1;

 

Manually allocating extents

 

Extents are allocated to tables automatically when needed. To manually allocate:

SQL> ALTER TABLE <>.<>

ALLOCATE EXTENT (SIZE 100K

DATAFILE 'd:\ora9iServer\oradata\Oracle9i\Example01.dbf');

 

Characteristics of the High Water Mark

 

A high water mark (HWM) is a pointer that indicates the last block that was ever used to store data; using it, you can identify the number of blocks that have never been used in the table. The HWM moves forward with data insertion, but when data is deleted from the table it is not reset; only truncation resets it. The HWM is stored in the segment header of the table, and you can retrieve it by using the DBMS_SPACE package. The HWM also matters for full table scans: while performing a full table scan, Oracle reads all blocks up to the HWM, including blocks whose rows have been deleted.
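A sketch of reading the blocks above the HWM with DBMS_SPACE.UNUSED_SPACE (the owner and table name are illustrative):

SET SERVEROUTPUT ON
DECLARE
  total_blocks  NUMBER;
  total_bytes   NUMBER;
  unused_blocks NUMBER;
  unused_bytes  NUMBER;
  lue_file_id   NUMBER;
  lue_block_id  NUMBER;
  last_used_blk NUMBER;
BEGIN
  -- returns segment totals plus the blocks/bytes that lie above the HWM
  DBMS_SPACE.UNUSED_SPACE('HR', 'EMPLOYEES', 'TABLE',
    total_blocks, total_bytes, unused_blocks, unused_bytes,
    lue_file_id, lue_block_id, last_used_blk);
  DBMS_OUTPUT.PUT_LINE('Blocks never used above the HWM: ' || unused_blocks);
END;
/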

 

Deallocating unused space

 

If the HWM is at an extent less than the value of MINEXTENTS for the table, the Oracle server releases all the extents above MINEXTENTS. If you want to release the unused space below MINEXTENTS, specify KEEP 0 in the ALTER TABLE ... DEALLOCATE UNUSED command.

SQL>  ALTER TABLE HR.loc

DEALLOCATE UNUSED KEEP 4K;

To avoid fragmentation, set the MINEXTENTS parameter appropriately.

 

Truncating tables

 

TRUNCATE TABLE HR.emp4

[DROP | REUSE] STORAGE;

The truncation resets the NEXT_EXTENT parameter for the table

Does not execute the delete trigger for the table

 

Dropping tables

 

The drop table command drops the table and releases the extents used by the table.

SQL> DROP TABLE <>.<>

CASCADE CONSTRAINTS;

 

Set unused clause

 

Removes any reference to the column

The column appears to be deleted, but the space is not released

SQL> ALTER TABLE <>.<> SET UNUSED COLUMN <>;

DROP UNUSED COLUMNS actually removes the data from the table; the space is recovered with ALTER TABLE <> DROP UNUSED COLUMNS;

There are some restrictions on dropping columns. You cannot drop a column from object type tables, nested tables, or tables owned by SYS. Primary key columns in IOTs, partitioning key columns, and parent key columns also cannot be dropped, and you cannot drop all the columns of a table. When using the SET UNUSED option, remember that if a LONG or LONG RAW column is set unused but not dropped, you cannot add a new LONG or LONG RAW column to the table.

 

Retrieving table information

 

You can retrieve info about tables from DBA_OBJECTS, DBA_TABLES, and DBA_SEGMENTS.

 

SQL> SELECT t.table_name, o.data_object_id,

s.header_file, s.header_block

FROM DBA_TABLES t, DBA_OBJECTS o, DBA_SEGMENTS s

WHERE t.owner = o.owner

AND t.table_name = o.object_name

AND t.owner=s.owner

AND t.table_name=s.segment_name

AND t.owner = 'HR';

 

Retrieving extent information

 

SQL> SELECT file_id , COUNT(*) AS extents,

SUM(blocks) AS blocks

FROM dba_extents

WHERE owner = 'HR'

AND segment_name = 'LOCATIONS'

GROUP BY file_id;

 

Obtaining physical row location

 

The DBMS_ROWID package provides several functions to extract individual ROWIDs.

SQL> SELECT location_id, ROWID,

DBMS_ROWID.ROWID_OBJECT(ROWID) AS OBJECT,

DBMS_ROWID.ROWID_RELATIVE_FNO(ROWID) AS "RELATIVE FILE",

DBMS_ROWID.ROWID_BLOCK_NUMBER(ROWID) AS BLOCK

FROM HR.locations;

 

IOTs and regular tables

 

Regular table                                 IOT
--------------------------------------------  --------------------------------------------
The ROWID uniquely identifies a row           The primary key uniquely identifies a row
The physical rowid facilitates building       The logical rowid facilitates building
of secondary indexes                          of secondary indexes
Allows access based on the rowid              Allows access based on the primary key
A sequential scan returns all rows            A full-index scan returns all rows
A unique constraint is allowed                A unique constraint is not allowed
Can be stored in a cluster                    Cannot be stored in a cluster
Can contain a LONG column                     Cannot contain a LONG column
Supports replication                          Does not support replication

 

Creating IOT

 

SQL> CREATE TABLE iot_num
(i NUMBER,
 j NUMBER,
 k NUMBER,
 l NUMBER,
 PRIMARY KEY (i, j, k))
ORGANIZATION INDEX
COMPRESS;

 

SQL> CREATE TABLE HR.emp_new
(empno NUMBER(4) NOT NULL,
 ename VARCHAR2(10),
 loc_id NUMBER(4) NOT NULL,
 CONSTRAINT emp_pk PRIMARY KEY (loc_id, empno))
ORGANIZATION INDEX
PCTTHRESHOLD 20
OVERFLOW TABLESPACE example
INCLUDING ename NOCOMPRESS;

 

SQL> ALTER TABLE docindex
INITRANS 4
OVERFLOW INITRANS 6;

 

ALTER TABLE docindex

PCTTHRESHOLD 15

INCLUDING doc_id;

 

ALTER TABLE docindex

ADD OVERFLOW

TABLESPACE ovf_tbs;

 

ALTER TABLE HR.emp_new

INITRANS 5 OVERFLOW INITRANS 2;

 

ANALYZE TABLE docindex COMPUTE STATISTICS;

 

SELECT * FROM DBA_INDEXES

WHERE INDEX_NAME='PK_DOCINDEX';

 

SELECT * FROM DBA_TABLES

WHERE IOT_TYPE='IOT_OVERFLOW'

AND IOT_NAME='DOCINDEX';

 

IOTs are useful in information retrieval applications, spatial applications, and OLAP applications.
