Materialized Views Refresh Mechanisms

 

  1. Complete refresh – all data in master replaces all data in snapshot
  2. Fast – only changes since the last refresh are sent
  3. Force – tries a fast refresh and, if that is not possible, performs a complete refresh (default)
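
The mode is chosen with the REFRESH clause when the materialized view is created. A minimal sketch (the view name, database link, and schedule are assumptions for illustration):

CREATE MATERIALIZED VIEW hr.emp_demo_mv
REFRESH FORCE                       -- or REFRESH FAST / REFRESH COMPLETE
START WITH SYSDATE NEXT SYSDATE + 1
AS SELECT * FROM hr.employees@dba01.world;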

 

Triggering a refresh

  1. Automatic
  2. Manual
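
An automatic refresh runs on the schedule given by the START WITH ... NEXT clause, as in the sketch above; a manual refresh is requested through DBMS_MVIEW, for example:

EXECUTE DBMS_MVIEW.REFRESH('hr.emp_demo_mv', '?');   -- method: 'f' fast, 'c' complete, '?' force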

 

Refresh groups (a set of snapshots)

Adding a MV to a group – EXECUTE DBMS_REFRESH.ADD('ORDERS','EMPLOYEES_MV');

Refreshing a group – EXECUTE DBMS_REFRESH.REFRESH('ORDERS');

Removing a MV from a group – EXECUTE DBMS_REFRESH.SUBTRACT('ORDERS','EMPLOYEES_MV');
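
Before a materialized view can be added with DBMS_REFRESH.ADD, the refresh group itself must exist. A minimal sketch of creating it (the schedule and initial member shown are assumptions for illustration):

EXECUTE DBMS_REFRESH.MAKE(name => 'ORDERS', list => 'EMPLOYEES_MV', next_date => SYSDATE, interval => 'SYSDATE + 1');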

 

Fast Refresh Materialized Views

Subquery Restrictions

Restrictions for fast refresh with subqueries include the following (a conforming sketch follows the list):

Materialized views must be of primary key type.

The subquery must be positive; for example, NOT EXISTS is not allowed.

The subquery must use EXISTS operator; IN operator is not allowed.

Each table can only be joined once.

The join expression must use exact match or equality comparisons (equijoins).

All tables referenced in the subquery must reside in the same database.

No more than 1,000 columns referenced in query.
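
A sketch of a subquery materialized view that satisfies these restrictions (it mirrors the EXPLAIN_MVIEW example later in this section; the link name and the presence of materialized view logs on both masters are assumptions):

CREATE MATERIALIZED VIEW oe.orders_subq_mv
REFRESH FAST AS
SELECT * FROM oe.orders@dba01.world o
WHERE EXISTS (SELECT * FROM oe.customers@dba01.world c
WHERE o.customer_id = c.customer_id AND c.credit_limit > 500);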

 

Fast Refresh Materialized Views

EXPLAIN_MVIEW

To determine the capabilities of the oe.orders materialized view, enter: EXECUTE DBMS_MVIEW.EXPLAIN_MVIEW ('oe.orders');

Then query the MV_CAPABILITIES_TABLE to see the results. You must build this table by running the utlxmv.sql script in the Oracle_home/rdbms/admin directory.

To determine the capabilities of a materialized view that does not yet exist, execute:

DBMS_MVIEW.EXPLAIN_MVIEW ('SELECT * FROM oe.orders@dba01.world o
WHERE EXISTS (SELECT * FROM oe.customers@dba01.world c
WHERE o.customer_id = c.customer_id AND c.credit_limit > 500)');


For these restrictions to be checked, the materialized view must be created with the fast refresh option; violations are then reported as errors at creation time.

If force refresh is specified, then you may not receive any errors. When a force refresh is requested, Oracle automatically performs a complete refresh if it cannot perform a fast refresh.

Use the EXPLAIN_MVIEW procedure in the DBMS_MVIEW package to determine the following information about an existing materialized view, or for a proposed materialized view that does not yet exist:

The capabilities of a materialized view

Whether each capability is possible

If a capability is not possible, why it is not possible


This procedure can also be used in order to determine whether a materialized view's subquery satisfies the restrictions detailed in the previous slide for fast refresh materialized views. Oracle returns errors if the materialized view violates any restrictions for subquery materialized views.

Complex Materialized Views

A materialized view is considered complex if any of the following is true of its subquery:

It contains a CONNECT BY clause

It violates a subquery restriction

It contains MINUS, UNION ALL, or INTERSECT operation

It contains an aggregate function

It contains a join, other than those in the subquery

A materialized view is complex in some cases when it contains:

the DISTINCT or UNIQUE keyword

a UNION operation

For example, in Oracle9i, a materialized view with a UNION operation is complex if any query within the UNION is complex, or if the outermost SELECT list columns do not match for the queries in the UNION.
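
For instance, the following sketch is complex because its defining query contains an aggregate function, so it cannot be fast refreshed (the object names are assumptions):

CREATE MATERIALIZED VIEW oe.order_totals_mv
REFRESH COMPLETE AS
SELECT customer_id, SUM(order_total) total
FROM oe.orders@dba01.world
GROUP BY customer_id;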

 

Nested Materialized Views

Definition

A nested materialized view is a materialized view whose definition is based on another materialized view. A nested materialized view can reference other relations in the database in addition to referencing materialized views.

Why Use Nested Materialized Views?

In a data warehouse, you typically create many aggregate views on a single join (for example, rollups along different dimensions). Incrementally maintaining these distinct materialized aggregate views can take a long time, because the underlying join has to be performed many times. By using nested materialized views, the join is performed just once (while maintaining the materialized view containing joins only). Incremental maintenance of single-table aggregate materialized views is very fast due to the self-maintenance refresh operations on this class of views.
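
A minimal sketch of this pattern, assuming local master tables that already have materialized view logs created WITH ROWID (and a log on the join view itself, INCLUDING NEW VALUES, for the aggregate level):

-- Level 1: materialized view containing only the join
CREATE MATERIALIZED VIEW oe.ord_cust_mv
REFRESH FAST AS
SELECT o.rowid o_rid, c.rowid c_rid, o.order_id, o.order_total, c.customer_id
FROM oe.orders o, oe.customers c
WHERE o.customer_id = c.customer_id;

-- Level 2: single-table aggregate materialized view on the join view
CREATE MATERIALIZED VIEW oe.cust_totals_mv
REFRESH FAST AS
SELECT customer_id, COUNT(*) cnt, SUM(order_total) sum_total, COUNT(order_total) cnt_total
FROM oe.ord_cust_mv
GROUP BY customer_id;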

Restrictions When Using Nested Materialized Views

Only nested materialized join views and nested single-table aggregate materialized views can use fast refresh. If you want complete refresh for all of your materialized views, then you can still nest these materialized views.

Some restrictions exist on the way you can nest materialized views. Oracle allows nesting a materialized view only when all the immediate dependencies of the materialized view do not have any dependencies among themselves. Thus, in the dependency tree, a materialized view can never be a parent as well as a grandparent of an object.

Updatable Materialized Views

CREATE MATERIALIZED VIEW hr.departments FOR UPDATE AS
SELECT * FROM hr.departments@dba01.world;

 

 

You can make a materialized view updatable during creation by including the FOR UPDATE clause or enabling the equivalent option in the Replication Management
tool. For changes made to an updatable materialized view to be pushed back to the master during refresh, the updatable materialized view must belong to a
materialized view group.

Updatable materialized views enable you to decrease the load on master sites because users can make changes to the data at the materialized view site.

Demo

Creating a Materialized View

In the previous demonstration (Module 3) you created a database link between a local database and a remote one. You will now use this link to create a materialized view on the local database that refers to a base table in the remote database. In the example, there is a local database (dba01) and a remote database (dba02).

You want to replicate the orders table in the dba02 database. You already have a database link named dba02.world. You want to store this materialized view in the HR schema.

1. Connect to database dba01.world as user hr/hr.

SQL> CONNECT hr/hr

2. Create the materialized view.

SQL> create materialized view orders_mv
2> as
3> select *
4> from orders@dba02.world;

3. Confirm the creation of the materialized view by counting the number of rows in the orders_mv view.

SQL> select count(*)
2> from orders_mv;

Note: Because the orders_mv materialized view was created using default values for the refresh clause, data changes to the orders table in dba02 will not automatically appear in the materialized view.

4. Update the ORDER_TOTAL column of the ORDERS table on the remote (dba02) database. Set all rows to the same value (The value 6 is used in the example).

SQL> update orders@dba02.world
2> set order_total = 6;

5. Confirm that the base table (orders@dba02.world) and the materialized view (orders_mv) have different values.

SQL> select r.order_total, l.order_total
2> from orders@dba02.world r, orders_mv l
3> where r.order_id = l.order_id;

6. Perform a manual refresh using the DBMS_MVIEW.REFRESH procedure, and confirm that the data changes have been migrated through to the materialized view by running the same query as in step 5.

SQL> exec dbms_mview.refresh('orders_mv');

7. Confirm that the materialized view has the new values.

SQL> select r.order_total, l.order_total
2> from orders@dba02.world r, orders_mv l
3> where r.order_id = l.order_id;

 

Demo

Using EXPLAIN_MVIEW

Before running the DBMS_MVIEW.EXPLAIN_MVIEW procedure you must run the utlxmv.sql script in the Oracle_home/rdbms/admin directory.

1. Execute the UTLXMV.SQL script.

SQL> @?/rdbms/admin/utlxmv.sql

2. Execute the DBMS_MVIEW.EXPLAIN_MVIEW procedure against the ORDERS_MV materialized view.

SQL> exec dbms_mview.explain_mview ('orders_mv');

3. Query the mv_capabilities_table table to see if the ORDERS_MV view can be refreshed using the fast option.

SQL> select mvowner, mvname, possible, msgtxt
2> from mv_capabilities_table;

A fast refresh is not possible for this view at present. Amongst all the output from this query you will see one line that under msgtxt has the value "the detail table does not have a materialized view log." In order to have a fast refresh, the table (ORDERS) on the remote database (dba02.world) requires a materialized view log.

This log will keep track of all DML against the base table, thus enabling a refresh to update only the rows that have changed (a fast refresh). In the next module you will create the required log.

 

Job Queue Parameter

The JOB_QUEUE_PROCESSES initialization parameter controls whether a coordinator job queue process is started by an instance. If this parameter is set to 0 (which is the default value), no coordinator job queue process is started at database startup, and consequently no job queue jobs are executed. The JOB_QUEUE_PROCESSES initialization parameter also specifies the maximum number of Jnnn processes that can concurrently run on an instance. The maximum number of processes that can be specified is 1000.

This parameter is dynamic, and thus the number of job processes can be changed by using the command:

ALTER SYSTEM SET JOB_QUEUE_PROCESSES = 20;
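
Scheduled refreshes then appear as queue jobs, which can be checked with a query such as:

SELECT job, what, next_date, broken
FROM dba_jobs;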

MView Implementation

Implementation Steps

Create schema.
Assign privileges.
Create database link.
Allocate job queue processes.

 

The following seven steps will ensure successful implementation and ease the administration of materialized views:

1. Design
2. Create materialized view log on the master table
3. Set up the materialized view site
4. Create materialized view
5. Create refresh groups and register materialized views
6. Grant access privileges to materialized view
7. Monitor materialized views and materialized view logs

 

MView Implementation

MView Using ROWID

CREATE MATERIALIZED VIEW hr.my_emps
REFRESH FAST
START WITH sysdate NEXT sysdate + 1
WITH ROWID
AS SELECT employee_id, last_name, first_name FROM hr.employees@dba01.world
WHERE manager_id = 100;

A ROWID materialized view is based on the physical row identifiers (row IDs) of the rows in a master table. ROWID materialized views are compatible with previous releases of Oracle software, specifically version 7.3.

This form of a materialized view should only be used in the following situations:

The master table is on an Oracle7 database

The materialized view does not contain all the primary key columns of the master table

ROWID materialized views must be based on a single remote table and cannot contain any of the following:

Distinct or aggregate functions

GROUP BY or CONNECT BY clauses

Subqueries

Joins

Set operations

A materialized view that uses ROWID requires a complete refresh if the master table is reorganized; after a master table reorganization it is not eligible for fast refresh until a complete refresh has been performed.
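
In that case the complete refresh must be requested explicitly before fast refreshes can resume, for example for the view above:

EXECUTE DBMS_MVIEW.REFRESH('hr.my_emps', 'C');   -- 'C' requests a complete refresh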

MView Implementation

MView Using OID

The types must be exactly the same at the materialized view site and master site.

 

If a materialized view is based on an object table and is created using the OF type clause, then the materialized view is called an object materialized view. An object materialized view is structured in the same way as an object table. That is, an object materialized view is composed of row objects, and each row object is identified by an object identifier (OID) column.

To create an object materialized view, you must first create the object type used in the master table on the materialized view site. For example, if the master table CATEGORIES_TAB is created using the CATEGORY_TYP type, then you must create this type on the materialized view site before creating the CATEGORIES_OBJ_MV materialized view. This is the type which is specified in the OF clause of CREATE MATERIALIZED VIEW.

For an object materialized view to be fast refreshable, there must be a materialized view log on the master object table. When you create a materialized view log on an object table, you must log the object identifier by specifying the WITH OBJECT ID clause, but you can also specify that the primary key is logged if the object identifier is primary key based.
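
A sketch of the full sequence using the objects named above (the dba01.world link is an assumption, and the type attributes are placeholders):

-- At the master site: log the object identifier
CREATE MATERIALIZED VIEW LOG ON oe.categories_tab WITH OBJECT ID;

-- At the materialized view site: the type must exist first and match the master exactly
CREATE TYPE category_typ AS OBJECT (
  category_id   NUMBER,
  category_name VARCHAR2(50));

CREATE MATERIALIZED VIEW oe.categories_obj_mv OF category_typ
REFRESH FAST AS
SELECT * FROM oe.categories_tab@dba01.world;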

MView Implementation

Additional Commands for Materialized Views

Alter a materialized view in order to change the storage parameters or refresh modes and times.

Alter a materialized view log to change the storage parameters.

Drop a materialized view in order to remove it from the data dictionary.

 

There are two further commands useful in the implementation of materialized views. These are:

ALTER

DROP
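
Hedged sketches of each, reusing objects from the earlier examples:

ALTER MATERIALIZED VIEW hr.my_emps
REFRESH COMPLETE START WITH SYSDATE NEXT SYSDATE + 7;

ALTER MATERIALIZED VIEW LOG ON hr.employees PCTFREE 20;   -- run at the master site

DROP MATERIALIZED VIEW hr.my_emps;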

 

MView Registration

This can be confirmed by querying the view DBA_REGISTERED_MVIEWS. This view gives information (such as the SELECT statement and remote site name) on each materialized view created against the master site.
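
For example, at the master site:

SELECT owner, name, mview_site, query_txt
FROM dba_registered_mviews;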

 

It is possible that complex materialized views may not be registered, or that registration may fail due to an unavailable connection.

 

When a materialized view is created it is automatically registered with the master site. In a similar way the materialized view is deregistered when the view is dropped. This registration assists in the monitoring and administration of distributed transactions.

Oracle cannot guarantee the registration or unregistration of a materialized view at its master site during the creation or drop of the materialized view, respectively. If Oracle cannot successfully register a materialized view during creation, then you must complete the registration manually using the REGISTER_MVIEW procedure in the DBMS_MVIEW package. If Oracle cannot successfully unregister a materialized view when you drop the materialized view, then the registration information for the materialized view persists in the master site until it is manually unregistered.

 

Mview Log Scenario

Description of Materialized View Logs

This table is used to store the primary keys, ROWIDs, and/or Object IDs, filter columns, and timestamps of rows updated in the master table. Oracle does not update the timestamp column until the log is first used by a materialized view refresh.

 

The purpose of the trigger is to insert the primary keys (or ROWIDs or Object IDs), filter columns, and timestamps of inserted, updated, and deleted rows into the materialized view log.

 

When you create a materialized view log, the Oracle database does the following:

Creates the base table MLOG$_<table_name>

Creates and uses an internal trigger on the master table

Mview Log

Creating Materialized View Logs

There are four basic types of materialized view logs:

Primary Key – Records changes to the master table based on the primary key of the modified rows.

Row ID – Records changes to the master table based on the ROWID of the modified rows. This option should only be used when the master table is on an Oracle7 database.

Object ID – Records changes to the master object table based on the object identifier of the modified row objects. You can specify OBJECT ID only when creating a log on an object table.

Combination – Works in the same way as the other types of materialized view logs, except that more than one type of value is recorded. Combination materialized view logs are helpful for environments that have more than one type of materialized view based on the master table.
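
One sketch per type; these are alternatives, since a master table can have only one materialized view log:

CREATE MATERIALIZED VIEW LOG ON oe.orders;                          -- primary key (the default)
CREATE MATERIALIZED VIEW LOG ON oe.orders WITH ROWID;               -- row ID
CREATE MATERIALIZED VIEW LOG ON oe.categories_tab WITH OBJECT ID;   -- object ID (object tables only)
CREATE MATERIALIZED VIEW LOG ON oe.orders WITH PRIMARY KEY, ROWID;  -- combination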

 

Purging Materialized View Logs

This allows the Oracle database to purge rows from the materialized view log.

 

For example, delete rows required only by the n least recently refreshed materialized views, in other words those that were last refreshed the longest ago.

If you purge or TRUNCATE a materialized view log before a materialized view has refreshed the changes that you deleted, then the materialized view must perform a complete refresh.

DBMS_MVIEW.PURGE_LOG lets you specify the number of oldest materialized views, whose materialized view log rows should be deleted.

DBMS_MVIEW.PURGE_MVIEW_FROM_LOG lets you specify individual materialized views; the rows pertaining to these materialized views are deleted from the log.

Always try to keep a materialized view log as small as possible to minimize the database space that it uses. To remove rows from a materialized view log and make space for newer log records, you can perform one of the following actions:

Refresh the materialized views associated with the log

Manually purge records in the log
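
Hedged sketches of both approaches (the table, view, and site names are assumptions):

-- Delete rows needed only by the single least recently refreshed materialized view
EXECUTE DBMS_MVIEW.PURGE_LOG(master => 'oe.orders', num => 1, flag => 'DELETE');

-- Delete the log rows pertaining to one specific materialized view
EXECUTE DBMS_MVIEW.PURGE_MVIEW_FROM_LOG('hr', 'orders_mv', 'dba01.world');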

 

Monitoring Mview Processes and Logs

Data Dictionary Views

Use the following data dictionary views in order to monitor the materialized view processes and logs. The main item of concern is that the views are being updated at their regular intervals.

[DBA|ALL|USER]_BASE_TABLE_MVIEWS – Lists information about all materialized views using materialized view logs.

[DBA|ALL|USER]_MVIEW_LOGS – Describes materialized view logs.

[DBA|ALL|USER]_MVIEW_REFRESH_TIMES – Describes refresh times of materialized views.

[DBA|ALL|USER]_MVIEWS – Describes all materialized views.

[DBA|ALL|USER]_REFRESH – Describes all the refresh groups.

[DBA|ALL|USER]_REFRESH_CHILDREN – Lists all the objects in refresh groups.

[DBA|ALL|USER]_REGISTERED_MVIEWS – Describes all materialized views registered at a master site or master materialized view site.

DBA_MVIEW_LOG_FILTER_COLS – Lists all columns (excluding primary key columns) being logged in the materialized view logs.
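
For example, a quick check that refreshes are happening on schedule:

SELECT mview_name, last_refresh_type, last_refresh_date
FROM dba_mviews;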

 

Reorganizing Master Tables

When you reorganize a table, any ROWID information of the materialized view log will be invalidated. Oracle detects a table reorganization automatically only if the table is truncated as part of the reorganization.

If the table is not truncated, Oracle must be notified of the table reorganization. To support table reorganizations that use DELETE, use the BEGIN_TABLE_REORGANIZATION and END_TABLE_REORGANIZATION procedures in the DBMS_MVIEW package. They notify Oracle that the specified table is being reorganized.

These procedures perform clean-up operations, verify the integrity of the logs and triggers that the fast refresh mechanism needs, and invalidate the ROWID information in the table’s materialized view log. The input arguments for the PL/SQL procedure are the owner and name of the master table to be reorganized. There is no output for this procedure. See the dbmssnap.sql script for more information on using these procedures.

Online redefinition is not allowed on master tables with materialized view logs, master materialized views, or materialized views. Online redefinition is allowed only on master tables that do not have materialized view logs.
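
A sketch of the notification sequence (the table name is an assumption):

EXECUTE DBMS_MVIEW.BEGIN_TABLE_REORGANIZATION('oe', 'orders');
-- ... reorganize the table here, for example with DELETE and a reload ...
EXECUTE DBMS_MVIEW.END_TABLE_REORGANIZATION('oe', 'orders');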

Switching Master Sites (Remastering a Mview)

If the location of the master tables changes from one database to another, the RELOCATE_MASTERDEF procedure changes your master definition site to another master site in your replication environment.

It is not necessary for either the old or new master definition site to be available when you call RELOCATE_MASTERDEF. In a planned reconfiguration, you should invoke RELOCATE_MASTERDEF with notify_masters set to true and include_old_masterdef set to true.

If just the master definition site fails, then you should invoke RELOCATE_MASTERDEF with notify_masters set to true and include_old_masterdef set to false. If several master sites and the master definition site fail, then the administrator should invoke RELOCATE_MASTERDEF at each operational master with notify_masters set to false.

The procedure definition is:

DBMS_REPCAT.RELOCATE_MASTERDEF (
   gname                 IN VARCHAR2,
   old_masterdef         IN VARCHAR2,
   new_masterdef         IN VARCHAR2,
   notify_masters        IN BOOLEAN := true,
   include_old_masterdef IN BOOLEAN := true,
   require_flavor_change IN BOOLEAN := false);

gname – Name of the replication group whose master definition you want to relocate.

old_masterdef – Fully qualified database name of the current master definition site.

new_masterdef – Fully qualified database name of the existing master site that you want to make the new master definition site.

notify_masters – If true, then the procedure synchronously multicasts the change to all masters (including old_masterdef only if include_old_masterdef is true). If any master does not make the change, then the changes are rolled back at all masters.

include_old_masterdef – If notify_masters is true and include_old_masterdef is also true, then the old master definition site is also notified of the change.

require_flavor_change – For internal use only. Do not set this parameter unless directed to do so by Oracle Support Services.
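
A hedged example of a planned reconfiguration (the group name is an assumption; the site names follow the examples in this course):

BEGIN
  DBMS_REPCAT.RELOCATE_MASTERDEF(
    gname                 => 'orders_repg',
    old_masterdef         => 'dba01.world',
    new_masterdef         => 'dba02.world',
    notify_masters        => true,
    include_old_masterdef => true);
END;
/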

 

 

Demo

Altering the Value of JOB_QUEUE_PROCESSES

In order for a materialized view to refresh itself automatically from its base tables, the job scheduling system must be active. This is done by setting JOB_QUEUE_PROCESSES to a value greater than 0. The actual value depends on how many job processes will be running simultaneously.

1. Determine what the current value is for JOB_QUEUE_PROCESSES.

SQL> show parameter job

2. If the value is 0, or too low, then the value must be changed. This can be done by changing the value in the parameter file and bouncing the database, or by using the ALTER SYSTEM command.

SQL> alter system set JOB_QUEUE_PROCESSES = 8;

Demo

Creating a Materialized View Log

On the remote database (dba02.world) you will want to create a materialized view log against the base table ORDERS. This will be a place to store information about which rows have been modified so that at the time a refresh runs, only modified rows will be updated in the materialized view, and not all rows.

1. Log on to the remote database as user OE/OE.

SQL> connect oe/oe@dba02.world

2. Create the materialized view log for the ORDERS table.

SQL> create materialized view log on
2> oe.orders
3> with primary key;

3. Confirm the creation of the log table, which should be named MLOG$_ORDERS, and count how many rows it contains. Because there have been no modifications to the ORDERS table since the log was created, there should be no rows in the log table.

SQL> desc mlog$_orders

Now count the number of rows in the log table.

SQL> select count(*)
2> from mlog$_orders;

4. Now update the first 10 rows of the ORDERS table, and see if the log table increases by the same number of rows.

SQL> update orders
2> set order_total = 10
3> where rownum < 11;

Now do the count:

SQL> select count(*)
2> from mlog$_orders;

There should now be 10 rows in the log table.

Remote vs. Distributed Transactions

Definitions

A remote transaction is a transaction containing one or more DML or DDL statements that are completely resolved using only one remote database.

A distributed transaction is a transaction containing DML or DDL statements that require two or more different databases in order to be resolved.

The following DDL and DML commands are supported in distributed transactions:

CREATE TABLE AS SELECT

DELETE

INSERT

LOCK TABLE

SELECT

SELECT FOR UPDATE

INIT.ORA Parameter Affecting Distributed Transactions

The DISTRIBUTED_TRANSACTIONS parameter specifies the maximum number of distributed transactions in which this database can concurrently participate. If set to 0, no distributed transactions are allowed for the database. The recovery (RECO) process will not start when the instance starts up if this parameter is set to 0. The recommended setting for this parameter is the total number of distributed database sites in your environment.

Range of values: 0–TRANSACTIONS.

Default: OS dependent.
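
In the parameter file this is a single entry, for example:

DISTRIBUTED_TRANSACTIONS = 32    # for example, one per distributed site; value assumed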

Two Phase Commit

Definition

An in-doubt transaction is automatically resolved when the database or network is restored. If it cannot be resolved because the down database is being re-created or destroyed, then the administrator can manually resolve in-doubt transactions from the other nodes.

 

A client is a database that requests information from another database.

 

A database server is a database that receives a request for information from another database involved in the distributed transaction.

 

The global coordinator is the node where the distributed transaction originated. This node is responsible for sending messages to the other nodes to prepare and then to commit or roll back.

 

A local coordinator is a node that is forced to reference data on other nodes in order to complete its part of the transaction. It is also responsible for coordinating the transaction among the nodes with which it communicates directly.

 

The commit point site is the database that commits or rolls back the transaction as instructed by the global coordinator.

 

The node in the transaction with the highest value for COMMIT_POINT_STRENGTH will usually be the commit point site. If you do not set a commit point site, the Oracle database will determine which site becomes the commit point site.

Range of values: 0–255.

Default value: Operating system–dependent

The two-phase commit mechanism is used to coordinate the committing or rolling back of the changes in a transaction as a self-contained unit. This mechanism is automatically used during all distributed transactions. If one of the remote databases shuts down, or the network goes down during execution of the commit statement, the transaction may become an in-doubt transaction. The user application can trap the errors that indicate this status.

The hierarchical model of each database and its role in the transaction is called the session tree. Each database that is involved in a distributed transaction will be classified into one of the following roles:

Client

Database Server

Global Coordinator

Local Coordinator

Commit Point Site

The commit point site in a distributed transaction is determined by the value given to the COMMIT_POINT_STRENGTH parameter.

A database’s commit point strength should be set relative to the amount of critical shared data in the database. For example, a database on a mainframe computer typically shares more data among users than one on a personal computer. Therefore, the COMMIT_POINT_STRENGTH parameter should be set to a higher value for the mainframe computer.

The commit point site stores information about the status of transactions. Other computers in a distributed transaction require this information, so it is desirable to have machines that are always available as commit point sites. Therefore, set the COMMIT_POINT_STRENGTH parameter to a higher value on your more available machines.
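
For example (values assumed), in the parameter files of two nodes:

COMMIT_POINT_STRENGTH = 200    # the highly available mainframe
COMMIT_POINT_STRENGTH = 100    # the less available workstation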

The Three Different Phases

When asked to prepare, the initiating database (where the commit was performed) asks each of the other databases involved in the distributed transaction to promise to commit or roll back when told to do so at some later stage.

 

The commit point site commits first and its local locks are released; then the other databases commit and notify the commit point site that they have committed. After notifying the commit point site, the other databases forget about the transaction.

 

The commit point site forgets about the transaction and deletes the transaction status information.

 

A two-phase commit can be broken down into three distinct phases. All three phases take place quickly and transparently to the user application. The phases are:

Prepare

Commit

Forget

During the Prepare phase each node except the commit point site:

Locks the relevant tables to prevent both reads and writes until the commit phase is complete.

Flushes the data changes in the redo log buffer to disk in preparation for committing.

Determines the highest System Change Number (SCN) at all nodes and designates it as the global commit SCN for this transaction, which is passed to the other databases. This information is required if SCN-based recovery becomes necessary later.

If there is an instance failure at this point, the information in the log files can be used for recovery back to the prepare phase. This is what is meant by promising to commit or roll back only when instructed to do so.

Failure During a Two Phase Commit

What happens if any of the three phases fails because of a system or network failure?

Data integrity may be compromised.

Assume that the RECO process will resolve any transactions by a ROLLBACK or COMMIT.

Crash during PREPARE results in: ROLLBACK of the transaction.

Crash during COMMIT results in: COMMIT or ROLLBACK of the transaction.

Examine the following two scenarios to understand failure resolution during a two-phase commit. For both scenarios assume that there are only two sites, one local and the other remote. This is done purely for simplicity; the same mechanism works regardless of the number of databases.

SCNs

Definition of System Change Numbers

A System Change Number (SCN) is a clock value for the database that describes a committed version of the database. It functions as a sequence generator that controls concurrency and redo record ordering. Think of it as a timestamp that guarantees transaction consistency.

For example, when a transaction is committed, the LogWriter (LGWR) process writes the transaction’s redo entries from the log buffer to the current redo log file along with a local SCN.

Global System Change Numbers

A distributed transaction is committed with the highest global SCN among all of the databases involved. The commit global SCN is sent to all databases involved in the transaction.

 

For more information on rolling forward refer to the "Oracle9i Database: DBA Fundamental 2" course.

 

Oracle uses Global SCNs to coordinate distributed transactions between databases in the following ways:

If a transaction becomes an in-doubt transaction, an administrator can use the global SCN to coordinate changes made to the global database. Later on, that global SCN can be used to identify the committed transaction, perhaps during SCN-based recovery of one or more distributed databases.

SCNs help maintain global integrity because they are a synchronized commit timestamp of a transaction, even after a failure. When Oracle recovers the SCN from the redo logs during instance recovery, the rolling forward of this information guarantees global integrity. Rolling forward is automatic. If incomplete media recovery is performed, the rolling forward is manual.

In-Doubt Transactions

Definition

Types of System Failures
A server machine with Oracle software running on it crashes.
A network connection between two or more Oracle databases involved in distributed processing is disconnected.
An unhandled software error (in the distributed application) occurs.

 

It should also be noted that a transaction with an in-doubt status is extremely rare and difficult to capture.

 

The purpose of the RECO process is to resolve all transactions that are marked as in-doubt, as soon as possible.

 

DBA_2PC_PENDING contains the status of in-doubt transactions
DBA_2PC_NEIGHBORS contains in-doubt transaction IDs

 

It is important to note that reads are blocked because, until the transaction is resolved, Oracle cannot determine which version of the data to display to a querying user.

 

An in-doubt transaction is a distributed transaction where one or more of the involved servers has become unavailable during the two-phase commit. The commit point site then does not know what to do with the transaction: did the site commit before going down, or should the transaction be rolled back? For this reason the transaction is labelled as an in-doubt transaction until this dilemma is resolved.

The RECO process automatically resolves the in-doubt transactions when the machine, network, or software problem is resolved. Until RECO can resolve the transaction, the data involved remains locked for both reads and writes.

Once RECO has resolved the transaction, information from the pending data dictionary tables is removed.
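
While the transaction is unresolved it can be observed with a query such as:

SELECT local_tran_id, global_tran_id, state, mixed
FROM dba_2pc_pending;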

Automatic Resolution

In order to resolve an in-doubt transaction automatically, repair the problem that is causing the database, or databases, to be unavailable. If the network is down then bring it back up.

RECO will take care of establishing data consistency between all the distributed databases. This is performed without assistance from the DBA, other than to ensure that the databases can communicate with each other.

In-Doubt Transactions

Manual Resolution

Performing a manual resolution requires a knowledge of the data involved. A wrong decision while performing a force commit or force abort can lead to database inconsistencies that can be difficult to trace and must be manually corrected when discovered.

A manual override should only be performed when:

The in-doubt transaction is holding locks required by other transactions

The in-doubt transaction is preventing the extents of a rollback segment from being reused

The database or network failure cannot be resolved in a reasonable time frame

You may also want to contact Oracle Support for assistance.

To resolve an in-doubt transaction you need the system privilege FORCE TRANSACTION or FORCE ANY TRANSACTION. Then issue one of the two commands:

 

COMMIT [WORK] FORCE 'text' [, integer]

or

ROLLBACK [WORK] FORCE 'text'

where text is the local transaction ID and integer is the SCN.

Once the transaction is manually resolved, the DBA may need to purge the pending view.
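
A hedged walk-through (the transaction ID shown is hypothetical; take the real value from DBA_2PC_PENDING):

SELECT local_tran_id, state FROM dba_2pc_pending;

COMMIT FORCE '1.21.17';   -- hypothetical local transaction ID

EXECUTE DBMS_TRANSACTION.PURGE_LOST_DB_ENTRY('1.21.17');   -- only if the remote database is lost or re-created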

Transaction Naming

 

Before you start a transaction, you can name it by using the command SET TRANSACTION ... NAME '<Name>':

SET TRANSACTION NAME 'Pretoria';

The text string of <Name> is limited to 255 bytes, and should be a simple and memorable text string.

Oracle9i allows a transaction to be named. Transaction names replace commit comments for distributed transactions, with the following advantages:

It is easier to monitor long-running transactions and to resolve in-doubt distributed transactions.

You can view transaction names along with transaction IDs in applications. For example, a DBA can view transaction names in the Oracle Enterprise
Manager, when monitoring system activity.

Transaction names are written to the transaction auditing redo record, if compatibility is set to Oracle9i, Release 1 (9.0.1) or higher.

LogMiner can use transaction names to search for a specific transaction from transaction auditing records in the redo log.

You can use transaction names to find a specific transaction in data dictionary tables, such as V$TRANSACTION.

When you name a transaction, you associate the transaction's name with its ID. Transaction names do not have to be unique; different transactions can have the same
transaction name at the same time by the same owner. You can use any name that enables you to distinguish the transaction.
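
For example, a named transaction can be located with a query such as:

SELECT xidusn, xidslot, xidsqn, name, status
FROM v$transaction;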

Purging Pending Rows

PURGE_LOST_DB_ENTRY

execute dbms_transaction.purge_lost_db_entry ('trans_id');

where trans_id is the local transaction identifier, passed as a string

 

Before RECO recovers the transaction, the transaction will show up in the STATE column of DBA_2PC_PENDING as Collecting, Committed, or Prepared. If the administrator has forced an in-doubt transaction to have a particular result by using commit force or rollback force, then the Forced Commit or Forced Abort (Rollback) states may also appear. Automatic recovery will normally delete entries in any of these states.

The only exception is when recovery finds a forced transaction which is in a state inconsistent with other sites in the transaction; in this case, the entry can be left in the table and the MIXED column in DBA_2PC_PENDING will have a value Yes.

The PURGE_LOST_DB_ENTRY procedure should only be used when the other database is lost or has been recreated. Any other use may leave the other database in an unrecoverable or inconsistent state. In other words, once you have manually resolved a transaction, the entries in the pending views will be deleted and you should not need to purge the data dictionary tables containing transaction states.

Automatic recovery is not possible because a remote database may have been permanently lost; when and if it is re-created, the database will receive a new database ID. This means that the recovery cannot identify the recreated database (a possible symptom is ORA-02062). In this case, the DBA should use the PURGE_LOST_DB_ENTRY procedure to clean up the entries in any state other than prepared. The DBA does not need to be in any particular hurry to clean up the entries in DBA_2PC_PENDING in this situation, because the forced transactions will not be holding any database resources (rollback segments or row locks).

Purging Pending Rows

PURGE_MIXED

execute dbms_transaction.purge_mixed ('trans_id');

where trans_id is the transaction identifier, passed as a string

 

Use the PURGE_MIXED procedure only if a significant reconfiguration has occurred so that automatic recovery (RECO) cannot resolve the transaction. Examples of appropriate cases in which to use this procedure are:

Total loss of the remote database

Reconfiguration in software resulting in loss of two-phase commit capability

Information from an external transaction coordinator such as a TP Monitor

Examine and perform any manual action to remove inconsistencies, then use the PURGE_MIXED procedure.

Demo

Two Phase Commit

Description of Problem

If any nodes involved in a distributed transaction crash during the prepare phase, that is, prior to the commit point site (in this example, the local database) being told to proceed with commit, then the transaction will ultimately be rolled back at all sites by the RECO process. This will happen at the remote site when the site is restored.

 

At the local site, a distributed transaction on a remote table is performed and committed. Due to the values of the COMMIT_POINT_STRENGTH for each site, the local site is determined to be the commit point site.

When asked to prepare, the commit point site (where the commit was performed) requests each of the other databases involved in the distributed transaction to promise to commit or rollback when told to do so at some later stage.

Next, the remote site crashes before successfully notifying the commit point site that it has prepared and promised to commit.

What will happen to the transaction?

Two Phase Commit

Description of Problem

RECO will ensure that all nodes eventually commit when the network is restored.
If the commit message did not reach the remote database before the network link failed, the transaction will ultimately roll back when the network is restored between the local and remote databases.

 

At the local site, a distributed transaction on a remote table is performed and committed. Due to the values of the COMMIT_POINT_STRENGTH for each site, the local site is determined to be the commit point site.

A message is returned from the remote site that it has prepared to commit. The commit point site then sends a message to proceed with commit to the remote site. The commit message reaches the remote database and then the network link between the local and remote sites crashes.

What will happen to the transaction?

Demo

Walking Through the Process of Coordinated Recovery

For this scenario you are going to have a distributed system that is made up of only two databases, dba01 and dba02. This is done purely to make the scenario simpler. The same method would be applied to complicated distributed systems, except that more coordination would be required.

In this example, dba01 is the database that has crashed. If you were able to perform a complete recovery then no action on the part of the DBA would need to be taken, as RECO will ensure that all distributed transactions between dba01 and dba02 are resolved.

Performing a point-in-time recovery requires the DBAs of each database to coordinate their actions. The other DBAs may well resist taking their database back in time due to the data that will be lost, and the time and effort involved. However, for global data consistency it may be unavoidable.

The following scenario should be followed:

Step 1

Determine whether there have been any distributed transactions between the databases from the time at which recovery will stop on dba01 (from now on termed the recovery time) until the time of failure of dba01.

Step 2

If there have been no distributed transactions between the two databases during this time period, then only dba01 need go back in time, and dba02 can be left alone. In that case, ignore the rest of the scenario and simply recover dba01.

Step 3

If there has been even one distributed transaction, or if you are uncertain, then both systems will have to go back in time. This means a loss of data on both dba01 and dba02, but it ensures that the databases are consistent with each other.

Step 4

In MOUNT mode ensure that all required data files are online.

Step 5

Perform an incomplete recovery on dba01. Obviously it would be advisable to get as close to the point of failure as possible.

Step 6

When dba01 has completed its recovery, open the database. This will require the RESETLOGS option because the database had an incomplete recovery.

Step 7

When opening a database with the RESETLOGS option, a note is made in the ALERT.LOG file of what the last SCN was. Make a note of this number.

Step 8

Now recover the dba02 database using the UNTIL CHANGE option, where the change number is the SCN obtained from the ALERT.LOG. The SCN must be used because it is a far more accurate measure of synchronization than log switches or time, which need not have the same values on all databases.

Step 9

Once both databases are at the same SCN, they will be consistent with each other, and so the databases can be opened and business can resume.
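
A sketch of the commands involved (the time and SCN shown are hypothetical):

-- On dba01: incomplete recovery, then open with RESETLOGS
RECOVER DATABASE UNTIL TIME '2002-06-01:12:00:00';
ALTER DATABASE OPEN RESETLOGS;

-- On dba02: recover to the SCN noted in dba01's alert log
RECOVER DATABASE UNTIL CHANGE 309121;
ALTER DATABASE OPEN RESETLOGS;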

 

 

Note 1

There will be a temptation not to take dba02 back in time because of the amount of data that might be lost. This must be weighed against the likelihood of having a distributed system in which information is inaccurate because distributed transactions were committed at one site and are nonexistent at the other; such a situation cannot be supported by Oracle.

Note 2

After performing an incomplete recovery on a database it is highly recommended that an immediate and complete backup be made of the database since the RESETLOGS option was used.

 
