Configuration Generator for Oracle Maximum Availability Architectures

Physical Standby Creation

Summary

The following table summarizes the Physical Standby Creation procedure:

Task                             Sub Task                                  Script/File                   User    Cluster          Nodes
Network Configuration            Configure listener.ora                    listener.txt                  grid    Both clusters    All nodes
                                 Configure tnsnames.ora                    tnsnames.txt                  oracle  Both clusters    All nodes
Primary Preparation              Enable Force Logging                      EnableForceLogging.sh         oracle  Primary cluster  First node
                                 Enable Flashback Logging                  EnableFlashbackLogging.sh     oracle  Primary cluster  First node
                                 Set Data Guard Broker Parameters          SetBrokerParameters.sh        oracle  Primary cluster  First node
                                 Set Filename Conversion Parameters        SetConvertParameters.sh       oracle  Primary cluster  First node
                                 Create Standby Redo Logs                  CreateStandbyRedoLogs.sh      oracle  Primary cluster  First node
                                 Copy Password Files                       CopyPasswordFiles.sh          oracle  Primary cluster  First node
Standby Preparation              Create Administrative Directories         CreateDirectories.sh          oracle  Standby cluster  All nodes
Standby Duplication              Start Standby Instance                    StartupNoMountPFILE.sh        oracle  Standby cluster  First node
                                 Duplicate Standby Database                DuplicateStandby.sh           oracle  Standby cluster  First node
Parameter File Configuration     Copy SPFILE to Standby Database           RestoreSPFILE.sh              oracle  Standby cluster  First node
                                 Convert SPFILE into a PFILE               CreatePFILE.sh                oracle  Standby cluster  First node
                                 Extract Global Parameters                 ExtractGlobalParameters.sh    oracle  Standby cluster  First node
                                 Create SPFILE                             CreateSPFILE.sh               oracle  Standby cluster  First node
                                 Create init.ora files                     CreateInitOraFiles.sh         oracle  Standby cluster  First node
                                 Shutdown Instance                         ShutdownImmediateInstance.sh  oracle  Standby cluster  First node
                                 Start Instance                            StartupNoMountSPFILE.sh       oracle  Standby cluster  First node
Control File Configuration       Set Control Files Parameter               SetControlFiles.sh            oracle  Standby cluster  First node
                                 Shutdown Instance                         ShutdownImmediateInstance.sh  oracle  Standby cluster  First node
                                 Start Instance                            StartupNoMountSPFILE.sh       oracle  Standby cluster  First node
                                 Restore Control Files                     RestoreControlFiles.sh        oracle  Standby cluster  First node
                                 Shutdown Instance                         ShutdownImmediateInstance.sh  oracle  Standby cluster  First node
OCR Configuration                Add Database to OCR                       AddDatabase.sh                oracle  Standby cluster  First node
                                 Check Database Status                     StatusDatabase.sh             oracle  Standby cluster  First node
                                 Start Database                            StartDatabase.sh              oracle  Standby cluster  First node
                                 Check Database Status                     StatusDatabase.sh             oracle  Standby cluster  First node
Data Guard Broker Configuration  Create Data Guard Broker Configuration    DGBCreateConfiguration.sh     oracle  Primary cluster  First node
                                 Edit Data Guard Broker Properties         DGBEditProperties.sh          oracle  Primary cluster  First node
                                 Enable Data Guard Broker Configuration    DGBEnableConfiguration.sh     oracle  Primary cluster  First node
                                 Check Data Guard Broker Configuration     DGBShowConfiguration.sh       oracle  Primary cluster  First node
Flashback Logging Configuration  Disable Data Guard Broker Configuration   DGBDisableConfiguration.sh    oracle  Primary cluster  First node
                                 Cancel Managed Recovery                   CancelManagedRecovery.sh      oracle  Standby cluster  First node
                                 Enable Flashback Logging                  EnableFlashbackLogging.sh     oracle  Standby cluster  First node
                                 Enable Data Guard Broker Configuration    DGBEnableConfiguration.sh     oracle  Primary cluster  First node
                                 Check Data Guard Broker Configuration     DGBShowConfiguration.sh       oracle  Primary cluster  First node

This page describes how to build a physical standby database using the MAAGEN-generated scripts.

The following sections assume that scripts have already been generated and distributed to the target nodes using MAAGEN.

This page assumes that Grid Infrastructure has been installed by the grid user and that the RDBMS has been installed by the oracle user. The XML document allows any user to be configured to own either the Grid Infrastructure or the RDBMS.

This page also assumes that a primary database has already been created in the primary cluster.

Network Configuration

LISTENER.ORA

MAAGEN generates additional entries for the listener.ora file in a file called listener.txt. Note that the listener.ora file differs on each node in both clusters and therefore the listener.txt file also differs on each node.

As the grid user on each node in both clusters, edit listener.ora.

In the primary cluster, the listener.ora files should contain entries for the SCAN listeners and at least one database.

In the standby cluster, the listener.ora files should contain entries for the SCAN listeners.

For example, the listener.ora file might contain the following:

ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER_SCAN1=ON
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER_SCAN2=ON
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER_SCAN3=ON

ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER=ON

LISTENER_SCAN1=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER_SCAN1))))
LISTENER_SCAN2=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER_SCAN2))))
LISTENER_SCAN3=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER_SCAN3))))

LISTENER =
  (DESCRIPTION=
    (ADDRESS_LIST=
      (ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER))
      (ADDRESS=(PROTOCOL=TCP)(HOST=bristol-vip.juliandyke.com)(PORT=1521))
    )
  )

SID_LIST_LISTENER =
  (SID_LIST=
    (SID_DESC=
      (GLOBAL_DBNAME=PROD_DGMGRL)
      (ORACLE_HOME=/u01/app/oracle/product/11.2.0.4/db_1)
      (SID_NAME=PROD1)
    )
  )

Assume that listener.txt contains the following:

    (SID_DESC=
      (GLOBAL_DBNAME=TEST_DGMGRL)
      (ORACLE_HOME=/u01/app/oracle/product/11.2.0.4/db_1)
      (SID_NAME=TEST1)
    )

Add the new SID_DESC entry to listener.ora. For example:

ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER_SCAN1=ON
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER_SCAN2=ON
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER_SCAN3=ON

ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER=ON

LISTENER_SCAN1=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER_SCAN1))))
LISTENER_SCAN2=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER_SCAN2))))
LISTENER_SCAN3=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER_SCAN3))))

LISTENER =
  (DESCRIPTION=
    (ADDRESS_LIST=
      (ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER))
      (ADDRESS=(PROTOCOL=TCP)(HOST=bristol-vip.juliandyke.com)(PORT=1521))
    )
  )

SID_LIST_LISTENER =
  (SID_LIST=
    (SID_DESC=
      (GLOBAL_DBNAME=PROD_DGMGRL)
      (ORACLE_HOME=/u01/app/oracle/product/11.2.0.4/db_1)
      (SID_NAME=PROD1)
    )
    (SID_DESC=
      (GLOBAL_DBNAME=TEST_DGMGRL)
      (ORACLE_HOME=/u01/app/oracle/product/11.2.0.4/db_1)
      (SID_NAME=TEST1)
    )
  )

Reload the listener configuration using the lsnrctl reload command:

[grid@bristol1]$ lsnrctl reload

Repeat the above procedure on each node in both clusters.

TNSNAMES.ORA

MAAGEN generates additional entries for the tnsnames.ora file in a file called tnsnames.txt. Note that the tnsnames.ora file can usually be identical on all nodes in both clusters and this is strongly recommended to improve maintainability and to reduce configuration errors.

Note that the tnsnames.ora file may already contain definitions for other databases, the backup catalog, etc. Therefore care should be taken when modifying tnsnames.ora.

For example, for the PROD databases, tnsnames.txt might contain the following:

PRODBRIS.JULIANDYKE.COM =
  (DESCRIPTION=
    (ADDRESS=(PROTOCOL=TCP)(HOST=bristol-scan.juliandyke.com)(PORT=1521))
    (CONNECT_DATA=
      (SERVER=DEDICATED)
      (SERVICE_NAME=PRODBRIS)
    )
  )

PRODBRIS1.JULIANDYKE.COM =
  (DESCRIPTION=
    (ADDRESS=(PROTOCOL=TCP)(HOST=bristol1-vip.juliandyke.com)(PORT=1521))
    (CONNECT_DATA=
      (SERVER=DEDICATED)
      (SERVICE_NAME=PRODBRIS)
      (INSTANCE_NAME=PROD1)
    )
  )

PRODBRIS2.JULIANDYKE.COM =
  (DESCRIPTION=
    (ADDRESS=(PROTOCOL=TCP)(HOST=bristol2-vip.juliandyke.com)(PORT=1521))
    (CONNECT_DATA=
      (SERVER=DEDICATED)
      (SERVICE_NAME=PRODBRIS)
      (INSTANCE_NAME=PROD2)
    )
  )

PRODBRIS_DGMGRL.JULIANDYKE.COM =
  (DESCRIPTION=
    (ADDRESS_LIST=
      (LOAD_BALANCE=OFF)
      (FAILOVER=ON)
      (ADDRESS=(PROTOCOL=TCP)(HOST=bristol1-vip.juliandyke.com)(PORT=1521))
      (ADDRESS=(PROTOCOL=TCP)(HOST=bristol2-vip.juliandyke.com)(PORT=1521))
    )
    (CONNECT_DATA=
      (SERVER=DEDICATED)
      (SERVICE_NAME=PRODBRIS_DGMGRL)
    )
  )

PRODBRIS1_DGMGRL.JULIANDYKE.COM =
  (DESCRIPTION=
    (ADDRESS=(PROTOCOL=TCP)(HOST=bristol1-vip.juliandyke.com)(PORT=1521))
    (CONNECT_DATA=
      (SERVER=DEDICATED)
      (SERVICE_NAME=PRODBRIS_DGMGRL)
      (INSTANCE_NAME=PROD1)
    )
  )

PRODBRIS2_DGMGRL.JULIANDYKE.COM =
  (DESCRIPTION=
    (ADDRESS=(PROTOCOL=TCP)(HOST=bristol2-vip.juliandyke.com)(PORT=1521))
    (CONNECT_DATA=
      (SERVER=DEDICATED)
      (SERVICE_NAME=PRODBRIS_DGMGRL)
      (INSTANCE_NAME=PROD2)
    )
  )

PRODCARD.JULIANDYKE.COM =
  (DESCRIPTION=
    (ADDRESS=(PROTOCOL=TCP)(HOST=cardiff-scan.juliandyke.com)(PORT=1521))
    (CONNECT_DATA=
      (SERVER=DEDICATED)
      (SERVICE_NAME=PRODCARD)
    )
  )

PRODCARD1.JULIANDYKE.COM =
  (DESCRIPTION=
    (ADDRESS=(PROTOCOL=TCP)(HOST=cardiff1-vip.juliandyke.com)(PORT=1521))
    (CONNECT_DATA=
      (SERVER=DEDICATED)
      (SERVICE_NAME=PRODCARD)
      (INSTANCE_NAME=PROD1)
    )
  )

PRODCARD2.JULIANDYKE.COM =
  (DESCRIPTION=
    (ADDRESS=(PROTOCOL=TCP)(HOST=cardiff2-vip.juliandyke.com)(PORT=1521))
    (CONNECT_DATA=
      (SERVER=DEDICATED)
      (SERVICE_NAME=PRODCARD)
      (INSTANCE_NAME=PROD2)
    )
  )

PRODCARD_DGMGRL.JULIANDYKE.COM =
  (DESCRIPTION=
    (ADDRESS_LIST=
      (LOAD_BALANCE=OFF)
      (FAILOVER=ON)
      (ADDRESS=(PROTOCOL=TCP)(HOST=cardiff1-vip.juliandyke.com)(PORT=1521))
      (ADDRESS=(PROTOCOL=TCP)(HOST=cardiff2-vip.juliandyke.com)(PORT=1521))
    )
    (CONNECT_DATA=
      (SERVER=DEDICATED)
      (SERVICE_NAME=PRODCARD_DGMGRL)
    )
  )

PRODCARD1_DGMGRL.JULIANDYKE.COM =
  (DESCRIPTION=
    (ADDRESS=(PROTOCOL=TCP)(HOST=cardiff1-vip.juliandyke.com)(PORT=1521))
    (CONNECT_DATA=
      (SERVER=DEDICATED)
      (SERVICE_NAME=PRODCARD_DGMGRL)
      (INSTANCE_NAME=PROD1)
    )
  )

PRODCARD2_DGMGRL.JULIANDYKE.COM =
  (DESCRIPTION=
    (ADDRESS=(PROTOCOL=TCP)(HOST=cardiff2-vip.juliandyke.com)(PORT=1521))
    (CONNECT_DATA=
      (SERVER=DEDICATED)
      (SERVICE_NAME=PRODCARD_DGMGRL)
      (INSTANCE_NAME=PROD2)
    )
  )

The above example shows the minimum configuration for the tnsnames.ora file summarized in the following table:

TNS Service Name    Type               Location
PRODBRIS            Database           Bristol
PRODBRIS1           Instance           Bristol
PRODBRIS2           Instance           Bristol
PRODBRIS_DGMGRL     Database (Broker)  Bristol
PRODBRIS1_DGMGRL    Instance (Broker)  Bristol
PRODBRIS2_DGMGRL    Instance (Broker)  Bristol
PRODCARD            Database           Cardiff
PRODCARD1           Instance           Cardiff
PRODCARD2           Instance           Cardiff
PRODCARD_DGMGRL     Database (Broker)  Cardiff
PRODCARD1_DGMGRL    Instance (Broker)  Cardiff
PRODCARD2_DGMGRL    Instance (Broker)  Cardiff

As the oracle user on each node in both clusters, append tnsnames.txt to $ORACLE_HOME/network/admin/tnsnames.ora.

Alternatively, if you are certain that the tnsnames.ora files are currently identical across all nodes, then append the tnsnames.txt file to tnsnames.ora on the first node in the primary cluster and then copy the tnsnames.ora file to all other nodes on both clusters.

To determine whether the files are identical, use the sum utility. For example:

[oracle@bristol1]$ sum $ORACLE_HOME/network/admin/tnsnames.ora
21230     1

In the above example, the sum utility has printed the checksum (21230) and the block count (1) for tnsnames.ora.

Primary Database Preparation

Enable Force Logging

Force logging should be enabled on the primary to ensure that all changes are captured in the redo for transmission to the standby. If force logging is not enabled and users specify the NOLOGGING option then some changes may not be written to the redo logs.

Force logging is configured in the control file and is therefore not controlled by a database parameter.

To check whether force logging is currently enabled, use:

SQL> SELECT force_logging FROM v$database;

FORCE_LOGGING
-------------
NO

If force logging is not currently enabled then, as the oracle user on the first node of the primary cluster, execute the EnableForceLogging.sh script.

[oracle@bristol1]$ ./EnableForceLogging.sh
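
The underlying operation is the standard ALTER DATABASE command:

SQL> ALTER DATABASE FORCE LOGGING;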

Enable Flashback Logging

With a Data Guard Broker configuration, flashback logging is optional but strongly recommended. If flashback logging is enabled, the old primary can be automatically reinstated as a standby database following a failover; the Data Guard Broker uses the flashback logs to perform the reinstatement. Reinstatement is usually faster than rebuilding the standby database, and significantly more efficient when data volumes are high or the network connection is slow.

Flashback logging is configured in the control file and is therefore not controlled by a database parameter.

To check whether flashback logging is currently enabled, use:

SQL> SELECT flashback_on FROM v$database;

FLASHBACK_ON
------------
NO

If flashback logging is not currently enabled then, as the oracle user on the first node of the primary cluster, execute the EnableFlashbackLogging.sh script.

[oracle@bristol1]$ ./EnableFlashbackLogging.sh
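
Enabling flashback logging requires the fast recovery area (db_recovery_file_dest and db_recovery_file_dest_size) to have been configured. As a sketch, the underlying commands are likely to resemble the following; the retention target shown is illustrative:

SQL> ALTER SYSTEM SET db_flashback_retention_target=1440 SCOPE=BOTH SID='*';
SQL> ALTER DATABASE FLASHBACK ON;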

Data Guard Broker Parameters

Three additional configuration parameters are required for the Data Guard Broker.

dg_broker_start

This parameter starts the Data Guard Monitor (DMON) background process.

dg_broker_config_file1

This parameter specifies the location of the first mirrored copy of the Data Guard Broker configuration file.

dg_broker_config_file2

This parameter specifies the location of the second mirrored copy of the Data Guard Broker configuration file.

The dg_broker_config_file parameters default to single-instance locations in $ORACLE_HOME/dbs, which are not appropriate for RAC databases. Therefore these parameters should be set to locations in shared storage. In this case the broker configuration files can be assigned to ASM locations.

The Data Guard Broker parameters must be set on both the primary and the standby database. If the parameters are set on the primary database prior to database duplication then they will be copied to the standby database during SPFILE configuration.

As the oracle user on the first node in the primary cluster, execute the SetBrokerParameters.sh script:

[oracle@bristol1]$ ./SetBrokerParameters.sh

All the Data Guard Broker parameters are dynamic, so it is not necessary to restart the database after they have been updated.
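
As a sketch, SetBrokerParameters.sh might execute statements similar to the following; the disk group and file names are illustrative:

ALTER SYSTEM SET dg_broker_config_file1='+PRODBRIS_DATA/PRODBRIS/dr1PRODBRIS.dat' SCOPE=BOTH SID='*';
ALTER SYSTEM SET dg_broker_config_file2='+PRODBRIS_FRA/PRODBRIS/dr2PRODBRIS.dat' SCOPE=BOTH SID='*';
ALTER SYSTEM SET dg_broker_start=TRUE SCOPE=BOTH SID='*';

Note that the configuration file locations should be set before dg_broker_start is set to TRUE.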

Filename Conversion Parameters

There are two filename conversion parameters:

db_file_name_convert

This parameter specifies datafile name conversions between the primary and standby databases.

log_file_name_convert

This parameter specifies logfile name conversions between the primary and standby databases.

The values for both parameters are a list of source and target patterns that are applied to the file names when the database is copied from the primary to the standby. If the standby_file_management parameter is set to AUTO, then when new files are created on the primary database, the conversion parameters are applied to derive the names of the equivalent files on the standby database.

For example:

ALTER SYSTEM SET db_file_name_convert='+PRODBRIS_DATA/PRODBRIS','+PRODCARD_DATA/PRODCARD' SCOPE=SPFILE;
ALTER SYSTEM SET log_file_name_convert='+PRODBRIS_REDO/PRODBRIS','+PRODCARD_REDO/PRODCARD' SCOPE=SPFILE;

In the above example, files in the PRODBRIS directory of the +PRODBRIS_DATA disk group in the primary cluster will be stored in the PRODCARD directory of the +PRODCARD_DATA disk group in the standby cluster.

Note that the parameter values are reversed on the standby cluster.

As the oracle user on the first node of the primary cluster, execute SetConvertParameters.sh. For example:

[oracle@bristol1]$ ./SetConvertParameters.sh

Note that the convert parameters are NOT dynamic. They can only be set in the SPFILE. Therefore the primary database must be restarted after the convert parameters are modified.

It is not necessary to restart the primary database prior to building the standby. However, the conversion parameters in the primary database MUST be updated before a switchover operation is attempted. This should not cause any operational issues, as the primary database will be shut down and restarted as a standby database during the switchover operation, and therefore connected sessions will be interrupted anyway.

Standby Redo Logs

Standby redo logs act as a staging area for redo before it is applied to the standby database by the Managed Recovery Process (MRP0).

There is one redo thread for each instance in a RAC database. Each redo thread contains two or more redo groups which may optionally be mirrored.

MAAGEN assumes that each redo thread contains the same number of redo groups. It also assumes each log file is the same size. If this is not the case then either manual modification of the scripts or an enhancement request will be required.

Oracle recommends that for each redo thread, the number of standby log groups is equal to the number of redo groups plus one. This recommendation is implemented by MAAGEN. The standby redo logs should be the same size as the redo log members. For example:

SQL> ALTER DATABASE ADD STANDBY LOGFILE THREAD 1 GROUP 10 SIZE 512M;

The standby redo logs are created in the primary database prior to duplication. During the duplication process they are automatically recreated in the standby database.

As the oracle user on the first node in the primary cluster, execute the generated CreateStandbyRedoLogs.sh script. For example:

[oracle@bristol1]$ ./CreateStandbyRedoLogs.sh
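
To verify the result, query V$STANDBY_LOG on the primary database:

SQL> SELECT thread#, group#, bytes FROM v$standby_log ORDER BY thread#, group#;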

Copy Password Files

The password files contain passwords for SYSDBA and other privileged users. In particular, the password files contain the password for the SYS user. In order to create the standby database, and for the Data Guard Broker to manage the standby configuration successfully, the password files must be identical on all nodes.

As the oracle user on the first node of the primary cluster, copy the password file to all other nodes in the primary cluster and to all nodes in the standby cluster using the CopyPasswordFiles.sh generated script.

[oracle@bristol1]$ ./CopyPasswordFiles.sh

If the SYS password is updated on any instance, execute CopyPasswordFiles.sh again on the node on which that instance is running to ensure that the password files are resynchronized across the clusters.
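
CopyPasswordFiles.sh presumably uses scp or an equivalent mechanism; a sketch of the manual equivalent, assuming instances PROD1 and PROD2 run on the first and second nodes of each cluster:

[oracle@bristol1]$ scp $ORACLE_HOME/dbs/orapwPROD1 bristol2:$ORACLE_HOME/dbs/orapwPROD2
[oracle@bristol1]$ scp $ORACLE_HOME/dbs/orapwPROD1 cardiff1:$ORACLE_HOME/dbs/orapwPROD1
[oracle@bristol1]$ scp $ORACLE_HOME/dbs/orapwPROD1 cardiff2:$ORACLE_HOME/dbs/orapwPROD2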

Standby Database Preparation

Create Administrative Directories

As the oracle user on each node of the standby cluster, create administrative directories using the CreateDirectories.sh generated script. For example:

[oracle@cardiff1]$ ./CreateDirectories.sh

The administrative directories include the audit destination, which is specified by the audit_file_dest parameter. The audit destination must be created on a node before an instance can be started on that node.
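
For example, the equivalent manual command for the audit destination might be the following; the path is illustrative and must match the audit_file_dest parameter:

[oracle@cardiff1]$ mkdir -p /u01/app/oracle/admin/PRODCARD/adump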

Standby Database Duplication

The next step is to duplicate the primary database to the standby database.

Start Standby Instance

As the oracle user on the first node in the standby cluster, start an instance in NOMOUNT mode using the StartupNoMountPFILE.sh generated script.

[oracle@cardiff1]$ ./StartupNoMountPFILE.sh
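
The script presumably starts the instance using a minimal temporary PFILE; a sketch, with the PFILE name and location assumed:

[oracle@cardiff1]$ export ORACLE_SID=PROD1
[oracle@cardiff1]$ sqlplus / as sysdba
SQL> STARTUP NOMOUNT PFILE='/tmp/initPROD1.ora'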

Duplicate Standby Database

As the oracle user on the first node in the standby cluster, duplicate the primary database to the standby database using the DuplicateStandby.sh generated script.

[oracle@cardiff1]$ ./DuplicateStandby.sh

The current version of DuplicateStandby.sh uses the following RMAN command:

  DUPLICATE TARGET DATABASE FOR STANDBY FROM ACTIVE DATABASE NOFILENAMECHECK;

The DuplicateStandby.sh command is executed on the standby cluster to minimize resource usage on the primary cluster.

By default DuplicateStandby.sh allocates four channels on the primary (target) instance and four channels on the standby (auxiliary) instance. This is usually sufficient for initial implementations.

It may be necessary to tune the number of channels configured on the primary or the standby in order to reduce duplication time or to reduce the impact on the primary database.

Enterprise Edition databases can use multiple channels for the DUPLICATE operation. However, Standard Edition databases can only use a single RMAN channel, in which case the DuplicateStandby.sh script will require modification.

If it is necessary to modify the number of channels assigned, either modify the DuplicateStandby.rman script manually or raise an enhancement request.
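
As a sketch, a DuplicateStandby.rman script with explicit channel allocation might resemble the following; the connect identifiers are assumptions, and the generated script allocates four channels of each type rather than the two shown here:

CONNECT TARGET sys@PRODBRIS.JULIANDYKE.COM
CONNECT AUXILIARY sys@PRODCARD1_DGMGRL.JULIANDYKE.COM
RUN
{
  # Channels reading from the primary (target) database
  ALLOCATE CHANNEL t1 DEVICE TYPE DISK;
  ALLOCATE CHANNEL t2 DEVICE TYPE DISK;
  # Channels writing to the standby (auxiliary) instance
  ALLOCATE AUXILIARY CHANNEL a1 DEVICE TYPE DISK;
  ALLOCATE AUXILIARY CHANNEL a2 DEVICE TYPE DISK;
  DUPLICATE TARGET DATABASE FOR STANDBY FROM ACTIVE DATABASE NOFILENAMECHECK;
}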

For larger data volumes or very slow network connections, it may be necessary to create an RMAN backup of the primary database and to physically deliver the backup to the standby location where it can be restored as a standby database.

Parameter File Configuration

The next step is to configure the server parameter file (SPFILE) on the standby database. This procedure requires a number of steps in order to preserve parameter settings from the primary database whilst setting instance and site-specific parameters on the standby database.

Copy SPFILE to Standby Database

As the oracle user on the first node of the standby cluster, copy the current SPFILE for the primary database to the standby database using the RestoreSPFILE.sh generated script.

[oracle@cardiff1]$ ./RestoreSPFILE.sh

The SPFILE is restored using the RMAN BACKUP AS COPY REUSE command, which copies the file across the network. The file is written to a configurable temporary directory (by default /tmp).
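
As a sketch, the underlying RMAN command might take the following form, executed with both the target and auxiliary instances connected; the file names are illustrative:

BACKUP AS COPY REUSE
  TARGETFILE '+PRODBRIS_DATA/PRODBRIS/spfilePRODBRIS.ora'
  AUXILIARY FORMAT '/tmp/spfilePRODCARD.ora';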

Convert SPFILE into a PFILE

As the oracle user on the first node of the standby cluster, convert the temporary SPFILE into a temporary PFILE using the CreatePFILE.sh generated script:

[oracle@cardiff1]$ ./CreatePFILE.sh

The above command uses CREATE PFILE FROM SPFILE to convert the binary SPFILE into a text-based PFILE.

Extract Global Parameters

As the oracle user on the first node of the standby cluster, execute the ExtractGlobalParameters.sh generated script.

[oracle@cardiff1]$ ./ExtractGlobalParameters.sh

The above script extracts global parameters from the temporary PFILE and stores these in a temporary include file. In a RAC environment global parameters are prefixed by an asterisk (*).

ExtractGlobalParameters.sh also copies the generated mount.ora file to the temporary directory. The generated mount.ora file contains location-specific and instance-specific parameters.
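
As a sketch, the extraction can be performed with grep, since global parameters occupy lines beginning with "*."; the temporary file names are assumptions:

[oracle@cardiff1]$ grep '^\*\.' /tmp/initPRODCARD.ora > /tmp/global_params.ora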

Create SPFILE

As the oracle user on the first node of the standby cluster, create the SPFILE for the standby database using the CreateSPFILE.sh generated script.

[oracle@cardiff1]$ ./CreateSPFILE.sh

The SPFILE is based on the mount.ora file in the temporary directory. The mount.ora file includes the IFILE parameter to specify the temporary include file, which contains the global parameters extracted from the SPFILE on the primary database.
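
As a sketch, mount.ora might resemble the following; the parameter values and include file name are illustrative:

# Global parameters extracted from the primary SPFILE
IFILE='/tmp/global_params.ora'
# Site-specific and instance-specific overrides
*.db_unique_name='PRODCARD'
PROD1.instance_number=1
PROD2.instance_number=2
PROD1.thread=1
PROD2.thread=2
PROD1.undo_tablespace='UNDOTBS1'
PROD2.undo_tablespace='UNDOTBS2'

The SPFILE itself is then created with a command such as:

SQL> CREATE SPFILE='+PRODCARD_DATA/PRODCARD/spfilePRODCARD.ora' FROM PFILE='/tmp/mount.ora';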

Create init.ora files

As the oracle user on the first node of the standby cluster, execute the CreateInitOraFiles.sh generated script.

[oracle@cardiff1]$ ./CreateInitOraFiles.sh

The above script creates init.ora files on each node in the standby cluster.

The init.ora files are located in the $ORACLE_HOME/dbs directory and include the SID in the filename, e.g. initPROD1.ora, initPROD2.ora, etc.

The init.ora files contain the SPFILE parameter. For example:

SPFILE='+PRODCARD_DATA/PRODCARD/spfilePRODCARD.ora'

The SPFILE parameter is used by the instance to determine the ASM disk group and filename for the SPFILE.

Shutdown Instance

As the oracle user on the first node of the standby cluster, shut down the standby instance using the ShutdownImmediateInstance.sh generated script.

[oracle@cardiff1]$ ./ShutdownImmediateInstance.sh

Start Instance

As the oracle user on the first node of the standby cluster, restart the instance using the StartupNoMountSPFILE.sh generated script:

[oracle@cardiff1]$ ./StartupNoMountSPFILE.sh

The above script starts the instance in NOMOUNT mode using the SPFILE.

Control File Configuration

The RMAN DUPLICATE operation copies over a single control file for each database. Oracle strongly recommends that two or more mirrored control files are configured for each database to increase resilience.

By default, MAAGEN creates two control files, one in the data disk group and the other in the FRA disk group. Additional control files can be created either by modifying the scripts manually or by raising an enhancement request.

Set Control Files Parameter

As the oracle user on the first node in the standby cluster, set the CONTROL_FILES parameter in the standby instance using the SetControlFiles.sh generated script.

[oracle@cardiff1]$ ./SetControlFiles.sh

For example, SetControlFiles.sh might modify the CONTROL_FILES parameter as follows:

ALTER SYSTEM SET control_files = '+PRODCARD_DATA','+PRODCARD_FRA' SCOPE=SPFILE;

The CONTROL_FILES parameter is not dynamic, so the instance must be shut down and restarted for changes to this parameter to take effect.

Shutdown Instance

As the oracle user on the first node of the standby cluster, shut down the standby instance using the ShutdownImmediateInstance.sh generated script.

[oracle@cardiff1]$ ./ShutdownImmediateInstance.sh

Start Instance

As the oracle user on the first node of the standby cluster, restart the instance using the StartupNoMountSPFILE.sh generated script:

[oracle@cardiff1]$ ./StartupNoMountSPFILE.sh

The above script starts the instance in NOMOUNT mode using the SPFILE.

Restore Control Files

As the oracle user on the first node of the standby cluster, copy the temporary control file to the locations specified by the CONTROL_FILES parameter using the RestoreControlFiles.sh generated script.

[oracle@cardiff1]$ ./RestoreControlFiles.sh

The control file is restored using the RMAN RESTORE CONTROLFILE command. For example:

RESTORE CONTROLFILE FROM '+PRODCARD_DATA/PRODCARD/control01.ctl';

Shutdown Instance

As the oracle user on the first node of the standby cluster, shut down the standby instance using the ShutdownImmediateInstance.sh generated script.

[oracle@cardiff1]$ ./ShutdownImmediateInstance.sh

It is not necessary to restart the instance at this stage as it will be restarted during the next task.

OCR Configuration

The next step is to add the new standby database to the OCR.

Add Database to OCR

As the oracle user on the first node of the standby cluster, execute the AddDatabase.sh generated script to add the database to the OCR.

[oracle@cardiff1]$ ./AddDatabase.sh

The AddDatabase.sh script registers the database and its instances in the OCR using the srvctl utility. For example:

srvctl add database -d PRODCARD -o /u01/app/oracle/product/11.2.0.4/db_1
srvctl modify database -d PRODCARD -n PROD
srvctl modify database -d PRODCARD -p +PRODCARD_DATA/PRODCARD/spfilePRODCARD.ora
srvctl modify database -d PRODCARD -a PRODCARD_DATA,PRODCARD_FRA
srvctl add instance -d PRODCARD -i PROD1 -n cardiff1
srvctl add instance -d PRODCARD -i PROD2 -n cardiff2

Check Database Status

As the oracle user on the first node of the standby cluster, execute the StatusDatabase.sh generated script to check the current status of the database.

[oracle@cardiff1]$ ./StatusDatabase.sh

At this stage the database should be shut down. For example:

Instance PROD1 is not running on node cardiff1
Instance PROD2 is not running on node cardiff2

Start Database

As the oracle user on the first node of the standby cluster, execute the StartDatabase.sh generated script to start the database.

[oracle@cardiff1]$ ./StartDatabase.sh

The StartDatabase.sh script executes the srvctl start database command. If successful, this command does not return any output.

Check Database Status

As the oracle user on the first node of the standby cluster, execute the StatusDatabase.sh generated script to check the current status of the database again.

[oracle@cardiff1]$ ./StatusDatabase.sh

At this stage the database should be running on all configured nodes. For example:

Instance PROD1 is running on node cardiff1
Instance PROD2 is running on node cardiff2

Data Guard Broker Configuration

In this step the Data Guard Broker is configured. Use of the Data Guard Broker is strongly recommended for standard MAA configurations in Oracle 11.2 and above.

Note that the Data Guard Broker configuration must be created and enabled on the cluster supporting the primary database.

Create Data Guard Broker Configuration

As the oracle user on the first node of the PRIMARY cluster, create a Data Guard Broker configuration using the DGBCreateConfiguration.sh generated script. For example:

[oracle@bristol1]$ ./DGBCreateConfiguration.sh

The above script creates a configuration based on the primary database and then adds the standby database as a physical standby. For example:

CREATE CONFIGURATION 'DGPROD' AS
PRIMARY DATABASE IS 'PRODBRIS'
CONNECT IDENTIFIER IS 'PRODBRIS.JULIANDYKE.COM';
ADD DATABASE 'PRODCARD' AS
CONNECT IDENTIFIER IS 'PRODCARD.JULIANDYKE.COM' MAINTAINED AS PHYSICAL;

Edit Data Guard Broker Properties

As the oracle user on the first node of the PRIMARY cluster, execute the DGBEditProperties.sh script to set Data Guard Broker configuration properties. For example:

[oracle@bristol1]$ ./DGBEditProperties.sh

Properties set by DGBEditProperties.sh include:

StaticConnectIdentifier

This instance property is used by the Data Guard Broker to restart an instance following a switchover or failover operation. This property must be set individually for each instance on both clusters. For example:

EDIT INSTANCE 'PROD1' ON DATABASE 'PRODCARD'
SET PROPERTY StaticConnectIdentifier =
'(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=cardiff1-vip.juliandyke.com)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=PRODCARD_DGMGRL)(INSTANCE_NAME=PROD1)(SERVER=DEDICATED)))';

LogXptMode

This database property is set to synchronous (SYNC) by MAAGEN. For example:

EDIT DATABASE 'PRODCARD' SET PROPERTY LogXptMode='SYNC';

NetTimeout

This database property is set to 10 seconds by MAAGEN. For example:

EDIT DATABASE 'PRODCARD' SET PROPERTY NetTimeout=10;

Protection Mode property

This configuration property is set to MaxAvailability by MAAGEN. For example:

EDIT CONFIGURATION SET PROTECTION MODE AS MaxAvailability;

If the default property settings are not appropriate, either the generated scripts can be modified or an enhancement request can be raised.

Enable Data Guard Broker Configuration

As the oracle user on the first node of the PRIMARY cluster, execute the DGBEnableConfiguration.sh generated script to enable the Data Guard Broker configuration.

[oracle@bristol1]$ ./DGBEnableConfiguration.sh

The DGBEnableConfiguration.sh script executes the DGMGRL ENABLE CONFIGURATION command, which starts managed recovery on the standby database. The Fetch Archive Log (FAL) mechanism is used to copy any recently created archived redo logs from the primary database to the standby database. This command can take several minutes to complete depending on the state of the standby.

Check Data Guard Broker Configuration

As the oracle user on the first node of the PRIMARY cluster, execute the DGBShowConfiguration.sh generated script to check the current status of the Data Guard Broker Configuration.

[oracle@bristol1]$ ./DGBShowConfiguration.sh

For example:

[oracle@bristol1]$ ./DGBShowConfiguration.sh
DGMGRL for Linux: Version 11.2.0.4.0 - 64bit Production

Copyright (c) 2000, 2009, Oracle. All rights reserved.

Welcome to DGMGRL, type "help" for information.
Connected.
DGMGRL> SHOW CONFIGURATION;

Configuration - DGPROD

  Protection Mode: MaxAvailability
  Databases:
    PRODBRIS - Primary database
    PRODCARD - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS

DGBShowConfiguration.sh should not report any errors. If any errors are reported, then wait for a few minutes and try again, as some Data Guard Broker configuration is performed asynchronously in the background and may not have completed when the ENABLE CONFIGURATION command returns.

Flashback Logging Configuration

Once managed recovery has been enabled and the standby has caught up with the primary, flashback logging can be enabled on the standby database.

Data Guard Broker must be disabled before flashback logging can be enabled.

Disable Data Guard Broker Configuration

As the oracle user on the first node of the PRIMARY cluster, execute the DGBDisableConfiguration.sh generated script to disable the Data Guard Broker configuration.

[oracle@bristol1]$ ./DGBDisableConfiguration.sh

Cancel Managed Recovery

Although the Data Guard Broker configuration has been disabled, managed recovery may still be running. As the oracle user on the first node of the STANDBY cluster, execute the CancelManagedRecovery.sh generated script to cancel managed recovery.

[oracle@cardiff1]$ ./CancelManagedRecovery.sh
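
The underlying SQL is the standard command:

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;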

Enable Flashback Logging

On the first node of the STANDBY cluster, execute the EnableFlashbackLogging.sh generated script to enable flashback logging.

[oracle@cardiff1]$ ./EnableFlashbackLogging.sh

It is not necessary to restart the instance after flashback logging has been enabled.

Enable Data Guard Broker Configuration

As the oracle user on the first node of the PRIMARY cluster, execute the DGBEnableConfiguration.sh generated script to re-enable the Data Guard Broker configuration.

[oracle@bristol1]$ ./DGBEnableConfiguration.sh

Check Data Guard Broker Configuration

As the oracle user on the first node of the PRIMARY cluster, execute the DGBShowConfiguration.sh generated script to check the current status of the Data Guard Broker Configuration.

[oracle@bristol1]$ ./DGBShowConfiguration.sh

This completes the physical standby creation procedure.