The following table summarizes the Physical Standby Recreation procedure:
Task | Sub Task | Script/File | User | Cluster | Nodes
Standby Deletion | Disable Data Guard Broker Configuration | DGBDisableConfiguration.sh | oracle | Primary cluster | First node
| Remove Data Guard Broker Configuration | DGBRemoveConfiguration.sh | oracle | Primary cluster | First node
| Stop Database | StopDatabase.sh | oracle | Standby cluster | First node
| Remove Database | RemoveDatabase.sh | oracle | Standby cluster | First node
| Remove ASM Files | RemoveASMFiles.sh | grid | Standby cluster | First node
| Remove Administrative Directories | RemoveDirectories.sh | oracle | Standby cluster | All nodes
Standby Preparation | Create Administrative Directories | CreateDirectories.sh | oracle | Standby cluster | All nodes
Standby Duplication | Start Standby Instance | StartupNoMountPFILE.sh | oracle | Standby cluster | First node
| Duplicate Standby Database | DuplicateStandby.sh | oracle | Standby cluster | First node
Parameter File Configuration | Copy SPFILE to Standby Database | RestoreSPFILE.sh | oracle | Standby cluster | First node
| Convert SPFILE into a PFILE | CreatePFILE.sh | oracle | Standby cluster | First node
| Extract Global Parameters | ExtractGlobalParameters.sh | oracle | Standby cluster | First node
| Create SPFILE | CreateSPFILE.sh | oracle | Standby cluster | First node
| Create init.ora files | CreateInitOraFiles.sh | oracle | Standby cluster | First node
| Shutdown Instance | ShutdownImmediateInstance.sh | oracle | Standby cluster | First node
| Start Instance | StartupNoMountSPFILE.sh | oracle | Standby cluster | First node
Control File Configuration | Set Control Files Parameter | SetControlFiles.sh | oracle | Standby cluster | First node
| Shutdown Instance | ShutdownImmediateInstance.sh | oracle | Standby cluster | First node
| Start Instance | StartupNoMountSPFILE.sh | oracle | Standby cluster | First node
| Restore Control Files | RestoreControlFiles.sh | oracle | Standby cluster | First node
| Shutdown Instance | ShutdownImmediateInstance.sh | oracle | Standby cluster | First node
OCR Configuration | Add Database to OCR | AddDatabase.sh | oracle | Standby cluster | First node
| Check Database Status | StatusDatabase.sh | oracle | Standby cluster | First node
| Start Database | StartDatabase.sh | oracle | Standby cluster | First node
| Check Database Status | StatusDatabase.sh | oracle | Standby cluster | First node
Data Guard Broker Configuration | Create Data Guard Broker Configuration | DGBCreateConfiguration.sh | oracle | Primary cluster | First node
| Edit Data Guard Broker Properties | DGBEditProperties.sh | oracle | Primary cluster | First node
| Enable Data Guard Broker Configuration | DGBEnableConfiguration.sh | oracle | Primary cluster | First node
| Check Data Guard Broker Configuration | DGBShowConfiguration.sh | oracle | Primary cluster | First node
Flashback Logging Configuration | Disable Data Guard Broker Configuration | DGBDisableConfiguration.sh | oracle | Primary cluster | First node
| Cancel Managed Recovery | CancelManagedRecovery.sh | oracle | Standby cluster | First node
| Enable Flashback Logging | EnableFlashbackLogging.sh | oracle | Standby cluster | First node
| Enable Data Guard Broker Configuration | DGBEnableConfiguration.sh | oracle | Primary cluster | First node
| Check Data Guard Broker Configuration | DGBShowConfiguration.sh | oracle | Primary cluster | First node
This page describes how to rebuild a standby database using the MAAGEN-generated scripts.
The following sections assume that scripts have already been generated and distributed to the target nodes using MAAGEN.
This page assumes that Grid Infrastructure has been installed by the grid user and that the RDBMS has been installed by the oracle user. The XML document allows any user to be configured to own either the Grid Infrastructure or the RDBMS.
This page also assumes that a primary database has already been created in the primary cluster and a physical standby database has been created in the standby cluster. The standby database will be deleted and recreated.
As the oracle user on the first node of the PRIMARY cluster, execute the DGBDisableConfiguration.sh generated script to disable the Data Guard Broker Configuration.
[oracle@bristol1]$ ./DGBDisableConfiguration.sh
As the oracle user on the first node of the PRIMARY cluster, execute the DGBRemoveConfiguration.sh generated script to remove the Data Guard Broker Configuration.
[oracle@bristol1]$ ./DGBRemoveConfiguration.sh
As the oracle user on the first node of the standby cluster execute the StopDatabase.sh generated script to stop the database on the standby cluster.
[oracle@cardiff1]$ ./StopDatabase.sh
The StopDatabase.sh script executes the srvctl stop database command. If successful, this command does not currently return any output.
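As a sketch, the command wrapped by StopDatabase.sh is likely of the following form; the database unique name PRODCARD is taken from the examples later on this page and should be verified against your generated script:

```
# Probable underlying command (database name illustrative):
srvctl stop database -d PRODCARD
```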
As the oracle user on the first node of the standby cluster execute the RemoveDatabase.sh generated script to remove the database from the OCR configuration.
[oracle@cardiff1]$ ./RemoveDatabase.sh
The RemoveDatabase.sh script executes the srvctl remove database command. This command will prompt for confirmation before it deletes the database from the OCR.
As the grid user on the first node of the standby cluster, execute the RemoveASMFiles.sh generated script to remove all ASM files and directories for the standby database from the standby cluster.
[grid@cardiff1]$ ./RemoveASMFiles.sh
WARNING: Take extreme care when executing this script. It will delete the data files, control files, parameter file, and online and archived redo logs that exist for the standby database in the ASM disk groups on the standby cluster. Also check that you are executing the script against the correct database.
The script will prompt for confirmation. It will only proceed if the response is "yes" (case-insensitive). Any other reply, including "y" or "Y", will cause the script to exit.
Note that this script must be run by the grid user (if one exists) and MUST be executed on the STANDBY cluster.
If in any doubt, execute the steps manually using ASMCMD or sqlplus.
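For reference, the manual ASMCMD equivalent might look like the following. The disk group and directory names are illustrative (taken from the examples later on this page); list the contents first and double-check the database name before removing anything:

```
# As the grid user, with the ASM environment set.
# List first, verify, then remove (names illustrative):
asmcmd ls +PRODCARD_DATA/PRODCARD
asmcmd rm -rf +PRODCARD_DATA/PRODCARD/*
asmcmd rm -rf +PRODCARD_FRA/PRODCARD/*
```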
As the oracle user on the first node of the standby cluster execute the RemoveDirectories.sh generated script to remove the administrative directories created for the standby database.
[oracle@cardiff1]$ ./RemoveDirectories.sh
This script will delete the audit directory and any administrative directories created when the standby was created.
As the oracle user on each node of the standby cluster create administrative directories using the CreateDirectories.sh generated script. For example:
[oracle@cardiff1]$ ./CreateDirectories.sh
The administrative directories include the audit destination which is specified by the audit_file_dest parameter. The audit destination must be created on a node before an instance can be started on that node.
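The directory creation itself amounts to a few mkdir -p calls. The following is an illustrative sketch only; the base path and directory layout are assumptions, not the values used by the generated script:

```shell
#!/bin/sh
# Illustrative sketch of CreateDirectories.sh (paths are assumptions).
ORACLE_BASE=/tmp/oracle-demo   # assumed base; the real script uses your ORACLE_BASE
DB_NAME=PRODCARD

# The audit destination (audit_file_dest) must exist before an instance
# can be started on the node.
mkdir -p "${ORACLE_BASE}/admin/${DB_NAME}/adump"

# Other administrative directories are commonly created alongside it.
mkdir -p "${ORACLE_BASE}/admin/${DB_NAME}/dpdump"

# Confirm the audit destination now exists
ls -d "${ORACLE_BASE}/admin/${DB_NAME}/adump"
```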
The next step is to duplicate the primary database to the standby database.
As the oracle user on the first node in the standby cluster, start an instance in NOMOUNT mode using the StartupNoMountPFILE.sh generated script.
[oracle@cardiff1]$ ./StartupNoMountPFILE.sh
As the oracle user on the first node in the standby cluster, duplicate the primary database to the standby database using the DuplicateStandby.sh generated script.
[oracle@cardiff1]$ ./DuplicateStandby.sh
The current version of DuplicateStandby.sh uses the following RMAN command:
DUPLICATE TARGET DATABASE FOR STANDBY FROM ACTIVE DATABASE NOFILENAMECHECK;
The DuplicateStandby.sh command is executed on the standby cluster to minimize resource usage on the primary cluster.
By default DuplicateStandby.sh allocates four channels on the primary (target) instance and four channels on the standby (auxiliary) instance. This is usually sufficient for initial implementations.
It may be necessary to tune the number of channels configured on the primary or the standby in order to reduce duplication time or to reduce the impact on the primary database.
Enterprise Edition databases can use multiple channels for the DUPLICATE operation. However, Standard Edition databases can only use a single RMAN channel and the DuplicateStandby.sh script will require modification.
If it is necessary to modify the number of channels assigned, either modify the DuplicateStandby.rman script manually or raise an enhancement request.
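For orientation, an RMAN script with explicit channel allocation would take roughly the following shape. The connect identifiers and channel counts below are assumptions for illustration, not the contents of the generated DuplicateStandby.rman script:

```
# Illustrative RMAN sketch -- connect identifiers and channel
# counts are assumptions, not the generated script's content.
CONNECT TARGET sys@PRODBRIS;
CONNECT AUXILIARY sys@PRODCARD;
RUN
{
  ALLOCATE CHANNEL c1 DEVICE TYPE DISK;
  ALLOCATE CHANNEL c2 DEVICE TYPE DISK;
  ALLOCATE AUXILIARY CHANNEL a1 DEVICE TYPE DISK;
  ALLOCATE AUXILIARY CHANNEL a2 DEVICE TYPE DISK;
  DUPLICATE TARGET DATABASE FOR STANDBY FROM ACTIVE DATABASE NOFILENAMECHECK;
}
```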
For larger data volumes or very slow network connections it may be necessary to create an RMAN backup of the primary database and to physically deliver the backup to the standby location where it can be restored as a standby database.
The next step is to configure the server parameter file (SPFILE) on the standby database. This procedure requires a number of steps in order to preserve parameter settings from the primary database whilst setting instance and site-specific parameters on the standby database.
As the oracle user on the first node of the standby cluster, copy the current SPFILE for the primary database to the standby database using the RestoreSPFILE.sh generated script.
[oracle@cardiff1]$ ./RestoreSPFILE.sh
The SPFILE is restored using the RMAN BACKUP AS COPY REUSE command, which copies the file across the network. The file is written to a configurable temporary directory (/tmp by default).
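The underlying RMAN command is likely of the following form; the source SPFILE path and temporary file name are assumptions for illustration:

```
# Illustrative RMAN sketch (file names and paths are assumptions):
BACKUP AS COPY REUSE
  TARGETFILE '+PRODBRIS_DATA/PRODBRIS/spfilePRODBRIS.ora'
  AUXILIARY FORMAT '/tmp/spfilePRODCARD.ora';
```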
As the oracle user on the first node of the standby cluster, convert the temporary SPFILE into a temporary PFILE using the CreatePFILE.sh generated script:
[oracle@cardiff1]$ ./CreatePFILE.sh
The above command uses CREATE PFILE FROM SPFILE to convert the binary SPFILE into a text-based PFILE.
As the oracle user on the first node of the standby cluster, execute the ExtractGlobalParameters.sh generated script.
[oracle@cardiff1]$ ./ExtractGlobalParameters.sh
The above script extracts global parameters from the temporary PFILE and stores these in a temporary include file. In a RAC environment global parameters are prefixed by an asterisk (*).
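The extraction step can be sketched with grep. The file names and parameter values below are illustrative, not those used by the generated script:

```shell
#!/bin/sh
# Illustrative sketch of extracting global parameters from a PFILE.
# File names and parameter values are assumptions.
PFILE=/tmp/initPRODCARD_tmp.ora
INCLUDE=/tmp/global_params.ora

# Sample PFILE content: global parameters are prefixed '*.',
# instance-specific parameters are prefixed with the SID.
printf '%s\n' \
  "*.db_name='PRODCARD'" \
  "*.sga_target=8G" \
  "PROD1.instance_number=1" \
  "PROD2.instance_number=2" > "$PFILE"

# Keep only the global (asterisk-prefixed) parameters.
grep '^\*' "$PFILE" > "$INCLUDE"
cat "$INCLUDE"
```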
ExtractGlobalParameters.sh also copies the generated mount.ora file to the temporary directory. The generated mount.ora file contains location-specific and instance-specific parameters.
As the oracle user on the first node of the standby cluster, create the SPFILE for the standby database using the CreateSPFILE.sh generated script.
[oracle@cardiff1]$ ./CreateSPFILE.sh
The SPFILE is based on the mount.ora file in the temporary directory. The mount.ora file includes the IFILE parameter to specify the temporary include file which contains the common parameters extracted from the SPFILE on the primary database.
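A mount.ora of this kind might look as follows. Every name and value here is hypothetical, shown only to illustrate the structure (instance-specific parameters plus an IFILE pointing at the extracted common parameters):

```
# Hypothetical mount.ora (all names and values are illustrative):
PROD1.instance_number=1
PROD2.instance_number=2
PROD1.thread=1
PROD2.thread=2
*.db_unique_name='PRODCARD'
# Common parameters extracted from the primary SPFILE:
IFILE='/tmp/include.ora'
```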
As the oracle user on the first node of the standby cluster, execute the CreateInitOraFiles.sh generated script.
[oracle@cardiff1]$ ./CreateInitOraFiles.sh
The above script creates init.ora files on each node in the standby cluster.
The init.ora files are located in the $ORACLE_HOME/dbs directory and include the SID in the filename e.g. initPROD1.ora, initPROD2.ora etc.
The init.ora files contain the SPFILE parameter. For example:
SPFILE='+PRODCARD_DATA/PRODCARD/spfilePRODCARD.ora'
The SPFILE parameter is used by the instance to determine the ASM disk group and filename for the SPFILE.
As the oracle user on the first node of the standby cluster, shut down the standby instance using the ShutdownImmediateInstance.sh generated script.
[oracle@cardiff1]$ ./ShutdownImmediateInstance.sh
As the oracle user on the first node of the standby cluster, restart the instance using the StartupNoMountSPFILE.sh generated script:
[oracle@cardiff1]$ ./StartupNoMountSPFILE.sh
The above script starts the instance in NOMOUNT mode using the SPFILE.
The RMAN DUPLICATE operation copies over a single control file for each database. Oracle strongly recommends that two or more mirrored control files are configured for each database to increase resilience.
By default, MAAGEN creates two control files: one in the data disk group and the other in the FRA disk group. Additional control files can be created either by modifying the scripts manually or by raising an enhancement request.
As the oracle user on the first node in the standby cluster, set the CONTROL_FILES parameter in the standby instance using the SetControlFiles.sh generated script.
[oracle@cardiff1]$ ./SetControlFiles.sh
For example SetControlFiles.sh might modify the CONTROL_FILES parameter as follows:
ALTER SYSTEM SET control_files = '+PRODCARD_DATA','+PRODCARD_FRA' SCOPE=SPFILE;
The CONTROL_FILES parameter is not dynamic, so the instance must be shut down and restarted for changes to this parameter to take effect.
As the oracle user on the first node of the standby cluster, shut down the standby instance using the ShutdownImmediateInstance.sh generated script.
[oracle@cardiff1]$ ./ShutdownImmediateInstance.sh
As the oracle user on the first node of the standby cluster, restart the instance using the StartupNoMountSPFILE.sh generated script:
[oracle@cardiff1]$ ./StartupNoMountSPFILE.sh
The above script starts the instance in NOMOUNT mode using the SPFILE.
As the oracle user on the first node of the standby cluster, copy the temporary control file to the locations specified by the CONTROL_FILES parameter using the RestoreControlFiles.sh generated script.
[oracle@cardiff1]$ ./RestoreControlFiles.sh
The control file is restored using the RMAN RESTORE CONTROLFILE command. For example:
RESTORE CONTROLFILE FROM '+PRODCARD_DATA/PRODCARD/control01.ctl';
As the oracle user on the first node of the standby cluster, shut down the standby instance using the ShutdownImmediateInstance.sh generated script.
[oracle@cardiff1]$ ./ShutdownImmediateInstance.sh
It is not necessary to restart the instance at this stage as it will be restarted during the next task.
The next step is to add the new standby database to the OCR.
As the oracle user on the first node of the standby cluster execute the AddDatabase.sh generated script to add the database to the OCR.
[oracle@cardiff1]$ ./AddDatabase.sh
The AddDatabase.sh script adds the database and its instances to the OCR using a series of srvctl commands. For example:
srvctl add database -d PRODCARD -o /data/app/oracle/product/11.2.0.4/db_1
srvctl modify database -d PRODCARD -n PRODECO
srvctl modify database -d PRODCARD -p +PRODCARD/PRODCARD/spfilePRODCARD.ora
srvctl modify database -d PRODCARD -a PRODCARD_DATA,PRODCARD_FRA
srvctl add instance -d PRODCARD -i PROD1 -n cardiff1
srvctl add instance -d PRODCARD -i PROD2 -n cardiff2
As the oracle user on the first node of the standby cluster execute the StatusDatabase.sh generated script to check the current status of the database.
[oracle@cardiff1]$ ./StatusDatabase.sh
At this stage the database should be shut down. For example:
Instance PROD1 is not running on node cardiff1
Instance PROD2 is not running on node cardiff2
As the oracle user on the first node of the standby cluster execute the StartDatabase.sh generated script to start the database on the standby cluster.
[oracle@cardiff1]$ ./StartDatabase.sh
The StartDatabase.sh script executes the srvctl start database command. If successful, this command does not currently return any output.
As the oracle user on the first node of the standby cluster execute the StatusDatabase.sh generated script to check the current status of the database again.
[oracle@cardiff1]$ ./StatusDatabase.sh
At this stage the database should be running on all configured nodes. For example:
Instance PROD1 is running on node cardiff1
Instance PROD2 is running on node cardiff2
In this step the Data Guard Broker is configured. Use of the Data Guard Broker is strongly recommended for standard MAA configurations in Oracle 11.2 and above.
Note that the Data Guard Broker configuration must be created and enabled on the cluster supporting the primary database.
As the oracle user on the first node of the PRIMARY cluster, create a Data Guard Broker configuration using the DGBCreateConfiguration.sh generated script. For example:
[oracle@bristol1]$ ./DGBCreateConfiguration.sh
The above script creates a configuration based on the primary database and then adds the standby database as a physical standby. For example:
CREATE CONFIGURATION 'DGPROD' AS PRIMARY DATABASE IS 'PRODBRIS' CONNECT IDENTIFIER IS 'PRODBRIS.JULIANDYKE.COM';
ADD DATABASE 'PRODCARD' AS CONNECT IDENTIFIER IS 'PRODCARD.JULIANDYKE.COM' MAINTAINED AS PHYSICAL;
As the oracle user on the first node of the PRIMARY cluster, execute the DGBEditProperties.sh script to set Data Guard Broker configuration properties. For example:
[oracle@bristol1]$ ./DGBEditProperties.sh
Properties set by DGBEditProperties.sh include:
The StaticConnectIdentifier instance property is used by the Data Guard Broker to restart an instance following a switchover or failover operation. This property must be set individually for each instance on both clusters. For example:
EDIT INSTANCE 'PROD1' ON DATABASE 'PRODCARD' SET PROPERTY StaticConnectIdentifier = '(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=cardiff1-vip.sns.sky.com)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=PRODCARD_DGMGRL)(INSTANCE_NAME=PROD1)(SERVER=DEDICATED)))';
The LogXptMode database property is set to synchronous (SYNC) by MAAGEN. For example:
EDIT DATABASE 'PRODCARD' SET PROPERTY LogXptMode='SYNC';
The NetTimeout database property is set to 10 seconds by MAAGEN. For example:
EDIT DATABASE 'PRODCARD' SET PROPERTY NetTimeout=10;
The protection mode of the configuration is set to MaxAvailability by MAAGEN. For example:
EDIT CONFIGURATION SET PROTECTION MODE AS MaxAvailability;
If the default property settings are not appropriate, either the generated scripts can be modified or an enhancement request can be raised.
As the oracle user on the first node of the PRIMARY cluster, execute the DGBEnableConfiguration.sh generated script to enable the Data Guard Broker Configuration.
[oracle@bristol1]$ ./DGBEnableConfiguration.sh
The DGBEnableConfiguration.sh script executes the DGMGRL ENABLE CONFIGURATION command, which starts managed recovery on the standby database. The Fetch Archive Log (FAL) mechanism is used to copy any recently created archived redo logs from the primary database to the standby database. This command can take several minutes to complete depending on the state of the standby.
As the oracle user on the first node of the PRIMARY cluster, execute the DGBShowConfiguration.sh generated script to check the current status of the Data Guard Broker Configuration.
[oracle@bristol1]$ ./DGBShowConfiguration.sh
For example:
[oracle@bristol1]$ ./DGBShowConfiguration.sh
DGMGRL for Linux: Version 11.2.0.4.0 - 64bit Production
Copyright (c) 2000, 2009, Oracle. All rights reserved.
Welcome to DGMGRL, type "help" for information.
Connected.
DGMGRL>
Configuration - DGPROD
  Protection Mode: MaxAvailability
  Databases:
    PRODBRIS - Primary database
    PRODCARD - Physical standby database
Fast-Start Failover: DISABLED
Configuration Status:
SUCCESS
DGBShowConfiguration.sh should not report any errors. If any errors are reported, then wait for a few minutes and try again, as some Data Guard Broker configuration is performed asynchronously in the background and may not have completed when the ENABLE CONFIGURATION command returns.
Once managed recovery has been enabled and the standby has caught up with the primary, flashback logging can be enabled on the standby database.
Data Guard Broker must be disabled before flashback logging can be enabled.
As the oracle user on the first node of the PRIMARY cluster, execute the DGBDisableConfiguration.sh generated script to disable the Data Guard Broker Configuration.
[oracle@bristol1]$ ./DGBDisableConfiguration.sh
Although the Data Guard Broker has been disabled, managed recovery may still be enabled. On the first node of the STANDBY cluster execute the CancelManagedRecovery.sh generated script to cancel managed recovery.
[oracle@cardiff1]$ ./CancelManagedRecovery.sh
On the first node of the STANDBY cluster, execute the EnableFlashbackLogging.sh generated script to enable flashback logging.
[oracle@cardiff1]$ ./EnableFlashbackLogging.sh
It is not necessary to restart the instance after flashback logging has been enabled.
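For reference, enabling flashback logging on the standby comes down to SQL of roughly the following form. The retention target value is an assumption; the generated script may set different values:

```
-- On the standby instance, as SYSDBA (illustrative sketch;
-- the retention target of 1440 minutes is an assumption):
ALTER SYSTEM SET db_flashback_retention_target=1440 SCOPE=BOTH SID='*';
ALTER DATABASE FLASHBACK ON;
```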
As the oracle user on the first node of the PRIMARY cluster, execute the DGBEnableConfiguration.sh generated script to enable the Data Guard Broker Configuration.
[oracle@bristol1]$ ./DGBEnableConfiguration.sh
As the oracle user on the first node of the PRIMARY cluster, execute the DGBShowConfiguration.sh generated script to check the current status of the Data Guard Broker Configuration.
[oracle@bristol1]$ ./DGBShowConfiguration.sh
This completes the physical standby recreation procedure.