OCR File and Voting Disk Administration by Example - (Oracle 10g)
by Jeff Hunter, Sr. Database Administrator
Contents
- Overview
- Example Configuration
- Administering the OCR File
  - View OCR Configuration Information
  - Add an OCR File
  - Relocate an OCR File
  - Repair an OCR File on a Local Node
  - Remove an OCR File
- Backup the OCR File
  - Automatic OCR Backups
  - Manual OCR Exports
- Recover the OCR File
  - Recover OCR from Valid OCR Mirror
  - Recover OCR from Automatically Generated Physical Backup
  - Recover OCR from an OCR Export File
- Administering the Voting Disk
  - View Voting Disk Configuration Information
  - Add a Voting Disk
  - Remove a Voting Disk
  - Relocate a Voting Disk
- Backup the Voting Disk
- Recover the Voting Disk
- Move the Voting Disk and OCR from OCFS to RAW Devices
  - Move the OCR
  - Move the Voting Disk
- About the Author
Overview

Oracle Clusterware 10g, formerly known as Cluster Ready Services (CRS), is software that, when installed on servers running the same operating system, enables those servers to be bound together to operate and function as a single server or cluster. This infrastructure simplifies the requirements for an Oracle Real Application Clusters (RAC) database by providing cluster software that is tightly integrated with the Oracle Database.
Oracle Clusterware relies on two critical components: a voting disk to record node membership information and the Oracle Cluster Registry (OCR) to record cluster configuration information:
Voting Disk

The voting disk is a shared partition that Oracle Clusterware uses to verify cluster node membership and status. Oracle Clusterware uses the voting disk to determine which instances are members of a cluster by way of a health check, and it arbitrates cluster ownership among the instances in case of network failures. The primary function of the voting disk is to manage node membership and prevent what is known as Split Brain Syndrome, in which two or more instances attempt to control the RAC database. This can occur when there is a break in communication between nodes through the interconnect.
The voting disk must reside on a shared disk(s) that is accessible by all of the nodes in the cluster. For high availability, Oracle recommends that you have multiple voting disks. Oracle Clusterware can be configured to maintain multiple voting disks (multiplexing) but you must have an odd number of voting disks, such as three, five, and so on. Oracle Clusterware supports a maximum of 32 voting disks. If you define a single voting disk, then you should use external mirroring to provide redundancy.
A node must be able to access more than half of the voting disks at any time. For example, if you have five voting disks configured, then a node must be able to access at least three of the voting disks at any time. If a node cannot access the minimum required number of voting disks it is evicted, or removed, from the cluster. After the cause of the failure has been corrected and access to the voting disks has been restored, you can instruct Oracle Clusterware to recover the failed node and restore it to the cluster.
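To make the majority rule concrete, here is a minimal sketch (an illustration added here, not part of the original article) of the arithmetic behind voting disk quorum:

```bash
# Minimal sketch of the voting disk majority rule: with a given number of
# configured voting disks, a node must access strictly more than half of them.
required_votes() {
    local total_disks=$1
    echo $(( total_disks / 2 + 1 ))
}

required_votes 3   # prints 2 (a node needs 2 of 3 disks)
required_votes 5   # prints 3 (a node needs 3 of 5 disks)
```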
Oracle Cluster Registry (OCR)

The OCR maintains cluster configuration information as well as configuration information about any cluster database within the cluster. It is the repository of configuration information for the cluster, managing information such as the cluster node list and instance-to-node mapping information. This configuration information is used by many of the processes that make up the CRS, as well as by other cluster-aware applications which use this repository to share information among themselves. Some of the main components included in the OCR are:
- Node membership information
- Database instance, node, and other mapping information
- ASM (if configured)
- Application resource profiles such as VIP addresses, services, etc.
- Service characteristics
- Information about processes that Oracle Clusterware controls
- Information about any third-party applications controlled by CRS (10g R2 and later)
The OCR stores configuration information in a series of key-value pairs within a directory tree structure. To view the contents of the OCR in a human-readable format, run the ocrdump command. This will dump the contents of the OCR into an ASCII text file in the current directory named OCRDUMPFILE.
The OCR must reside on a shared disk(s) that is accessible by all of the nodes in the cluster. Oracle Clusterware 10g Release 2 allows you to multiplex the OCR and Oracle recommends that you use this feature to ensure cluster high availability. Oracle Clusterware allows for a maximum of two OCR locations; one is the primary and the second is an OCR mirror. If you define a single OCR, then you should use external mirroring to provide redundancy. You can replace a failed OCR online, and you can update the OCR through supported APIs such as Enterprise Manager, the Server Control Utility (SRVCTL), or the Database Configuration Assistant (DBCA).
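As an illustration of how supported tools update the OCR (this example is an addition, not from the original article), registering a new service with SRVCTL records that service's resource profile in the OCR. The service name reporting is hypothetical; racdb, racdb1, and racdb2 come from the example configuration described below:

```bash
# Hypothetical example: srvctl records the new service's resource
# profile (preferred/available instances, etc.) in the OCR.
srvctl add service -d racdb -s reporting -r racdb1 -a racdb2
```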
This article provides a detailed look at how to administer the two critical Oracle Clusterware components — the voting disk and the Oracle Cluster Registry (OCR). The examples described in this guide were tested with Oracle RAC 10g Release 2 (10.2.0.4) on the Linux x86 platform.
It is highly recommended to take a backup of the voting disk and OCR file before making any changes! Instructions on how to perform backups of the voting disk and OCR file are included in this guide.
The Oracle Clusterware binaries used in this article (i.e. crs_stat, ocrcheck, crsctl, etc.) are executed from the Oracle Clusterware home directory (CRS_home), which for the purpose of this article is /u01/app/crs. The environment variable $ORA_CRS_HOME is set to this directory for both the oracle and root user accounts and is also included in the $PATH:
```
[root@racnode1 ~]# echo $ORA_CRS_HOME
/u01/app/crs

[root@racnode1 ~]# which ocrcheck
/u01/app/crs/bin/ocrcheck
```
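The profile entries that produce this environment are not shown in the original article; a minimal sketch, assuming a Bourne-style shell, might add the following to each account's ~/.bash_profile:

```bash
# Hypothetical ~/.bash_profile entries for the oracle and root accounts;
# adds the Clusterware home and its bin directory to the environment.
export ORA_CRS_HOME=/u01/app/crs
export PATH=$ORA_CRS_HOME/bin:$PATH
```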
Example Configuration
The example configuration used in this article consists of a two-node RAC with a clustered database named racdb.idevelopment.info running Oracle RAC 10g Release 2 on the Linux x86 platform. The two node names are racnode1 and racnode2, each hosting a single Oracle instance named racdb1 and racdb2 respectively. For a detailed guide on building the example clustered database environment, please see:
Building an Inexpensive Oracle RAC 10g Release 2 on Linux - (CentOS 5.3 / iSCSI)

The example Oracle Clusterware environment is configured with a single voting disk and a single OCR file on an OCFS2 clustered file system. Note that the voting disk is owned by the oracle user in the oinstall group with 0644 permissions while the OCR file is owned by root in the oinstall group with 0640 permissions:
```
[oracle@racnode1 ~]$ ls -l /u02/oradata/racdb
total 16608
-rw-r--r-- 1 oracle oinstall 10240000 Aug 26 22:43 CSSFile
drwxr-xr-x 2 oracle oinstall     3896 Aug 26 23:45 dbs/
-rw-r----- 1 root   oinstall  6836224 Sep  3 23:47 OCRFile
```

Check Current OCR File

```
[oracle@racnode1 ~]$ ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          2
         Total space (kbytes)     :     262120
         Used space (kbytes)      :       4660
         Available space (kbytes) :     257460
         ID                       :    1331197
         Device/File Name         : /u02/oradata/racdb/OCRFile
                                    Device/File integrity check succeeded
                                    Device/File not configured
         Cluster registry integrity check succeeded
```

Check Current Voting Disk

```
[oracle@racnode1 ~]$ crsctl query css votedisk
 0.     0    /u02/oradata/racdb/CSSFile
located 1 votedisk(s).
```

Preparation

To prepare for the examples used in this guide, five new iSCSI volumes were created from the SAN and will be bound to RAW devices on all nodes in the RAC cluster. These five new volumes will be used to demonstrate how to move the current voting disk and OCR file from an OCFS2 file system to RAW devices:
Five New iSCSI Volumes and their Local Device Name Mappings

| iSCSI Target Name | Local Device Name | Disk Size |
| --- | --- | --- |
| iqn.2006-01.com.openfiler:racdb.ocr1 | /dev/iscsi/ocr1/part | 512 MB |
| iqn.2006-01.com.openfiler:racdb.ocr2 | /dev/iscsi/ocr2/part | 512 MB |
| iqn.2006-01.com.openfiler:racdb.voting1 | /dev/iscsi/voting1/part | 32 MB |
| iqn.2006-01.com.openfiler:racdb.voting2 | /dev/iscsi/voting2/part | 32 MB |
| iqn.2006-01.com.openfiler:racdb.voting3 | /dev/iscsi/voting3/part | 32 MB |

After the new iSCSI volumes have been created on the SAN, they need to be configured for access and bound to RAW devices from all Oracle RAC nodes in the database cluster.
- From all Oracle RAC nodes in the cluster as root, discover the five new iSCSI volumes from the SAN which will be used to store the voting disks and OCR files.
```
[root@racnode1 ~]# iscsiadm -m discovery -t sendtargets -p openfiler1-san
192.168.2.195:3260,1 iqn.2006-01.com.openfiler:racdb.asm1
192.168.2.195:3260,1 iqn.2006-01.com.openfiler:racdb.asm2
192.168.2.195:3260,1 iqn.2006-01.com.openfiler:racdb.crs
192.168.2.195:3260,1 iqn.2006-01.com.openfiler:racdb.ocr1
192.168.2.195:3260,1 iqn.2006-01.com.openfiler:racdb.ocr2
192.168.2.195:3260,1 iqn.2006-01.com.openfiler:racdb.voting1
192.168.2.195:3260,1 iqn.2006-01.com.openfiler:racdb.voting2
192.168.2.195:3260,1 iqn.2006-01.com.openfiler:racdb.voting3

[root@racnode2 ~]# iscsiadm -m discovery -t sendtargets -p openfiler1-san
192.168.2.195:3260,1 iqn.2006-01.com.openfiler:racdb.asm1
192.168.2.195:3260,1 iqn.2006-01.com.openfiler:racdb.asm2
192.168.2.195:3260,1 iqn.2006-01.com.openfiler:racdb.crs
192.168.2.195:3260,1 iqn.2006-01.com.openfiler:racdb.ocr1
192.168.2.195:3260,1 iqn.2006-01.com.openfiler:racdb.ocr2
192.168.2.195:3260,1 iqn.2006-01.com.openfiler:racdb.voting1
192.168.2.195:3260,1 iqn.2006-01.com.openfiler:racdb.voting2
192.168.2.195:3260,1 iqn.2006-01.com.openfiler:racdb.voting3
```

- Manually log in to the new iSCSI targets from all Oracle RAC nodes in the cluster.
```
[root@racnode1 ~]# iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.ocr1 -p 192.168.2.195 -l
[root@racnode1 ~]# iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.ocr2 -p 192.168.2.195 -l
[root@racnode1 ~]# iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.voting1 -p 192.168.2.195 -l
[root@racnode1 ~]# iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.voting2 -p 192.168.2.195 -l
[root@racnode1 ~]# iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.voting3 -p 192.168.2.195 -l

[root@racnode2 ~]# iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.ocr1 -p 192.168.2.195 -l
[root@racnode2 ~]# iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.ocr2 -p 192.168.2.195 -l
[root@racnode2 ~]# iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.voting1 -p 192.168.2.195 -l
[root@racnode2 ~]# iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.voting2 -p 192.168.2.195 -l
[root@racnode2 ~]# iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.voting3 -p 192.168.2.195 -l
```

- Create a single primary partition on each of the five new iSCSI volumes that spans the entire disk (a scripted alternative is sketched after this list). Perform this from only one of the Oracle RAC nodes in the cluster:
```
[root@racnode1 ~]# fdisk /dev/iscsi/ocr1/part
[root@racnode1 ~]# fdisk /dev/iscsi/ocr2/part
[root@racnode1 ~]# fdisk /dev/iscsi/voting1/part
[root@racnode1 ~]# fdisk /dev/iscsi/voting2/part
[root@racnode1 ~]# fdisk /dev/iscsi/voting3/part
```

- Re-scan the SCSI bus from all Oracle RAC nodes in the cluster:
```
[root@racnode2 ~]# partprobe
```

- Create a shell script (/usr/local/bin/setup_raw_devices.sh) on all Oracle RAC nodes in the cluster to bind the five Oracle Clusterware component devices to RAW devices as follows:
```bash
# +---------------------------------------------------------+
# | FILE: /usr/local/bin/setup_raw_devices.sh               |
# +---------------------------------------------------------+

# +---------------------------------------------------------+
# | Bind OCR files to RAW device files.                     |
# +---------------------------------------------------------+
/bin/raw /dev/raw/raw1 /dev/iscsi/ocr1/part1
/bin/raw /dev/raw/raw2 /dev/iscsi/ocr2/part1
sleep 3
/bin/chown root:oinstall /dev/raw/raw1
/bin/chown root:oinstall /dev/raw/raw2
/bin/chmod 0640 /dev/raw/raw1
/bin/chmod 0640 /dev/raw/raw2

# +---------------------------------------------------------+
# | Bind voting disks to RAW device files.                  |
# +---------------------------------------------------------+
/bin/raw /dev/raw/raw3 /dev/iscsi/voting1/part1
/bin/raw /dev/raw/raw4 /dev/iscsi/voting2/part1
/bin/raw /dev/raw/raw5 /dev/iscsi/voting3/part1
sleep 3
/bin/chown oracle:oinstall /dev/raw/raw3
/bin/chown oracle:oinstall /dev/raw/raw4
/bin/chown oracle:oinstall /dev/raw/raw5
/bin/chmod 0644 /dev/raw/raw3
/bin/chmod 0644 /dev/raw/raw4
/bin/chmod 0644 /dev/raw/raw5
```

From all Oracle RAC nodes in the cluster, change the permissions of the new shell script to make it executable:
```
[root@racnode1 ~]# chmod 755 /usr/local/bin/setup_raw_devices.sh
[root@racnode2 ~]# chmod 755 /usr/local/bin/setup_raw_devices.sh
```

Manually execute the new shell script from all Oracle RAC nodes in the cluster to bind the OCR and voting disk volumes to RAW devices:
```
[root@racnode1 ~]# /usr/local/bin/setup_raw_devices.sh
/dev/raw/raw1:  bound to major 8, minor 97
/dev/raw/raw2:  bound to major 8, minor 17
/dev/raw/raw3:  bound to major 8, minor 1
/dev/raw/raw4:  bound to major 8, minor 49
/dev/raw/raw5:  bound to major 8, minor 33

[root@racnode2 ~]# /usr/local/bin/setup_raw_devices.sh
/dev/raw/raw1:  bound to major 8, minor 65
/dev/raw/raw2:  bound to major 8, minor 49
/dev/raw/raw3:  bound to major 8, minor 33
/dev/raw/raw4:  bound to major 8, minor 1
/dev/raw/raw5:  bound to major 8, minor 17
```

Check that the character (RAW) devices were created from all Oracle RAC nodes in the cluster:
```
[root@racnode1 ~]# ls -l /dev/raw
total 0
crw-r----- 1 root   oinstall 162, 1 Sep 24 00:48 raw1
crw-r----- 1 root   oinstall 162, 2 Sep 24 00:48 raw2
crw-r--r-- 1 oracle oinstall 162, 3 Sep 24 00:48 raw3
crw-r--r-- 1 oracle oinstall 162, 4 Sep 24 00:48 raw4
crw-r--r-- 1 oracle oinstall 162, 5 Sep 24 00:48 raw5

[root@racnode2 ~]# ls -l /dev/raw
total 0
crw-r----- 1 root   oinstall 162, 1 Sep 24 00:48 raw1
crw-r----- 1 root   oinstall 162, 2 Sep 24 00:48 raw2
crw-r--r-- 1 oracle oinstall 162, 3 Sep 24 00:48 raw3
crw-r--r-- 1 oracle oinstall 162, 4 Sep 24 00:48 raw4
crw-r--r-- 1 oracle oinstall 162, 5 Sep 24 00:48 raw5

[root@racnode1 ~]# raw -qa
/dev/raw/raw1:  bound to major 8, minor 97
/dev/raw/raw2:  bound to major 8, minor 17
/dev/raw/raw3:  bound to major 8, minor 1
/dev/raw/raw4:  bound to major 8, minor 49
/dev/raw/raw5:  bound to major 8, minor 33

[root@racnode2 ~]# raw -qa
/dev/raw/raw1:  bound to major 8, minor 65
/dev/raw/raw2:  bound to major 8, minor 49
/dev/raw/raw3:  bound to major 8, minor 33
/dev/raw/raw4:  bound to major 8, minor 1
/dev/raw/raw5:  bound to major 8, minor 17
```

Include the new shell script in /etc/rc.local from all Oracle RAC nodes in the cluster so it runs on each boot:
```
[root@racnode1 ~]# echo "/usr/local/bin/setup_raw_devices.sh" >> /etc/rc.local
[root@racnode2 ~]# echo "/usr/local/bin/setup_raw_devices.sh" >> /etc/rc.local
```

- Once the raw devices are created, use the dd command to zero out each device and make sure no data is written to the raw devices. Only perform this action from one of the Oracle RAC nodes in the cluster:
```
[root@racnode1 ~]# dd if=/dev/zero of=/dev/raw/raw1
dd: writing to '/dev/raw/raw1': No space left on device
1048516+0 records in
1048515+0 records out
536839680 bytes (537 MB) copied, 773.145 seconds, 694 kB/s

[root@racnode1 ~]# dd if=/dev/zero of=/dev/raw/raw2
dd: writing to '/dev/raw/raw2': No space left on device
1048516+0 records in
1048515+0 records out
536839680 bytes (537 MB) copied, 769.974 seconds, 697 kB/s

[root@racnode1 ~]# dd if=/dev/zero of=/dev/raw/raw3
dd: writing to '/dev/raw/raw3': No space left on device
65505+0 records in
65504+0 records out
33538048 bytes (34 MB) copied, 47.9176 seconds, 700 kB/s

[root@racnode1 ~]# dd if=/dev/zero of=/dev/raw/raw4
dd: writing to '/dev/raw/raw4': No space left on device
65505+0 records in
65504+0 records out
33538048 bytes (34 MB) copied, 47.9915 seconds, 699 kB/s

[root@racnode1 ~]# dd if=/dev/zero of=/dev/raw/raw5
dd: writing to '/dev/raw/raw5': No space left on device
65505+0 records in
65504+0 records out
33538048 bytes (34 MB) copied, 48.2684 seconds, 695 kB/s
```
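As referenced in the partitioning step above, the interactive fdisk sessions can also be scripted. This is only a sketch and not part of the original walkthrough; it assumes each disk is blank, so review it carefully before running anything like it against real storage:

```bash
#!/bin/bash
# Hypothetical non-interactive alternative to running fdisk by hand:
# create one primary partition spanning each (assumed empty) disk.
for disk in /dev/iscsi/ocr1/part    /dev/iscsi/ocr2/part \
            /dev/iscsi/voting1/part /dev/iscsi/voting2/part \
            /dev/iscsi/voting3/part
do
    # n = new partition, p = primary, 1 = partition number,
    # two blank lines accept the default first/last cylinders, w = write.
    printf "n\np\n1\n\n\nw\n" | fdisk "$disk"
done
```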
Administering the OCR File
View OCR Configuration Information

Two methods exist to verify how many OCR files are configured for the cluster as well as their location. If the cluster is up and running, use the ocrcheck utility as either the oracle or root user account:
```
[oracle@racnode1 ~]$ ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          2
         Total space (kbytes)     :     262120
         Used space (kbytes)      :       4660
         Available space (kbytes) :     257460
         ID                       :    1331197
         Device/File Name         : /u02/oradata/racdb/OCRFile      <-- OCR (primary)
                                    Device/File integrity check succeeded
                                    Device/File not configured      <-- OCR Mirror (not configured)
         Cluster registry integrity check succeeded
```

If CRS is down, you can still determine the location and number of OCR files by viewing the file ocr.loc, whose location is somewhat platform dependent. For example, on the Linux platform it is located in /etc/oracle/ocr.loc while on Sun Solaris it is located at /var/opt/oracle/ocr.loc:
```
[root@racnode1 ~]# cat /etc/oracle/ocr.loc
ocrconfig_loc=/u02/oradata/racdb/OCRFile
local_only=FALSE
```

To view the actual contents of the OCR in a human-readable format, run the ocrdump command. This command requires the CRS stack to be running. Running the ocrdump command will dump the contents of the OCR into an ASCII text file in the current directory named OCRDUMPFILE:
```
[root@racnode1 ~]# ocrdump

[root@racnode1 ~]# ls -l OCRDUMPFILE
-rw-r--r-- 1 root root 250304 Oct  2 22:46 OCRDUMPFILE
```

The ocrdump utility also allows for different output options:
```
#
# Write OCR contents to specified file name.
#
[root@racnode1 ~]# ocrdump /tmp/`hostname`_ocrdump_`date +%m%d%y:%H%M`

#
# Print OCR contents to the screen.
#
[root@racnode1 ~]# ocrdump -stdout -keyname SYSTEM.css

#
# Write OCR contents out to XML format.
#
[root@racnode1 ~]# ocrdump -stdout -keyname SYSTEM.css -xml > ocrdump.xml
```

Add an OCR File

Starting with Oracle Clusterware 10g Release 2 (10.2), users have the ability to multiplex (mirror) the OCR. Oracle Clusterware allows for a maximum of two OCR locations; one is the primary and the second is an OCR mirror. To avoid simultaneous loss of multiple OCR files, each copy of the OCR should be placed on a shared storage device that does not share any components (controller, interconnect, and so on) with the storage devices used for the other OCR file.
Before attempting to add a mirrored OCR, determine how many OCR files are currently configured for the cluster as well as their location. If the cluster is up and running, use the ocrcheck utility as either the oracle or root user account:
```
[oracle@racnode1 ~]$ ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          2
         Total space (kbytes)     :     262120
         Used space (kbytes)      :       4660
         Available space (kbytes) :     257460
         ID                       :    1331197
         Device/File Name         : /u02/oradata/racdb/OCRFile      <-- OCR (primary)
                                    Device/File integrity check succeeded
                                    Device/File not configured      <-- OCR Mirror (not configured yet)
         Cluster registry integrity check succeeded
```

If CRS is down, you can still determine the location and number of OCR files by viewing the file ocr.loc, whose location is somewhat platform dependent. For example, on the Linux platform it is located in /etc/oracle/ocr.loc while on Sun Solaris it is located at /var/opt/oracle/ocr.loc:
```
[root@racnode1 ~]# cat /etc/oracle/ocr.loc
ocrconfig_loc=/u02/oradata/racdb/OCRFile
local_only=FALSE
```

The results above indicate I have only one OCR file and that it is located on an OCFS2 file system. Since we are allowed a maximum of two OCR locations, I intend to create an OCR mirror and locate it on the same OCFS2 file system in the same directory as the primary OCR. Please note that I am doing this only for the sake of brevity. The OCR mirror should always be placed on a separate device from the primary OCR file to guard against a single point of failure.
Note that the Oracle Clusterware stack can remain online and running on all nodes in the cluster while adding, replacing, or removing an OCR location; these operations therefore require no system downtime.
The operations performed in this section affect the OCR for the entire cluster. However, the ocrconfig command cannot modify OCR configuration information for nodes that are shut down or for nodes on which Oracle Clusterware is not running. You should therefore avoid shutting down nodes while modifying the OCR using the ocrconfig command. If for any reason any of the nodes in the cluster are shut down while modifying the OCR using the ocrconfig command, you will need to perform a repair on the stopped node before it can be brought online to join the cluster. Please see the section "Repair an OCR File on a Local Node" for instructions on repairing the OCR file on the affected node.

You can add an OCR mirror after an upgrade or after completing the Oracle Clusterware installation. The Oracle Universal Installer (OUI) allows you to configure either one or two OCR locations during the installation of Oracle Clusterware. If you already mirror the OCR, then you do not need to add a new OCR location; Oracle Clusterware automatically manages two OCRs when you configure normal redundancy for the OCR. As previously mentioned, Oracle RAC environments do not support more than two OCR locations; a primary OCR and a secondary (mirrored) OCR.
Run the following command to add or relocate an OCR mirror using either destination_file or disk to designate the target location of the additional OCR:
```
ocrconfig -replace ocrmirror <destination_file>
ocrconfig -replace ocrmirror <disk>
```
You must be logged in as the root user to run the ocrconfig command.
Please note that ocrconfig -replace is the only way to add/relocate OCR files/mirrors. Attempting to copy the existing OCR file to a new location and then manually adding/changing the file pointer in the ocr.loc file is not supported and will actually fail to work. For example:
```
#
# Verify CRS is running on node 1.
#
[root@racnode1 ~]# crsctl check crs
CSS appears healthy
CRS appears healthy
EVM appears healthy

#
# Verify CRS is running on node 2.
#
[root@racnode2 ~]# crsctl check crs
CSS appears healthy
CRS appears healthy
EVM appears healthy

#
# Configure the shared OCR destination_file/disk before
# attempting to create the new ocrmirror on it. This example
# creates a destination_file on an OCFS2 file system.
# Failure to pre-configure the new destination_file/disk
# before attempting to run ocrconfig will result in the
# following error:
#
#     PROT-21: Invalid parameter
#
[root@racnode1 ~]# cp /dev/null /u02/oradata/racdb/OCRFile_mirror
[root@racnode1 ~]# chown root /u02/oradata/racdb/OCRFile_mirror
[root@racnode1 ~]# chgrp oinstall /u02/oradata/racdb/OCRFile_mirror
[root@racnode1 ~]# chmod 640 /u02/oradata/racdb/OCRFile_mirror

#
# Add new OCR mirror.
#
[root@racnode1 ~]# ocrconfig -replace ocrmirror /u02/oradata/racdb/OCRFile_mirror
```

After adding the new OCR mirror, check that it can be seen from all nodes in the cluster:
```
#
# Verify new OCR mirror from node 1.
#
[root@racnode1 ~]# ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          2
         Total space (kbytes)     :     262120
         Used space (kbytes)      :       4668
         Available space (kbytes) :     257452
         ID                       :    1331197
         Device/File Name         : /u02/oradata/racdb/OCRFile
                                    Device/File integrity check succeeded
         Device/File Name         : /u02/oradata/racdb/OCRFile_mirror   <-- New OCR Mirror
                                    Device/File integrity check succeeded
         Cluster registry integrity check succeeded

[root@racnode1 ~]# cat /etc/oracle/ocr.loc
#Device/file getting replaced by device /u02/oradata/racdb/OCRFile_mirror
ocrconfig_loc=/u02/oradata/racdb/OCRFile
ocrmirrorconfig_loc=/u02/oradata/racdb/OCRFile_mirror

#
# Verify new OCR mirror from node 2.
#
[root@racnode2 ~]# ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          2
         Total space (kbytes)     :     262120
         Used space (kbytes)      :       4668
         Available space (kbytes) :     257452
         ID                       :    1331197
         Device/File Name         : /u02/oradata/racdb/OCRFile
                                    Device/File integrity check succeeded
         Device/File Name         : /u02/oradata/racdb/OCRFile_mirror   <-- New OCR Mirror
                                    Device/File integrity check succeeded
         Cluster registry integrity check succeeded

[root@racnode2 ~]# cat /etc/oracle/ocr.loc
#Device/file getting replaced by device /u02/oradata/racdb/OCRFile_mirror
ocrconfig_loc=/u02/oradata/racdb/OCRFile
ocrmirrorconfig_loc=/u02/oradata/racdb/OCRFile_mirror
```

As mentioned earlier, you can have at most two OCR files in the cluster; the primary OCR and a single OCR mirror. Attempting to add an extra mirror will actually relocate the current OCR mirror to the new location specified in the command:
```
[root@racnode1 ~]# cp /dev/null /u02/oradata/racdb/OCRFile_mirror2
[root@racnode1 ~]# chown root /u02/oradata/racdb/OCRFile_mirror2
[root@racnode1 ~]# chgrp oinstall /u02/oradata/racdb/OCRFile_mirror2
[root@racnode1 ~]# chmod 640 /u02/oradata/racdb/OCRFile_mirror2

[root@racnode1 ~]# ocrconfig -replace ocrmirror /u02/oradata/racdb/OCRFile_mirror2

[root@racnode1 ~]# ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          2
         Total space (kbytes)     :     262120
         Used space (kbytes)      :       4668
         Available space (kbytes) :     257452
         ID                       :    1331197
         Device/File Name         : /u02/oradata/racdb/OCRFile
                                    Device/File integrity check succeeded
         Device/File Name         : /u02/oradata/racdb/OCRFile_mirror2   <-- Mirror was Relocated!
                                    Device/File integrity check succeeded
         Cluster registry integrity check succeeded
```

Relocate an OCR File

Just as we were able to add a new ocrmirror while the CRS stack was online, the same holds true when relocating an OCR file or OCR mirror; the operation therefore does not require any system downtime.
You can relocate the OCR online only when the OCR is mirrored; a mirror copy of the OCR file is required to move the OCR. If there is no mirror copy of the OCR, first create the mirror using the instructions in the previous section.
Attempting to relocate OCR when an OCR mirror does not exist will produce the following error:
```
ocrconfig -replace ocr /u02/oradata/racdb/OCRFile
PROT-16: Internal Error
```

If the OCR mirror is not required in the cluster after relocating the OCR, it can be safely removed.
Run the following command as the root account to relocate the current OCR file to a new location using either destination_file or disk to designate the new target location for the OCR:
```
ocrconfig -replace ocr <destination_file>
ocrconfig -replace ocr <disk>
```

Run the following command as the root account to relocate the current OCR mirror to a new location, using either destination_file or disk to designate the new target location for the OCR mirror:
```
ocrconfig -replace ocrmirror <destination_file>
ocrconfig -replace ocrmirror <disk>
```

The following example assumes the OCR is mirrored and demonstrates how to relocate the current OCR file (/u02/oradata/racdb/OCRFile) from the OCFS2 file system to a new raw device (/dev/raw/raw1):
```
#
# Verify CRS is running on node 1.
#
[root@racnode1 ~]# crsctl check crs
CSS appears healthy
CRS appears healthy
EVM appears healthy

#
# Verify CRS is running on node 2.
#
[root@racnode2 ~]# crsctl check crs
CSS appears healthy
CRS appears healthy
EVM appears healthy

#
# Verify current OCR configuration.
#
[root@racnode2 ~]# ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          2
         Total space (kbytes)     :     262120
         Used space (kbytes)      :       4668
         Available space (kbytes) :     257452
         ID                       :    1331197
         Device/File Name         : /u02/oradata/racdb/OCRFile   <-- Current OCR to Relocate
                                    Device/File integrity check succeeded
         Device/File Name         : /u02/oradata/racdb/OCRFile_mirror
                                    Device/File integrity check succeeded
         Cluster registry integrity check succeeded

#
# Verify new raw storage device exists, is configured with
# the correct permissions, and can be seen from all nodes
# in the cluster.
#
[root@racnode1 ~]# ls -l /dev/raw/raw1
crw-r----- 1 root oinstall 162, 1 Oct  2 19:54 /dev/raw/raw1

[root@racnode2 ~]# ls -l /dev/raw/raw1
crw-r----- 1 root oinstall 162, 1 Oct  2 19:54 /dev/raw/raw1

#
# Clear out the contents from the new raw device.
#
[root@racnode1 ~]# dd if=/dev/zero of=/dev/raw/raw1

#
# Relocate primary OCR file to new raw device. Note that
# there is no deletion of the old OCR file but simply a
# replacement.
#
[root@racnode1 ~]# ocrconfig -replace ocr /dev/raw/raw1
```

After relocating the OCR file, check that the change can be seen from all nodes in the cluster:
```
#
# Verify new OCR file from node 1.
#
[root@racnode1 ~]# ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          2
         Total space (kbytes)     :     262120
         Used space (kbytes)      :       4668
         Available space (kbytes) :     257452
         ID                       :    1331197
         Device/File Name         : /dev/raw/raw1   <-- Relocated OCR
                                    Device/File integrity check succeeded
         Device/File Name         : /u02/oradata/racdb/OCRFile_mirror
                                    Device/File integrity check succeeded
         Cluster registry integrity check succeeded

[root@racnode1 ~]# cat /etc/oracle/ocr.loc
#Device/file /u02/oradata/racdb/OCRFile getting replaced by device /dev/raw/raw1
ocrconfig_loc=/dev/raw/raw1
ocrmirrorconfig_loc=/u02/oradata/racdb/OCRFile_mirror

#
# Verify new OCR file from node 2.
#
[root@racnode2 ~]# ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          2
         Total space (kbytes)     :     262120
         Used space (kbytes)      :       4668
         Available space (kbytes) :     257452
         ID                       :    1331197
         Device/File Name         : /dev/raw/raw1   <-- Relocated OCR
                                    Device/File integrity check succeeded
         Device/File Name         : /u02/oradata/racdb/OCRFile_mirror
                                    Device/File integrity check succeeded
         Cluster registry integrity check succeeded

[root@racnode2 ~]# cat /etc/oracle/ocr.loc
#Device/file /u02/oradata/racdb/OCRFile getting replaced by device /dev/raw/raw1
ocrconfig_loc=/dev/raw/raw1
ocrmirrorconfig_loc=/u02/oradata/racdb/OCRFile_mirror
```

After verifying the relocation was successful, remove the old OCR file at the OS level:
```
[root@racnode1 ~]# rm -v /u02/oradata/racdb/OCRFile
removed '/u02/oradata/racdb/OCRFile'
```

Repair an OCR File on a Local Node

It was mentioned in the previous section that the ocrconfig command cannot modify OCR configuration information for nodes that are shut down or for nodes on which Oracle Clusterware is not running. You may need to repair an OCR configuration on a particular node if your OCR configuration changes while that node is stopped. For example, you may need to repair the OCR on a node that was shut down while you were adding, replacing, or removing an OCR.
To repair an OCR configuration, run the following command as root from the node on which you have stopped the Oracle Clusterware daemon:
```
ocrconfig -repair ocr <device_name>
```

To repair an OCR mirror configuration, run the following command as root from the node on which you have stopped the Oracle Clusterware daemon:
```
ocrconfig -repair ocrmirror <device_name>
```
You cannot perform this operation on a node on which the Oracle Clusterware daemon is running; the CRS stack must be shut down before attempting to repair the OCR configuration on the local node. The ocrconfig -repair command changes the OCR configuration only on the node from which you run it.

For example, if the OCR mirror was relocated to a disk named /dev/raw/raw2 from racnode1 while the node racnode2 was down, then run the command ocrconfig -repair ocrmirror /dev/raw/raw2 on racnode2 while the CRS stack is down on that node to repair its OCR configuration:
```
#
# Shutdown CRS stack on node 2 and verify the CRS stack is not up.
#
[root@racnode2 ~]# crsctl stop crs
Stopping resources. This could take several minutes.
Successfully stopped CRS resources.
Stopping CSSD.
Shutting down CSS daemon.
Shutdown request successfully issued.

[root@racnode2 ~]# ps -ef | grep d.bin | grep -v grep

#
# Relocate OCR mirror to new raw device from node 1. Note
# that node 2 is down (actually CRS is down on node 2) while
# we relocate the OCR mirror.
#
[root@racnode1 ~]# ocrconfig -replace ocrmirror /dev/raw/raw2

#
# Verify relocated OCR mirror from node 1.
#
[root@racnode1 ~]# ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          2
         Total space (kbytes)     :     262120
         Used space (kbytes)      :       4668
         Available space (kbytes) :     257452
         ID                       :    1331197
         Device/File Name         : /dev/raw/raw1
                                    Device/File integrity check succeeded
         Device/File Name         : /dev/raw/raw2   <-- Relocated OCR Mirror
                                    Device/File integrity check succeeded
         Cluster registry integrity check succeeded

[root@racnode1 ~]# cat /etc/oracle/ocr.loc
#Device/file /u02/oradata/racdb/OCRFile_mirror getting replaced by device /dev/raw/raw2
ocrconfig_loc=/dev/raw/raw1
ocrmirrorconfig_loc=/dev/raw/raw2

#
# Node 2 does not know about the OCR mirror relocation.
#
[root@racnode2 ~]# cat /etc/oracle/ocr.loc
#Device/file /u02/oradata/racdb/OCRFile getting replaced by device /dev/raw/raw1
ocrconfig_loc=/dev/raw/raw1
ocrmirrorconfig_loc=/u02/oradata/racdb/OCRFile_mirror

#
# While the CRS stack is down on node 2, perform a local OCR
# repair operation to inform the node of the relocated OCR
# mirror. The ocrconfig -repair option will only update the
# OCR configuration information on node 2. If other nodes
# were shut down during the relocation, they too will need
# to be repaired.
#
[root@racnode2 ~]# ocrconfig -repair ocrmirror /dev/raw/raw2

#
# Verify the repair updated the OCR configuration on node 2.
#
[root@racnode2 ~]# cat /etc/oracle/ocr.loc
#Device/file /u02/oradata/racdb/OCRFile_mirror getting replaced by device /dev/raw/raw2
ocrconfig_loc=/dev/raw/raw1
ocrmirrorconfig_loc=/dev/raw/raw2

#
# Bring up the CRS stack on node 2.
#
[root@racnode2 ~]# crsctl start crs
Attempting to start CRS stack
The CRS stack will be started shortly

#
# Verify node 2 is back online.
#
[root@racnode2 ~]# crs_stat -t
Name           Type           Target    State     Host
------------------------------------------------------------
ora.racdb.db   application    ONLINE    ONLINE    racnode1
ora....b1.inst application    ONLINE    ONLINE    racnode1
ora....b2.inst application    ONLINE    ONLINE    racnode2
ora....srvc.cs application    ONLINE    ONLINE    racnode1
ora....db1.srv application    ONLINE    ONLINE    racnode1
ora....db2.srv application    ONLINE    ONLINE    racnode2
ora....SM1.asm application    ONLINE    ONLINE    racnode1
ora....E1.lsnr application    ONLINE    ONLINE    racnode1
ora....de1.gsd application    ONLINE    ONLINE    racnode1
ora....de1.ons application    ONLINE    ONLINE    racnode1
ora....de1.vip application    ONLINE    ONLINE    racnode1
ora....SM2.asm application    ONLINE    ONLINE    racnode2
ora....E2.lsnr application    ONLINE    ONLINE    racnode2
ora....de2.gsd application    ONLINE    ONLINE    racnode2
ora....de2.ons application    ONLINE    ONLINE    racnode2
ora....de2.vip application    ONLINE    ONLINE    racnode2
```

Remove an OCR File

To remove an OCR, you need to have at least one other OCR online. You may need to perform this to reduce overhead or for other storage reasons, such as stopping a mirror to move it to a SAN or RAID device. Carry out the following steps:
- Check if at least one OCR is online
- Verify the CRS stack is online — preferably on all nodes
- Remove the OCR or OCR mirror
- If using a clustered file system, remove the deleted file at the OS level
Run the following command as the root account to delete the current OCR or the current OCR mirror:
```
ocrconfig -replace ocr
```

or

```
ocrconfig -replace ocrmirror
```

For example:
```
#
# Verify CRS is running on node 1.
#
[root@racnode1 ~]# crsctl check crs
CSS appears healthy
CRS appears healthy
EVM appears healthy

#
# Verify CRS is running on node 2.
#
[root@racnode2 ~]# crsctl check crs
CSS appears healthy
CRS appears healthy
EVM appears healthy

#
# Get the existing OCR file information by running the
# ocrcheck utility.
#
[root@racnode1 ~]# ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          2
         Total space (kbytes)     :     262120
         Used space (kbytes)      :       4668
         Available space (kbytes) :     257452
         ID                       :    1331197
         Device/File Name         : /dev/raw/raw1
                                    Device/File integrity check succeeded
         Device/File Name         : /dev/raw/raw2   <-- OCR Mirror to be Removed
                                    Device/File integrity check succeeded
         Cluster registry integrity check succeeded

#
# Delete OCR mirror from the cluster configuration.
#
[root@racnode1 ~]# ocrconfig -replace ocrmirror
```

After removing the OCR mirror, check that the change is seen from all nodes in the cluster:
```
#
# Verify OCR mirror was removed from node 1.
#
[root@racnode1 ~]# ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          2
         Total space (kbytes)     :     262120
         Used space (kbytes)      :       4668
         Available space (kbytes) :     257452
         ID                       :    1331197
         Device/File Name         : /dev/raw/raw1
                                    Device/File integrity check succeeded
                                    Device/File not configured   <-- OCR Mirror Removed
         Cluster registry integrity check succeeded

#
# Verify OCR mirror was removed from node 2.
#
[root@racnode2 ~]# ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          2
         Total space (kbytes)     :     262120
         Used space (kbytes)      :       4668
         Available space (kbytes) :     257452
         ID                       :    1331197
         Device/File Name         : /dev/raw/raw1
                                    Device/File integrity check succeeded
                                    Device/File not configured   <-- OCR Mirror Removed
         Cluster registry integrity check succeeded
```
Removing the OCR or OCR mirror from the cluster configuration does not remove the physical file at the OS level when using a clustered file system.
Backup the OCR File
There are two methods for backing up the contents of the OCR and each backup method can be used for different recovery purposes. This section discusses how to ensure the stability of the cluster by implementing a robust backup strategy.
The first type of backup relies on automatically generated OCR file copies, which are sometimes referred to as physical backups. These physical OCR file copies are automatically generated by the CRSD process on the master node and are primarily used to recover the OCR from a lost or corrupt OCR file. Your backup strategy should include procedures to copy these automatically generated OCR file copies to a secure location which is accessible from all nodes in the cluster in the event the OCR needs to be restored.
The second type of backup uses manual procedures to create OCR export files, also known as logical backups. A manual OCR export should be performed both before and after making significant configuration changes to the cluster, such as adding or deleting nodes from your environment, modifying Oracle Clusterware resources, or creating a database. If a configuration change made to the OCR causes errors, the OCR can be restored to a previous state by performing an import of the logical backup taken before the configuration change. Please note that an OCR logical export can also be used to restore the OCR from a lost or corrupt OCR file.
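As a preview of the export procedure covered later in this guide, a logical backup is created with the ocrconfig -export option. A minimal sketch, assuming a hypothetical shared backup directory /u03/crs_backup, might look like this:

```bash
# Run as root. The target directory and file naming convention are
# assumptions for this example; choose a secure location that is
# accessible from all nodes in the cluster.
ocrconfig -export /u03/crs_backup/exports/OCRFile_export_$(date +%Y%m%d).dmp
```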
Unlike the methods used to back up the voting disk, attempting to back up the OCR by copying the OCR file directly at the OS level is not a valid backup and will result in errors after the restore! Because of the importance of OCR information, Oracle recommends that you make copies of the automatically created backup files and an OCR export at least once a day. The following is a working UNIX script that can be scheduled in CRON to back up the OCR file(s) and the voting disk(s) on a regular basis:
crs_components_backup_10g.ksh

Automatic OCR Backups
Oracle Clusterware automatically creates OCR physical backups every four hours. At any one time, Oracle retains the three most recent of these four-hour backup copies. The CRSD process that creates these backups also creates and retains an OCR backup for each full day and at the end of each week. You cannot customize the backup frequencies or the number of OCR physical backup files that Oracle retains.
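To list the automatic backups currently retained (a command added here for reference, not shown in the original text at this point), query the backup list with the -showbackup option of ocrconfig; each row of output identifies the node, timestamp, and path of a retained backup copy:

```
[root@racnode1 ~]# ocrconfig -showbackup
```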
The default location for generating physical backups on UNIX-based systems is CRS_