APPLIES TO:
Oracle Database - Enterprise Edition - Version 11.2.0.1 and later
Information in this document applies to any platform.
GOAL
The primary goal of this how-to is to establish a Data Guard configuration in which both the primary site and the standby site are RAC.
Please Note: The cluster nodes each have 3 network interfaces: 1 used for the cluster interconnect, 1 for the public network and 1 for log shipping between the primary and standby sites. These interfaces must be in place before starting the configuration steps detailed in this document.
SOLUTION
Cluster Configuration Information
Details the configuration prior to establishing the Data Guard broker environment for a RAC Primary to RAC Standby, using a second network and a second set of network interfaces in each GRID cluster and RAC database site.
RAC Primary Nodes
Interface: eth0
Node 1:
10.187.115.125 grid1vm1.au.oracle.com
10.187.115.139 grid1vm1-vip.au.oracle.com
Node 2:
10.187.115.126 grid1vm2.au.oracle.com
10.187.115.140 grid1vm2-vip.au.oracle.com
Single Client Access Name (SCAN)
10.187.115.133 grid1vmscan1.au.oracle.com
Data Guard IP Addresses/Interfaces:
Interface: eth2
Node 1:
192.168.11.125 grid1vm1-dg.au.oracle.com
192.168.11.225 grid1vm1-dg-vip.au.oracle.com
Node 2:
192.168.11.126 grid1vm2-dg.au.oracle.com
192.168.11.226 grid1vm2-dg-vip.au.oracle.com
Database Details
db_name db112a
db_unique_name db112a
instance_names db112a1, db112a2
RAC Standby Nodes
Interface: eth0
Node 1:
10.187.115.128 grid2vm1.au.oracle.com
10.187.115.142 grid2vm1-vip.au.oracle.com grid2vm1-vip
Node 2:
10.187.115.129 grid2vm2.au.oracle.com
10.187.115.143 grid2vm2-vip.au.oracle.com grid2vm2-vip
Single Client Access Name (SCAN)
10.187.115.136 grid2vmscan1.au.oracle.com
Data Guard IP Addresses/Interfaces:
Interface: eth2
Node 1:
192.168.11.128 grid2vm1-dg
192.168.11.228 grid2vm1-dg-vip.au.oracle.com
Node 2:
192.168.11.129 grid2vm2-dg
192.168.11.229 grid2vm2-dg-vip.au.oracle.com
Database Details
db_name db112a
db_unique_name db112a_stb
instance_names db112a1, db112a2
Listener Configuration Details
The way listeners operate in an 11.2 environment has changed dramatically compared to previous releases. The changes relevant to Data Guard include the following:
- The introduction of the SCAN and SCAN listener(s) (there can be up to 3 SCAN listeners) for handling client connections.
- The listener configuration details are now held in both the GRID cluster registry (OCR) and the GRID_HOME/network/admin directory.
- RAC enabled listeners must run out of the 11.2 GRID Home.
- RAC enabled listeners must have Virtual IP Addresses in order to be configured.
- The network the listeners are configured against must be registered in the OCR as a resource.
- Administration of the listeners should always be performed through netca.
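The LISTENER_DG processes visible in the listings below are these rules in practice: a listener created from the GRID home against the second network resource and its VIPs. As a sketch only (the listener name matches the process listings, but port 1522 and network number 2 are assumptions for this configuration; the exact commands belong to the later steps), such a listener could be managed with srvctl:

```shell
# Hedged sketch: create, start and inspect a dedicated listener on the
# second (Data Guard) network. Port 1522 and network number 2 (-k 2,
# i.e. the ora.net2.network resource) are assumptions.
srvctl add listener -l LISTENER_DG -p 1522 -k 2
srvctl start listener -l LISTENER_DG
srvctl config listener -l LISTENER_DG
```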
The SCAN Listener Configuration
The SCAN listener and SCAN VIP can reside on any node of the cluster and move from node to node as a result of nodes leaving the cluster.
The Primary Site:
In the example below we can see that the SCAN listener at the Primary Site is running on node grid1vm2:
[oracle@grid1vm2 ~]$ ps -ef | grep tns
oracle 13775 1 0 Aug18 ? 00:00:02 /u01/app/11.2.0.2/grid/bin/tnslsnr LISTENER_DG -inherit
oracle 14737 1 0 Aug12 ? 00:00:08 /u01/app/11.2.0.2/grid/bin/tnslsnr LISTENER -inherit
oracle 18728 1 0 Aug19 ? 00:00:02 /u01/app/11.2.0.2/grid/bin/tnslsnr LISTENER_SCAN1 -inherit
[oracle@grid1vm2 ~]$ crsctl stat res -t
--------------------------------------------------------------------------------
NAME TARGET STATE SERVER STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
..
.
ora.LISTENER.lsnr
ONLINE ONLINE grid1vm1
ONLINE ONLINE grid1vm2
..
.
ora.net1.network
ONLINE ONLINE grid1vm1
ONLINE ONLINE grid1vm2
..
.
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE grid1vm2
ora.db112a.db
1 ONLINE ONLINE grid1vm1 Open
2 ONLINE ONLINE grid1vm2 Open
..
.
ora.grid1vm1.vip
1 ONLINE ONLINE grid1vm1
..
.
ora.grid1vm2.vip
1 ONLINE ONLINE grid1vm2
..
.
ora.scan1.vip
1 ONLINE ONLINE grid1vm2
As a result, the status of the SCAN listener and its service listing can only be checked from the node on which it is running; the SCAN listener's details cannot be accessed from a node it is NOT running on.
[oracle@grid1vm2 ~]$ lsnrctl status listener_scan1
LSNRCTL for Linux: Version 11.2.0.2.0 - Production on 20-AUG-2011 01:07:59
Copyright (c) 1991, 2010, Oracle. All rights reserved.
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER_SCAN1)))
STATUS of the LISTENER
------------------------
Alias LISTENER_SCAN1
Version TNSLSNR for Linux: Version 11.2.0.2.0 - Production
Start Date 19-AUG-2011 00:04:39
Uptime 1 days 1 hr. 3 min. 19 sec
Trace Level off
Security ON: Local OS Authentication
SNMP OFF
Listener Parameter File /u01/app/11.2.0.2/grid/network/admin/listener.ora
Listener Log File /u01/app/oracle/diag/tnslsnr/grid1vm2/listener_scan1/alert/log.xml
Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER_SCAN1)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=10.187.115.133)(PORT=1521)))
Services Summary...
Service "DB112A.au.oracle.com" has 2 instance(s).
Instance "db112a1", status READY, has 2 handler(s) for this service...
Instance "db112a2", status READY, has 2 handler(s) for this service...
..
.
The command completed successfully
[oracle@grid1vm2 ~]$ srvctl status scan_listener
SCAN Listener LISTENER_SCAN1 is enabled
SCAN listener LISTENER_SCAN1 is running on node grid1vm2
[oracle@grid1vm2 ~]$ srvctl config scan_listener
SCAN Listener LISTENER_SCAN1 exists. Port: TCP:1521
The Standby Site:
[oracle@grid2vm1 ~]$ . oraenv
ORACLE_SID = [oracle] ? +ASM1
The Oracle base has been set to /u01/app/oracle
[oracle@grid2vm1 ~]$ ps -ef | grep tns
oracle 3202 1 0 Aug11 ? 00:00:06 /u01/app/11.2.0.2/grid/bin/tnslsnr LISTENER_SCAN1 -inherit
oracle 8051 1 0 04:49 ? 00:00:03 /u01/app/11.2.0.2/grid/bin/tnslsnr LISTENER_DG -inherit
oracle 10572 1 0 05:26 ? 00:00:01 /u01/app/11.2.0.2/grid/bin/tnslsnr LISTENER -inherit
oracle 29833 29788 0 23:18 pts/1 00:00:00 grep tns
[oracle@grid2vm1 ~]$ crsctl stat res -t
--------------------------------------------------------------------------------
NAME TARGET STATE SERVER STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
..
.
ora.LISTENER.lsnr
ONLINE ONLINE grid2vm1
ONLINE ONLINE grid2vm2
..
.
ora.net1.network
ONLINE ONLINE grid2vm1
ONLINE ONLINE grid2vm2
..
.
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE grid2vm1
..
.
ora.db112a_stb.db
1 ONLINE INTERMEDIATE grid2vm1 Mounted (Closed)
2 ONLINE INTERMEDIATE grid2vm2 Mounted (Closed)
..
.
ora.grid2vm1.vip
1 ONLINE ONLINE grid2vm1
..
.
ora.grid2vm2.vip
1 ONLINE ONLINE grid2vm2
..
.
ora.scan1.vip
1 ONLINE ONLINE grid2vm1
[oracle@grid2vm1 ~]$ lsnrctl status listener_scan1
LSNRCTL for Linux: Version 11.2.0.2.0 - Production on 19-AUG-2011 23:19:09
Copyright (c) 1991, 2010, Oracle. All rights reserved.
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER_SCAN1)))
STATUS of the LISTENER
------------------------
Alias LISTENER_SCAN1
Version TNSLSNR for Linux: Version 11.2.0.2.0 - Production
Start Date 11-AUG-2011 20:18:32
Uptime 8 days 3 hr. 0 min. 36 sec
Trace Level off
Security ON: Local OS Authentication
SNMP OFF
Listener Parameter File /u01/app/11.2.0.2/grid/network/admin/listener.ora
Listener Log File /u01/app/11.2.0.2/grid/log/diag/tnslsnr/grid2vm1/listener_scan1/alert/log.xml
Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER_SCAN1)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=10.187.115.136)(PORT=1521)))
Services Summary...
Service "db112a_stb.au.oracle.com" has 2 instance(s).
Instance "db112a1", status READY, has 1 handler(s) for this service...
Instance "db112a2", status READY, has 1 handler(s) for this service...
..
.
The command completed successfully
[oracle@grid2vm1 ~]$ srvctl status scan_listener
SCAN Listener LISTENER_SCAN1 is enabled
SCAN listener LISTENER_SCAN1 is running on node grid2vm1
[oracle@grid2vm1 ~]$ srvctl config scan_listener
SCAN Listener LISTENER_SCAN1 exists. Port: TCP:1521
The Local Listeners Configuration
Each node in the cluster has a node listener bound to the node's VIP and public host IP.
The Primary Site:
[oracle@grid1vm2 ~]$ ps -ef | grep tns
oracle 14737 1 0 Aug12 ? 00:00:09 /u01/app/11.2.0.2/grid/bin/tnslsnr LISTENER -inherit
oracle 18728 1 0 Aug19 ? 00:00:02 /u01/app/11.2.0.2/grid/bin/tnslsnr LISTENER_SCAN1 -inherit
[oracle@grid1vm2 ~]$ /sbin/ifconfig
eth0 Link encap:Ethernet HWaddr 00:16:3E:6D:71:40
inet addr:10.187.115.126 Bcast:10.187.115.255 Mask:255.255.254.0
inet6 addr: fe80::216:3eff:fe6d:7140/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:6457652 errors:0 dropped:0 overruns:0 frame:0
TX packets:173719 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:840105961 (801.1 MiB) TX bytes:93958502 (89.6 MiB)
eth0:3 Link encap:Ethernet HWaddr 00:16:3E:6D:71:40
inet addr:10.187.115.140 Bcast:10.187.115.255 Mask:255.255.254.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
[oracle@grid1vm2 ~]$ lsnrctl status
LSNRCTL for Linux: Version 11.2.0.2.0 - Production on 20-AUG-2011 01:34:24
Copyright (c) 1991, 2010, Oracle. All rights reserved.
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER)))
STATUS of the LISTENER
------------------------
Alias LISTENER
Version TNSLSNR for Linux: Version 11.2.0.2.0 - Production
Start Date 12-AUG-2011 01:26:54
Uptime 8 days 0 hr. 7 min. 30 sec
Trace Level off
Security ON: Local OS Authentication
SNMP OFF
Listener Parameter File /u01/app/11.2.0.2/grid/network/admin/listener.ora
Listener Log File /u01/app/oracle/diag/tnslsnr/grid1vm2/listener/alert/log.xml
Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=10.187.115.126)(PORT=1521)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=10.187.115.140)(PORT=1521)))
Services Summary...
Service "+ASM" has 1 instance(s).
Instance "+ASM2", status READY, has 1 handler(s) for this service...
Service "DB112A.au.oracle.com" has 1 instance(s).
Instance "db112a2", status READY, has 2 handler(s) for this service...
..
.
The command completed successfully
The Standby Site:
[oracle@grid2vm1 ~]$ ps -ef | grep tns
oracle 3202 1 0 Aug11 ? 00:00:06 /u01/app/11.2.0.2/grid/bin/tnslsnr LISTENER_SCAN1 -inherit
oracle 10572 1 0 05:26 ? 00:00:01 /u01/app/11.2.0.2/grid/bin/tnslsnr LISTENER -inherit
[oracle@grid2vm1 ~]$ /sbin/ifconfig
eth0 Link encap:Ethernet HWaddr 00:16:3E:F4:35:04
inet addr:10.187.115.128 Bcast:10.187.115.255 Mask:255.255.254.0
inet6 addr: fe80::216:3eff:fef4:3504/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:8149993 errors:0 dropped:0 overruns:0 frame:0
TX packets:951427 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:2737221120 (2.5 GiB) TX bytes:4646486293 (4.3 GiB)
eth0:3 Link encap:Ethernet HWaddr 00:16:3E:F4:35:04
inet addr:10.187.115.142 Bcast:10.187.115.255 Mask:255.255.254.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
[oracle@grid2vm1 ~]$ lsnrctl status
LSNRCTL for Linux: Version 11.2.0.2.0 - Production on 19-AUG-2011 23:27:03
Copyright (c) 1991, 2010, Oracle. All rights reserved.
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER)))
STATUS of the LISTENER
------------------------
Alias LISTENER
Version TNSLSNR for Linux: Version 11.2.0.2.0 - Production
Start Date 19-AUG-2011 05:26:38
Uptime 0 days 18 hr. 0 min. 25 sec
Trace Level off
Security ON: Local OS Authentication
SNMP OFF
Listener Parameter File /u01/app/11.2.0.2/grid/network/admin/listener.ora
Listener Log File /u01/app/oracle/diag/tnslsnr/grid2vm1/listener/alert/log.xml
Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=10.187.115.128)(PORT=1521)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=10.187.115.142)(PORT=1521)))
Services Summary...
Service "+ASM" has 1 instance(s).
Instance "+ASM1", status READY, has 1 handler(s) for this service...
Service "db112a_stb.au.oracle.com" has 1 instance(s).
Instance "db112a1", status READY, has 2 handler(s) for this service...
..
.
The command completed successfully
The RDBMS TNSNAMES.ora
[oracle@grid1vm1 admin]$ cat tnsnames.ora
# tnsnames.ora.grid1vm1 Network Configuration File: /u01/app/oracle/product/11.2.0/dbhome_2/network/admin/tnsnames.ora.grid1vm1
# Generated by Oracle configuration tools.
DB112A_PRM =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = grid1vmscan1.au.oracle.com)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = db112a.au.oracle.com)
)
)
DB112A_STB =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = grid2vmscan1.au.oracle.com)(PORT = 1521))
)
(CONNECT_DATA =
(SERVICE_NAME = db112a_stb.au.oracle.com)
)
)
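The entries above resolve through the SCANs on the public network. For redo shipping over the second network, analogous entries pointing at the Data Guard VIP hostnames would eventually be needed. A hypothetical sketch only (the alias DB112A_STB_DG and port 1522 are assumptions, not yet part of this configuration):

```
DB112A_STB_DG =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = grid2vm1-dg-vip.au.oracle.com)(PORT = 1522))
      (ADDRESS = (PROTOCOL = TCP)(HOST = grid2vm2-dg-vip.au.oracle.com)(PORT = 1522))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = db112a_stb.au.oracle.com)
    )
  )
```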
Building the Data Guard Broker and Log Shipping Network
The process below establishes the environment as a Data Guard configuration using a second network, separate from the public network.
1. Establish the new entries in the hosts file for the second network interfaces and the required VIPs. The following hosts file also includes the SCAN and publicly resolvable hostnames and VIPs, which would normally be resolved through DNS. This hosts file is the same across all cluster nodes at both the primary and standby sites.
[root@ovmsrv1 ~]# cat /etc/hosts
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1 ovmsrv1.au.oracle.com ovmsrv1 localhost.localdomain localhost
::1 localhost6.localdomain6 localhost6
#######################
# Public IP Addresses
#######################
#
# Oracle Virtual Servers - Hypervisors
#
10.187.115.123 ovmsrv1.au.oracle.com ovmsrv1
10.187.115.124 ovmsrv2.au.oracle.com ovmsrv2
#
# Virtual Machines
#
10.187.115.125 grid1vm1.au.oracle.com grid1vm1
10.187.115.126 grid1vm2.au.oracle.com grid1vm2
10.187.115.127 grid1vm3.au.oracle.com grid1vm3
10.187.115.128 grid2vm1.au.oracle.com grid2vm1
10.187.115.129 grid2vm2.au.oracle.com grid2vm2
10.187.115.130 grid2vm3.au.oracle.com grid2vm3
10.187.115.131 grid1filer1.au.oracle.com grid1filer1
10.187.115.132 grid2filer1.au.oracle.com grid2filer1
10.187.115.133 grid1vmscan1.au.oracle.com grid1vmscan1
10.187.115.134 grid1vmscan2.au.oracle.com grid1vmscan2
10.187.115.135 grid1vmscan3.au.oracle.com grid1vmscan3
10.187.115.136 grid2vmscan1.au.oracle.com grid2vmscan1
10.187.115.137 grid2vmscan2.au.oracle.com grid2vmscan2
10.187.115.138 grid2vmscan3.au.oracle.com grid2vmscan3
10.187.115.139 grid1vm1-vip.au.oracle.com grid1vm1-vip
10.187.115.140 grid1vm2-vip.au.oracle.com grid1vm2-vip
10.187.115.141 grid1vm3-vip.au.oracle.com grid1vm3-vip
10.187.115.142 grid2vm1-vip.au.oracle.com grid2vm1-vip
10.187.115.143 grid2vm2-vip.au.oracle.com grid2vm2-vip
10.187.115.144 grid2vm3-vip.au.oracle.com grid2vm3-vip
######################
# Private IP Addresses
######################
#
# Interconnect
#
192.168.10.123 ovmsrv1-prv
192.168.10.124 ovmsrv2-prv
192.168.10.125 grid1vm1-prv
192.168.10.126 grid1vm2-prv
192.168.10.127 grid1vm3-prv
192.168.10.128 grid2vm1-prv
192.168.10.129 grid2vm2-prv
192.168.10.130 grid2vm3-prv
192.168.10.131 gridfiler1-prv
192.168.10.132 gridfiler2-prv
##################################
# Data Guard Log Shipping Network
##################################
#
# Data Guard Private IP's
#
192.168.11.125 grid1vm1-dg
192.168.11.126 grid1vm2-dg
192.168.11.127 grid1vm3-dg
192.168.11.128 grid2vm1-dg
192.168.11.129 grid2vm2-dg
192.168.11.130 grid2vm3-dg
#
# Data Guard VIP's
#
192.168.11.225 grid1vm1-dg-vip.au.oracle.com grid1vm1-dg-vip
192.168.11.226 grid1vm2-dg-vip.au.oracle.com grid1vm2-dg-vip
192.168.11.227 grid1vm3-dg-vip.au.oracle.com grid1vm3-dg-vip
192.168.11.228 grid2vm1-dg-vip.au.oracle.com grid2vm1-dg-vip
192.168.11.229 grid2vm2-dg-vip.au.oracle.com grid2vm2-dg-vip
192.168.11.230 grid2vm3-dg-vip.au.oracle.com grid2vm3-dg-vip
2. Add the new network configuration to the GRID environment. The new network will run on eth2 using the network subnet 192.168.11.0. In this case the same subnet is used between the primary and standby sites, as there is no router and the connection is made through a network switch only.
At the Primary site on one of the cluster nodes add the new network resource using srvctl.
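The srvctl command itself is not shown here, only its output. Based on the subnet, netmask and interface detailed above, it would presumably take a form like the following (run as root):

```shell
# Presumed form of the elided command: register network number 2
# (subnet 192.168.11.0, netmask 255.255.255.0, interface eth2) in the OCR.
srvctl add network -k 2 -S 192.168.11.0/255.255.255.0/eth2
```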
Successfully added Network.
[root@grid1vm1 ~]# crsctl stat res -t
--------------------------------------------------------------------------------
NAME TARGET STATE SERVER STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
..
.
ora.net1.network
ONLINE ONLINE grid1vm1
ONLINE ONLINE grid1vm2
ora.net2.network
OFFLINE OFFLINE grid1vm1
OFFLINE OFFLINE grid1vm2
..
.
At the Standby Site on one of the cluster nodes add the new network resource using srvctl
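The srvctl command is again elided; it would presumably take the same form as at the primary (run as root):

```shell
# Presumed form of the elided command: register network number 2
# (subnet 192.168.11.0, netmask 255.255.255.0, interface eth2) in the OCR.
srvctl add network -k 2 -S 192.168.11.0/255.255.255.0/eth2
```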
[root@grid2vm1 ~]# crsctl stat res -t
--------------------------------------------------------------------------------
NAME TARGET STATE SERVER STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
..
.
ora.net1.network
ONLINE ONLINE grid2vm1
ONLINE ONLINE grid2vm2
ora.net2.network
OFFLINE OFFLINE grid2vm1
OFFLINE OFFLINE grid2vm2
..
.
3. Start the new network resources.
From the Primary Site:
[root@grid1vm1 ~]# crsctl start res ora.net2.network
CRS-2672: Attempting to start 'ora.net2.network' on 'grid1vm1'
CRS-2672: Attempting to start 'ora.net2.network' on 'grid1vm2'
CRS-2676: Start of 'ora.net2.network' on 'grid1vm2' succeeded
CRS-2676: Start of 'ora.net2.network' on 'grid1vm1' succeeded
[root@grid1vm1 ~]# crsctl stat res -t
--------------------------------------------------------------------------------
NAME TARGET STATE SERVER STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
..
.
ora.net1.network
ONLINE ONLINE grid1vm1
ONLINE ONLINE grid1vm2
ora.net2.network
ONLINE ONLINE grid1vm1
ONLINE ONLINE grid1vm2
..
.
ora.db112a.db
1 ONLINE ONLINE grid1vm1 Open
2 ONLINE ONLINE grid1vm2 Open
From the Standby Site:
[root@grid2vm1 ~]# crsctl start res ora.net2.network
CRS-2672: Attempting to start 'ora.net2.network' on 'grid2vm1'
CRS-2672: Attempting to start 'ora.net2.network' on 'grid2vm2'
CRS-2676: Start of 'ora.net2.network' on 'grid2vm2' succeeded
CRS-2676: Start of 'ora.net2.network' on 'grid2vm1' succeeded
[root@grid2vm1 ~]# crsctl stat res -t
--------------------------------------------------------------------------------
NAME TARGET STATE SERVER STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
..
.
ora.net1.network
ONLINE ONLINE grid2vm1
ONLINE ONLINE grid2vm2
ora.net2.network
ONLINE ONLINE grid2vm1
ONLINE ONLINE grid2vm2
..
.
ora.db112a_stb.db
1 ONLINE INTERMEDIATE grid2vm1 Mounted (Closed)
2 ONLINE INTERMEDIATE grid2vm2 Mounted (Closed)
4. Add the new VIP addresses to the GRID environment
# Data Guard VIP's
192.168.11.225 grid1vm1-dg-vip
192.168.11.226 grid1vm2-dg-vip
192.168.11.227 grid1vm3-dg-vip
192.168.11.228 grid2vm1-dg-vip
192.168.11.229 grid2vm2-dg-vip
192.168.11.230 grid2vm3-dg-vip
At the Primary site from one of the cluster nodes
[root@grid1vm1 ~]# srvctl add vip -n grid1vm1 -A 192.168.11.225/255.255.255.0 -k 2
[root@grid1vm1 ~]# srvctl add vip -n grid1vm2 -A 192.168.11.226/255.255.255.0 -k 2
At the Standby site from one of the cluster nodes
[root@grid2vm1 ~]# srvctl add vip -n grid2vm1 -A 192.168.11.228/255.255.255.0 -k 2
[root@grid2vm1 ~]# srvctl add vip -n grid2vm2 -A 192.168.11.229/255.255.255.0 -k 2
5. Start the new VIP resources on each cluster node
At the Primary site from one of the cluster nodes
[root@grid1vm1 ~]# srvctl start vip -i grid1vm1-dg-vip
[root@grid1vm1 ~]# srvctl start vip -i grid1vm2-dg-vip
[root@grid1vm1 ~]# crsctl stat res -t
--------------------------------------------------------------------------------
NAME TARGET STATE SERVER STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
..
.
ora.net1.network
ONLINE ONLINE grid1vm1
ONLINE ONLINE grid1vm2
ora.net2.network
ONLINE ONLINE grid1vm1
ONLINE ONLINE grid1vm2
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
..
.
ora.db112a.db
1 ONLINE ONLINE grid1vm1 Open
2 ONLINE ONLINE grid1vm2 Open
ora.grid1vm1-dg-vip.vip
1 ONLINE ONLINE grid1vm1
ora.grid1vm1.vip
1 ONLINE ONLINE grid1vm1
ora.grid1vm2-dg-vip.vip
1 ONLINE ONLINE grid1vm2
ora.grid1vm2.vip
..
.
At the Standby site from one of the cluster nodes
[root@grid2vm1 ~]# srvctl start vip -i grid2vm1-dg-vip
[root@grid2vm1 ~]# srvctl start vip -i grid2vm2-dg-vip
[root@grid2vm1 ~]# crsctl stat res -t
--------------------------------------------------------------------------------
NAME TARGET STATE SERVER STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
..
.
ora.net1.network
ONLINE ONLINE grid2vm1