1) Check the cluster status:
[grid@rac02 ~]$ crsctl check cluster
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
2) All Oracle instances (database status):
[grid@rac02 ~]$ srvctl status database -d racdb
Instance racdb1 is running on node rac01
Instance racdb2 is running on node rac02
3) Check the status of a single instance:
[grid@rac02 ~]$ srvctl status instance -d racdb -i racdb1
Instance racdb1 is running on node rac01
4) Node application status:
[grid@rac02 ~]$ srvctl status nodeapps
VIP rac01-vip is enabled
VIP rac01-vip is running on node: rac01
VIP rac02-vip is enabled
VIP rac02-vip is running on node: rac02
Network is enabled
Network is running on node: rac01
Network is running on node: rac02
GSD is disabled
GSD is not running on node: rac01
GSD is not running on node: rac02
ONS is enabled
ONS daemon is running on node: rac01
ONS daemon is running on node: rac02
eONS is enabled
eONS daemon is running on node: rac01
eONS daemon is running on node: rac02
5) List all configured databases:
[grid@rac02 ~]$ srvctl config database
racdb
6) Database configuration:
[grid@rac02 ~]$ srvctl config database -d racdb -a
Database unique name: racdb
Database name: racdb
Oracle home: /u01/app/oracle/product/11.2.0/dbhome_1
Oracle user: oracle
Spfile: +RACDB_DATA/racdb/spfileracdb.ora
Domain: xzxj.edu.cn
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: racdb
Database instances: racdb1,racdb2
Disk Groups: RACDB_DATA,FRA
Services:
Database is enabled
Database is administrator managed
7) ASM status and configuration:
[grid@rac02 ~]$ srvctl status asm
ASM is running on rac01,rac02
[grid@rac02 ~]$ srvctl config asm -a
ASM home: /u01/app/11.2.0/grid
ASM listener: LISTENER
ASM is enabled.
8) TNS listener status and configuration:
[grid@rac02 ~]$ srvctl status listener
Listener LISTENER is enabled
Listener LISTENER is running on node(s): rac01,rac02
[grid@rac02 ~]$ srvctl config listener -a
Name: LISTENER
Network: 1, Owner: grid
Home:
End points: TCP:1521
9) SCAN status and configuration:
[grid@rac02 ~]$ srvctl status scan
SCAN VIP scan1 is enabled
SCAN VIP scan1 is running on node rac02
[grid@rac02 ~]$ srvctl config scan
SCAN name: rac-scan.xzxj.edu.cn, Network: 1/192.168.1.0/255.255.255.0/eth0
SCAN VIP name: scan1, IP: /rac-scan.xzxj.edu.cn/192.168.1.55
10) VIP status and configuration for each node:
[grid@rac02 ~]$ srvctl status vip -n rac01
VIP rac01-vip is enabled
VIP rac01-vip is running on node: rac01
[grid@rac02 ~]$ srvctl status vip -n rac02
VIP rac02-vip is enabled
VIP rac02-vip is running on node: rac02
[grid@rac02 ~]$ srvctl config vip -n rac01
VIP exists.:rac01
VIP exists.: /rac01-vip/192.168.1.53/255.255.255.0/eth0
[grid@rac02 ~]$ srvctl config vip -n rac02
VIP exists.:rac02
VIP exists.: /rac02-vip/192.168.1.54/255.255.255.0/eth0
11) Node application configuration (VIP, GSD, ONS, listener):
[grid@rac02 ~]$ srvctl config nodeapps -a -g -s -l
-l option has been deprecated and will be ignored.
VIP exists.:rac01
VIP exists.: /rac01-vip/192.168.1.53/255.255.255.0/eth0
VIP exists.:rac02
VIP exists.: /rac02-vip/192.168.1.54/255.255.255.0/eth0
GSD exists.
ONS daemon exists. Local port 6100, remote port 6200
Name: LISTENER
Network: 1, Owner: grid
Home:
End points: TCP:1521
12) Verify clock synchronization across all cluster nodes:
[grid@rac02 ~]$ cluvfy comp clocksync -verbose
Verifying Clock Synchronization across the cluster nodes
Checking if Clusterware is installed on all nodes...
Check of Clusterware install passed
Checking if CTSS Resource is running on all nodes...
Check: CTSS Resource running on all nodes
Node Name Status
------------------------------------ ------------------------
rac02 passed
Result: CTSS resource check passed
Querying CTSS for time offset on all nodes...
Result: Query of CTSS for time offset passed
Check CTSS state started...
Check: CTSS state
Node Name State
------------------------------------ ------------------------
rac02 Active
CTSS is in Active state. Proceeding with check of clock time offsets
on all nodes...
Reference Time Offset Limit: 1000.0 msecs
Check: Reference Time Offset
Node Name Time Offset Status
------------ ------------------------ ------------------------
rac02 0.0 passed
Time offset is within the specified limits on the following set of
nodes:
"[rac02]"
Result: Check of clock time offsets passed
Oracle Cluster Time Synchronization Services check passed
Verification of Clock Synchronization across the cluster nodes was
successful.
13) All running instances in the cluster (SQL):
SELECT inst_id, instance_number inst_no, instance_name inst_name, parallel, status,
       database_status db_status, active_state state, host_name host
  FROM gv$instance
 ORDER BY inst_id;
14) All database files and the ASM disk groups they reside in (SQL):
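A query along the following lines lists all database files (a minimal sketch, assuming a privileged connection to one of the instances; the leading +DISKGROUP in each path identifies the ASM disk group):
-- Minimal sketch: every database file and its location; paths starting
-- with +DISKGROUP reside in that ASM disk group.
SELECT 'Control file' AS file_type, name AS filename FROM v$controlfile
UNION
SELECT 'Data file', name FROM v$datafile
UNION
SELECT 'Temp file', name FROM v$tempfile
UNION
SELECT 'Online log', member FROM v$logfile
UNION
SELECT 'SPFILE', value FROM v$parameter WHERE name = 'spfile'
ORDER BY 1, 2;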
15) ASM disk volumes (SQL):
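A sketch against the v$asm_disk and v$asm_diskgroup views (assuming the querying instance can see the ASM views) shows the physical volumes behind each disk group:
-- Minimal sketch: the disk paths backing each ASM disk group.
SELECT dg.name AS group_name, d.path, d.total_mb
  FROM v$asm_disk d, v$asm_diskgroup dg
 WHERE d.group_number = dg.group_number
 ORDER BY dg.name, d.path;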
16) Start and stop the cluster:
The following operations must be performed as the root user.
(1) Stop the Oracle Clusterware stack on the local server:
[root@rac01 ~]# /u01/app/11.2.0/grid/bin/crsctl stop cluster
Note: after running the "crsctl stop cluster" command, if any of the resources managed by Oracle Clusterware is still running, the entire command fails. Use the -f option to stop all resources unconditionally and bring down the Oracle Clusterware stack.
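For example (an illustrative invocation; the -f flag forces the stop even when resources are still online):
[root@rac01 ~]# /u01/app/11.2.0/grid/bin/crsctl stop cluster -f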
Also note that you can stop the Oracle Clusterware stack on all servers in the cluster by specifying the -all option. The following stops Oracle Clusterware on both rac01 and rac02:
[root@rac02 ~]# /u01/app/grid/product/11.2.0/crs_1/bin/crsctl stop cluster -all
(2) Start the Oracle Clusterware stack on the local server:
[root@rac01 ~]# /u01/app/grid/product/11.2.0/crs_1/bin/crsctl start cluster
Note: you can start the Oracle Clusterware stack on all servers in the cluster by specifying the -all option:
[root@rac02 ~]# /u01/app/11.2.0/grid/bin/crsctl start cluster -all
You can also start the Oracle Clusterware stack on one or more named servers in the cluster by listing them separated by spaces:
[root@rac01 ~]# /u01/app/11.2.0/grid/bin/crsctl start cluster -n rac01 rac02
Start/stop all instances with SRVCTL:
[oracle@rac01 ~]$ srvctl stop database -d racdb
[oracle@rac01 ~]$ srvctl start database -d racdb
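srvctl stop database also accepts an explicit stop option; a sketch, reusing the racdb database from above, that shuts down all instances immediately instead of with the default option:
[oracle@rac01 ~]$ srvctl stop database -d racdb -o immediate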
Startup
1. Ensure that you are logged in as the root Linux/UNIX user. If you are not connected as the root OS user, switch to it:
su - root
2. Start the Oracle Clusterware stack:
su - root
cd $CRS_HOME/bin
# ./crsctl start crs (must be run on each node)
3. Start all Oracle ASM instances on all nodes. (If you are not using ASM, skip this step.)
su - oracle
To start an Oracle ASM instance, enter the following command, where node_name is the name of the node where the Oracle ASM instance should run:
$ oracle_home/bin/srvctl start asm -n node_name
4. Start all Oracle RAC instances on all nodes. To start all Oracle RAC instances for a database, enter the following command, where db_name is the name of the database:
$ srvctl start database -d db_name (srvctl from ORACLE_HOME; this command starts all instances of the database)
5. Start all applications using the Oracle database. This step includes starting the Oracle Enterprise Manager Database Control:
$ emctl start dbconsole
Oracle Enterprise Manager 11g Database Control Release 11.2.0.1.0
Copyright (c) 1996, 2009 Oracle Corporation. All rights reserved.
https://dev1rac.mfol.dgti.ro:1158/em/console/aboutApplication
Starting Oracle Enterprise Manager 11g Database Control ...
... started.
Shutdown
Attention: In previous releases of Oracle Database, you had to set the ORACLE_HOME and ORACLE_SID environment variables to start, stop, and check the status of Enterprise Manager. With Oracle Database 11g Release 2 (11.2) and later, you set ORACLE_HOME and ORACLE_UNQNAME (the database unique name, not an instance SID) instead:
export ORACLE_UNQNAME=GlobalUniqueName
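A minimal sketch for the racdb database configured earlier (the home path is taken from section 6; adjust both values for your environment):
export ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1
export ORACLE_UNQNAME=racdb
$ORACLE_HOME/bin/emctl status dbconsole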
If you want to check whether the Enterprise Manager Database Console is running:
emctl status dbconsole
Oracle Enterprise Manager 11g Database Control Release 11.2.0.1.0
Copyright (c) 1996, 2009 Oracle Corporation. All rights reserved.
https://dev1rac:1158/em/console/aboutApplication
EM Daemon is not running.
When the console is running, the same command reports:
emctl status dbconsole
Oracle Enterprise Manager 11g Database Control Release 11.2.0.1.0
Copyright (c) 1996, 2009 Oracle Corporation. All rights reserved.
https://dev1rac:1158/em/console/aboutApplication
Oracle Enterprise Manager 11g is running.
1. Ensure that you are logged in as the oracle Linux/UNIX user. If you are not connected as the oracle OS user, switch to it:
su - oracle
2. Stop (shut down) all applications using the Oracle database. This step includes stopping the Oracle Enterprise Manager Database Control:
$ emctl stop dbconsole
Oracle Enterprise Manager 11g Database Control Release 11.2.0.1.0
Copyright (c) 1996, 2009 Oracle Corporation. All rights reserved.
https://dev1rac.mfol.dgti.ro:1158/em/console/aboutApplication
Stopping Oracle Enterprise Manager 11g Database Control ...
... Stopped.
3. Shut down (stop) all Oracle RAC instances on all nodes. To shut down all Oracle RAC instances for a database, enter the following command, where db_name is the name of the database:
$ oracle_home/bin/srvctl stop database -d db_name (this command stops all instances of the database)
4. Shut down (stop) all Oracle ASM instances on all nodes. (If you are not using ASM, skip this step.) To shut down an Oracle ASM instance, enter the following command, where node_name is the name of the node where the Oracle ASM instance is running:
$ oracle_home/bin/srvctl stop asm -n node_name
5. Stop (shut down) the Oracle Clusterware stack:
su - root
cd $CRS_HOME/bin
# ./crsctl stop crs (must be run on each node)
./srvctl stop nodeapps -n node_name --> in 11.2 this stops only ONS and eONS, because of resource dependencies.
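To verify the state of the stack on a node after a stop or start, the standard check can be run (a sketch; the output differs depending on whether the stack is up):
# $CRS_HOME/bin/crsctl check crs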
===========================================
If you want to check if the database is running you can run:
ps -ef | grep smon
oracle 246196 250208 0 14:29:11 pts/0 0:00 grep smon
(only the grep process itself is returned, so no database instance is running here)
If you want to check if the database listeners are running you can run:
ps -ef | grep lsnr
root 204886 229874 0 14:30:07 pts/0 0:00 grep lsnr
(again, only the grep process is returned, so no listener is running here)
Here the listeners are running:
ps -ef | grep lsnr
oracle 282660 1 0 14:07:34 - 0:00 /oracle/grid/crs/11.2/bin/tnslsnr LISTENER_SCAN2 -inherit
oracle 299116 250208 0 14:30:00 pts/0 0:00 grep lsnr
oracle 303200 1 0 14:23:44 - 0:00 /oracle/grid/crs/11.2/bin/tnslsnr LISTENER_SCAN1 -inherit
oracle 315432 1 0 14:07:35 - 0:00 /oracle/grid/crs/11.2/bin/tnslsnr LISTENER_SCAN3 -inherit
oracle 323626 1 0 14:07:34 - 0:00 /oracle/grid/crs/11.2/bin/tnslsnr LISTENER -inherit
If you want to check if any clusterware component is running you can run:
ps -ef | grep crs
root 204842 229874 0 14:27:19 pts/0 0:00 grep crs
(only the grep process is returned, so no clusterware component is running here)
Here the clusterware components (resources) are running:
ps -ef | grep crs
root 155856 1 1 14:05:47 - 0:22 /oracle/grid/crs/11.2/bin/ohasd.bin reboot
root 159940 1 0 14:07:08 - 0:01 /oracle/grid/crs/11.2/bin/oclskd.bin
oracle 221270 1 0 14:06:43 - 0:02 /oracle/grid/crs/11.2/bin/gpnpd.bin
root 225322 1 0 14:06:45 - 0:02 /oracle/grid/crs/11.2/bin/cssdmonitor
oracle 229396 1 0 14:06:41 - 0:00 /oracle/grid/crs/11.2/bin/gipcd.bin
oracle 233498 1 0 14:06:41 - 0:00 /oracle/grid/crs/11.2/bin/mdnsd.bin
root 253952 1 0 14:06:46 - 0:01 /oracle/grid/crs/11.2/bin/orarootagent.bin
root 258060 1 0 14:06:59 - 0:00 /oracle/grid/crs/11.2/bin/octssd.bin reboot
root 262150 1 0 14:06:47 - 0:00 /bin/sh /oracle/grid/crs/11.2/bin/ocssd
oracle 270344 262150 1 14:06:47 - 0:11 /oracle/grid/crs/11.2/bin/ocssd.bin
oracle 274456 156062 0 14:07:10 - 0:00 /oracle/grid/crs/11.2/bin/evmlogger.bin -o /oracle/grid/crs/11.2/evm/log/evmlogger.info -l /oracle/grid/crs/11.2/evm/log/evmlogger.log
oracle 282660 1 0 14:07:34 - 0:00 /oracle/grid/crs/11.2/bin/tnslsnr LISTENER_SCAN2 -inherit
root 286742 1 6 14:07:17 - 0:36 /oracle/grid/crs/11.2/bin/orarootagent.bin
oracle 303200 1 1 14:23:44 - 0:00 /oracle/grid/crs/11.2/bin/tnslsnr LISTENER_SCAN1 -inherit
oracle 315432 1 0 14:07:35 - 0:00 /oracle/grid/crs/11.2/bin/tnslsnr LISTENER_SCAN3 -inherit
oracle 323626 1 0 14:07:34 - 0:00 /oracle/grid/crs/11.2/bin/tnslsnr LISTENER -inherit
oracle 156062 1 0 14:07:01 - 0:02 /oracle/grid/crs/11.2/bin/evmd.bin
root 229692 1 0 14:06:46 - 0:02 /oracle/grid/crs/11.2/bin/cssdagent
oracle 233762 1 0 14:06:40 - 0:01 /oracle/grid/crs/11.2/bin/oraagent.bin
oracle 246226 250208 0 14:32:34 pts/0 0:00 grep crs
oracle 254218 1 0 14:06:53 - 0:01 /oracle/grid/crs/11.2/bin/diskmon.bin -d -f
root 258554 1 0 14:07:01 - 0:09 /oracle/grid/crs/11.2/bin/crsd.bin reboot
oracle 270612 1 0 14:07:28 - 0:03 /oracle/grid/crs/11.2/bin/oraagent.bin
REMARK: the database listeners are cluster resources, not database resources!
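Because they are registered with Oracle Clusterware, the listeners can be inspected through crsctl rather than through the database; a sketch (in 11.2 the listener resources normally appear with names such as ora.LISTENER.lsnr and ora.LISTENER_SCANn.lsnr, but the names can differ):
# $CRS_HOME/bin/crsctl status resource -t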