What happens when the public NIC goes down in a RAC architecture, and how does RAC heal itself?
Overview: when the private NIC goes down, RAC suffers a split-brain. But what happens when the public NIC goes down? And once the public NIC recovers, how does RAC heal itself? How it started: around 05:42 in the morning we received an alert that the database server bmcdb2 could not be connected to; around 05:48 the alert cleared on its own. What exactly happened in that short ten-minute window? Let's analyze it.
1. Environment
Database version

SQL> select * from v$version;

BANNER
--------------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
PL/SQL Release 11.2.0.4.0 - Production
CORE    11.2.0.4.0      Production
TNS for IBM/AIX RISC System/6000: Version 11.2.0.4.0 - Production
NLSRTL Version 11.2.0.4.0 - Production

Server version

# oslevel -s
7200-02-02-1810
2. Cause analysis
2.1 Is the current service state normal? Confirm the business has recovered
2.1.1 Cluster resource and DB status
bmcdb2:/home/grid$crsctl stat res -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ARCH.dg
               ONLINE  ONLINE       bmcdb1
               ONLINE  ONLINE       bmcdb2
ora.DATA.dg
               ONLINE  ONLINE       bmcdb1
               ONLINE  ONLINE       bmcdb2
ora.LISTENER.lsnr
               ONLINE  ONLINE       bmcdb1
               ONLINE  ONLINE       bmcdb2
ora.OCR.dg
               ONLINE  ONLINE       bmcdb1
               ONLINE  ONLINE       bmcdb2
ora.asm
               ONLINE  ONLINE       bmcdb1                   Started
               ONLINE  ONLINE       bmcdb2                   Started
ora.gsd
               OFFLINE OFFLINE      bmcdb1
               OFFLINE OFFLINE      bmcdb2
ora.net1.network
               ONLINE  ONLINE       bmcdb1
               ONLINE  ONLINE       bmcdb2
ora.ons
               ONLINE  ONLINE       bmcdb1
               ONLINE  ONLINE       bmcdb2
ora.registry.acfs
               ONLINE  ONLINE       bmcdb1
               ONLINE  ONLINE       bmcdb2
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       bmcdb1
ora.bmcdb.db
      1        ONLINE  ONLINE       bmcdb1                   Open
      2        ONLINE  ONLINE       bmcdb2                   Open
ora.bmcdb1.vip
      1        ONLINE  ONLINE       bmcdb1
ora.bmcdb2.vip
      1        ONLINE  ONLINE       bmcdb2
ora.cvu
      1        ONLINE  ONLINE       bmcdb1
ora.oc4j
      1        ONLINE  ONLINE       bmcdb1
ora.scan1.vip
      1        ONLINE  ONLINE       bmcdb1

-- The current state of every resource is OK
2.1.2 Are database connections working?
The listener status and a continuously growing listener.log must both be OK for things to be normal.

bmcdb2:/home/grid$lsnrctl status
......
Alias                     LISTENER
Version                   TNSLSNR for IBM/AIX RISC System/6000: Version 11.2.0.4.0 - Production
Start Date                01-JUL-2022 05:47:09
Uptime                    0 days 4 hr. 56 min. 5 sec   ==> restarted
.....
bmcdb1:/home/grid$lsnrctl status
......
Alias                     LISTENER
Version                   TNSLSNR for IBM/AIX RISC System/6000: Version 11.2.0.4.0 - Production
Start Date                26-MAY-2022 20:50:02
Uptime                    35 days 13 hr. 52 min. 59 sec
......
-- Both listeners are currently OK; the notable point is that the listener on bmcdb2 was restarted.
-- listener.log keeps producing fresh entries (not listed here), so the listener is serving clients normally.
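A quick way to confirm the listener is still accepting connections is to watch listener.log grow. A minimal sketch, assuming the default 11.2 ADR log location (the exact path depends on ORACLE_BASE, the host name, and the listener name, so adjust to your environment):

# assumed default path for a listener named LISTENER
LOG=$ORACLE_BASE/diag/tnslsnr/$(hostname)/listener/trace/listener.log
# new "establish" entries appearing here mean clients are still connecting
tail -f "$LOG" | grep -i establish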
2.1.3 Are database reads and writes working?
# The database is only healthy if file writes succeed. A quick test: logfile, archivelog, and datafile writes must all work.
Run alter system switch logfile; several times -- log switching and archive log generation are working
Run alter system checkpoint; several times -- data is written from the database buffer cache to the datafiles
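A minimal sketch of this write test in a SQL*Plus session (the v$log and v$archived_log verification queries are illustrative additions):

SQL> alter system switch logfile;
System altered.
SQL> alter system checkpoint;
System altered.
SQL> -- verify: a new group is CURRENT and the previous sequence has been archived
SQL> select group#, sequence#, status, archived from v$log;
SQL> select max(sequence#) from v$archived_log where dest_id = 1;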
2.1.4 This proves the database service is healthy and the business has recovered
# Since the business has already recovered, there is no need to panic. Next, let's analyze why bmcdb2 refused connections during those ten-odd minutes and then recovered on its own.
2.2 Problem analysis, using the FTA (fault tree analysis) approach (covering the server, the cluster, and the network)
2.2.1 Server-side error analysis
Was the server rebooted: last reboot
-- Result: no reboot record

Server error report: errpt | more

# errpt | more
IDENTIFIER TIMESTAMP  T C RESOURCE_NAME  DESCRIPTION
8650BE3F   0701054722 I H ent13          ETHERCHANNEL RECOVERY
42F2355C   0701054722 T H ent4           PROBLEM RESOLVED
F1814D51   0701054622 T H ent4           ETHERNET DOWN
42F2355C   0701054622 T H ent4           PROBLEM RESOLVED
11FDF493   0701054622 I H ent13          ETHERCHANNEL RECOVERY
42F2355C   0701054622 T H ent5           PROBLEM RESOLVED
B50A3F81   0701054622 P H ent13          TOTAL ETHERCHANNEL FAILURE
F1814D51   0701054622 T H ent5           ETHERNET DOWN
59224136   0701054622 P H ent13          ETHERCHANNEL FAILOVER
42F2355C   0701054622 T H ent5           PROBLEM RESOLVED
CE9566DF   0701053922 P H ent13          TOTAL ETHERCHANNEL FAILURE
F1814D51   0701053922 T H ent4           ETHERNET DOWN
F1814D51   0701053922 T H ent5           ETHERNET DOWN
E87EF1BE   0630150022 P O dumpcheck      The largest dump device is too small.

-- Result: the public NIC ent13 (an EtherChannel over ent4 and ent5) failed briefly and then recovered
05:39: ent4 and ent5 went down, taking the public EtherChannel ent13 down with them
05:46: ent5 and ent4 recovered one after the other
05:47: the public NIC ent13 recovered

## Server-side conclusion: bmcdb2 suffered a brief NIC outage.
## One suspicious point: both physical NICs failed at the same time, which suggests their uplink switch failed -- when a switch fails, every port connected to it goes down at the same moment.
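To confirm that ent13 is an EtherChannel over ent4/ent5 and to check each member's link state, the standard AIX commands below can be used (a sketch; the adapter names are taken from this case):

# show the EtherChannel's member adapters (the adapter_names attribute)
lsattr -El ent13
# detailed statistics, including EtherChannel status and each member adapter's link state
entstat -d ent13 | more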
2.2.2 Cluster-side analysis
## crs alert log
Node bmcdb1
2022-07-01 05:39:04.720:
[crsd(6619772)]CRS-2765:Resource 'ora.net1.network' has failed on server 'bmcdb2'.
2022-07-01 05:39:05.896:
[crsd(6619772)]CRS-2878:Failed to restart resource 'ora.net1.network'
2022-07-01 05:39:05.930:
[crsd(6619772)]CRS-2769:Unable to failover resource 'ora.net1.network'.
-- Result: the ora.net1.network resource on bmcdb2 could not be managed (neither restarted nor failed over)
Node bmcdb2
-- Result: no alerts in the crs alert log on bmcdb2
## CRS alert log conclusion: the ora.net1.network resource on node bmcdb2 failed. Combined with the server-side NIC errors, we can infer that the NIC failure broke ora.net1.network, which in turn triggered the VIP failover
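To see how the network resource and node VIP are configured, and to confirm that the VIP (and, through it, the listener) depends on ora.net1.network, the standard 11.2 commands below can be used (a sketch; output omitted):

# network resource configuration: network number, subnet, interface
srvctl config network
# node VIP configuration and current state
srvctl config vip -n bmcdb2
srvctl status vip -n bmcdb2
# the VIP's start/stop dependencies include ora.net1.network
crsctl stat res ora.bmcdb2.vip -p | grep -E 'START_DEPENDENCIES|STOP_DEPENDENCIES'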
## crsd log
Node bmcdb1
-- The crsd log on node bmcdb1 is quite detailed. By walking through it we can work out RAC's VIP failover and self-healing mechanisms.
1) Problem detected: ora.net1.network on bmcdb2 is currently OFFLINE; RAC tries to bring it back ONLINE
2) Handling
2.1 CRS first tries to restart the ora.net1.network resource on bmcdb2
2022-07-01 05:39:04.774: [ CRSPE][11824]{0:5:6936} CRS-2672: Attempting to start 'ora.net1.network' on 'bmcdb2'
2.2 The attempt fails
2022-07-01 05:39:05.881: [ CRSPE][11824]{0:5:6936} CRS-2674: Start of 'ora.net1.network' on 'bmcdb2' failed
2.3 and CRS is notified that the resource could not be started
2022-07-01 05:39:05.882: [ CRSPE][11824]{0:5:6936} Sequencer for [ora.net1.network bmcdb2 1] has completed with error: CRS-0215: Could not start resource 'ora.net1.network'.
2.4 Since the resource cannot be brought back online, CRS proceeds with failover. First it stops the resources on bmcdb2 that depend on ora.net1.network, i.e. ONS and the listener
2022-07-01 05:39:05.935: [ CRSPE][11824]{0:5:6936} CRS-2673: Attempting to stop 'ora.ons' on 'bmcdb2'
2022-07-01 05:39:05.936: [ CRSPE][11824]{0:5:6936} CRS-2673: Attempting to stop 'ora.LISTENER.lsnr' on 'bmcdb2'
2022-07-01 05:39:06.177: [ CRSPE][11824]{0:5:6936} CRS-2677: Stop of 'ora.ons' on 'bmcdb2' succeeded
2022-07-01 05:39:07.314: [ CRSPE][11824]{0:5:6936} CRS-2677: Stop of 'ora.LISTENER.lsnr' on 'bmcdb2' succeeded
2022-07-01 05:39:07.316: [ CRSPE][11824]{0:5:6936} CRS-2673: Attempting to stop 'ora.bmcdb2.vip' on 'bmcdb2'
2022-07-01 05:39:08.375: [ CRSPE][11824]{0:5:6936} CRS-2677: Stop of 'ora.bmcdb2.vip' on 'bmcdb2' succeeded
2.5 The VIP failover itself. Note that although the VIP and ONS relocate, the local listener never relocates, so no attempt is made to start bmcdb2's listener on the surviving node.
# This illustrates a point: during failover the VIP gives you fast address relocation, not service relocation. No listener serves the relocated VIP; its value is that clients fail fast instead of hanging on a TCP timeout, and through ONS clients are quickly told that the failed node's VIP has moved so they can fail over at once.
2022-07-01 05:39:08.379: [ CRSPE][11824]{0:5:6936} CRS-2672: Attempting to start 'ora.bmcdb2.vip' on 'bmcdb1'
2022-07-01 05:39:11.023: [ CRSPE][11824]{0:5:6936} CRS-2676: Start of 'ora.bmcdb2.vip' on 'bmcdb1' succeeded
## At this point the VIP relocation is complete. Combined with a client-side failover configuration (see the sketch below), new connections are routed to the surviving node and the RAC database keeps serving clients
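For reference, a minimal client-side tnsnames.ora sketch with connect-time failover across the two node VIPs (the host names and service name are assumptions for illustration; clients using the SCAN would list the SCAN address instead):

BMCDB =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (LOAD_BALANCE = ON)
      (FAILOVER = ON)
      (ADDRESS = (PROTOCOL = TCP)(HOST = bmcdb1-vip)(PORT = 1521))
      (ADDRESS = (PROTOCOL = TCP)(HOST = bmcdb2-vip)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = bmcdb)
      (FAILOVER_MODE = (TYPE = SELECT)(METHOD = BASIC))
    )
  )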
3) After the server's public NIC recovers, how does RAC heal itself?
3.1 CRS is notified that ora.net1.network on node bmcdb2 has recovered
2022-07-01 05:47:05.917: [ CRSPE][11824]{0:5:6940} State change received from bmcdb2 for ora.net1.network bmcdb2 1
2022-07-01 05:47:05.917: [ CRSPE][11824]{0:5:6940} Processing PE command id=365. Description: [Resource State Change (ora.net1.network bmcdb2 1) : 110f60070]
2022-07-01 05:47:05.917: [ CRSPE][11824]{0:5:6940} RI [ora.net1.network bmcdb2 1] new external state [ONLINE] old value: [OFFLINE] on bmcdb2 label = []
2022-07-01 05:47:05.917: [ CRSPE][11824]{0:5:6940} Processing unplanned state change for [ora.net1.network bmcdb2 1]
2022-07-01 05:47:05.918: [ CRSRPT][12081]{0:5:6940} Published to EVM CRS_RESOURCE_STATE_CHANGE for ora.net1.network
2022-07-01 05:47:05.952: [ CRSPE][11824]{0:5:6940} Op 112093c70 has 5 WOs
2022-07-01 05:47:05.954: [ CRSPE][11824]{0:5:6940} RI [ora.ons bmcdb2 1] new internal state: [STARTING] old value: [STABLE]
2022-07-01 05:47:05.954: [ CRSPE][11824]{0:5:6940} Sending message to agfw: id = 1328819
3.2 CRS tries to start ONS on bmcdb2; the ONS start only completes after the command to start the VIP has also been issued (compare the timestamps in 3.4)
2022-07-01 05:47:05.954: [ CRSPE][11824]{0:5:6940} CRS-2672: Attempting to start 'ora.ons' on 'bmcdb2'
3.3 The VIP is stopped on bmcdb1, and the stop succeeds
2022-07-01 05:47:06.021: [ CRSPE][11824]{2:36251:309} CRS-2673: Attempting to stop 'ora.bmcdb2.vip' on 'bmcdb1'
2022-07-01 05:47:07.099: [ CRSPE][11824]{2:36251:309} CRS-2677: Stop of 'ora.bmcdb2.vip' on 'bmcdb1' succeeded
3.4 CRS tries to start the VIP on bmcdb2; ONS finishes starting first, then the VIP start succeeds
2022-07-01 05:47:07.102: [ CRSPE][11824]{2:36251:309} CRS-2672: Attempting to start 'ora.bmcdb2.vip' on 'bmcdb2'
2022-07-01 05:47:07.207: [ CRSPE][11824]{0:5:6940} CRS-2676: Start of 'ora.ons' on 'bmcdb2' succeeded
2022-07-01 05:47:09.806: [ CRSPE][11824]{2:36251:309} CRS-2676: Start of 'ora.bmcdb2.vip' on 'bmcdb2' succeeded
3.5 The listener is brought back
2022-07-01 05:47:09.822: [ CRSPE][11824]{2:36251:309} CRS-2672: Attempting to start 'ora.LISTENER.lsnr' on 'bmcdb2'
2022-07-01 05:47:11.681: [ CRSPE][11824]{2:36251:309} CRS-2676: Start of 'ora.LISTENER.lsnr' on 'bmcdb2' succeeded
## At this point, RAC self-healing is complete. Genuinely impressive
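After self-healing it is worth verifying that the VIP is back on its home node and that the listener is up. A quick check (a sketch; output omitted):

# the VIP should be ONLINE on bmcdb2 again, not on bmcdb1
crsctl stat res ora.bmcdb2.vip -t
crsctl stat res ora.LISTENER.lsnr -t
# equivalent srvctl views
srvctl status vip -n bmcdb2
srvctl status listener -n bmcdb2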
Node bmcdb2
-- The crsd log on node bmcdb2 is omitted here
2.3 Network-side analysis
2.3.1 Analysis: from the above we know that a NIC failure triggered the RAC failover, and that within about ten minutes the NIC recovered and RAC self-healing completed. For two NICs on one server to fail simultaneously, the most likely cause is a switch failure.
2.3.2 Confirmation: the network team confirmed that the switch restarted during the failure window. That explains why both NICs failed, and recovered, at the same time.
3. Conclusions
3.1 In a RAC architecture, a public network failure triggers a VIP failover. The sequence after the public network fails: NIC down -> ora.net1.network fails -> ONS fails -> VIP fails over -> listener down.
3.2 After the public network recovers, RAC self-heals and the VIP moves back to its original node. The sequence after the public network recovers: NIC up -> ora.net1.network online -> VIP online -> ONS online -> listener online.
3.3 Two NICs on the same server failing at once pointed to a switch failure, which was confirmed.
########################################################################################
All rights reserved. This article may be reposted, but the source must be credited with a link; otherwise legal action may be taken! [QQ group: 53993419]
QQ:14040928 E-mail:dbadoudou@163.com
本文链接: http://blog.itpub.net/26442936/viewspace-2903833/
########################################################################################