An introduction to using orachk

This article gives a brief walkthrough of using orachk to check system health. The whole procedure is as follows:
d4jtfmcvurd02-> mkdir orachk
d4jtfmcvurd02-> cd orachk
d4jtfmcvurd02-> mv ../orachk.zip ./
d4jtfmcvurd02-> ls
orachk.zip
d4jtfmcvurd02-> pwd    
/home/oracle/orachk
d4jtfmcvurd02-> unzip orachk.zip 
Archive:  orachk.zip
  inflating: UserGuide.txt           
  inflating: rules.dat               
  inflating: orachk                  
   creating: .cgrep/
  inflating: .cgrep/hiacgrep         
  inflating: .cgrep/lcgrep5          
  inflating: .cgrep/auto_upgrade.pl  
  inflating: .cgrep/check_reblance_free_space.sql  
  inflating: .cgrep/psqlplus         
  inflating: .cgrep/scgrepx86        
  inflating: .cgrep/init.tmpl        
  inflating: .cgrep/utluppkg.sql     
  inflating: .cgrep/scgrep           
  inflating: .cgrep/versions.dat     
  inflating: .cgrep/raw_data_browser.pl  
  inflating: .cgrep/lcgrep6          
  inflating: .cgrep/profiles.dat     
  inflating: .cgrep/auto_upgrade_check.pl  
  inflating: .cgrep/CollectionManager_App.sql  
  inflating: .cgrep/utlu112i.sql     
  inflating: .cgrep/ggdiscovery.sh   
  inflating: .cgrep/lcgreps9         
  inflating: .cgrep/checkDiskFGMapping.sh  
  inflating: .cgrep/lcgreps10        
  inflating: .cgrep/pxhcdr.sql       
  inflating: .cgrep/diff_collections.pl  
  inflating: .cgrep/rack_comparison.py  
  inflating: .cgrep/exalogic_zfs_checks.aksh  
  inflating: .cgrep/lcgrep4          
  inflating: .cgrep/merge_collections.pl  
  inflating: .cgrep/acgrep           
  inflating: .cgrep/show_file_in_html.pl  
  inflating: .cgrep/scnhealthcheck.sql  
  inflating: .cgrep/lcgreps11        
  inflating: .cgrep/reset_crshome.pl  
  inflating: .cgrep/ogghc_12101.sql  
   creating: .cgrep/profiles/
  inflating: .cgrep/profiles/DF65D0F7FB6F1014E04312C0E50A7808.prf  
  inflating: .cgrep/profiles/DFE9C207A8F2428CE04313C0E50A6B0A.prf  
  inflating: .cgrep/profiles/D49C4F9F48735396E0431EC0E50A9A0B.prf  
  inflating: .cgrep/profiles/D49C0AB26A6D45A8E0431EC0E50ADE06.prf  
  inflating: .cgrep/profiles/D49BDC2EC9E624AEE0431EC0E50A3E12.prf  
  inflating: .cgrep/profiles/F9ED0179CCD8256BE04312C0E50A5399.prf  
  inflating: .cgrep/profiles/F6AFECA37F177C3FE04313C0E50A56BF.prf  
  inflating: .cgrep/profiles/D49B218473787400E0431EC0E50A0BB9.prf  
  inflating: .cgrep/profiles/E2E972DDE1E14493E04312C0E50A1AB1.prf  
  inflating: .cgrep/profiles/F32F44CE0BCD662FE04312C0E50AB058.prf  
  inflating: .cgrep/profiles/D49AD88F8EE75CD8E0431EC0E50A0BC3.prf  
  inflating: .cgrep/profiles/E8DF76E07DD82E0DE04313C0E50AA55D.prf  
  inflating: .cgrep/profiles/EA5EE324E7E05128E04313C0E50A4B2A.prf  
  inflating: .cgrep/profiles/D462A6F7E9C340FDE0431EC0E50ABE12.prf  
  inflating: .cgrep/profiles/E1BF012E8F210839E04313C0E50A7B68.prf  
  inflating: .cgrep/profiles/DF65D6117CB41054E04312C0E50A69D1.prf  
  inflating: .cgrep/profiles/D8367AD6754763FEE04312C0E50A6FCB.prf  
  inflating: .cgrep/profiles/D49C0FBF8FBF4B1AE0431EC0E50A0F24.prf  
  inflating: .cgrep/profiles/DA94919CD0DE0913E04312C0E50A7996.prf  
  inflating: .cgrep/profiles/EF6C016813C51366E04313C0E50AE11F.prf  
 extracting: .cgrep/profiles/F13E11974A282AB3E04312C0E50ABCBF.prf  
  inflating: .cgrep/utlusts.sql      
  inflating: .cgrep/asrexacheck      
  inflating: .cgrep/create_version.pl  
  inflating: .cgrep/oracle-upstarttmpl.conf  
  inflating: .cgrep/preupgrd.sql     
  inflating: .cgrep/ogghc_11203.sql  
  inflating: .cgrep/ogghc_11204.sql  
  inflating: CollectionManager_App.sql  
  inflating: raccheck                
  inflating: readme.txt              
  inflating: collections.dat         
d4jtfmcvurd02-> ls
CollectionManager_App.sql  orachk      raccheck    rules.dat
collections.dat            orachk.zip  readme.txt  UserGuide.txt
d4jtfmcvurd02-> ll
total 37860
-rw-r--r-- 1 oracle oinstall  3435193 May 31 14:37 CollectionManager_App.sql
-rw-rw-r-- 1 oracle oinstall 22951324 May 31 14:37 collections.dat
-rwxr-xr-x 1 oracle oinstall  1604239 May 31 14:37 orachk
-rw-r--r-- 1 oracle oinstall  5770368 Sep 17 08:58 orachk.zip
-rwxr-xr-x 1 oracle oinstall  1604239 May 31 14:37 raccheck
-rw-r--r-- 1 oracle oinstall     3879 May 31 14:37 readme.txt
-rw-rw-r-- 1 oracle oinstall  3384097 May 31 14:37 rules.dat
-rw-r--r-- 1 oracle oinstall      432 May 31 14:37 UserGuide.txt
d4jtfmcvurd02-> ./orachk 


CRS stack is running and CRS_HOME is not set. Do you want to set CRS_HOME to /oracle/app/11.2.0/grid?[y/n][y]y


Checking ssh user equivalency settings on all nodes in cluster


Node d4jtfmcvurd01 is configured for ssh user equivalency for oracle user
 


Searching for running databases . . . . .


. . 
List of running databases registered in OCR
1. hbdbuat
2. None of above


Select databases from list for checking best practices. For multiple databases, select 1 for All or comma separated number like 1,2 etc [1-2][1].1
. . 




Checking Status of Oracle Software Stack - Clusterware, ASM, RDBMS


. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
-------------------------------------------------------------------------------------------------------
                                                 Oracle Stack Status                            
-------------------------------------------------------------------------------------------------------
Host Name  CRS Installed  ASM HOME       RDBMS Installed  CRS UP    ASM UP    RDBMS UP  DB Instance Name
-------------------------------------------------------------------------------------------------------
d4jtfmcvurd02 Yes             N/A             Yes             Yes        Yes      Yes      hbdbuat2  
d4jtfmcvurd01 Yes             N/A             Yes             Yes        Yes      Yes      hbdbuat1  
-------------------------------------------------------------------------------------------------------




Copying plug-ins


. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 


. . . . . . 




17 of the included audit checks require root privileged data collection . If sudo is not configured or the root password is not available, audit checks which  require root privileged data collection can be skipped.




1. Enter 1 if you will enter root password for each  host when prompted


2. Enter 2 if you have sudo configured for oracle user to execute root_orachk.sh script 


3. Enter 3 to skip the root privileged collections 


4. Enter 4 to exit and work with the SA to configure sudo  or to arrange for root access and run the tool later.


Please indicate your selection from one of the above options for root access[1-4][1]:- 1
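If option 2 is preferred instead, the oracle user needs passwordless sudo for the root collection script named in the prompt. A minimal sudoers sketch follows; the script path is an assumption, not confirmed by this transcript, so consult the UserGuide.txt shipped in the zip for the exact location your orachk version uses:

```
# Added via visudo, e.g. in /etc/sudoers.d/orachk
# (the root_orachk.sh path below is illustrative only)
oracle ALL=(root) NOPASSWD: /path/to/root_orachk.sh
```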






*** Checking Best Practice Recommendations (PASS/WARNING/FAIL) ***






Collections and audit checks log file is 
/home/oracle/orachk/orachk_d4jtfmcvurd02_hbdbuat_091714_090111/log/orachk.log


Running orachk in serial mode because expect(/usr/bin/expect) is not available to supply root passwords on remote nodes


NOTICE:  Installing the expect utility (/usr/bin/expect) will allow orachk to gather root passwords at the beginning of the process and execute orachk on all nodes in parallel speeding up the entire process. For more info - http://www.nist.gov/el/msid/expect.cfm.  Expect is available for all major platforms.  See User Guide for more details.
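Since the serial-mode fallback above is triggered by a missing expect binary, it is worth checking for it before launching orachk. A small sketch, assuming an OEL/RHEL host where expect is normally available from the standard yum repositories (the install command requires root and a configured repository):

```shell
#!/bin/sh
# orachk looks for /usr/bin/expect to drive root password entry up front;
# without it, remote-node collections run serially (as seen in this run).
if command -v expect >/dev/null 2>&1; then
    echo "expect found at $(command -v expect) - parallel mode possible"
else
    echo "expect missing - install as root, e.g.: yum install -y expect"
fi
```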






Checking for prompts in /home/oracle/.bash_profile on d4jtfmcvurd02 for oracle user...








Checking for prompts in /home/oracle/.bash_profile on d4jtfmcvurd01 for oracle user...




=============================================================
                    Node name - d4jtfmcvurd02                                
=============================================================


Collecting - ASM DIsk I/O stats 
Collecting - ASM Disk Groups 
Collecting - ASM Diskgroup Attributes 
Collecting - ASM disk partnership imbalance 
Collecting - ASM diskgroup attributes 
Collecting - ASM initialization parameters 
Collecting - Active sessions load balance for hbdbuat database
Collecting - Archived Destination Status for hbdbuat database
Collecting - Cluster Interconnect Config for hbdbuat database
Collecting - Database Archive Destinations for hbdbuat database
Collecting - Database Files for hbdbuat database
Collecting - Database Instance Settings for hbdbuat database
Collecting - Database Parameters for hbdbuat database
Collecting - Database Parameters for hbdbuat database
Collecting - Database Properties for hbdbuat database
Collecting - Database Registry for hbdbuat database
Collecting - Database Sequences for hbdbuat database
Collecting - Database Undocumented Parameters for hbdbuat database
Collecting - Database Undocumented Parameters for hbdbuat database
Collecting - Database Workload Services for hbdbuat database
Collecting - Dataguard Status for hbdbuat database
Collecting - Files not opened by ASM 
Collecting - Log Sequence Numbers for hbdbuat database
Collecting - Percentage of asm disk  Imbalance 
Collecting - Process for shipping Redo to standby for hbdbuat database
Collecting - RDBMS Feature Usage for hbdbuat database
Collecting - Redo Log information for hbdbuat database
Collecting - Standby redo log creation status before switchover for hbdbuat database
Collecting - /proc/cmdline
Collecting - /proc/modules
Collecting - CPU Information
Collecting - CRS active version
Collecting - CRS oifcfg
Collecting - CRS software version
Collecting - CSS Reboot time
Collecting - CSS disktimout
Collecting - Cluster interconnect (clusterware)
Collecting - Clusterware OCR healthcheck 
Collecting - Clusterware Resource Status
Collecting - DiskFree Information
Collecting - DiskMount Information
Collecting - Huge pages configuration
Collecting - Interconnect network card speed
Collecting - Kernel parameters
Collecting - Maximum number of semaphore sets on system
Collecting - Maximum number of semaphores on system
Collecting - Maximum number of semaphores per semaphore set
Collecting - Memory Information
Collecting - NUMA Configuration
Collecting - Network Interface Configuration
Collecting - Network Performance
Collecting - Network Service Switch
Collecting - OS Packages
Collecting - OS version
Collecting - Operating system release information and kernel version
Collecting - Oracle Executable Attributes
Collecting - Patches for Grid Infrastructure 
Collecting - Patches for RDBMS Home 
Collecting - Shared memory segments
Collecting - Table of file system defaults
Collecting - Voting disks (clusterware)
Collecting - number of semaphore operations per semop system call
Preparing to run root privileged commands  d4jtfmcvurd02.  Please enter root password when prompted.
root@d4jtfmcvurd02's password: 
root@d4jtfmcvurd02's password: 
Collecting - ACFS Volumes status 
Collecting - Broadcast Requirements for Networks 
Collecting - CRS user time zone check 
Collecting - Custom rc init scripts (rc.local) 
Collecting - Disk Information 
Collecting - Grid Infastructure user shell limits configuration 
Collecting - Interconnect interface config 
Collecting - Network interface stats 
Collecting - Number of RDBMS LMS running in real time 
Collecting - OLR Integrity 
Collecting - Root user limits 
Collecting - Verify no database server kernel out of memory errors  
Collecting - root time zone check 
Collecting - slabinfo 




Data collections completed. Checking best practices on d4jtfmcvurd02.
--------------------------------------------------------------------------------------




 WARNING => SYS.AUDSES$ sequence cache size < 10,000 for hbdbuat
 WARNING => Without ARCHIVELOG mode the database cannot be recovered from an online backup and Data Guard cannot be used. for hbdbuat
 WARNING => OCR is NOT being backed up daily
 WARNING => Controlfile is NOT multiplexed for hbdbuat
 WARNING => One or more redo log groups are NOT multiplexed for hbdbuat
 INFO =>    oracleasm (asmlib) module is NOT loaded
 WARNING => /tmp is NOT on a dedicated filesystem
 INFO =>    Number of SCAN listeners is NOT equal to the recommended number of 3.
 WARNING => Database Parameter memory_target is not set to the recommended value on hbdbuat2 instance
 FAIL =>    Operating system hugepages count does not satisfy total SGA requirements
 WARNING => NIC bonding is not configured for interconnect
 WARNING => NIC bonding is NOT configured for public network (VIP)
 WARNING => OSWatcher is not running as is recommended.
 INFO =>    Jumbo frames (MTU >= 8192) are not configured for interconnect
 WARNING => Database parameter DB_BLOCK_CHECKING on PRIMARY is NOT set to the recommended value. for hbdbuat
 FAIL =>    Flashback on PRIMARY is not configured for hbdbuat
 INFO =>    Operational Best Practices
 INFO =>    Database Consolidation Best Practices
 INFO =>    Computer failure prevention best practices
 INFO =>    Data corruption prevention best practices
 INFO =>    Logical corruption prevention best practices
 INFO =>    Database/Cluster/Site failure prevention best practices
 INFO =>    Client failover operational best practices
 WARNING => fast_start_mttr_target should be greater than or equal to 300. on hbdbuat2 instance


 INFO =>    IMPORTANT: Oracle Database Patch 17478514 PSU is NOT applied to RDBMS Home /oracle/app/oracle/product/11.2.0/db_1
 INFO =>    Information about hanganalyze and systemstate dump
 WARNING => Package ksh-20100621-12.el6-x86_64 is recommended but NOT installed
 FAIL =>    Table AUD$[FGA_LOG$] should use Automatic Segment Space Management for hbdbuat
 INFO =>    Database failure prevention best practices
 WARNING => Database Archivelog Mode should be set to ARCHIVELOG for hbdbuat
 FAIL =>    Primary database is NOT protected with Data Guard (standby database) for real-time data protection and availability for hbdbuat
 INFO =>    Parallel Execution Health-Checks and Diagnostics Reports for hbdbuat
 WARNING => Linux transparent huge pages are enabled
 WARNING => vm.min_free_kbytes should be set as recommended.
 INFO =>    Oracle recovery manager(rman) best practices
 WARNING => RMAN controlfile autobackup should be set to ON for hbdbuat
 WARNING => Linux Disk I/O Scheduler should be configured to [Deadline]
 INFO =>    Consider increasing the COREDUMPSIZE size
 WARNING => Shell limit hard nproc for root is NOT configured according to recommendation
 INFO =>    Consider investigating changes to the schema objects such as DDLs or new object creation for hbdbuat
 INFO =>    Consider investigating the frequency of SGA resize operations and take corrective action for hbdbuat
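Several of the OS-level findings above (hugepages count, transparent huge pages, vm.min_free_kbytes) can be inspected directly before deciding on fixes. A quick look, assuming a Linux host; note the THP sysfs path differs between kernel builds, and RHEL/OEL 6 kernels often use the redhat_ prefix:

```shell
#!/bin/sh
# Inspect three of the settings flagged by orachk on this cluster.
for f in /sys/kernel/mm/transparent_hugepage/enabled \
         /sys/kernel/mm/redhat_transparent_hugepage/enabled; do
    [ -r "$f" ] && echo "THP setting ($f): $(cat "$f")"
done
sysctl vm.min_free_kbytes 2>/dev/null      # flagged kernel parameter
grep -i '^hugepages' /proc/meminfo         # compare against total SGA needs
```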




Best Practice checking completed.Checking recommended patches on d4jtfmcvurd02.
---------------------------------------------------------------------------------




Collecting patch inventory on  CRS HOME /oracle/app/11.2.0/grid
Collecting patch inventory on ORACLE_HOME /oracle/app/oracle/product/11.2.0/db_1 
---------------------------------------------------------------------------------
1 Recommended CRS patches for 112040 from /oracle/app/11.2.0/grid on d4jtfmcvurd02
---------------------------------------------------------------------------------
Patch#   CRS  ASM    RDBMS RDBMS_HOME                              Patch-Description                            
---------------------------------------------------------------------------------
18139609  no          no  /oracle/app/oracle/product/11.2.0/db_1GRID INFRASTRUCTURE SYSTEM PATCH 11.2.0.4.2  
---------------------------------------------------------------------------------




---------------------------------------------------------------------------------
1 Recommended RDBMS patches for 112040 from /oracle/app/oracle/product/11.2.0/db_1 on d4jtfmcvurd02
---------------------------------------------------------------------------------
Patch#   RDBMS    ASM     type                Patch-Description                       
---------------------------------------------------------------------------------
18139609  no             merge               GRID INFRASTRUCTURE SYSTEM PATCH 11.2.0.4.2
---------------------------------------------------------------------------------
---------------------------------------------------------------------------------




---------------------------------------------------------------------------------
              Clusterware patches summary report
---------------------------------------------------------------------------------
Total patches  Applied on CRS Applied on RDBMS Applied on ASM 
---------------------------------------------------------------------------------
1              0              0                0              
---------------------------------------------------------------------------------




---------------------------------------------------------------------------------
              RDBMS homes patches summary report
---------------------------------------------------------------------------------
Total patches  Applied on RDBMS Applied on ASM ORACLE_HOME    
---------------------------------------------------------------------------------
 1              0              0                /oracle/app/oracle/product/11.2.0/db_1
---------------------------------------------------------------------------------






=============================================================
                    Node name - d4jtfmcvurd01                                
=============================================================


Collecting - /proc/cmdline
Collecting - /proc/modules
Collecting - CPU Information
Collecting - CRS active version
Collecting - CRS oifcfg
Collecting - CRS software version
Collecting - Cluster interconnect (clusterware)
Collecting - DiskFree Information
Collecting - DiskMount Information
Collecting - Huge pages configuration
Collecting - Interconnect network card speed
Collecting - Kernel parameters
Collecting - Maximum number of semaphore sets on system
Collecting - Maximum number of semaphores on system
Collecting - Maximum number of semaphores per semaphore set
Collecting - Memory Information
Collecting - NUMA Configuration
Collecting - Network Interface Configuration
Collecting - Network Performance
Collecting - Network Service Switch
Collecting - OS Packages
Collecting - OS version
Collecting - Operating system release information and kernel version
Collecting - Oracle Executable Attributes
Collecting - Patches for Grid Infrastructure 
Collecting - Patches for RDBMS Home 
Collecting - Shared memory segments
Collecting - Table of file system defaults
Collecting - number of semaphore operations per semop system call
Preparing to run root privileged commands  d4jtfmcvurd01.  Please enter root password when prompted.
root@d4jtfmcvurd01's password: 




Data collections completed. Checking best practices on d4jtfmcvurd01.
--------------------------------------------------------------------------------------




 INFO =>    At some times checkpoints are not being completed for hbdbuat
 INFO =>    oracleasm (asmlib) module is NOT loaded
 WARNING => /tmp is NOT on a dedicated filesystem
 INFO =>    Number of SCAN listeners is NOT equal to the recommended number of 3.
 WARNING => Database Parameter memory_target is not set to the recommended value on hbdbuat1 instance
 FAIL =>    Operating system hugepages count does not satisfy total SGA requirements
 WARNING => NIC bonding is not configured for interconnect
 WARNING => NIC bonding is NOT configured for public network (VIP)
 WARNING => OSWatcher is not running as is recommended.
 INFO =>    Jumbo frames (MTU >= 8192) are not configured for interconnect
 WARNING => Database parameter DB_BLOCK_CHECKING on PRIMARY is NOT set to the recommended value. for hbdbuat
 WARNING => fast_start_mttr_target should be greater than or equal to 300. on hbdbuat1 instance


 INFO =>    IMPORTANT: Oracle Database Patch 17478514 PSU is NOT applied to RDBMS Home /oracle/app/oracle/product/11.2.0/db_1
 WARNING => Package ksh-20100621-12.el6-x86_64 is recommended but NOT installed
 WARNING => Linux transparent huge pages are enabled
 WARNING => vm.min_free_kbytes should be set as recommended.
 WARNING => Linux Disk I/O Scheduler should be configured to [Deadline]
 INFO =>    Consider increasing the COREDUMPSIZE size




Best Practice checking completed.Checking recommended patches on d4jtfmcvurd01.
---------------------------------------------------------------------------------




Collecting patch inventory on  CRS HOME /oracle/app/11.2.0/grid 
Collecting patch inventory on ORACLE_HOME /oracle/app/oracle/product/11.2.0/db_1 
---------------------------------------------------------------------------------
1 Recommended CRS patches for 112040 from /oracle/app/11.2.0/grid on d4jtfmcvurd01
---------------------------------------------------------------------------------
Patch#   CRS  ASM    RDBMS RDBMS_HOME                              Patch-Description                            
---------------------------------------------------------------------------------
18139609  no          no  /oracle/app/oracle/product/11.2.0/db_1GRID INFRASTRUCTURE SYSTEM PATCH 11.2.0.4.2  
---------------------------------------------------------------------------------




---------------------------------------------------------------------------------
1 Recommended RDBMS patches for 112040 from /oracle/app/oracle/product/11.2.0/db_1 on d4jtfmcvurd01
---------------------------------------------------------------------------------
Patch#   RDBMS    ASM     type                Patch-Description                       
---------------------------------------------------------------------------------
18139609  no             merge               GRID INFRASTRUCTURE SYSTEM PATCH 11.2.0.4.2
---------------------------------------------------------------------------------
---------------------------------------------------------------------------------




---------------------------------------------------------------------------------
              Clusterware patches summary report
---------------------------------------------------------------------------------
Total patches  Applied on CRS Applied on RDBMS Applied on ASM 
---------------------------------------------------------------------------------
1              0              0                0              
---------------------------------------------------------------------------------




---------------------------------------------------------------------------------
              RDBMS homes patches summary report
---------------------------------------------------------------------------------
Total patches  Applied on RDBMS Applied on ASM ORACLE_HOME    
---------------------------------------------------------------------------------
 1              0              0                /oracle/app/oracle/product/11.2.0/db_1
---------------------------------------------------------------------------------


---------------------------------------------------------------------------------
                      CLUSTERWIDE CHECKS
---------------------------------------------------------------------------------
 FAIL =>    All nodes are not using same NTP server across cluster
 WARNING => Clusterware software version does not match across cluster.
---------------------------------------------------------------------------------
 


Detailed report (html) - /home/oracle/orachk/orachk_d4jtfmcvurd02_hbdbuat_091714_090111/orachk_d4jtfmcvurd02_hbdbuat_091714_090111.html




UPLOAD(if required) - /home/oracle/orachk/orachk_d4jtfmcvurd02_hbdbuat_091714_090111.zip


d4jtfmcvurd02->

d4jtfmcvurd02-> ls -l
total 42432
-rw-r--r-- 1 oracle oinstall  3435193 May 31 14:37 CollectionManager_App.sql
-rw-rw-r-- 1 oracle oinstall 22951324 May 31 14:37 collections.dat
-rwxr-xr-x 1 oracle oinstall  1604239 May 31 14:37 orachk
drwxr-xr-x 7 oracle oinstall   118784 Sep 17 09:24 orachk_d4jtfmcvurd02_hbdbuat_091714_090111
-rw-r--r-- 1 oracle oinstall  4559399 Sep 17 09:24 orachk_d4jtfmcvurd02_hbdbuat_091714_090111.zip
-rw-r--r-- 1 oracle oinstall  5770368 Sep 17 08:58 orachk.zip
-rwxr-xr-x 1 oracle oinstall  1604239 May 31 14:37 raccheck
-rw-r--r-- 1 oracle oinstall     3879 May 31 14:37 readme.txt
-rw-rw-r-- 1 oracle oinstall  3384097 May 31 14:37 rules.dat
-rw-r--r-- 1 oracle oinstall      432 May 31 14:37 UserGuide.txt
After the run finishes, note the report paths printed at the end of the output above. Upload the zip file to a Windows machine and unzip it; inside you will find the generated health report in HTML format, as shown below:
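When several runs have accumulated, the right archive to upload is the newest orachk_*.zip; it can be located like this (the directory is taken from the transcript, and the transfer tooling itself, e.g. scp or sftp, is left to you):

```shell
#!/bin/sh
# Find the most recent orachk result archive to upload for review.
# The orachk_*.zip glob skips the installer archive orachk.zip itself.
dir=/home/oracle/orachk
newest=$(ls -t "$dir"/orachk_*.zip 2>/dev/null | head -n 1)
if [ -n "$newest" ]; then
    echo "Upload candidate: $newest"
else
    echo "No result archive found in $dir"
fi
```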

Oracle RAC Assessment Report

System Health Score is 90 out of 100 (detail)

System Health Score is derived using following formula.
  • Every check has 10 points
  • Failure will deduct 10 points
  • Warning will deduct 5 points
  • Info will deduct 3 points
  • Skip will deduct 3 points
Total checks 319
Passed checks 250
Failed(fail/warn/info/skip) checks 69
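The scoring rules above can be sketched as a quick calculation. The exact rounding orachk applies is not documented in this output, so the truncation below is an assumption, and the finding counts are made-up examples rather than numbers from this run:

```shell
#!/bin/sh
# Reconstruct the health score per the report's stated formula:
# 10 points per check, minus 10 per FAIL, 5 per WARNING, 3 per INFO, 3 per SKIP.
total=319 fail=2 warn=10 info=5 skip=0     # illustrative counts only
awk -v t="$total" -v f="$fail" -v w="$warn" -v i="$info" -v s="$skip" 'BEGIN {
    max = t * 10
    ded = f*10 + w*5 + i*3 + s*3
    printf "Health score: %d out of 100\n", (max - ded) / max * 100
}'
```

With these example counts the script prints a score of 97.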

Cluster Summary

Cluster Name d4jtfmc-cluster
OS/Kernel Version LINUX X86-64 OELRHEL 6 2.6.32-358.el6.x86_64
CRS Home - Version /oracle/app/11.2.0/grid - 11.2.0.4.0
DB Home - Version - Names /oracle/app/oracle/product/11.2.0/db_1 - 11.2.0.4.0 - hbdbuat
Number of nodes 2
   Database Servers 2
orachk Version 2.2.5_20140530
Collection orachk_d4jtfmcvurd02_hbdbuat_091714_090111.zip
Duration 21 mins, 35 seconds
Collection Date 17-Sep-2014 09:03:04

Note! This version of orachk is considered valid for 10 days from today or until a new version is available


WARNING! The data collection activity appears to be incomplete for this orachk run. Please review the "Killed Processes" and / or "Skipped Checks" section and refer to "Appendix A - Troubleshooting Scenarios" of the "Orachk User Guide" for corrective actions.



Findings Needing Attention

FAIL, WARNING, ERROR and INFO finding details should be reviewed in the context of your environment.

NOTE: Any recommended change should be applied to and thoroughly tested (functionality and load) in one or more non-production environments before applying the change to a production environment.

Database Server

Status Type Message Status On Details
FAIL SQL Check Table AUD$[FGA_LOG$] should use Automatic Segment Space Management for hbdbuat All Databases View
FAIL OS Check Operating system hugepages count does not satisfy total SGA requirements All Database Servers View
WARNING OS Check Shell limit hard nproc for root is NOT configured according to recommendation All Database Servers View
WARNING ASM Check Linux Disk I/O Scheduler should be configured to [Deadline] All ASM Instances View
WARNING OS Check vm.min_free_kbytes should be set as recommended. All Database Servers View
WARNING OS Check Linux transparent huge pages are enabled All Database Servers View
WARNING OS Check Package ksh-20100621-12.el6-x86_64 is recommended but NOT installed All Database Servers View
WARNING OS Check OSWatcher is not running as is recommended. All Database Servers View
WARNING OS Check NIC bonding is NOT configured for public network (VIP) All Database Servers View
WARNING OS Check NIC bonding is not configured for interconnect All Database Servers View
WARNING SQL Parameter Check Database Parameter memory_target is not set to the recommended value All Instances View
WARNING OS Check /tmp is NOT on a dedicated filesystem All Database Servers View
WARNING SQL Check One or more redo log groups are NOT multiplexed All Databases View
WARNING SQL Check Controlfile is NOT multiplexed All Databases View
WARNING OS Check OCR is NOT being backed up daily All Database Servers View
WARNING SQL Check Without ARCHIVELOG mode the database cannot be recovered from an online backup and Data Guard cannot be used. All Databases View
WARNING SQL Check SYS.AUDSES$ sequence cache size < 10,000 All Databases View
INFO OS Check At some times checkpoints are not being completed d4jtfmcvurd01 View
INFO SQL Check Consider investigating the frequency of SGA resize operations and take corrective action All Databases View
INFO SQL Check Consider investigating changes to the schema objects such as DDLs or new object creation All Databases View
INFO OS Check Consider increasing the COREDUMPSIZE size All Database Servers View
INFO OS Check Parallel Execution Health-Checks and Diagnostics Reports All Database Servers View
INFO OS Check Information about hanganalyze and systemstate dump All Database Servers View
INFO Patch Check IMPORTANT: Oracle Database Patch 17478514 PSU is NOT applied to RDBMS Home All Homes View
INFO OS Check Jumbo frames (MTU >= 8192) are not configured for interconnect All Database Servers View
INFO OS Check Number of SCAN listeners is NOT equal to the recommended number of 3. All Database Servers View
INFO ASM Check oracleasm (asmlib) module is NOT loaded All ASM Instances View

Cluster Wide

Status Type Message Status On Details
FAIL Cluster Wide Check All nodes are not using same NTP server across cluster Cluster Wide View
WARNING Cluster Wide Check Clusterware software version does not match across cluster. Cluster Wide View


Maximum Availability Architecture (MAA) Scorecard

Outage Type Status Type Message Status On Details
.
DATABASE FAILURE PREVENTION BEST PRACTICES FAIL
Description Oracle database can be configured with best practices that are applicable to all Oracle databases, including single-instance, Oracle RAC databases, Oracle RAC One Node databases, and the primary and standby databases in Oracle Data Guard configurations.
 
 
Key HA Benefits:
  • Improved recoverability
  • Improved stability
Best Practices
WARNING SQL Check Database Archivelog Mode should be set to ARCHIVELOG All Databases View
PASS SQL Check All tablespaces are locally managed tablespace All Databases View
PASS SQL Check All tablespaces are using Automatic segment storage management All Databases View
PASS SQL Check Default temporary tablespace is set All Databases View
PASS SQL Check The SYS and SYSTEM userids have a default tablespace of SYSTEM All Databases View
.
COMPUTER FAILURE PREVENTION BEST PRACTICES FAIL
Description Oracle RAC and Oracle Clusterware allow Oracle Database to run any packaged or custom application across a set of clustered servers. This capability provides server side high availability and scalability. If a clustered server fails, then Oracle Database continues running on the surviving servers. When more processing power is needed, you can add another server without interrupting access to data.

Key HA Benefits:

Zero database downtime for node and instance failures.  Application brownout can be zero or seconds, compared to minutes or hours with third-party cold cluster failover solutions. 

Oracle RAC and Oracle Clusterware rolling upgrades for most hardware and software changes, excluding Oracle RDBMS patch sets and new database releases.

Best Practices
WARNING SQL Parameter Check fast_start_mttr_target should be greater than or equal to 300. All Instances View
.
DATA CORRUPTION PREVENTION BEST PRACTICES FAIL
Description The MAA recommended way to achieve the most comprehensive data corruption prevention and detection is to use Oracle Active Data Guard and configure the DB_BLOCK_CHECKING, DB_BLOCK_CHECKSUM, and DB_LOST_WRITE_PROTECT database initialization parameters on the Primary database and any Data Guard and standby databases.

 Key HA Benefits
  • Application downtime can be reduced from hours and days to seconds to no downtime.
  • Prevention, quick detection and fast repair of data block corruptions.
  • With Active Data Guard, data block corruptions can be repaired automatically.
Best Practices
WARNING OS Check Database parameter DB_BLOCK_CHECKING on PRIMARY is NOT set to the recommended value. All Database Servers View
PASS SQL Check The data files are all recoverable All Databases View
PASS SQL Check No reported block corruptions in V$DATABASE_BLOCK_CORRUPTIONS All Databases View
.
LOGICAL CORRUPTION PREVENTION BEST PRACTICES FAIL
Description Oracle Flashback Technology enables fast logical failure repair. Oracle recommends that you use automatic undo management with sufficient space to attain your desired undo retention guarantee, enable Oracle Flashback Database, and allocate sufficient space and I/O bandwidth in the fast recovery area.  Application monitoring is required for early detection.  Effective and fast repair comes from leveraging and rehearsing the most common application specific logical failures and using the different flashback features effectively (e.g flashback query, flashback version query, flashback transaction query, flashback transaction, flashback drop, flashback table, and flashback database).

Key HA Benefits:

With application monitoring and rehearsed repair actions using flashback technologies, application downtime can be reduced from hours or days to the time it takes to detect the logical inconsistency.

Fast repair for logical failures caused by malicious or accidental DML or DDL operations.

Effect fast point-in-time repair at the appropriate level of granularity: transaction, table, or database.
 
Questions:

Can your application or monitoring infrastructure detect logical inconsistencies?

Is your operations team prepared to use various flashback technologies to repair quickly and efficiently?

Are security practices enforced to prevent unauthorized privileges that can result in logical inconsistencies? Best Practices 
				
FAIL SQL Check Flashback on PRIMARY is not configured All Databases View
PASS SQL Parameter Check RECYCLEBIN on PRIMARY is set to the recommended value All Instances View
PASS SQL Parameter Check Database parameter UNDO_RETENTION on PRIMARY is not null All Instances View
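To clear the FAIL above, Flashback Database can be enabled once a fast recovery area exists; a sketch, assuming a hypothetical +FRA disk group and illustrative sizes:

```sql
-- Prerequisites: a fast recovery area and a retention target (minutes)
ALTER SYSTEM SET db_recovery_file_dest_size    = 50G    SCOPE=BOTH SID='*';
ALTER SYSTEM SET db_recovery_file_dest         = '+FRA' SCOPE=BOTH SID='*';  -- hypothetical disk group
ALTER SYSTEM SET db_flashback_retention_target = 1440   SCOPE=BOTH SID='*';

-- On 11.2.0.2 and later this can run with the database open;
-- earlier releases require the database to be in MOUNT state.
ALTER DATABASE FLASHBACK ON;

-- Confirm
SELECT flashback_on FROM v$database;
```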
DATABASE/CLUSTER/SITE FAILURE PREVENTION BEST PRACTICES FAIL
Description Oracle 11g and higher Active Data Guard is the real-time data protection and availability solution that eliminates single point of failure by maintaining one or more synchronized physical replicas of the production database. If an unplanned outage of any kind impacts the production database, applications and users can quickly failover to a synchronized standby, minimizing downtime and preventing data loss. An Active Data Guard standby can be used to offload read-only applications, ad-hoc queries, and backups from the primary database or be dual-purposed as a test system at the same time it provides disaster protection. An Active Data Guard standby can also be used to minimize downtime for planned maintenance when upgrading to new Oracle Database patch sets and releases and for select migrations.  
 
For zero data loss protection and fastest recovery time, deploy a local Data Guard standby database with Data Guard Fast-Start Failover and integrated client failover. For protection against outages impacting both the primary and the local standby or the entire data center, or a broad geography, deploy a second Data Guard standby database at a remote location.

Key HA Benefits:

With Oracle 11g release 2 and higher Active Data Guard and real time apply, data block corruptions can be repaired automatically and downtime can be reduced from hours and days of application impact to zero downtime with zero data loss.

With MAA best practices, Data Guard Fast-Start Failover (typically to a local standby) and integrated client failover, downtime from database, cluster and site failures can be reduced from hours or days to seconds or minutes.

With remote standby database (Disaster Recovery Site), you have protection from complete site failures.

In all cases, the Active Data Guard instances can be active and used for other activities.

Data Guard can reduce risks and downtime for planned maintenance activities by using Database rolling upgrade with transient logical standby, standby-first patch apply and database migrations.

Active Data Guard provides optimal data protection by using physical replication and comprehensive Oracle validation to maintain an exact byte-for-byte copy of the primary database that can be open read-only to offload reporting, ad-hoc queries and backups. For other advanced replication requirements where read-write access to a replica database is required while it is being synchronized with the primary database, see Oracle GoldenGate logical replication. Oracle GoldenGate can be used to support heterogeneous database platforms and database releases, to provide an effective read-write full or subset logical replica, and to reduce or eliminate downtime for application, database or system changes. The main trade-off of Oracle GoldenGate's flexible logical replication solution is the additional administration required of application developers and database administrators. Best Practices 
				
FAIL SQL Check Primary database is NOT protected with Data Guard (standby database) for real-time data protection and availability All Databases View
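Whether a standby is configured and receiving redo can be confirmed with a couple of dictionary queries; a sketch (run on the primary):

```sql
-- Role and protection mode of this database
SELECT database_role, protection_mode, protection_level FROM v$database;

-- Redo transport destinations that are configured and their state
SELECT dest_id, status, destination
FROM   v$archive_dest_status
WHERE  status <> 'INACTIVE';
```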
CLIENT FAILOVER OPERATIONAL BEST PRACTICES PASS
Description A highly available architecture requires the ability of the application tier to transparently fail over to a surviving instance or database advertising the required service. This ensures that applications are generally available or minimally impacted in the event of node failure, instance failure, or database failures.

Oracle listeners can be configured to throttle incoming connections to avoid logon storms after a database node or instance failure.  The connection rate limiter feature in the Oracle Net Listener enables a database administrator (DBA) to limit the number of new connections handled by the listener. Best Practices 
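A hedged listener.ora sketch of the connection rate limiter described above, with a hypothetical VIP host name; CONNECTION_RATE_<listener_name> caps new connections per second for endpoints flagged with RATE_LIMIT=yes:

```
# listener.ora fragment (host name is hypothetical)
CONNECTION_RATE_LISTENER=10        # max new connections per second

LISTENER=
  (ADDRESS_LIST=
    (ADDRESS=(PROTOCOL=tcp)(HOST=d4jtfmcvurd02-vip)(PORT=1521)(RATE_LIMIT=yes)))
```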
				
PASS OS Check Clusterware is running All Database Servers View
ORACLE RECOVERY MANAGER(RMAN) BEST PRACTICES FAIL
Description Oracle Recovery Manager (RMAN) is an Oracle Database utility to manage database backup and, more importantly, the recovery of the database. RMAN eliminates operational complexity while providing superior performance and availability of the database.
RMAN determines the most efficient method of executing the requested backup, restoration, or recovery operation and then submits these operations to the Oracle Database server for processing. RMAN and the server automatically identify modifications to the structure of the database and dynamically adjust the required operation to adapt to the changes.

RMAN has many unique HA capabilities that can be challenging or impossible for third party backup and restore utilities to deliver such as
				
  • In-depth Oracle data block checks during every backup or restore operation
  • Efficient block media recovery
  • Automatic recovery through complex database state changes such as resetlogs or past Data Guard role transitions
  • Fast incremental backup and restore operations
  • Integrated retention policies and backup file management with Oracle's fast recovery area
  • Online backups without the need to put the database or data file in hot backup mode.
RMAN backups are strategic to MAA so that a damaged database (the complete database or a subset such as a data file, tablespace, log file, or control file) can be recovered; for the fastest recovery, however, use Data Guard or GoldenGate. RMAN operations are also important for detecting corrupted blocks in data files that are not frequently accessed. Best Practices
WARNING SQL Check RMAN controlfile autobackup should be set to ON All Databases View
PASS OS Check control_file_record_keep_time is within recommended range [1-9] for hbdbuat All Database Servers View
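The controlfile autobackup WARNING above is a one-line fix in RMAN:

```
RMAN> CONFIGURE CONTROLFILE AUTOBACKUP ON;
RMAN> SHOW CONTROLFILE AUTOBACKUP;
```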
OPERATIONAL BEST PRACTICES INFO
Description Operational best practices are an essential prerequisite to high availability.   The integration of Oracle Maximum Availability Architecture (MAA) operational and configuration best practices with Oracle Exadata Database Machine (Exadata MAA) provides the most comprehensive high availability solution available for the Oracle Database. Best Practices 
				
DATABASE CONSOLIDATION BEST PRACTICES INFO
Description Database consolidation requires additional planning and management to ensure HA requirements are met. Best Practices 
				

Top

GRID and RDBMS patch recommendation Summary report

Summary Report for "d4jtfmcvurd02"

Clusterware patches
Total patches Applied on CRS Applied on RDBMS Applied on ASM Details
1 0 0 0 View

RDBMS homes patches
Total patches Applied on RDBMS Applied on ASM ORACLE_HOME Details
1 0 0 /oracle/app/oracle/product/11.2.0/db_1 View

Summary Report for "d4jtfmcvurd01"

Clusterware patches
Total patches Applied on CRS Applied on RDBMS Applied on ASM Details
1 0 0 0 View

RDBMS homes patches
Total patches Applied on RDBMS Applied on ASM ORACLE_HOME Details
1 0 0 /oracle/app/oracle/product/11.2.0/db_1 View

Top

GRID and RDBMS patch recommendation Detailed report

Detailed report for "d4jtfmcvurd02"




1 Recommended CRS patches for 112040 from /oracle/app/11.2.0/grid
Patch# CRS ASM RDBMS RDBMS_HOME Patch-Description
18139609 not-applied n/a not-applied merge GRID INFRASTRUCTURE SYSTEM PATCH 11.2.0.4.2
Top 

1 Recommended RDBMS patches for 112040 from /oracle/app/oracle/product/11.2.0/db_1
Patch# RDBMS ASM Type Patch-Description
18139609 not-applied n/a merge GRID INFRASTRUCTURE SYSTEM PATCH 11.2.0.4.2
Top
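Whether a recommended patch is actually applied to a home can be checked by filtering opatch lsinventory output; a minimal shell sketch (the sample line only illustrates the inventory format — on a real system you would pipe `$ORACLE_HOME/OPatch/opatch lsinventory` into the function):

```shell
# has_patch PATCH_ID: exit 0 if the given patch id appears in
# lsinventory-style output read from stdin.
has_patch() {
    grep -Eq "Patch[[:space:]]+$1"
}

# Illustration with a sample inventory line (format is illustrative):
sample='Patch  18139609     : applied on Mon Jun 02 10:00:00 HKT 2014'
if printf '%s\n' "$sample" | has_patch 18139609; then
    echo "patch 18139609 applied"
else
    echo "patch 18139609 NOT applied"
fi
```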

Detailed report for "d4jtfmcvurd01"




1 Recommended CRS patches for 112040 from /oracle/app/11.2.0/grid
Patch# CRS ASM RDBMS RDBMS_HOME Patch-Description
18139609 not-applied n/a not-applied merge GRID INFRASTRUCTURE SYSTEM PATCH 11.2.0.4.2
Top 

1 Recommended RDBMS patches for 112040 from /oracle/app/oracle/product/11.2.0/db_1
Patch# RDBMS ASM Type Patch-Description
18139609 not-applied n/a merge GRID INFRASTRUCTURE SYSTEM PATCH 11.2.0.4.2
Top

Findings Passed

Database Server

Status Type Message Status On Details
PASS OS Check There are no duplicate parameter entries in the database init.ora(spfile) file All Database Servers View
PASS OS Check kernel.panic_on_oops parameter is configured according to recommendation All Database Servers View
PASS OS Check Shell limit soft stack for DB is configured according to recommendation All Database Servers View
PASS SQL Check The database parameter session_cached_cursors is set to the recommended value All Databases View
PASS ASM Check ASM disk permissions are set as recommended All ASM Instances View
PASS ASM Check ASM disks have enough free space for rebalance All ASM Instances View
PASS SQL Check All optimizer related parameters are set to the default value (Parameter Set 2 of 3) All Databases View
PASS SQL Check All optimizer related parameters are set to the default value (Parameter Set 3 of 3) for hbdbuat All Databases View
PASS SQL Check No obsolete initialization parameters are set All Databases View
PASS SQL Check Database parameter OPTIMIZER_FEATURES_ENABLE is set to the current database version All Databases View
PASS SQL Check Database parameter DB_FILE_MULTIBLOCK_READ_COUNT is unset as recommended All Databases View
PASS SQL Check Database parameter NLS_SORT is set to BINARY All Databases View
PASS OS Check Online(hot) patches are not applied to CRS_HOME. All Database Servers View
PASS SQL Check Table containing SecureFiles LOB storage belongs to a tablespace with extent allocation type that is SYSTEM managed (AUTOALLOCATE) All Databases View
PASS ASM Check ASM memory_target is set to recommended value All ASM Instances View
PASS OS Check Online(hot) patches are not applied to ORACLE_HOME. All Database Servers View
PASS SQL Check All optimizer related parameters are set to the default value (Parameter Set 1 of 3) All Databases View
PASS OS Check Berkeley Database location points to correct GI_HOME All Database Servers View
PASS ASM Check All diskgroups from v$asm_diskgroups are registered in clusterware registry All ASM Instances View
PASS OS Check Package cvuqdisk-1.0.9-1-x86_64 meets or exceeds recommendation All Database Servers View
PASS OS Check OLR Integrity check Succeeded All Database Servers View
PASS OS Check pam_limits configured properly for shell limits All Database Servers View
PASS OS Check Package unixODBC-devel-2.2.14-11.el6-x86_64 meets or exceeds recommendation All Database Servers View
PASS OS Check OCR and Voting disks are stored in ASM All Database Servers View
PASS OS Check System clock is synchronized to hardware clock at system shutdown All Database Servers View
PASS OS Check TFA Collector is installed and running All Database Servers View
PASS OS Check No clusterware resource are in unknown state All Database Servers View
PASS ASM Check No corrupt ASM header blocks indicated in ASM alert log (ORA-15196 errors) All ASM Instances View
PASS ASM Check No disks found which are not part of any disk group All ASM Instances View
PASS OS Check loopback address is configured as recommended in /etc/hosts All Database Servers View
PASS OS Check Redo log write time is less than 500 milliseconds All Database Servers View
PASS OS Check Grid infrastructure network broadcast requirements are met All Database Servers View
PASS OS Check Package libgcc-4.4.4-13.el6-x86_64 meets or exceeds recommendation All Database Servers View
PASS SQL Check No read/write errors found for ASM disks All Databases View
PASS OS Check Package sysstat-9.0.4-11.el6-x86_64 meets or exceeds recommendation All Database Servers View
PASS OS Check Package libgcc-4.4.4-13.el6-i686 meets or exceeds recommendation All Database Servers View
PASS OS Check Package binutils-2.20.51.0.2-5.11.el6-x86_64 meets or exceeds recommendation All Database Servers View
PASS OS Check Package glibc-2.12-1.7.el6-x86_64 meets or exceeds recommendation All Database Servers View
PASS OS Check Package libstdc++-4.4.4-13.el6-x86_64 meets or exceeds recommendation All Database Servers View
PASS OS Check Package libstdc++-4.4.4-13.el6-i686 meets or exceeds recommendation All Database Servers View
PASS OS Check Package glibc-2.12-1.7.el6-i686 meets or exceeds recommendation All Database Servers View
PASS OS Check Package gcc-4.4.4-13.el6-x86_64 meets or exceeds recommendation All Database Servers View
PASS OS Check Package make-3.81-19.el6 meets or exceeds recommendation All Database Servers View
PASS OS Check Package libstdc++-devel-4.4.4-13.el6-i686 meets or exceeds recommendation All Database Servers View
PASS OS Check Package libaio-devel-0.3.107-10.el6-x86_64 meets or exceeds recommendation All Database Servers View
PASS OS Check Package libaio-0.3.107-10.el6-x86_64 meets or exceeds recommendation All Database Servers View
PASS OS Check Package unixODBC-devel-2.2.14-11.el6-i686 meets or exceeds recommendation All Database Servers View
PASS OS Check Package compat-libstdc++-33-3.2.3-69.el6-i686 meets or exceeds recommendation All Database Servers View
PASS OS Check Package glibc-devel-2.12-1.7.el6-x86_64 meets or exceeds recommendation All Database Servers View
PASS OS Check Package glibc-devel-2.12-1.7.el6-i686 meets or exceeds recommendation All Database Servers View
PASS OS Check Package compat-libcap1-1.10-1-x86_64 meets or exceeds recommendation All Database Servers View
PASS OS Check Package unixODBC-2.2.14-11.el6-i686 meets or exceeds recommendation All Database Servers View
PASS OS Check Package libaio-0.3.107-10.el6-i686 meets or exceeds recommendation All Database Servers View
PASS OS Check Package libstdc++-devel-4.4.4-13.el6-x86_64 meets or exceeds recommendation All Database Servers View
PASS OS Check Package gcc-c++-4.4.4-13.el6-x86_64 meets or exceeds recommendation All Database Servers View
PASS OS Check Package compat-libstdc++-33-3.2.3-69.el6-x86_64 meets or exceeds recommendation All Database Servers View
PASS OS Check Package libaio-devel-0.3.107-10.el6-i686 meets or exceeds recommendation All Database Servers View
PASS OS Check Remote listener is set to SCAN name All Database Servers View
PASS OS Check Value of remote_listener parameter is able to tnsping All Database Servers View
PASS OS Check No server-side TNSNAMES.ora aliases resolve using SCAN EZ syntax All Database Servers View
PASS OS Check ezconnect is configured in sqlnet.ora All Database Servers View
PASS OS Check Local listener init parameter is set to local node VIP All Database Servers View
PASS SQL Parameter Check Database Parameter parallel_execution_message_size is set to the recommended value All Instances View
PASS SQL Check All bigfile tablespaces have non-default maxbytes values set All Databases View
PASS OS Check umask for RDBMS owner is set to 0022 All Database Servers View
PASS ASM Check ASM Audit file destination file count <= 100,000 All ASM Instances View
PASS OS Check Shell limit hard stack for GI is configured according to recommendation All Database Servers View
PASS SQL Parameter Check asm_power_limit is set to recommended value of 1 All Instances View
PASS OS Check NTP is running with correct setting All Database Servers View
PASS OS Check CSS reboottime is set to the default value of 3 All Database Servers View
PASS OS Check CSS disktimeout is set to the default value of 200 All Database Servers View
PASS OS Check ohasd Log Ownership is Correct (root root) All Database Servers View
PASS OS Check ohasd/orarootagent_root Log Ownership is Correct (root root) All Database Servers View
PASS OS Check crsd/orarootagent_root Log Ownership is Correct (root root) All Database Servers View
PASS OS Check crsd Log Ownership is Correct (root root) All Database Servers View
PASS ASM Check CRS version is higher or equal to ASM version. All ASM Instances View
PASS OS Check All voting disks are online All Database Servers View
PASS OS Check CSS misscount is set to the default value of 30 All Database Servers View
PASS SQL Check All redo log files are of same size All Databases View
PASS OS Check SELinux is not being Enforced. All Database Servers View
PASS OS Check Public interface is configured and exists in OCR All Database Servers View
PASS OS Check ip_local_port_range is configured according to recommendation All Database Servers View
PASS OS Check kernel.shmmax parameter is configured according to recommendation All Database Servers View
PASS OS Check Kernel Parameter fs.file-max configuration meets or exceeds recommendation All Database Servers View
PASS OS Check Shell limit hard stack for DB is configured according to recommendation All Database Servers View
PASS OS Check Free space in /tmp directory meets or exceeds recommendation of minimum 1GB All Database Servers View
PASS OS Check Shell limit hard nproc for GI is configured according to recommendation All Database Servers View
PASS OS Check Shell limit soft nofile for DB is configured according to recommendation All Database Servers View
PASS OS Check Shell limit hard nofile for GI is configured according to recommendation All Database Servers View
PASS OS Check Shell limit hard nproc for DB is configured according to recommendation All Database Servers View
PASS OS Check Shell limit soft nofile for GI is configured according to recommendation All Database Servers View
PASS OS Check Shell limit soft nproc for GI is configured according to recommendation All Database Servers View
PASS OS Check Shell limit hard nofile for DB is configured according to recommendation All Database Servers View
PASS OS Check Shell limit soft nproc for DB is configured according to recommendation All Database Servers View
PASS OS Check Linux Swap Configuration meets or exceeds Recommendation All Database Servers View
PASS SQL Check All data and temporary files are autoextensible All Databases View
PASS OS Check audit_file_dest does not have any audit files older than 30 days All Database Servers View
PASS OS Check $ORACLE_HOME/bin/oradism ownership is root All Database Servers View
PASS OS Check $ORACLE_HOME/bin/oradism setuid bit is set All Database Servers View
PASS SQL Check Avg message sent queue time on ksxp is <= recommended All Databases View
PASS SQL Check Avg message sent queue time is <= recommended All Databases View
PASS SQL Check Avg message received queue time is <= recommended All Databases View
PASS SQL Check No Global Cache lost blocks detected All Databases View
PASS SQL Check Failover method (SELECT) and failover mode (BASIC) are configured properly All Databases View
PASS OS Check No indication of checkpoints not being completed d4jtfmcvurd02 View
PASS SQL Check Avg GC CURRENT Block Receive Time Within Acceptable Range All Databases View
PASS SQL Check Avg GC CR Block Receive Time Within Acceptable Range All Databases View
PASS SQL Check Tablespace allocation type is SYSTEM for all appropriate tablespaces for hbdbuat All Databases View
PASS OS Check background_dump_dest does not have any files older than 30 days All Database Servers View
PASS OS Check Alert log is not too big All Database Servers View
PASS OS Check No ORA-07445 errors found in alert log All Database Servers View
PASS OS Check No ORA-00600 errors found in alert log All Database Servers View
PASS OS Check user_dump_dest does not have trace files older than 30 days All Database Servers View
PASS OS Check core_dump_dest does not have too many older core dump files All Database Servers View
PASS OS Check Kernel Parameter SEMMNS OK All Database Servers View
PASS OS Check Kernel Parameter kernel.shmmni OK All Database Servers View
PASS OS Check Kernel Parameter SEMMSL OK All Database Servers View
PASS OS Check Kernel Parameter SEMMNI OK All Database Servers View
PASS OS Check Kernel Parameter SEMOPM OK All Database Servers View
PASS OS Check Kernel Parameter kernel.shmall OK All Database Servers View
PASS OS Check The number of async IO descriptors is sufficient (/proc/sys/fs/aio-max-nr) All Database Servers View
PASS OS Check $CRS_HOME/log/hostname/client directory does not have too many older log files All Database Servers View
PASS OS Check net.core.rmem_max is Configured Properly All Database Servers View
PASS SQL Parameter Check Instance is using spfile All Instances View
PASS OS Check Interconnect is configured on non-routable network addresses All Database Servers View
PASS OS Check None of the hostnames contains an underscore character All Database Servers View
PASS OS Check net.core.rmem_default Is Configured Properly All Database Servers View
PASS OS Check net.core.wmem_max Is Configured Properly All Database Servers View
PASS OS Check net.core.wmem_default Is Configured Properly All Database Servers View
PASS OS Check ORA_CRS_HOME environment variable is not set All Database Servers View
PASS SQL Check SYS.IDGEN1$ sequence cache size >= 1,000 All Databases View

Cluster Wide

Status Type Message Status On Details
PASS Cluster Wide Check Time zone matches for root user across cluster Cluster Wide View
PASS Cluster Wide Check Time zone matches for GI/CRS software owner across cluster Cluster Wide View
PASS Cluster Wide Check Operating system version matches across cluster. Cluster Wide View
PASS Cluster Wide Check OS Kernel version(uname -r) matches across cluster. Cluster Wide View
PASS Cluster Wide Check Clusterware active version matches across cluster. Cluster Wide View
PASS Cluster Wide Check RDBMS software version matches across cluster. Cluster Wide View
PASS Cluster Wide Check Timezone matches for current user across cluster. Cluster Wide View
PASS Cluster Wide Check Public network interface names are the same across cluster Cluster Wide View
PASS Cluster Wide Check GI/CRS software owner UID matches across cluster Cluster Wide View
PASS Cluster Wide Check RDBMS software owner UID matches across cluster Cluster Wide View

Top

Best Practices and Other Recommendations

Best Practices and Other Recommendations are generally items documented in various sources which could be overlooked. orachk assesses them and calls attention to any findings.


Top

Clusterware software version comparison

Recommendation
 
Benefit / Impact:

Stability, Availability, Standardization



Risk:

Potential cluster instability due to a clusterware version mismatch across cluster nodes.
If the clusterware versions do not match, an incompatibility could exist that makes
diagnosing problems difficult, or bugs fixed in the later clusterware version may still
be present on some nodes but not on others.



Action / Repair:

Unless a rolling upgrade of the clusterware is in progress, the clusterware versions
are expected to match across the cluster.  If they do not, it is assumed that a
mistake has been made and overlooked.  The purpose of this check is to bring the
situation to the customer's attention for action and remedy.
 
Needs attention on Cluster Wide
Passed on -

Status on Cluster Wide:
WARNING => Clusterware software version does not match across cluster.

 DATA FROM D4JTFMCVURD02 - CLUSTERWARE SOFTWARE VERSION COMPARISON 



Oracle Clusterware version on node [d4jtfmcvurd02] is [11.2.0.4.0]



DATA FROM D4JTFMCVURD01 - CLUSTERWARE SOFTWARE VERSION COMPARISON 



Oracle Clusterware version on node [d4jtfmcvurd01] is [11.2.0.4.0]



Top

Same NTP server across cluster

Success Factor MAKE SURE MACHINE CLOCKS ARE SYNCHRONIZED ON ALL NODES USING NTP
Recommendation
 Make sure machine clocks are synchronized on all nodes to the same NTP source.

NOTE: raccheck expects the NTP time source to be the same across the cluster based on the NTP server IP address.  In cases where the customer is using a fault tolerant configuration for NTP servers and the customer is certain that the configuration is correct and the same time source is being utilized then a finding for this check can be ignored.

Implement NTP (Network Time Protocol) on all nodes.
Prevents evictions and helps to facilitate problem diagnosis.

Also use the -x option (i.e. ntpd -x, xntpd -x) if available to prevent time from moving backwards in large jumps. This slewing breaks time adjustments into multiple small changes so that they do not impact CRS. Enterprise Linux: see /etc/sysconfig/ntpd; Solaris: set "slewalways yes" and "disable pll" in /etc/inet/ntp.conf. 
For example:
       # Drop root to id 'ntp:ntp' by default.
       OPTIONS="-x -u ntp:ntp -p /var/run/ntpd.pid"
       # Set to 'yes' to sync hw clock after successful ntpdate
       SYNC_HWCLOCK=no
       # Additional options for ntpdate
       NTPDATE_OPTIONS=""

Time servers operate in a pyramid structure where the top of the NTP stack is usually an external time source (such as a GPS clock).  Time then trickles down through the network switch stack to the connected servers.  
This NTP stack acts as the NTP server; ensuring that all RAC nodes act as clients of this server in slewing mode keeps each time change to a minute amount.

Changes in global time that reconcile atomic-clock accuracy with the Earth's rotational wobble are thus accommodated with minimal effect.  This is sometimes referred to as the "leap second" epoch (between UTC 12/31/2008 23:59:59 and 01/01/2009 00:00:00, one second was inserted).
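The slewing configuration can be verified on each node; a minimal sketch, assuming the Enterprise Linux layout where ntpd options live in /etc/sysconfig/ntpd (the sample file below is created only for illustration):

```shell
# check_ntp_slew FILE: report whether ntpd is configured with -x (slewing).
# On Enterprise Linux the real file to inspect is /etc/sysconfig/ntpd.
check_ntp_slew() {
    if grep -Eq '^OPTIONS=.*-x' "$1" 2>/dev/null; then
        echo "slewing enabled"
    else
        echo "slewing NOT enabled"
    fi
}

# Illustration against a sample file matching the snippet above:
tmp=$(mktemp)
printf 'OPTIONS="-x -u ntp:ntp -p /var/run/ntpd.pid"\n' > "$tmp"
check_ntp_slew "$tmp"    # prints "slewing enabled"
rm -f "$tmp"
```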

RFC "NTP Slewing for RAC" has been created successfully. CCB ID 462 
 
Links
Needs attention on Cluster Wide
Passed on -

Status on Cluster Wide:

Top

Root time zone

Success Factor MAKE SURE MACHINE CLOCKS ARE SYNCHRONIZED ON ALL NODES USING NTP
Recommendation
 Make sure machine clocks are synchronized on all nodes to the same NTP source.
Implement NTP (Network Time Protocol) on all nodes.
Prevents evictions and helps to facilitate problem diagnosis.

Also use the -x option (i.e. ntpd -x, xntpd -x) if available to prevent time from moving backwards in large jumps. This slewing breaks time adjustments into multiple small changes so that they do not impact CRS. Enterprise Linux: see /etc/sysconfig/ntpd; Solaris: set "slewalways yes" and "disable pll" in /etc/inet/ntp.conf. 
For example:
       # Drop root to id 'ntp:ntp' by default.
       OPTIONS="-x -u ntp:ntp -p /var/run/ntpd.pid"
       # Set to 'yes' to sync hw clock after successful ntpdate
       SYNC_HWCLOCK=no
       # Additional options for ntpdate
       NTPDATE_OPTIONS=""

Time servers operate in a pyramid structure where the top of the NTP stack is usually an external time source (such as a GPS clock).  Time then trickles down through the network switch stack to the connected servers.  
This NTP stack acts as the NTP server; ensuring that all RAC nodes act as clients of this server in slewing mode keeps each time change to a minute amount.

Changes in global time that reconcile atomic-clock accuracy with the Earth's rotational wobble are thus accommodated with minimal effect.  This is sometimes referred to as the "leap second" epoch (between UTC 12/31/2008 23:59:59 and 01/01/2009 00:00:00, one second was inserted).

More information can be found in Note 759143.1
"NTP leap second event causing Oracle Clusterware node reboot"
Linked to this Success Factor.

RFC "NTP Slewing for RAC" has been created successfully. CCB ID 462 
 
Needs attention on -
Passed on Cluster Wide

Status on Cluster Wide:
PASS => Time zone matches for root user across cluster

 DATA FROM D4JTFMCVURD02 FOR ROOT TIME ZONE CHECK 



HKT






Top

GI/CRS software owner time zone

Success Factor MAKE SURE MACHINE CLOCKS ARE SYNCHRONIZED ON ALL NODES USING NTP
Recommendation
 
Benefit / Impact:

Clusterware deployment requirement



Risk:

Potential cluster instability



Action / Repair:

Oracle Clusterware requires the same time zone setting on all cluster nodes. During installation, the installation process picks up the time zone setting of the Grid installation owner on the node where OUI runs, and uses that on all nodes as the default TZ setting for all processes managed by Oracle Clusterware. This default is used for databases, Oracle ASM, and any other managed processes.

If for whatever reason the time zones have gotten out of sync, the configuration should be corrected.  Consult Oracle Support about the proper method for correcting the time zones.
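A sketch of extracting the time zone Clusterware recorded on each node, assuming the 11.2 convention that the per-node setting lives in $GRID_HOME/crs/install/s_crsconfig_<hostname>_env.txt (path is release-dependent; the sample file below is created only for illustration):

```shell
# crs_tz FILE: print the TZ value recorded in a Clusterware env file.
crs_tz() {
    sed -n 's/^TZ=//p' "$1"
}

# Illustration with a sample env file:
tmp=$(mktemp)
printf 'TZ=Asia/Hong_Kong\nNLS_LANG=AMERICAN_AMERICA.AL32UTF8\n' > "$tmp"
crs_tz "$tmp"    # prints "Asia/Hong_Kong"
rm -f "$tmp"
```

Run the same extraction on every node and compare the values; they must be identical across the cluster.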
 
Needs attention on -
Passed on Cluster Wide

Status on Cluster Wide:
PASS => Time zone matches for GI/CRS software owner across cluster

 DATA FROM D4JTFMCVURD02 FOR CRS USER TIME ZONE CHECK 



HKT






Top

Operating System Version comparison

Recommendation
 Operating system versions should match on each node of the cluster
 
Needs attention on -
Passed on Cluster Wide

Status on Cluster Wide:
PASS => Operating system version matches across cluster.

 DATA FROM D4JTFMCVURD02 - OPERATING SYSTEM VERSION COMPARISON 



Red Hat Enterprise Linux Server release 6.4 (Santiago)



DATA FROM D4JTFMCVURD01 - OPERATING SYSTEM VERSION COMPARISON 



Red Hat Enterprise Linux Server release 6.4 (Santiago)



Top

Kernel version comparison across cluster

Recommendation
 
Benefit / Impact:

Stability, Availability, Standardization



Risk:

Potential cluster instability due to a kernel version mismatch across cluster nodes.
If the kernel versions do not match, an incompatibility could exist that makes
diagnosing problems difficult, or bugs fixed in the later kernel may still be
present on some nodes but not on others.



Action / Repair:

Unless a rolling upgrade of the cluster node kernels is in progress, the kernel
versions are expected to match across the cluster.  If they do not, it is assumed
that a mistake has been made and overlooked.  The purpose of this check is to
bring the situation to the customer's attention for action and remedy.
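The comparison the check performs can be sketched in shell: collect `uname -r` from every node (e.g. over ssh) and verify that all collected values are identical:

```shell
# check_match VALUE...: prints MATCH if all collected values are identical,
# MISMATCH otherwise. Values would be gathered with e.g.: ssh <node> uname -r
check_match() {
    first=$1; shift
    for v in "$@"; do
        if [ "$v" != "$first" ]; then
            echo "MISMATCH"
            return 1
        fi
    done
    echo "MATCH"
}

# Illustration with the kernel versions reported below for both nodes:
check_match "2.6.32-358.el6.x86_64" "2.6.32-358.el6.x86_64"    # prints "MATCH"
```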
 
Needs attention on -
Passed on Cluster Wide

Status on Cluster Wide:
PASS => OS Kernel version(uname -r) matches across cluster.

 DATA FROM D4JTFMCVURD02 - KERNEL VERSION COMPARISON ACROSS CLUSTER 



2.6.32-358.el6.x86_64



DATA FROM D4JTFMCVURD01 - KERNEL VERSION COMPARISON ACROSS CLUSTER 



2.6.32-358.el6.x86_64



Top

Clusterware version comparison

Recommendation
 
Benefit / Impact:

Stability, Availability, Standardization



Risk:

Potential cluster instability due to a clusterware version mismatch across cluster nodes.
If the clusterware versions do not match, an incompatibility could exist that makes
diagnosing problems difficult, or bugs fixed in the later clusterware version may still
be present on some nodes but not on others.



Action / Repair:

Unless a rolling upgrade of the clusterware is in progress, the clusterware versions
are expected to match across the cluster.  If they do not, it is assumed that a
mistake has been made and overlooked.  The purpose of this check is to bring the
situation to the customer's attention for action and remedy.
 
Needs attention on -
Passed on Cluster Wide

Status on Cluster Wide:
PASS => Clusterware active version matches across cluster.

 DATA FROM D4JTFMCVURD02 - CLUSTERWARE VERSION COMPARISON 



d4jtfmcvurd02.CRS_ACTIVE_VERSION = 11.2.0.4.0



DATA FROM D4JTFMCVURD01 - CLUSTERWARE VERSION COMPARISON 



d4jtfmcvurd01.CRS_ACTIVE_VERSION = 11.2.0.4.0




RDBMS software version comparison

Recommendation
 
Benefit / Impact:

Stability, Availability, Standardization



Risk:

Potential database or application instability due to version mismatch for database homes.
If the versions of related RDBMS homes on all the cluster nodes do not match, some
incompatibility could exist that makes diagnosing problems difficult, or bugs fixed in the
later RDBMS version could still be present on some nodes but not on others.



Action / Repair:

The RDBMS versions of related database homes are expected to match across the cluster.
If the versions of related RDBMS homes do not match, it is assumed that some mistake has
been made and overlooked.  The purpose of this check is to bring the situation to the
attention of the customer for action and remedy.
 
Needs attention on -
Passed on Cluster Wide

Status on Cluster Wide:
PASS => RDBMS software version matches across cluster.

 DATA FROM D4JTFMCVURD02 - RDBMS SOFTWARE VERSION COMPARISON 



d4jtfmcvurd02.hbdbuat.INSTANCE_VERSION = 112040



DATA FROM D4JTFMCVURD01 - RDBMS SOFTWARE VERSION COMPARISON 



d4jtfmcvurd01.hbdbuat.INSTANCE_VERSION = 112040




Timezone for current user

Success Factor MAKE SURE MACHINE CLOCKS ARE SYNCHRONIZED ON ALL NODES USING NTP
Recommendation
 
Benefit / Impact:

Clusterware deployment requirement



Risk:

Potential cluster instability



Action / Repair:

Oracle Clusterware requires the same time zone setting on all cluster nodes. During installation, the installation process picks up the time zone setting of the Grid installation owner on the node where OUI runs, and uses that on all nodes as the default TZ setting for all processes managed by Oracle Clusterware. This default is used for databases, Oracle ASM, and any other managed processes.

If for whatever reason the time zones have gotten out of sync, the configuration should be corrected.  Consult with Oracle Support about the proper method for correcting the time zones.
 
Needs attention on -
Passed on Cluster Wide

Status on Cluster Wide:
PASS => Timezone matches for current user across cluster.

 DATA FROM D4JTFMCVURD02 - TIMEZONE FOR CURRENT USER 



Wed Sep 17 09:06:02 HKT 2014



DATA FROM D4JTFMCVURD01 - TIMEZONE FOR CURRENT USER 



Wed Sep 17 09:18:50 HKT 2014




GI/CRS - Public interface name check (VIP)

Success Factor MAKE SURE NETWORK INTERFACES HAVE THE SAME NAME ON ALL NODES
Recommendation
 
Benefit / Impact:

Stability, Availability, Standardization



Risk:

Potential application instability due to incorrectly named network interfaces used for node VIP.



Action / Repair:

Oracle Clusterware requires that the network interface used for the public network
(the node VIP) be named the same on all nodes of the cluster.
 
Needs attention on -
Passed on Cluster Wide

Status on Cluster Wide:
PASS => Public network interface names are the same across cluster

 DATA FROM D4JTFMCVURD02 - GI/CRS - PUBLIC INTERFACE NAME CHECK (VIP) 



eth1  10.0.61.0  global  public
eth0  192.168.61.0  global  cluster_interconnect



DATA FROM D4JTFMCVURD01 - GI/CRS - PUBLIC INTERFACE NAME CHECK (VIP) 



eth1  10.0.61.0  global  public
eth0  192.168.61.0  global  cluster_interconnect




GI/CRS software owner across cluster

Success Factor ENSURE EACH ORACLE/ASM USER HAS A UNIQUE UID ACROSS THE CLUSTER
Recommendation
 
Benefit / Impact:

Availability, stability




Risk:

Potential OCR logical corruptions and permission problems accessing OCR keys when multiple O/S users share the same UID; these problems are difficult to diagnose.

Action / Repair:

For GI/CRS, ASM and RDBMS software owners ensure one unique user ID with a single name is in use across the cluster.
 
Needs attention on -
Passed on Cluster Wide

Status on Cluster Wide:
PASS => GI/CRS software owner UID matches across cluster

 DATA FROM D4JTFMCVURD02 - GI/CRS SOFTWARE OWNER ACROSS CLUSTER 



grid:x:1001:1000:grid Infrastructure Owner:/home/grid:/bin/bash



DATA FROM D4JTFMCVURD01 - GI/CRS SOFTWARE OWNER ACROSS CLUSTER 



grid:x:1001:1000:grid Infrastructure Owner:/home/grid:/bin/bash




RDBMS software owner UID across cluster

Success Factor ENSURE EACH ORACLE/ASM USER HAS A UNIQUE UID ACROSS THE CLUSTER
Recommendation
 
Benefit / Impact:

Availability, stability



Risk:

Potential OCR logical corruptions and permission problems accessing OCR keys when multiple O/S users share the same UID; these problems are difficult to diagnose.



Action / Repair:

For GI/CRS, ASM and RDBMS software owners ensure one unique user ID with a single name is in use across the cluster.
 
Needs attention on -
Passed on Cluster Wide

Status on Cluster Wide:
PASS => RDBMS software owner UID matches across cluster

 DATA FROM D4JTFMCVURD02 - RDBMS SOFTWARE OWNER UID ACROSS CLUSTER 



uid=1000(oracle) gid=1000(oinstall) groups=1000(oinstall),1003(asmdba),1001(dba),1005(oper)



DATA FROM D4JTFMCVURD01 - RDBMS SOFTWARE OWNER UID ACROSS CLUSTER 



uid=1000(oracle) gid=1000(oinstall) groups=1000(oinstall),1003(asmdba),1001(dba),1005(oper)




Redo log Checkpoint not complete

Recommendation
 If checkpoints are not being completed, the database may hang or experience performance degradation.  In this case the alert.log will contain "checkpoint not complete" messages, and it is recommended that the online redo logs be recreated with a larger size.
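The underlying scan is just a message count in the alert log; a minimal sketch (the log path in the example is taken from this report):

```shell
#!/bin/sh
# ckpt_count: count "Checkpoint not complete" messages in the given alert log.
ckpt_count() {
    # grep -c prints 0 when there are no matches; a missing file is silenced
    grep -c "Checkpoint not complete" "$1" 2>/dev/null || true
}

# Example:
# ckpt_count /oracle/app/oracle/diag/rdbms/hbdbuat/hbdbuat1/trace/alert_hbdbuat1.log
```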
 
Links
Needs attention on d4jtfmcvurd01
Passed on d4jtfmcvurd02

Status on d4jtfmcvurd02:
PASS => No indication of checkpoints not being completed

 DATA FROM D4JTFMCVURD02 - HBDBUAT DATABASE - REDO LOG CHECKPOINT NOT COMPLETE 



"Checkpoint not complete" messages in /oracle/app/oracle/diag/rdbms/hbdbuat/hbdbuat2/trace/alert_hbdbuat2.log found 0 times

Status on d4jtfmcvurd01:
INFO => At some times checkpoints are not being completed

 DATA FROM D4JTFMCVURD01 - HBDBUAT DATABASE - REDO LOG CHECKPOINT NOT COMPLETE 



"Checkpoint not complete" messages in /oracle/app/oracle/diag/rdbms/hbdbuat/hbdbuat1/trace/alert_hbdbuat1.log found 82 times

Verify no multiple parameter entries in database init.ora(spfile)

Recommendation
 
 
Needs attention on -
Passed on d4jtfmcvurd02, d4jtfmcvurd01

Status on d4jtfmcvurd02:
PASS => There are no duplicate parameter entries in the database init.ora(spfile) file

 DATA FROM D4JTFMCVURD02 - HBDBUAT DATABASE - VERIFY NO MULTIPLE PARAMETER ENTRIES IN DATABASE INIT.ORA(SPFILE) 



hbdbuat2.__db_cache_size=1308622848
hbdbuat1.__db_cache_size=1308622848
hbdbuat1.__java_pool_size=16777216
hbdbuat2.__java_pool_size=16777216
hbdbuat1.__large_pool_size=33554432
hbdbuat2.__large_pool_size=33554432
hbdbuat1.__pga_aggregate_target=1325400064
hbdbuat2.__pga_aggregate_target=1325400064
hbdbuat1.__sga_target=1979711488
hbdbuat2.__sga_target=1979711488
hbdbuat1.__shared_io_pool_size=0
hbdbuat2.__shared_io_pool_size=0
hbdbuat2.__shared_pool_size=587202560
hbdbuat1.__shared_pool_size=587202560
hbdbuat1.__streams_pool_size=0
hbdbuat2.__streams_pool_size=0  (output truncated)

Status on d4jtfmcvurd01:
PASS => There are no duplicate parameter entries in the database init.ora(spfile) file

 DATA FROM D4JTFMCVURD01 - HBDBUAT DATABASE - VERIFY NO MULTIPLE PARAMETER ENTRIES IN DATABASE INIT.ORA(SPFILE) 



hbdbuat2.__db_cache_size=1308622848
hbdbuat1.__db_cache_size=1308622848
hbdbuat1.__java_pool_size=16777216
hbdbuat2.__java_pool_size=16777216
hbdbuat1.__large_pool_size=33554432
hbdbuat2.__large_pool_size=33554432
hbdbuat1.__pga_aggregate_target=1325400064
hbdbuat2.__pga_aggregate_target=1325400064
hbdbuat1.__sga_target=1979711488
hbdbuat2.__sga_target=1979711488
hbdbuat1.__shared_io_pool_size=0
hbdbuat2.__shared_io_pool_size=0
hbdbuat2.__shared_pool_size=587202560
hbdbuat1.__shared_pool_size=587202560
hbdbuat1.__streams_pool_size=0
hbdbuat2.__streams_pool_size=0  (output truncated)
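Both instances show the same parameter dump with no duplicates. As an illustrative sketch (not orachk's actual logic), duplicate entries in a pfile-style name=value dump can be spotted like this:

```shell
#!/bin/sh
# dup_params: read name=value lines on stdin, print names that occur more than once.
dup_params() {
    cut -d= -f1 | sort | uniq -d
}

# Example (hypothetical dump file):
# dup_params < /tmp/pfile_dump.ora
```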

UHC_DB_Foundation_Install:kernel panic_on_oops

Recommendation
 
Benefit / Impact:

Setting this kernel parameter to 1 causes the kernel to panic if a kernel oops or bug is encountered.


Risk:

A kernel oops indicates a potential problem with the server. If the parameter is set to 0, the kernel tries to keep running after an oops, with potentially serious consequences; if set to 1, the kernel enters the panic state and halts.

Action / Repair:

As root user, execute the following command from the shell prompt

echo 1 > /proc/sys/kernel/panic_on_oops

To set this kernel parameter permanently, edit /etc/sysctl.conf and make the following entry

kernel.panic_on_oops = 1
 
Needs attention on -
Passed on d4jtfmcvurd02, d4jtfmcvurd01

Status on d4jtfmcvurd02:
PASS => kernel.panic_on_oops parameter is configured according to recommendation

 DATA FROM D4JTFMCVURD02 - UHC_DB_FOUNDATION_INSTALL:KERNEL PANIC_ON_OOPS 



1

Status on d4jtfmcvurd01:
PASS => kernel.panic_on_oops parameter is configured according to recommendation

 DATA FROM D4JTFMCVURD01 - UHC_DB_FOUNDATION_INSTALL:KERNEL PANIC_ON_OOPS 



1

UHC_DB_Foundation_Install:stack size soft user limit

Recommendation
 
Benefit / Impact:

The documented value is set in the /etc/security/limits.conf file, as described in the Oracle 12c Database Software Installation Guide, section 10, "Configuring Kernel Parameters and Resource Limits".

If the /etc/security/limits.conf file is not configured as described in the documentation then log in to the system as the database software owner (eg., oracle) and check the soft stack configuration as described below.

Risk:

The soft stack shell limit for the Oracle DB software install owner as defined in /etc/security/limits.conf should be >= 10240.  As long as the soft stack limit is 10240 or above, the configuration is acceptable.



Action / Repair:

To change the DB software install owner soft stack shell limit, execute the following command at the shell prompt

$ ulimit -Ss 10240

To set this limit permanently for the DB software install owner (ex, oracle), edit /etc/security/limits.conf as root user and add the following entry

oracle soft stack 10240
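The verification can be sketched as a small helper; this is an illustration rather than orachk's code, and it also allows for a soft limit of "unlimited":

```shell
#!/bin/sh
# soft_stack_ok: check that a soft stack limit (in KB, or "unlimited")
# meets the 10240 KB recommendation.
soft_stack_ok() {
    if [ "$1" = "unlimited" ] || [ "$1" -ge 10240 ]; then
        echo "PASS"
    else
        echo "WARNING: soft stack $1 < 10240"
    fi
}

# Example, run as the DB software owner:
# soft_stack_ok "$(ulimit -Ss)"
```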
 
Needs attention on -
Passed on d4jtfmcvurd02, d4jtfmcvurd01

Status on d4jtfmcvurd02:
PASS => Shell limit soft stack for DB is configured according to recommendation

 DATA FROM D4JTFMCVURD02 - UHC_DB_FOUNDATION_INSTALL:STACK SIZE SOFT USER LIMIT 



oracle stack size soft user limit =  10240

Status on d4jtfmcvurd01:
PASS => Shell limit soft stack for DB is configured according to recommendation

 DATA FROM D4JTFMCVURD01 - UHC_DB_FOUNDATION_INSTALL:STACK SIZE SOFT USER LIMIT 



oracle stack size soft user limit =  10240

UHC_DB_Foundation_Performance:Monitoring SGA resize operations

Recommendation
 
Benefit / Impact:

When the SGA is properly sized, with appropriate minimum values set for the shared pool and buffer cache, the number of resize operations should be reduced.

Risk:

An undersized SGA can lead to frequent resizing operations of the SGA which could potentially cause slow performance or database hang situations.

Action / Repair:

Perform the following 3 actions:

1.  Increase the SGA_TARGET or MEMORY_TARGET initialization parameters. If frequent resizing is occurring, V$SGA_TARGET_ADVICE can be used to ensure that SGA_TARGET is optimally set.  Refer to "Tuning SGA_TARGET Using V$SGA_TARGET_ADVICE (Doc ID 1323708.1)" for more assistance.

2.  Set the minimum value for the shared pool or buffer cache to the high-water-mark size of past resize operations, assuming that is the peak level.

Ex:    
SQL> SELECT MAX(final_size) FROM V$SGA_RESIZE_OPS WHERE component='shared pool';
SQL> ALTER SYSTEM SET SHARED_POOL_SIZE= scope=both;

3.  Increase the time interval between resizes. To do this, the following parameter can be set dynamically:
  
SQL> alter system set "_memory_broker_stat_interval"=999;
  
This increases the time between resizes to at least 999 seconds, thereby reducing the number of resize operations. The parameter "_memory_broker_stat_interval" is expressed in seconds with a default setting of 30 seconds. If you are running RAC, set it on all nodes.
 
Links
Needs attention on hbdbuat
Passed on -

Status on hbdbuat:
INFO => Consider investigating the frequency of SGA resize operations and take corrective action

 COMPONENT                 OPER_TYPE       COUNT(1)                                        
------------------------- ------------- ----------                                        
DEFAULT buffer cache      SHRINK                 2                                        
shared pool               GROW                   2                                        

UHC_DB_Foundation_Performance:session_cached_cursors parameter

Recommendation
 
Benefit / Impact:

When the session_cached_cursors database parameter is set to a high value then the number of soft parses will be reduced.  This will reduce the contention in the library cache for sharing the cursors during repeated executions of the SQL statements.

Risk:

The number of soft parses could increase if the value of the session_cached_cursors parameter is not increased. This may lead to slow performance due to excessive parse calls.

Action / Repair:

The default value for session_cached_cursors is 50.
The parameter value can be modified based on the USAGE from the report.
If the current value is set to less than or equal to 300 and the USAGE is greater than or equal to 100% then the value can be doubled.

Set the parameter value in the parameter file (init.ora or spfile) and restart the database.

SQL> alter system set session_cached_cursors= scope=spfile sid='*';

 
Links
Needs attention on -
Passed on hbdbuat

Status on hbdbuat:
PASS => The database parameter session_cached_cursors is set to the recommended value

 SESSION_CACHED_CURSORS current value =    50                usage =   54%       

UHC_DB_Foundation_Performance:Monitoring changes to schema objects

Recommendation
 
Benefit / Impact:

Changes to schema objects could lead to bad execution plans for SQL statements involving the changed objects. Investigating the changes may help in identifying the root cause behind a sudden change in the execution plan for the SQL statements or slow application performance.

Risk:

Bad execution plans caused by changes to schema objects might lead to application performance issues.

Action / Repair:

When performance issues are noticed on SQL statements, check with the DBA or Application Developers about the changes to the schema objects and take corrective action to revert it back if possible.
 
Links
Needs attention on hbdbuat
Passed on -

Status on hbdbuat:
INFO => Consider investigating changes to the schema objects such as DDLs or new object creation

 OWNER           OBJECT_NAME               OBJECT_TYP CREATED   LAST_DDL_ TIMESTAMP           STATUS                                                                                                     
--------------- ------------------------- ---------- --------- --------- ------------------- -------                                                                                                    
EXFSYS          RLM$SCHDNEGACTION         JOB        11-SEP-14 17-SEP-14 2014-09-17:08:47:31 VALID                                                                                                      
EXFSYS          RLM$EVTCLEANUP            JOB        11-SEP-14 17-SEP-14 2014-09-17:08:15:12 VALID                                                                                                      
ORACLE_OCM      MGMT_CONFIG_JOB           JOB        11-SEP-14 17-SEP-14 2014-09-17:01:01:02 VALID                                                                                                      
PUBLIC          EXF$XPATH_TAG             SYNONYM    11-SEP-14 11-SEP-14 2014-09-11:12:12:25 VALID                                                                                                      
PUBLIC          EXF$ATTRIBUTE_LIST        SYNONYM    11-SEP-14 11-SEP-14 2014-09-11:12:12:25 VALID                                                                                                      
PUBLIC          EXF$ATTRIBUTE             SYNONYM    11-SEP-14 11-SEP-14 2014-09-11:12:12:25 VALID                                                                                                      
PUBLIC          DBMS_EXPFIL               SYNONYM    11-SEP-14 11-SEP-14 2014-09-11:12:12:26 VALID                                                                                                      
PUBLIC          EVALUATE                  SYNONYM    11-SEP-14 11-SEP-14 2014-09-11:12:37:12 VALID                                                                                                      
PUBLIC          EXF$XPATH_TAGS            SYNONYM    11-SEP-14 11-SEP-14 2014-09-11:12:12:25 VALID                                                                                                      
PUBLIC          CTX_USER_INDEX_SUB_LEXERS SYNONYM    11-SEP-14 11-SEP-14 2014-09-11:12:37:12 VALID                                                                                                      
PUBLIC          ABSPATH                   SYNONYM    11-SEP-14 11-SEP-14 2014-09-11:12:13:25 VALID                                                                                                      
PUBLIC          DEPTH                     SYNONYM    11-SEP-14 11-SEP-14 2014-09-11:12:13:25 VALID                                                                                                      
PUBLIC          PATH                      SYNONYM    11-SEP-14 11-SEP-14 2014-09-11:12:13:25 VALID                                                                                                      
PUBLIC          ALL_PATH                  SYNONYM    11-SEP-14 11-SEP-14 2014-09-11:12:13:30 VALID                                                                                                      
PUBLIC          CTX_USER_INDEX_SUB_LEXER_ SYNONYM    11-SEP-14 11-SEP-14 2014-09-11:12:37:12 VALID                                                                                                      
                VALS  (output truncated)

Verify ASM disk permissions

Recommendation
 If candidate disk devices are found having incorrect permissions, adding these disks to a diskgroup can lead to unexpected results, including database instance crashes.  The proper permission for ASM devices is 0660 (brw-rw-----).  Refer to the Oracle Grid Infrastructure Installation Guide for additional details.
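A hedged sketch of the permission comparison (stat -c %A is GNU coreutils; the device names follow the udev naming shown in the data sections below):

```shell
#!/bin/sh
# perm_ok: compare an "ls -l"-style mode string against the expected ASM device mode.
perm_ok() {
    # $1: mode string such as "brw-rw----", $2: device name (for the message)
    if [ "$1" = "brw-rw----" ]; then
        echo "PASS $2"
    else
        echo "WARNING: $2 has mode $1, expected brw-rw----"
    fi
}

# Example (device names from this report):
# for d in /dev/asm-disk*; do perm_ok "$(stat -c %A "$d")" "$d"; done
```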
 
Needs attention on -
Passed on d4jtfmcvurd02, d4jtfmcvurd01

Status on d4jtfmcvurd02:
PASS => ASM disk permissions are set as recommended

 DATA FROM D4JTFMCVURD02 - VERIFY ASM DISK PERMISSIONS 




brw-rw---- 1 grid asmadmin 8, 48 Sep 17 09:09 /dev/asm-diskd
brw-rw---- 1 grid asmadmin 8, 32 Sep 17 09:09 /dev/asm-diskc

Status on d4jtfmcvurd01:
PASS => ASM disk permissions are set as recommended

 DATA FROM D4JTFMCVURD01 - VERIFY ASM DISK PERMISSIONS 




brw-rw---- 1 grid asmadmin 8, 48 Sep 17 09:24 /dev/asm-diskd
brw-rw---- 1 grid asmadmin 8, 32 Sep 17 09:24 /dev/asm-diskc

Verify free disk space for rebalance

Recommendation
 
Benefit / Impact:

If any disk in a diskgroup, irrespective of redundancy, has less than 50 MB of free space, the rebalance operation fails to complete with ORA-15041.

Risk:

When the rebalance does not complete successfully, the allocation unit distribution will not be symmetrical, which can result in future allocation failures and causes imbalance within the diskgroup.

Action / Repair:

Release space by removing some redundant files from the diskgroup.
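A hedged sketch of the 50 MB screen, assuming the pipe-delimited name|path|free_mb format that appears in the data sections of this check:

```shell
#!/bin/sh
# low_free: read "disk|path|free_mb" lines on stdin, flag disks under 50 MB free.
low_free() {
    awk -F'|' '$3 + 0 < 50 { print "WARNING: " $1 " (" $2 ") has only " $3 " MB free" }'
}

# Example with crafted input; the disks in this report (9844 MB and 201723 MB free) pass:
# printf 'OCR_0000|/dev/asm-diskc|9844\nSMALL_0000|/dev/asm-diske|12\n' | low_free
```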
 
Links
Needs attention on -
Passed on d4jtfmcvurd02, d4jtfmcvurd01

Status on d4jtfmcvurd02:
PASS => ASM disks have enough free space for rebalance

 DATA FROM D4JTFMCVURD02 - VERIFY FREE DISK SPACE FOR REBALANCE 




OCR_0000|/dev/asm-diskc|9844
DATA01_0000|/dev/asm-diskd|201723

Status on d4jtfmcvurd01:
PASS => ASM disks have enough free space for rebalance

 DATA FROM D4JTFMCVURD01 - VERIFY FREE DISK SPACE FOR REBALANCE 




OCR_0000|/dev/asm-diskc|9844
DATA01_0000|/dev/asm-diskd|201723

UHC_DB_Foundation_Performance:Optimizer Parameters with Non-Default Settings (Check 2)

Recommendation
 
Benefit / Impact:

One or more optimizer related parameters are set to a non-default value.



Risk:

The settings for these parameters should be recommended by the application vendor or by Oracle Support. If any optimizer related parameters are set, the application should be certified and tested to work well with the parameter settings. Ensure that the performance is as expected.

Non-default settings of optimizer related parameters may result in performance issues. The settings of such optimizer parameters can have an impact on the entire application performance.

Please Note:  If the database instance is running Siebel/Peoplesoft Applications then you may not need to change any settings, please check your product documentation.



Action / Repair:

To modify the parameter value, use the alter session or alter system commands. Refer to the Initialization Parameter chapter of the Oracle Database Reference Documentation for more details on setting these parameters.
 
Needs attention on -
Passed on hbdbuat

Status on hbdbuat:
PASS => All optimizer related parameters are set to the default value (Parameter Set 2 of 3)

 Query returned no rows which is expected when the SQL check passes.


UHC_DB_Foundation_Performance:Optimizer Parameters with Non-Default Settings (Check 3)

Recommendation
 
Benefit / Impact:

One or more optimizer related parameters are set to a non-default value.



Risk:

The settings for these parameters should be recommended by the application vendor or by Oracle Support. If any optimizer related parameters are set, the application should be certified and tested to work well with the parameter settings. Ensure that the performance is as expected.

Non-default settings of optimizer related parameters may result in performance issues. The settings of such optimizer parameters can have an impact on the entire application performance.

Please Note:  If the database instance is running Siebel/Peoplesoft Applications then you may not need to change any settings, please check your product documentation.



Action / Repair:

To modify the parameter value, use the alter session or alter system commands. Refer to the Initialization Parameter chapter of the Oracle Database Reference Documentation for more details on setting these parameters.
 
Needs attention on -
Passed on hbdbuat

Status on hbdbuat:
PASS => All optimizer related parameters are set to the default value (Parameter Set 3 of 3) for hbdbuat

 Query returned no rows which is expected when the SQL check passes.


UHC_DB_Foundation_Admin:Obsolete Database Parameters for 11.2

Recommendation
 
Benefit / Impact:

An attempt to start a database using one or more obsolete initialization parameters will succeed, but a warning is returned and recorded in the alert log.



Risk:

In most cases setting an obsolete initialization parameter will have minimal to no impact on the database or application. However, there are some cases where this can lead to unexpected behavior or performance issues. Obsolete parameters could also affect the upgrade.



Action / Repair:

To modify the parameter value, use the alter session or alter system commands. Refer to the Initialization Parameter chapter of the Oracle Database Reference Documentation for more details on setting these parameters.

Obsolete Parameters:

dblink_encrypt_login
hash_join_enabled
log_parallelism
max_rollback_segments
mts_circuits
mts_dispatchers
mts_listener_address
mts_max_dispatchers
mts_max_servers
mts_multiple_listeners
mts_servers
mts_service
mts_sessions
optimizer_max_permutations
oracle_trace_collection_name
oracle_trace_collection_path
oracle_trace_collection_size
oracle_trace_enable
oracle_trace_facility_name
oracle_trace_facility_path
partition_view_enabled
plsql_native_c_compiler
plsql_native_linker
plsql_native_make_file_name
plsql_native_make_utility
row_locking
serializable
transaction_auditing
undo_suppress_errors
enqueue_resources
ddl_wait_for_locks
logmnr_max_persistent_sessions
plsql_compiler_flags
drs_start
gc_files_to_locks
max_commit_propagation_delay
plsql_native_library_dir
plsql_native_library_subdir_count
sql_version
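Outside the database, a pfile export can be screened against this list with a fixed-string, case-insensitive grep. A sketch, where the list file is a placeholder you create from the names above (note that plain substring matching can over-match short names such as serializable):

```shell
#!/bin/sh
# obsolete_in_pfile: print lines of a pfile that reference any obsolete parameter.
# $1: pfile to check; $2: file holding the obsolete-parameter list, one name per line.
obsolete_in_pfile() {
    grep -iF -f "$2" "$1" || echo "no obsolete parameters found"
}

# Example (hypothetical file names):
# obsolete_in_pfile /tmp/pfile_dump.ora /tmp/obsolete_params.txt
```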
 
Links
Needs attention on -
Passed on hbdbuat

Status on hbdbuat:
PASS => No obsolete initialization parameters are set

 Query returned no rows which is expected when the SQL check passes.


root shell limits hard nproc

Recommendation
 The hard nproc shell limit for root should be >= 137524.
 
Needs attention on d4jtfmcvurd02
Passed on -

Status on d4jtfmcvurd02:
WARNING => Shell limit hard nproc for root is NOT configured according to recommendation

 DATA FROM D4JTFMCVURD02 FOR ROOT USER LIMITS 



Soft limits(ulimit -Sa) 

core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 62834
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1024
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 10240
cpu time               (seconds, -t) unlimited
max user processes              (-u) 62834  (output truncated)

UHC_DB_Foundation_Performance:optimizer_features_enable parameter

Recommendation
 
Benefit / Impact:

New optimizer features and fixes are enabled when the OPTIMIZER_FEATURES_ENABLE parameter is set to the current database version.  Setting the OPTIMIZER_FEATURES_ENABLE parameter to a value other than the current database version should only be used as a short-term measure at the suggestion of Oracle Support.



Risk:

By reducing the OPTIMIZER_FEATURES_ENABLE level, new optimizer features and fixes are disabled.  This can potentially have a serious negative impact on performance because it eliminates the possibility of choosing better plans which are only available with the new features and fixes.



Action / Repair:

If Oracle Support has not recommended that you modify this parameter, then you should set the OPTIMIZER_FEATURES_ENABLE parameter to the current database version to ensure that new optimizer features are enabled.  This parameter can be set with the following command:

SQL> alter system set optimizer_features_enable='X.X.X.X' scope=spfile;

Refer to the following MOS document for more details.
 
Links
Needs attention on -
Passed on hbdbuat

Status on hbdbuat:
PASS => Database parameter OPTIMIZER_FEATURES_ENABLE is set to the current database version

 Database Version = 11.2.0.4.0                                                   
optimizer_features_enable parameter = 11.2.0.4                                  

UHC_DB_Foundation_Performance:db_file_multiblock_read_count parameter

Recommendation
 
Benefit / Impact:

When Oracle calculates the value of the DB_FILE_MULTIBLOCK_READ_COUNT parameter, the I/O used for table scans will be optimized.  Oracle calculates the value of DB_FILE_MULTIBLOCK_READ_COUNT by using: 
* operating system optimal I/O size
* the size of the buffer cache
* the maximum number of sessions



Risk:

The optimizer will favor large I/Os if you explicitly set the DB_FILE_MULTIBLOCK_READ_COUNT parameter to a large value.  This can affect the performance of your queries.



Action / Repair:

Unset the DB_FILE_MULTIBLOCK_READ_COUNT parameter in the spfile or init.ora file.  This parameter can also be unset with the following command, but the database needs to be restarted for the change to take effect:

SQL> alter system reset db_file_multiblock_read_count scope=spfile sid='*';

Refer to the following MOS document for more details.
 
Links
Needs attention on -
Passed on hbdbuat

Status on hbdbuat:
PASS => Database parameter DB_FILE_MULTIBLOCK_READ_COUNT is unset as recommended

 db_file_multiblock_read_count parameter value = 128                             
db_file_multiblock_read_count isdefault value = TRUE                            
                                                                                

UHC_DB_Foundation_Performance:nls_sort parameter

Recommendation
 
Benefit / Impact:

Setting NLS_SORT to BINARY allows your SQL to use standard indexes, making queries more efficient.



Risk:

Setting NLS_SORT to a value other than BINARY will make regular indexes unusable.  This will negatively affect the performance of your SQL unless you have specific indexes to support the new sorting.



Action / Repair:

Set NLS_SORT to BINARY.   Change the value in the initialization parameter file and then restart the instance. If you do need to have a different sorting in place, then make sure you have the necessary indexes created using a different sorting (NLSSORT function).

Refer to the following MOS document for more details.
 
Links
Needs attention on -
Passed on hbdbuat

Status on hbdbuat:
PASS => Database parameter NLS_SORT is set to BINARY

 nls_sort parameter value from nls_database_parameters =  BINARY                 
nls_sort parameter value from nls_instance_parameters =                         
nls_sort parameter value from nls_session_parameters =  BINARY                  
nls_sort parameter value from v$parameter =                                     

Verify online(hot) patches are not applied on CRS_HOME

Recommendation
 (1) Online patches are recommended when downtime cannot be scheduled and the patch needs to be applied urgently.
(2) Online patches consume additional memory, and if kept permanently the memory consumption increases as the number of processes in the system increases.
(3) It is strongly recommended to roll back all online patches and replace them with regular (offline) patches at the next instance shutdown or the earliest maintenance window.
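The underlying check is just a listing of the hpatch directory under the home; as the data output below shows, a missing directory means no online patches. A sketch, assuming that layout:

```shell
#!/bin/sh
# online_patches: list the hpatch directory of an Oracle/Grid home;
# an empty or missing directory means no online (hot) patches are installed.
online_patches() {
    if [ -d "$1/hpatch" ] && [ -n "$(ls -A "$1/hpatch" 2>/dev/null)" ]; then
        ls "$1/hpatch"
    else
        echo "no online patches installed"
    fi
}

# Example (CRS home from this report):
# online_patches /oracle/app/11.2.0/grid
```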
 
Links
Needs attention on -
Passed on d4jtfmcvurd02, d4jtfmcvurd01

Status on d4jtfmcvurd02:
PASS => Online(hot) patches are not applied to CRS_HOME.

 DATA FROM D4JTFMCVURD02 - VERIFY ONLINE(HOT) PATCHES ARE NOT APPLIED ON CRS_HOME 



ls: cannot access /oracle/app/11.2.0/grid/hpatch/: No such file or directory

Status on d4jtfmcvurd01:
PASS => Online(hot) patches are not applied to CRS_HOME.

 DATA FROM D4JTFMCVURD01 - VERIFY ONLINE(HOT) PATCHES ARE NOT APPLIED ON CRS_HOME 



ls: cannot access /oracle/app/11.2.0/grid/hpatch/: No such file or directory

Tablespace extent management Allocation Type for SecureFiles LOB storage

Recommendation
 For tables with SecureFiles LOB storage, the associated tablespace's extent management local allocation type should be AUTOALLOCATE (SYSTEM managed), so that Oracle automatically determines extent sizes based on the data profile.
 
Links
Needs attention on -
Passed on hbdbuat

Status on hbdbuat:
PASS => Table containing SecureFiles LOB storage belongs to a tablespace with extent allocation type that is SYSTEM managed (AUTOALLOCATE)

 Query returned no rows which is expected when the SQL check passes.


ASM MEMORY_TARGET Size

Recommendation
 
Benefit / Impact:

Enhanced database stability when using Oracle ASM for database storage



Risk:

In some cases the ASM default memory allocation has proven to be too small, leading to ORA-4031 errors, which can in turn lead to database instability.


Action / Repair:

It is a best practice to use Automatic Memory Management (AMM) for the ASM instance in version 11.2 and above, regardless of platform.  AMM is configured in part through the memory_target initialization parameter.

As a result, the default setting for memory_target in 11.2.0.4 and 12.1 versions has been increased to 1024M from the previous default of 256M.  Oracle also advises increasing memory_target to 1024M as a minimum in version 11.2 as a proactive measure to match the new defaults.  No patch is required in order to increase the value of memory_target.
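
The comparison this check performs can be sketched as follows. This is a minimal illustration; `mem_to_mb` and `check_asm_memory_target` are hypothetical helper names, not part of orachk, and the input is the display value the report shows (e.g. "1076M").

```shell
# Convert a memory_target display value such as "1076M" or "2G" to MB.
mem_to_mb() {
  local v=$1
  case $v in
    *G) echo $(( ${v%G} * 1024 )) ;;
    *M) echo "${v%M}" ;;
    *)  echo "$v" ;;   # assume a bare number is already in MB
  esac
}

# Compare against the 1024M minimum recommended for 11.2 and above.
check_asm_memory_target() {
  local mb
  mb=$(mem_to_mb "$1")
  if [ "$mb" -ge 1024 ]; then
    echo "PASS: ASM memory_target (${mb}M) meets the 1024M minimum"
  else
    echo "WARNING: ASM memory_target (${mb}M) is below the 1024M minimum"
  fi
}
```

For example, `check_asm_memory_target 1076M` reports a PASS, matching the output collected from both nodes below.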
 
Needs attention on -
Passed on d4jtfmcvurd02, d4jtfmcvurd01

Status on d4jtfmcvurd02:
PASS => ASM memory_target is set to recommended value

 DATA FROM D4JTFMCVURD02 - ASM MEMORY_TARGET SIZE 




MEMORY_TARGET
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
ASM  memory_target = 1076M


Status on d4jtfmcvurd01:
PASS => ASM memory_target is set to recommended value

 DATA FROM D4JTFMCVURD01 - ASM MEMORY_TARGET SIZE 




MEMORY_TARGET
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
ASM  memory_target = 1076M

Top

Verify online(hot) patches are not applied on ORACLE_HOME

Recommendation
 (1) Online patches are recommended when downtime cannot be scheduled and the patch needs to be applied urgently.
(2) Online patches consume additional memory, and if they are kept permanently, memory consumption grows as the number of processes in the system increases.
(3) It is strongly recommended to roll back all online patches and replace them with regular (offline) patches at the next instance shutdown or during the earliest maintenance window.
 
Links
Needs attention on -
Passed on d4jtfmcvurd02, d4jtfmcvurd01

Status on d4jtfmcvurd02:
PASS => Online(hot) patches are not applied to ORACLE_HOME.

 DATA FROM D4JTFMCVURD02 - HBDBUAT DATABASE - VERIFY ONLINE(HOT) PATCHES ARE NOT APPLIED ON ORACLE_HOME 




Patch File Name  				  State
================ 				=========
No patches currently installed

Status on d4jtfmcvurd01:
PASS => Online(hot) patches are not applied to ORACLE_HOME.

 DATA FROM D4JTFMCVURD01 - HBDBUAT DATABASE - VERIFY ONLINE(HOT) PATCHES ARE NOT APPLIED ON ORACLE_HOME 




Patch File Name  				  State
================ 				=========
No patches currently installed
Top

UHC_DB_Foundation_Install:System Core File Limit

Recommendation
 
Benefit / Impact:

Improper setting of COREDUMPSIZE could prevent or prolong troubleshooting.



Risk:

The system may not have an adequate COREDUMPSIZE. It is recommended to set the OS core file size to 2GB or greater to aid in troubleshooting issues that generate core dumps (e.g. ORA-7445 errors).



Action / Repair:

A value of 0 means that no core files will be generated. A value between 0 and 2GB could mean that required data for troubleshooting may not be collected.
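
The classification applied by this check can be sketched as below. `classify_core_limit` is a hypothetical helper, not part of orachk; note that the unit of the numeric `ulimit -c` value is shell-dependent (bash reports 1024-byte blocks), so adjust the threshold for your environment.

```shell
# 2GB expressed in 1024-byte blocks (bash "ulimit -c" units); an
# assumption for this sketch -- some shells use 512-byte blocks.
CORE_2GB_BLOCKS=2097152

classify_core_limit() {
  local limit=$1
  if [ "$limit" = "unlimited" ]; then
    echo PASS                      # cores always collectable
  elif [ "$limit" -eq 0 ]; then
    echo FAIL                      # no core files will be generated
  elif [ "$limit" -ge "$CORE_2GB_BLOCKS" ]; then
    echo PASS
  else
    echo INFO                      # cores may be truncated
  fi
}

# Example: classify_core_limit "$(ulimit -c)"
```

The "core file size (blocks, -c) 0" output collected below would classify as FAIL under this sketch, consistent with the INFO finding above.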
 
Links
Needs attention on d4jtfmcvurd02, d4jtfmcvurd01
Passed on -

Status on d4jtfmcvurd02:
INFO => Consider increasing the COREDUMPSIZE size

 DATA FROM D4JTFMCVURD02 - UHC_DB_FOUNDATION_INSTALL:SYSTEM CORE FILE LIMIT 



core file size          (blocks, -c) 0

Status on d4jtfmcvurd01:
INFO => Consider increasing the COREDUMPSIZE size

 DATA FROM D4JTFMCVURD01 - UHC_DB_FOUNDATION_INSTALL:SYSTEM CORE FILE LIMIT 



core file size          (blocks, -c) 0
Top

UHC_DB_Foundation_Performance:Optimizer Parameters with Non-Default Settings (Check 1)

Recommendation
 
Benefit / Impact:

One or more optimizer related parameters are set to a non-default value.



Risk:

The settings for these parameters should be recommended by the application vendor or by Oracle Support. If any optimizer related parameters are set, the application should be certified and tested to work well with the parameter settings. Ensure that the performance is as expected.

Non default settings of optimizer related parameters may result in performance issues. The settings of such optimizer parameters can have an impact on the entire application performance.

Please Note:  If the database instance is running Siebel/Peoplesoft Applications then you may not need to change any settings, please check your product documentation.



Action / Repair:

To modify the parameter value, use the alter session or alter system commands. Refer to the Initialization Parameter chapter of the Oracle Database Reference Documentation for more details on setting these parameters.
 
Needs attention on -
Passed on hbdbuat

Status on hbdbuat:
PASS => All optimizer related parameters are set to the default value (Parameter Set 1 of 3)

 Query returned no rows which is expected when the SQL check passes.

Top

Verify Berkeley Database location for Cloned GI homes

Recommendation
 
Benefit / Impact:

After cloning a Grid Home, the Berkeley Database configuration file ($GI_HOME/crf/admin/crf.ora) in the new home should not point to the previous GI home from which it was cloned. During previous patch set updates, Berkeley Database configuration files were found still pointing to the 'before (previously cloned from) home'. Due to an invalid cloning procedure, the Berkeley Database location of the 'new home' was not updated during the out-of-place bundle patching procedure.



Risk:

Berkeley Database configurations that still point to the old GI home will cause GI upgrades to 11.2.0.3 to fail. Error messages appear in the $GRID_HOME/log/crflogd/crflogdOUT.log logfile.


Action / Repair:

Detect:
cat $GI_HOME/crf/admin/crf`hostname -s`.ora | grep CRFHOME | grep $GI_HOME | wc -l 

cat $GI_HOME/crf/admin/crf`hostname -s`.ora | grep BDBLOC | egrep "default|$GI_HOME" | wc -l
 
For each of the above commands, a return value of '1' indicates that the CRFHOME or BDBLOC entry in the crf.ora file correctly references the current GI_HOME.

To solve this, manually edit $GI_HOME/crf/admin/crf.ora in the cloned Grid Infrastructure Home and change the values for BDBLOC and CRFHOME so that none of them point to the previous GI_HOME but to the current home. The same change needs to be made on all nodes in the cluster. It is recommended to set BDBLOC to "default". This needs to be done prior to the upgrade.
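
The detect-and-verify step can be consolidated into a single function, sketched below against a mock crf.ora file. `check_crf_ora` is a hypothetical helper, not part of orachk; in a live check the file path would be $GI_HOME/crf/admin/crf`hostname -s`.ora.

```shell
# Flag a crf.ora whose CRFHOME or BDBLOC entries do not reference the
# current GI_HOME (BDBLOC=default is also acceptable).
check_crf_ora() {
  local crf_file=$1 gi_home=$2
  local crfhome_ok bdbloc_ok
  # "|| true" keeps the captured count even when grep finds no match
  crfhome_ok=$(grep -c "CRFHOME=${gi_home}\$" "$crf_file" || true)
  bdbloc_ok=$(grep -c -E "^BDBLOC=(default|${gi_home})" "$crf_file" || true)
  if [ "$crfhome_ok" -eq 1 ] && [ "$bdbloc_ok" -eq 1 ]; then
    echo PASS
  else
    echo FAIL
  fi
}

# Demo against a mock file mirroring the data collected below
tmp=$(mktemp)
printf 'BDBLOC=default\nCRFHOME=/oracle/app/11.2.0/grid\n' > "$tmp"
check_crf_ora "$tmp" /oracle/app/11.2.0/grid   # prints PASS
rm -f "$tmp"
```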
 
Links
Needs attention on -
Passed on d4jtfmcvurd02, d4jtfmcvurd01

Status on d4jtfmcvurd02:
PASS => Berkeley Database location points to correct GI_HOME

 DATA FROM D4JTFMCVURD02 - VERIFY BERKELEY DATABASE LOCATION FOR CLONED GI HOMES 



BDBLOC=default
HOSTS=d4jtfmcvurd01,d4jtfmcvurd02
REPLICA=d4jtfmcvurd02
MYNAME=d4jtfmcvurd02
CLUSTERNAME=d4jtfmc-cluster
USERNAME=grid
CRFHOME=/oracle/app/11.2.0/grid
BDBSIZE=61511
d4jtfmcvurd02 5=127.0.0.1 0
d4jtfmcvurd02 1=127.0.0.1 0
d4jtfmcvurd02 0=192.168.61.95 61020
d4jtfmcvurd01 5=127.0.0.1 0
d4jtfmcvurd01 1=127.0.0.1 0
d4jtfmcvurd01 0=192.168.61.94 61020
d4jtfmcvurd01 2=192.168.61.94 61021
ACTIVE=d4jtfmcvurd01,d4jtfmcvurd02

Status on d4jtfmcvurd01:
PASS => Berkeley Database location points to correct GI_HOME

 DATA FROM D4JTFMCVURD01 - VERIFY BERKELEY DATABASE LOCATION FOR CLONED GI HOMES 



BDBLOC=default
MASTER=d4jtfmcvurd01
MYNAME=d4jtfmcvurd01
CLUSTERNAME=d4jtfmc-cluster
USERNAME=grid
CRFHOME=/oracle/app/11.2.0/grid
MASTERPUB=10.0.61.94
d4jtfmcvurd01 5=127.0.0.1 0
d4jtfmcvurd01 1=127.0.0.1 0
d4jtfmcvurd01 0=192.168.61.94 61020
DEAD=
d4jtfmcvurd01 2=192.168.61.94 61021
BDBSIZE=61511
ACTIVE=d4jtfmcvurd01,d4jtfmcvurd02
REPLICA=d4jtfmcvurd02
HOSTS=d4jtfmcvurd01,d4jtfmcvurd02
Top

Disk I/O Scheduler on Linux

Recommendation
 Benefit / Impact:

Starting with the 2.6 kernel, for example in Red Hat Enterprise Linux 4 or 5, the I/O scheduler, which controls the way the kernel commits reads and writes to disks, can be changed at boot time. For more information on the various I/O schedulers, see Choosing an I/O Scheduler for Red Hat Enterprise Linux 4 and the 2.6 Kernel.


The Completely Fair Queuing (CFQ) scheduler is the default algorithm in Red Hat Enterprise Linux
4 which is suitable for a wide variety of applications and provides a good compromise between
throughput and latency. In comparison to the CFQ algorithm, the Deadline scheduler caps maximum
latency per request and maintains a good disk throughput which is best for disk-intensive database
applications. Hence, the Deadline scheduler is recommended for database systems.

Action / Repair:

Red Hat Enterprise Linux 5 and above allows users to change I/O schedulers dynamically; one way this can be done is by executing the command:

echo sched_name > /sys/block/<device>/queue/scheduler
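
Reading the scheduler file back shows the active scheduler in brackets. The sketch below (hypothetical helper names, not part of orachk) extracts that bracketed entry and compares it to the recommended "deadline":

```shell
# Extract the active scheduler from a /sys/block/<device>/queue/scheduler
# line, e.g. "noop anticipatory deadline [cfq]" -> "cfq".
active_scheduler() {
  echo "$1" | sed -n 's/.*\[\(.*\)\].*/\1/p'
}

check_scheduler() {
  if [ "$(active_scheduler "$1")" = "deadline" ]; then
    echo PASS
  else
    echo WARNING
  fi
}

# Example: check_scheduler "$(cat /sys/block/sda/queue/scheduler)"
```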

 
Links
Needs attention on d4jtfmcvurd02, d4jtfmcvurd01
Passed on -

Status on d4jtfmcvurd02:
WARNING => Linux Disk I/O Scheduler should be configured to [Deadline]

 DATA FROM D4JTFMCVURD02 - DISK I/O SCHEDULER ON LINUX 




Status on d4jtfmcvurd01:
WARNING => Linux Disk I/O Scheduler should be configured to [Deadline]

 DATA FROM D4JTFMCVURD01 - DISK I/O SCHEDULER ON LINUX 



Top

Verify control_file_record_keep_time value is in recommended range

Success Factor ORACLE RECOVERY MANAGER(RMAN) BEST PRACTICES
Recommendation
 
Benefit / Impact:

When a Recovery Manager catalog is not used, the initialization parameter "control_file_record_keep_time" controls the period of time for which circular reuse records are maintained within the database control file. RMAN repository records are kept in circular reuse records.  The optimal setting is the maximum number of days in the past that is required to restore and recover a specific database without the use of an RMAN recovery catalog.  Setting this parameter within a recommended range (1 to 9 days) has been shown to address most recovery scenarios by ensuring archive log and backup records are not prematurely aged out, which would make database recovery much more challenging.

The impact of verifying that the initialization parameter control_file_record_keep_time value is in the recommended range is minimal. Increasing this value will increase the size of the controlfile and possibly the query time for backup metadata and archive data.



Risk:

If the control_file_record_keep_time is set to 0, no RMAN repository records are retained in the controlfile, which makes the database recovery operation much more challenging if an RMAN recovery catalog is not available.

If the control_file_record_keep_time is set too high, problems can arise with space management within the control file, expansion of the control file, and control file contention issues.



Action / Repair:

To verify that control_file_record_keep_time is within the recommended range, as the owner userid of the oracle home with the environment properly set for the target database, execute the following command set:

CF_RECORD_KEEP_TIME="";
CF_RECORD_KEEP_TIME=$(echo -e "set heading off feedback off\n select value from V\$PARAMETER where name = 'control_file_record_keep_time';" | $ORACLE_HOME/bin/sqlplus -s "/ as sysdba");
if [[ $CF_RECORD_KEEP_TIME -ge "1" && $CF_RECORD_KEEP_TIME -le "9" ]]
then echo -e "\nPASS:  control_file_record_keep_time is within recommended range [1-9]:" $CF_RECORD_KEEP_TIME;
elif [ $CF_RECORD_KEEP_TIME -eq "0" ]
then echo -e "\nFAIL:  control_file_record_keep_time is set to zero:" $CF_RECORD_KEEP_TIME;
else echo -e "\nWARNING:  control_file_record_keep_time is not within recommended range [1-9]:" $CF_RECORD_KEEP_TIME;
fi;

The expected output should be:

PASS:  control_file_record_keep_time is within recommended range [1-9]: 7

If the output is not as expected, investigate and correct the condition(s).

NOTE: The use of an RMAN recovery catalog is recommended as the best way to avoid the loss of RMAN metadata because of overwritten control file records.
 
Links
Needs attention on -
Passed on d4jtfmcvurd02

Status on d4jtfmcvurd02:
PASS => control_file_record_keep_time is within recommended range [1-9] for hbdbuat

 DATA FROM D4JTFMCVURD02 - HBDBUAT DATABASE - VERIFY CONTROL_FILE_RECORD_KEEP_TIME VALUE IS IN RECOMMENDED RANGE 



control_file_record_keep_time = 7
Top

Verify rman controlfile autobackup is set to ON

Success Factor ORACLE RECOVERY MANAGER(RMAN) BEST PRACTICES
Recommendation
 
Benefit / Impact:

The control file is a binary file that records the physical structure of the database and contains important meta data required to recover the database. The database cannot startup or stay up unless all control files are valid. When a Recovery Manager catalog is not used, the control file is needed for database recovery because it contains all backup and recovery meta data.

The impact of verifying and setting "CONTROLFILE AUTOBACKUP" to "ON" is minimal. 



Risk:

When a Recovery Manager catalog is not used, loss of the controlfile results in loss of all backup and recovery meta data, which causes a much more challenging database recovery operation


Action / Repair:

To verify that RMAN "CONTROLFILE AUTOBACKUP" is set to "ON", as the owner userid of the oracle home with the environment properly set for the target database, execute the following command set:

RMAN_AUTOBACKUP_STATUS="";
RMAN_AUTOBACKUP_STATUS=$(echo -e "set heading off feedback off\n select value from V\$RMAN_CONFIGURATION where name = 'CONTROLFILE AUTOBACKUP';" | $ORACLE_HOME/bin/sqlplus -s "/ as sysdba" | tr -d '[:space:]');
if [ -n "$RMAN_AUTOBACKUP_STATUS" ] && [ "$RMAN_AUTOBACKUP_STATUS" = "ON" ]
then echo -e "\nPASS:  RMAN \"CONTROLFILE AUTOBACKUP\" is set to \"ON\":" $RMAN_AUTOBACKUP_STATUS;
else
echo -e "\nFAIL:  RMAN \"CONTROLFILE AUTOBACKUP\" should be set to \"ON\":" $RMAN_AUTOBACKUP_STATUS;
fi;

The expected output should be:

PASS:  RMAN CONTROLFILE AUTOBACKUP is set to "ON": ON

If the output is not as expected, investigate and correct the condition(s).

For additional information, review information on CONFIGURE syntax in Oracle® Database Backup and Recovery Reference 11g Release 2 (11.2).

RMAN> CONFIGURE CONTROLFILE AUTOBACKUP ON;

NOTE: Oracle MAA also recommends periodically backing up the controlfile to trace as additional backup.

SQL> ALTER DATABASE BACKUP CONTROLFILE TO TRACE;
 
Needs attention on hbdbuat
Passed on -

Status on hbdbuat:
WARNING => RMAN controlfile autobackup should be set to ON

Top

Registered diskgroups in clusterware registry

Recommendation
 ASM disk groups are automatically registered in the clusterware registry when you mount them. Please make sure the required ASM disk groups are mounted.
 
Needs attention on -
Passed on d4jtfmcvurd02, d4jtfmcvurd01

Status on d4jtfmcvurd02:
PASS => All diskgroups from v$asm_diskgroups are registered in clusterware registry

 DATA FROM D4JTFMCVURD02 - REGISTERED DISKGROUPS IN CLUSTERWARE REGISTRY 



Diskgroups from v$asm_diskgroups:-

OCR
DATA01

Diskgroups from Clusterware resources:-

DATA01
OCR

Status on d4jtfmcvurd01:
PASS => All diskgroups from v$asm_diskgroups are registered in clusterware registry

 DATA FROM D4JTFMCVURD01 - REGISTERED DISKGROUPS IN CLUSTERWARE REGISTRY 



Diskgroups from v$asm_diskgroups:-

OCR
DATA01

Diskgroups from Clusterware resources:-

DATA01
OCR
Top

Check for parameter cvuqdisk|1.0.9|1|x86_64

Recommendation
 Install the operating system package cvuqdisk. Without cvuqdisk, Cluster Verification Utility cannot discover shared disks, and you receive the error message "Package cvuqdisk not installed" when you run Cluster Verification Utility. Use the cvuqdisk rpm for your hardware (for example, x86_64, or i386).
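
The "meets or exceeds" comparison reported below can be sketched with a version-sort helper. This is an illustration, not the orachk implementation; on a real system the installed version would come from something like `rpm -q --qf '%{VERSION}' cvuqdisk`.

```shell
# True when the installed version is >= the required version,
# using GNU sort's version ordering.
version_at_least() {
  local installed=$1 required=$2
  [ "$(printf '%s\n%s\n' "$required" "$installed" | sort -V | head -1)" = "$required" ]
}

if version_at_least 1.0.9 1.0.9; then
  echo "PASS: cvuqdisk meets or exceeds 1.0.9"
fi
```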
 
Needs attention on -
Passed on d4jtfmcvurd02, d4jtfmcvurd01

Status on d4jtfmcvurd02:
PASS => Package cvuqdisk-1.0.9-1-x86_64 meets or exceeds recommendation

cvuqdisk|1.0.9|1|x86_64

Status on d4jtfmcvurd01:
PASS => Package cvuqdisk-1.0.9-1-x86_64 meets or exceeds recommendation

cvuqdisk|1.0.9|1|x86_64
Top

OLR Integrity

Recommendation
 Any kind of OLR corruption should be remedied before attempting an upgrade; otherwise the 11.2 GI rootupgrade.sh fails with "Invalid OLR during upgrade".
 
Links
Needs attention on -
Passed on d4jtfmcvurd02

Status on d4jtfmcvurd02:
PASS => OLR Integrity check Succeeded

 DATA FROM D4JTFMCVURD02 FOR OLR INTEGRITY 



Status of Oracle Local Registry is as follows :
	 Version                  :          3
	 Total space (kbytes)     :     262120
	 Used space (kbytes)      :       2696
	 Available space (kbytes) :     259424
	 ID                       :  676998442
	 Device/File Name         : /oracle/app/11.2.0/grid/cdata/d4jtfmcvurd02.olr
                                    Device/File integrity check succeeded

	 Local registry integrity check succeeded

	 Logical corruption check succeeded

Top

pam_limits check

Recommendation
 This is required to make the shell limits work properly and applies to 10g, 11g and 12c.

Add the following line to the /etc/pam.d/login file, if it does not already exist:

session    required     pam_limits.so
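
The check-and-add step can be made idempotent, as sketched below. `ensure_pam_limits` is a hypothetical helper, not part of orachk; the demo runs against a temp file, while on a real system the target would be /etc/pam.d/login (edit as root, after taking a backup).

```shell
# Append the pam_limits session line only if it is not already present.
ensure_pam_limits() {
  local pam_file=$1
  if ! grep -Eq '^session[[:space:]]+required[[:space:]]+pam_limits\.so' "$pam_file"; then
    echo 'session    required     pam_limits.so' >> "$pam_file"
  fi
}

# Demo on a mock PAM file
tmp=$(mktemp)
printf 'auth       include      system-auth\n' > "$tmp"
ensure_pam_limits "$tmp"
grep -c 'pam_limits.so' "$tmp"   # prints 1
rm -f "$tmp"
```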

 
Links
Needs attention on -
Passed on d4jtfmcvurd02, d4jtfmcvurd01

Status on d4jtfmcvurd02:
PASS => pam_limits configured properly for shell limits

 DATA FROM D4JTFMCVURD02 - PAM_LIMITS CHECK 



#%PAM-1.0
auth [user_unknown=ignore success=ok ignore=ignore default=bad] pam_securetty.so
auth       include      system-auth
account    required     pam_nologin.so
account    include      system-auth
password   include      system-auth
# pam_selinux.so close should be the first session rule
session    required     pam_selinux.so close
session    required     pam_loginuid.so
session    optional     pam_console.so
# pam_selinux.so open should only be followed by sessions to be executed in the user context
session    required     pam_selinux.so open
session    required     pam_namespace.so
session    optional     pam_keyinit.so force revoke
session    include      system-auth
-session   optional     pam_ck_connector.so Click for more data 

Status on d4jtfmcvurd01:
PASS => pam_limits configured properly for shell limits

 DATA FROM D4JTFMCVURD01 - PAM_LIMITS CHECK 



#%PAM-1.0
auth [user_unknown=ignore success=ok ignore=ignore default=bad] pam_securetty.so
auth       include      system-auth
account    required     pam_nologin.so
account    include      system-auth
password   include      system-auth
# pam_selinux.so close should be the first session rule
session    required     pam_selinux.so close
session    required     pam_loginuid.so
session    optional     pam_console.so
# pam_selinux.so open should only be followed by sessions to be executed in the user context
session    required     pam_selinux.so open
session    required     pam_namespace.so
session    optional     pam_keyinit.so force revoke
session    include      system-auth
-session   optional     pam_ck_connector.so Click for more data 
Top

Verify vm.min_free_kbytes

Recommendation
 
Benefit / Impact:

Maintaining vm.min_free_kbytes >= 524288 (512MB) && <= 1048576 (1GB)  helps a Linux system to reclaim memory faster and avoid LowMem pressure issues which can lead to node eviction or other outage or performance issues.

The impact of verifying vm.min_free_kbytes >= 524288 && <= 1048576 is minimal. The impact of adjusting the parameter should include editing the /etc/sysctl.conf file and rebooting the system. It is possible, but not recommended, especially for a system already under LowMem pressure, to modify the setting interactively. However, a reboot should still be performed to make sure the interactive setting is retained through a reboot.



Risk:

Exposure to unexpected node eviction and reboot.



Action / Repair:

To verify that vm.min_free_kbytes is properly set to >= 524288 && <= 1048576, execute the following commands:

/sbin/sysctl -n vm.min_free_kbytes

cat /proc/sys/vm/min_free_kbytes

If the output is not as expected, investigate and correct the condition. For example, if the value is incorrect in /etc/sysctl.conf and the currently active value matches the incorrect setting, edit the /etc/sysctl.conf file to set vm.min_free_kbytes to a value within the recommended range (for example, vm.min_free_kbytes = 524288) and reboot the node.
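
The range test this check applies can be sketched as below; `check_min_free_kbytes` is a hypothetical helper, not part of orachk.

```shell
# Classify a vm.min_free_kbytes value against the recommended
# 524288 (512MB) .. 1048576 (1GB) range.
check_min_free_kbytes() {
  local kb=$1
  if [ "$kb" -ge 524288 ] && [ "$kb" -le 1048576 ]; then
    echo PASS
  else
    echo WARNING
  fi
}

# Example: check_min_free_kbytes "$(cat /proc/sys/vm/min_free_kbytes)"
```

The value of 67584 collected from both nodes below classifies as WARNING under this sketch.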
 
Links
Needs attention on d4jtfmcvurd02, d4jtfmcvurd01
Passed on -

Status on d4jtfmcvurd02:
WARNING => vm.min_free_kbytes should be set as recommended.

 DATA FROM D4JTFMCVURD02 - VERIFY VM.MIN_FREE_KBYTES 



Value in sysctl = 67584

Value in active memory (from /proc/sys/vm/min_free_kbytes) = 67584

Status on d4jtfmcvurd01:
WARNING => vm.min_free_kbytes should be set as recommended.

 DATA FROM D4JTFMCVURD01 - VERIFY VM.MIN_FREE_KBYTES 



Value in sysctl = 67584

Value in active memory (from /proc/sys/vm/min_free_kbytes) = 67584
Top

Verify transparent hugepages are disabled

Recommendation
 
Benefit / Impact:

Linux transparent huge pages are enabled by default in OEL 6 and SuSE 11, which might cause soft lockup of CPUs and make the system unresponsive, which in turn can cause node eviction.

If AnonHugePages is found in /proc/meminfo with a value greater than 0, transparent huge pages are enabled.

Risk:

Because Transparent HugePages are known to cause unexpected node reboots and performance problems with RAC, Oracle strongly advises disabling the use of Transparent HugePages. In addition, Transparent HugePages may cause problems even in a single-instance database environment, with unexpected performance problems or delays. As such, Oracle recommends disabling Transparent HugePages on all database servers running Oracle.

Action / Repair:

To turn this feature off, put the following in /etc/rc.local:

echo never > /sys/kernel/mm/transparent_hugepage/enabled 
echo never > /sys/kernel/mm/transparent_hugepage/defrag 
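
After the change, the current THP mode can be verified by reading /sys/kernel/mm/transparent_hugepage/enabled, where the active mode is shown in brackets. A minimal sketch of that verification (`thp_enabled` is a hypothetical helper, not part of orachk):

```shell
# Decide whether transparent huge pages are enabled from the contents
# of /sys/kernel/mm/transparent_hugepage/enabled,
# e.g. "[always] madvise never" -> enabled.
thp_enabled() {
  case $1 in
    *"[never]"*) echo no ;;
    *)           echo yes ;;
  esac
}

# Example: thp_enabled "$(cat /sys/kernel/mm/transparent_hugepage/enabled)"
```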
 
Links
Needs attention on d4jtfmcvurd02, d4jtfmcvurd01
Passed on -

Status on d4jtfmcvurd02:
WARNING => Linux transparent huge pages are enabled

 DATA FROM D4JTFMCVURD02 - VERIFY TRANSPARENT HUGEPAGES ARE DISABLED 



Note:- Look for AnonHugePages:



MemTotal:        8062844 kB
MemFree:          208068 kB
Buffers:          367036 kB
Cached:          5482580 kB
SwapCached:         1928 kB
Active:          3553504 kB
Inactive:        3456700 kB
Active(anon):    1932252 kB
Inactive(anon):  1192100 kB
Active(file):    1621252 kB
Inactive(file):  2264600 kB
Unevictable:      321320 kB

Status on d4jtfmcvurd01:
WARNING => Linux transparent huge pages are enabled

 DATA FROM D4JTFMCVURD01 - VERIFY TRANSPARENT HUGEPAGES ARE DISABLED 



Note:- Look for AnonHugePages:



MemTotal:        8062844 kB
MemFree:          352084 kB
Buffers:          334160 kB
Cached:          4863736 kB
SwapCached:        12080 kB
Active:          4015808 kB
Inactive:        2800840 kB
Active(anon):    2312920 kB
Inactive(anon):  1296128 kB
Active(file):    1702888 kB
Inactive(file):  1504712 kB
Unevictable:      333300 kB
Top

Verify data files are recoverable

Success Factor DATA CORRUPTION PREVENTION BEST PRACTICES
Recommendation
 
Benefit / Impact:

When you perform a DML or DDL operation using the NOLOGGING or UNRECOVERABLE clause, database backups made prior to the unrecoverable operation are invalidated and new backups are required. You can specify the SQL ALTER DATABASE or SQL ALTER TABLESPACE statement with the FORCE LOGGING clause to override the NOLOGGING setting; however, this statement will not repair a database that is already invalid.



Risk:

Changes under NOLOGGING will not be available after executing database recovery from a backup made prior to the unrecoverable change.


Action / Repair:

To verify that the data files are recoverable, execute the following SQL*Plus command as the userid that owns the oracle home for the database:
select file#, unrecoverable_time, unrecoverable_change# from v$datafile where unrecoverable_time is not null;
If there are any unrecoverable actions, the output will be similar to:
     FILE# UNRECOVER UNRECOVERABLE_CHANGE#
---------- --------- ---------------------
        11 14-JAN-13               8530544
If nologging changes have occurred and the data must be recoverable, then a backup of the datafiles containing nologging operations should be taken immediately. Please consult the Backup and Recovery User's Guide for specific steps to resolve files that have unrecoverable changes.

The standard best practice is to enable FORCE LOGGING at the database level (ALTER DATABASE FORCE LOGGING;) to ensure that all transactions are recoverable. However, placing a database in force logging mode for ETL operations can lead to unnecessary database overhead. MAA best practices call for isolating data that does not need to be recoverable. Such data would include:

Data resulting from temporary loads
Data resulting from transient transformations
Any non critical data

To reduce unnecessary redo generation, do the following:

Specify FORCE LOGGING for all tablespaces that you explicitly wish to protect (ALTER TABLESPACE <tablespace> FORCE LOGGING;)
Specify NO FORCE LOGGING for those tablespaces that do not need protection (ALTER TABLESPACE <tablespace> NO FORCE LOGGING;).
Disable force logging at the database level (ALTER DATABASE NO FORCE LOGGING;), otherwise the database-level setting will override the tablespace settings.

Once the above is complete, redo logging will function as follows:

Explicit no logging operations on objects in the no logging tablespace will not generate the normal redo (a small amount of redo is always generated for no logging operations to signal that a no logging operation was performed).

All other operations on objects in the no logging tablespace will generate the normal redo.
Operations performed on objects in the force logging tablespaces always generate normal redo.

Note:- Please seek Oracle Support assistance to mitigate this problem. Upon their guidance, the following commands can help validate and identify corrupted blocks.

              oracle> dbv file= userid=sys/******
              RMAN> validate check logical database;
              SQL> select COUNT(*) from v$database_block_corruption;

 
Links
Needs attention on -
Passed on hbdbuat

Status on hbdbuat:
PASS => The data files are all recoverable

 Query returned no rows which is expected when the SQL check passes.

Top

Check for parameter unixODBC-devel|2.2.14|11.el6|x86_64

Recommendation
 Please review MOS Note 169706.1 - Oracle Database on Unix AIX,HP-UX,Linux,Mac OS X,Solaris,Tru64 Unix Operating Systems Installation and Configuration Requirements Quick Reference (8.0.5 to 11.2)
 
Needs attention on -
Passed on d4jtfmcvurd02, d4jtfmcvurd01

Status on d4jtfmcvurd02:
PASS => Package unixODBC-devel-2.2.14-11.el6-x86_64 meets or exceeds recommendation

unixODBC-devel|2.2.14|12.el6_3|i686
unixODBC-devel|2.2.14|12.el6_3|x86_64

Status on d4jtfmcvurd01:
PASS => Package unixODBC-devel-2.2.14-11.el6-x86_64 meets or exceeds recommendation

unixODBC-devel|2.2.14|12.el6_3|i686
unixODBC-devel|2.2.14|12.el6_3|x86_64
Top

OCR and Voting file location

Recommendation
 Starting with Oracle 11gR2, our recommendation is to use Oracle ASM to store OCR and Voting Disks. With an appropriate redundancy level (HIGH or NORMAL) for the ASM Disk Group being used, Oracle can create the required number of Voting Disks as part of installation.
 
Links
Needs attention on -
Passed on d4jtfmcvurd02, d4jtfmcvurd01

Status on d4jtfmcvurd02:
PASS => OCR and Voting disks are stored in ASM

 DATA FROM D4JTFMCVURD02 - OCR AND VOTING FILE LOCATION 



Status of Oracle Cluster Registry is as follows :
	 Version                  :          3
	 Total space (kbytes)     :     262120
	 Used space (kbytes)      :       3164
	 Available space (kbytes) :     258956
	 ID                       : 1547283109
	 Device/File Name         :       +OCR
                                    Device/File integrity check succeeded

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured

Status on d4jtfmcvurd01:
PASS => OCR and Voting disks are stored in ASM

 DATA FROM D4JTFMCVURD01 - OCR AND VOTING FILE LOCATION 



Status of Oracle Cluster Registry is as follows :
	 Version                  :          3
	 Total space (kbytes)     :     262120
	 Used space (kbytes)      :       3164
	 Available space (kbytes) :     258956
	 ID                       : 1547283109
	 Device/File Name         :       +OCR
                                    Device/File integrity check succeeded

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured
Top

Parallel Execution Health-Checks and Diagnostics Reports

Recommendation
 This audit check captures information related to Oracle Parallel Query (PQ), DOP, PQ/PX statistics, Database Resource Plans, Consumer Groups, etc. This is primarily for Oracle Support Team consumption; however, customers may also review it to identify and troubleshoot related problems.
For every database, there will be a zip file of format  in the raccheck output directory.
 
Needs attention on d4jtfmcvurd02
Passed on -

Status on d4jtfmcvurd02:
INFO => Parallel Execution Health-Checks and Diagnostics Reports

 DATA FROM D4JTFMCVURD02 - HBDBUAT DATABASE - PARALLEL EXECUTION HEALTH-CHECKS AND DIAGNOSTICS REPORTS 



left empty intentionally
Top

Hardware clock synchronization

Recommendation
 The /etc/init.d/halt file is called when the system is rebooted or halted. This file must contain instructions to synchronize the system time to the hardware clock.

It should contain a command like:

[ -x /sbin/hwclock ] && action $"Syncing hardware clock to system time" /sbin/hwclock $CLOCKFLAGS
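
A minimal sketch of what this check greps for, demonstrated on a temp file standing in for /etc/init.d/halt (`halt_syncs_hwclock` is a hypothetical helper, not part of orachk):

```shell
# True when the halt script both invokes hwclock and passes a
# sync-to-hardware option (--systohc, or the $CLOCKFLAGS variable).
halt_syncs_hwclock() {
  grep -q 'hwclock' "$1" && grep -q -- '--systohc\|\$CLOCKFLAGS' "$1"
}

# Demo with the line collected from both nodes below
tmp=$(mktemp)
echo '[ -x /sbin/hwclock -a -e /dev/rtc ] && action $"Syncing hardware clock to system time" /sbin/hwclock --systohc' > "$tmp"
halt_syncs_hwclock "$tmp" && echo PASS
rm -f "$tmp"
```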
 
Needs attention on -
Passed on d4jtfmcvurd02, d4jtfmcvurd01

Status on d4jtfmcvurd02:
PASS => System clock is synchronized to hardware clock at system shutdown

 DATA FROM D4JTFMCVURD02 - HARDWARE CLOCK SYNCHRONIZATION 



[ -x /sbin/hwclock -a -e /dev/rtc ] && action $"Syncing hardware clock to system time" /sbin/hwclock --systohc

Status on d4jtfmcvurd01:
PASS => System clock is synchronized to hardware clock at system shutdown

 DATA FROM D4JTFMCVURD01 - HARDWARE CLOCK SYNCHRONIZATION 



[ -x /sbin/hwclock -a -e /dev/rtc ] && action $"Syncing hardware clock to system time" /sbin/hwclock --systohc
Top

TFA Collector status

Recommendation
 TFA Collector (aka TFA) is a diagnostic collection utility that simplifies diagnostic data collection on Oracle Clusterware/Grid Infrastructure and RAC systems. TFA is similar to the diagcollection utility packaged with Oracle Clusterware in that it collects and packages diagnostic data; however, TFA is much more powerful than diagcollection, with its ability to centralize and automate the collection of diagnostic information. This helps speed up the data collection and upload process with Oracle Support, minimizing delays in data requests and analysis.
TFA provides the following key benefits:- 
  - Encapsulates diagnostic data collection for all CRS/GI and RAC components  on all cluster nodes into a single command executed from a single node 
  - Ability to "trim" diagnostic files during data collection to reduce data  upload size 
  - Ability to isolate diagnostic data collection to a given time period 
  - Ability to centralize collected diagnostic output to a single server in  the cluster 
  - Ability to isolate diagnostic collection to a particular product  component, e.g. ASM, RDBMS, Clusterware 
  - Optional Real Time Scan of Alert Logs for conditions indicating a problem (DB Alert Logs, ASM Alert Logs, Clusterware Alert Logs, etc.) 
  - Optional Automatic Data Collection based off of Real Time Scan findings 
  - Optional On Demand Scan (user initiated) of all log and trace files for conditions indicating a problem 
  - Optional Automatic Data Collection based off of On Demand Scan findings 
 
Links
Needs attention on -
Passed on d4jtfmcvurd02, d4jtfmcvurd01

Status on d4jtfmcvurd02:
PASS => TFA Collector is installed and running

 DATA FROM D4JTFMCVURD02 - TFA COLLECTOR STATUS 




-rwxr-xr-x 1 root root 10446 Sep 11 10:20 /etc/init.d/init.tfa 

-rw-r--r-- 1 root root 6 Sep 11 10:20 /oracle/app/11.2.0/grid/tfa/d4jtfmcvurd02/tfa_home/internal/.pidfile

Status on d4jtfmcvurd01:
PASS => TFA Collector is installed and running

 DATA FROM D4JTFMCVURD01 - TFA COLLECTOR STATUS 




-rwxr-xr-x 1 root root 10446 Sep 11 10:12 /etc/init.d/init.tfa 

-rw-r--r-- 1 root root 6 Sep 11 10:12 /oracle/app/11.2.0/grid/tfa/d4jtfmcvurd01/tfa_home/internal/.pidfile

Clusterware resource status

Recommendation
 Resources were found to be in an UNKNOWN state on the system. Having resources in this state often results in issues when upgrading. It is recommended to correct resources in an UNKNOWN state prior to upgrading.
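The check itself boils down to scanning `crsctl status resource -t` output for the UNKNOWN state. A sketch using a captured sample in place of the live command (the UNKNOWN listener entry is invented for illustration):

```shell
#!/bin/sh
# Sketch: scan crsctl status output for resources in UNKNOWN state.
# A heredoc sample stands in for the live `crsctl status resource -t`;
# the UNKNOWN entry below is invented for illustration.
sample_status() {
cat <<'EOF'
ora.DATA01.dg
               ONLINE  ONLINE       d4jtfmcvurd01
               ONLINE  ONLINE       d4jtfmcvurd02
ora.LISTENER.lsnr
               ONLINE  UNKNOWN      d4jtfmcvurd02
EOF
}

unknown_count=$(sample_status | grep -c ' UNKNOWN ')
echo "Resources in UNKNOWN state: $unknown_count"
```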

 
Links
Needs attention on -
Passed on d4jtfmcvurd02, d4jtfmcvurd01

Status on d4jtfmcvurd02:
PASS => No clusterware resource are in unknown state

 DATA FROM D4JTFMCVURD02 - CLUSTERWARE RESOURCE STATUS 



Oracle Clusterware active version on the cluster is [11.2.0.4.0] 
Oracle Clusterware version on node [d4jtfmcvurd02] is [11.2.0.4.0]
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS       
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA01.dg
               ONLINE  ONLINE       d4jtfmcvurd01                                
               ONLINE  ONLINE       d4jtfmcvurd02                                
ora.LISTENER.lsnr
               ONLINE  ONLINE       d4jtfmcvurd01

Status on d4jtfmcvurd01:
PASS => No clusterware resource are in unknown state

 DATA FROM D4JTFMCVURD01 - CLUSTERWARE RESOURCE STATUS 



Oracle Clusterware active version on the cluster is [11.2.0.4.0] 
Oracle Clusterware version on node [d4jtfmcvurd01] is [11.2.0.4.0]
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS       
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA01.dg
               ONLINE  ONLINE       d4jtfmcvurd01                                
               ONLINE  ONLINE       d4jtfmcvurd02                                
ora.LISTENER.lsnr
               ONLINE  ONLINE       d4jtfmcvurd01

ORA-15196 errors in ASM alert log

Recommendation
 ORA-15196 errors mean ASM encountered an invalid metadata block. See the trace file referenced next to the ORA-15196 error in the ASM alert log for more information. If this is an old error you can ignore this finding; otherwise, open a service request with Oracle Support to find the cause and fix it.


 
Needs attention on -
Passed on d4jtfmcvurd02, d4jtfmcvurd01

Status on d4jtfmcvurd02:
PASS => No corrupt ASM header blocks indicated in ASM alert log (ORA-15196 errors)

 DATA FROM D4JTFMCVURD02 - ORA-15196 ERRORS IN ASM ALERT LOG 




Status on d4jtfmcvurd01:
PASS => No corrupt ASM header blocks indicated in ASM alert log (ORA-15196 errors)

 DATA FROM D4JTFMCVURD01 - ORA-15196 ERRORS IN ASM ALERT LOG 




Disks without Disk Group

Recommendation
 The GROUP_NUMBER and DISK_NUMBER columns in GV$ASM_DISK are only valid if the disk is part of a disk group which is currently mounted by the instance. Otherwise, GROUP_NUMBER will be 0, and DISK_NUMBER will be a unique value with respect to the other disks that also have a group number of 0. Run the following query to find the disks which are not part of any disk group.

select name,path,HEADER_STATUS,GROUP_NUMBER  from gv$asm_disk where group_number=0;
 
Needs attention on -
Passed on d4jtfmcvurd02

Status on d4jtfmcvurd02:
PASS => No disks found which are not part of any disk group

 DATA FROM D4JTFMCVURD02 - DISKS WITHOUT DISK GROUP 




no rows selected


localhost entry in /etc/hosts

Recommendation
 
Benefit / Impact:

Various Oracle processes use the loopback address for checking the status of a localhost.

Risk:

Incorrect loopback address configuration in /etc/hosts may result in failure of those processes.

Action / Repair:

IP address 127.0.0.1 should only map to localhost and/or localhost.localdomain, not anything else.  

Correct configuration for the loopback address in /etc/hosts is as follows:

127.0.0.1 localhost.localdomain localhost
or
127.0.0.1 localhost  localhost.localdomain

The following configuration would be considered incorrect:

127.0.0.1  localhost  myhost.com  localhost.localdomain
or
127.0.0.1  myhost.com  localhost  localhost.localdomain

Where myhost.com is the actual hostname or network identity of the host and which should map to the public IP address

In other words, the actual short hostname and/or the fully qualified hostname should not be configured at all for the loopback address.
 
See the below referenced documents for more details on this subject.
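The rule above can be checked mechanically: every name on the 127.0.0.1 line must be a localhost-style name. A sketch, reading sample lines instead of the real /etc/hosts:

```shell
#!/bin/sh
# Sketch: verify that 127.0.0.1 maps only to localhost-style names.
# Reads sample lines on stdin instead of the real /etc/hosts.
check_loopback() {
  awk '$1 == "127.0.0.1" {
    for (i = 2; i <= NF; i++)
      if ($i !~ /^localhost/) { print "FAIL: " $i; bad = 1 }
  }
  END { if (!bad) print "PASS" }'
}

echo "127.0.0.1 localhost localhost.localdomain localhost4" | check_loopback  # prints PASS
echo "127.0.0.1 myhost.com localhost localhost.localdomain" | check_loopback  # prints FAIL: myhost.com
```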
 
Links
Needs attention on -
Passed on d4jtfmcvurd02, d4jtfmcvurd01

Status on d4jtfmcvurd02:
PASS => loopback address is configured as recommended in /etc/hosts

 DATA FROM D4JTFMCVURD02 - LOCALHOST ENTRY IN /ETC/HOSTS 



127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4

Status on d4jtfmcvurd01:
PASS => loopback address is configured as recommended in /etc/hosts

 DATA FROM D4JTFMCVURD01 - LOCALHOST ENTRY IN /ETC/HOSTS 



127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4

Redo log file write time latency

Recommendation
 When the latency hits 500ms, a Warning message is written to the lgwr trace file(s). For example:

Warning: log write elapsed time 564ms, size 2KB

Even though this threshold is very high and latencies below this range can still impact application performance, it is worth capturing and reporting to customers for necessary action. The performance impact of LGWR latencies includes commit delays, Broadcast-on-Commit delays, etc.
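Counting these warnings in the lgwr trace is a simple grep. A sketch over an invented trace sample:

```shell
#!/bin/sh
# Sketch: count "log write elapsed time" warnings in an lgwr trace.
# A heredoc sample (invented) replaces the real trace file.
sample_trace() {
cat <<'EOF'
Warning: log write elapsed time 564ms, size 2KB
*** 2014-09-11 10:20:01.123
Warning: log write elapsed time 812ms, size 4KB
EOF
}

sample_trace | grep -c '^Warning: log write elapsed time'
```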
 
Links
Needs attention on -
Passed on d4jtfmcvurd02, d4jtfmcvurd01

Status on d4jtfmcvurd02:
PASS => Redo log write time is less than 500 milliseconds

 DATA FROM D4JTFMCVURD02 - HBDBUAT DATABASE - REDO LOG FILE WRITE TIME LATENCY 




Status on d4jtfmcvurd01:
PASS => Redo log write time is less than 500 milliseconds

 DATA FROM D4JTFMCVURD01 - HBDBUAT DATABASE - REDO LOG FILE WRITE TIME LATENCY 




Broadcast Requirements for Networks

Success Factor USE SEPARATE SUBNETS FOR INTERFACES CONFIGURED FOR REDUNDANT INTERCONNECT (HAIP)
Recommendation
 All public and private interconnect network cards should be able to arping all remote nodes in the cluster.

For example, using the public network card, arping the remote node with the following command; the output should be "Received 1 response(s)":

/sbin/arping -b -f -c 1 -w 1 -I eth1 nodename2

Here eth1 is the public network interface and nodename2 is the second node in the cluster.
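For a cluster with several interfaces and nodes, the same command only varies in its last two arguments. A small helper that assembles it (eth1 and nodename2 are placeholders, as in the example above):

```shell
#!/bin/sh
# Sketch: assemble the arping broadcast check for any interface/node
# pair; the interface and node names are placeholders.
arping_cmd() {
  iface="$1"
  node="$2"
  echo "/sbin/arping -b -f -c 1 -w 1 -I $iface $node"
}

arping_cmd eth1 nodename2
```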

 
Links
Needs attention on -
Passed on d4jtfmcvurd02

Status on d4jtfmcvurd02:
PASS => Grid infastructure network broadcast requirements are met

 DATA FROM D4JTFMCVURD02 FOR BROADCAST REQUIREMENTS FOR NETWORKS 



ARPING 10.0.61.94 from 10.0.61.95 eth1
Unicast reply from 10.0.61.94 [00:50:56:9B:7F:0A]  1.109ms
Sent 1 probes (1 broadcast(s))
Received 1 response(s)
ARPING 10.0.61.94 from 192.168.61.95 eth0
Unicast reply from 10.0.61.94 [00:50:56:9B:4A:A9]  1.071ms
Sent 1 probes (1 broadcast(s))
Received 1 response(s)

Primary database protection with Data Guard

Success Factor DATABASE/CLUSTER/SITE FAILURE PREVENTION BEST PRACTICES
Recommendation
 Oracle 11g and higher Active Data Guard is the real-time data protection and availability solution that eliminates single point of failure by maintaining one or more synchronized physical replicas of the production database. If an unplanned outage of any kind impacts the production database, applications and users can quickly failover to a synchronized standby, minimizing downtime and preventing data loss. An Active Data Guard standby can be used to offload read-only applications, ad-hoc queries, and backups from the primary database or be dual-purposed as a test system at the same time it provides disaster protection. An Active Data Guard standby can also be used to minimize downtime for planned maintenance when upgrading to new Oracle Database patch sets and releases and for select migrations.  
 
For zero data loss protection and fastest recovery time, deploy a local Data Guard standby database with Data Guard Fast-Start Failover and integrated client failover. For protection against outages impacting both the primary and the local standby or the entire data center, or a broad geography, deploy a second Data Guard standby database at a remote location.

Key HA Benefits:

With Oracle 11g release 2 and higher Active Data Guard and real time apply, data block corruptions can be repaired automatically and downtime can be reduced from hours and days of application impact to zero downtime with zero data loss.

With MAA best practices, Data Guard Fast-Start Failover (typically a local standby) and integrated client failover, downtime from database, cluster and site failures can be reduced from hours and days to seconds and minutes.

With remote standby database (Disaster Recovery Site), you have protection from complete site failures.

In all cases, the Active Data Guard instances can be active and used for other activities.

Data Guard can reduce risks and downtime for planned maintenance activities by using Database rolling upgrade with transient logical standby, standby-first patch apply and database migrations.

Active Data Guard provides optimal data protection by using physical replication and comprehensive Oracle validation to maintain an exact byte-for-byte copy of the primary database that can be open read-only to offload reporting, ad-hoc queries and backups. For other advanced replication requirements where read-write access to a replica database is required while it is being synchronized with the primary database, see Oracle GoldenGate logical replication. Oracle GoldenGate can be used to support heterogeneous database platforms and database releases, to provide an effective read-write full or subset logical replica, and to reduce or eliminate downtime for application, database or system changes. The main trade-off of Oracle GoldenGate's flexible logical replication solution is the additional administration required of application developers and database administrators.
 
Links
Needs attention on hbdbuat
Passed on -

Status on hbdbuat:
FAIL => Primary database is NOT protected with Data Guard (standby database) for real-time data protection and availability


Locally managed tablespaces

Success Factor DATABASE FAILURE PREVENTION BEST PRACTICES
Recommendation
 In order to reduce contention on the data dictionary and rollback data, and to reduce the amount of generated redo, locally managed tablespaces should be used rather than dictionary managed tablespaces. Please refer to the notes referenced below for more information about the benefits of locally managed tablespaces and how to migrate a tablespace from dictionary managed to locally managed.
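The report's dictionary_managed_tablespace_count figure can be reproduced from a query such as `select tablespace_name, extent_management from dba_tablespaces`. A sketch counting DICTIONARY rows in a captured result (the sample rows are invented):

```shell
#!/bin/sh
# Sketch: reproduce dictionary_managed_tablespace_count from a captured
# result of: select tablespace_name, extent_management from dba_tablespaces;
# (sample rows invented).
sample_result() {
cat <<'EOF'
SYSTEM     LOCAL
SYSAUX     LOCAL
TEMP       LOCAL
USERS      LOCAL
EOF
}

count=$(sample_result | awk '$2 == "DICTIONARY" { n++ } END { print n + 0 }')
echo "dictionary_managed_tablespace_count = $count"
```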
 
Links
Needs attention on -
Passed on hbdbuat

Status on hbdbuat:
PASS => All tablespaces are locally managed tablespace

 dictionary_managed_tablespace_count = 0                                         

Automatic segment storage management

Success Factor DATABASE FAILURE PREVENTION BEST PRACTICES
Recommendation
 Starting with Oracle 9i Auto Segment Space Management (ASSM) can be used by specifying the SEGMENT SPACE MANAGEMENT clause, set to AUTO in the CREATE TABLESPACE statement. Implementing the ASSM feature allows Oracle to use bitmaps to manage the free space within segments. The bitmap describes the status of each data block within a segment, with respect to the amount of space in the block available for inserting rows. The current status of the space available in a data block is reflected in the bitmap allowing for Oracle to manage free space automatically with ASSM. ASSM tablespaces automate freelist management and remove the requirement/ability to specify PCTUSED, FREELISTS, and FREELIST GROUPS storage parameters for individual tables and indexes created in these tablespaces. 
 
Links
Needs attention on -
Passed on hbdbuat

Status on hbdbuat:
PASS => All tablespaces are using Automatic segment storage management

 Query returned no rows which is expected when the SQL check passes.


Default Temporary Tablespace

Success Factor DATABASE FAILURE PREVENTION BEST PRACTICES
Recommendation
 It is recommended to set a default temporary tablespace at the database level to achieve optimal performance for queries which require sorting data.
 
Links
Needs attention on -
Passed on hbdbuat

Status on hbdbuat:
PASS => Default temporary tablespace is set

 DEFAULT_TEMP_TABLESPACE                                                         
TEMP                                                                            
                                                                                

Archivelog Mode

Success Factor DATABASE FAILURE PREVENTION BEST PRACTICES
Recommendation
 Running the database in ARCHIVELOG mode and using database FORCE LOGGING mode are prerequisites for database recovery operations. The ARCHIVELOG mode enables online database backup and is necessary to recover the database to a point in time later than what has been restored. Features such as Oracle Data Guard and Flashback Database require that the production database run in ARCHIVELOG mode.
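For reference, the usual sequence for enabling ARCHIVELOG mode, run in SQL*Plus as SYSDBA during a maintenance window (it requires a brief outage). The commands are only printed here, not executed:

```shell
#!/bin/sh
# Sketch of the usual ARCHIVELOG remediation sequence. Printed rather
# than executed; on a real system these run in SQL*Plus as SYSDBA.
archivelog_steps() {
cat <<'EOF'
SHUTDOWN IMMEDIATE
STARTUP MOUNT
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;
EOF
}

archivelog_steps
```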
 
Links
Needs attention on hbdbuat
Passed on -

Status on hbdbuat:
WARNING => Database Archivelog Mode should be set to ARCHIVELOG

 Archivelog Mode = NOARCHIVELOG                                                  

Verify AUD$ and FGA_LOG$ tables use Automatic Segment Space Management

Recommendation
 
Benefit / Impact:

With AUDIT_TRAIL set for database (AUDIT_TRAIL=db), and the AUD$ and FGA_LOG$ tables located in a dictionary segment space managed SYSTEM tablespace, "gc" wait events are sometimes observed during heavy periods of database logon activity. Testing has shown that under such conditions, placing the AUD$ and FGA_LOG$ tables in the SYSAUX tablespace, which uses automatic segment space management, reduces the space related wait events.

The impact of verifying that the AUD$ and FGA_LOG$ tables are in the SYSAUX table space is low. Moving them if they are not located in the SYSAUX does not require an outage, but should be done during a scheduled maintenance period or slow audit record generation window.



Risk:

If the AUD$ and FGA_LOG$ tables are not verified to use automatic segment space management, there is a risk of a performance slowdown during periods of high database login activity.


Action / Repair:

To verify the segment space management policy currently in use by the AUD$ and FGA_LOG$ tables, use the following SQL*Plus command:

select t.table_name,ts.segment_space_management from dba_tables t, dba_tablespaces ts where ts.tablespace_name = t.tablespace_name and t.table_name in ('AUD$','FGA_LOG$');

The output should be:
TABLE_NAME                     SEGMEN
------------------------------ ------
FGA_LOG$                       AUTO
AUD$                           AUTO 
If one or both of the AUD$ or FGA_LOG$ tables return "MANUAL", use the DBMS_AUDIT_MGMT package to move them to the SYSAUX tablespace:

BEGIN
DBMS_AUDIT_MGMT.set_audit_trail_location(audit_trail_type => DBMS_AUDIT_MGMT.AUDIT_TRAIL_AUD_STD,--this moves table AUD$ 
audit_trail_location_value => 'SYSAUX');  
END;  
/

BEGIN
DBMS_AUDIT_MGMT.set_audit_trail_location(audit_trail_type => DBMS_AUDIT_MGMT.AUDIT_TRAIL_FGA_STD,--this moves table FGA_LOG$ 
audit_trail_location_value => 'SYSAUX');
END;
/   

The output should be similar to:
PL/SQL procedure successfully completed. 

If the output is not as above, investigate and correct the condition.
NOTE: The "DBMS_AUDIT_MGMT.set_audit_trail_location" command should be executed as part of the dbca template post-processing scripts. For existing databases the command can still be executed, but since it moves the AUD$ and FGA_LOG$ tables using an "alter table ... move" operation, it should be executed at a "quiet" time.
 
Links
Needs attention on hbdbuat
Passed on -

Status on hbdbuat:
FAIL => Table AUD$[FGA_LOG$] should use Automatic Segment Space Management for hbdbuat

 AUD$                           MANUAL                                           
FGA_LOG$                       MANUAL                                           

Check for parameter libgcc|4.4.4|13.el6|x86_64

Recommendation
 Please review MOS Note 169706.1 - Oracle Database on Unix AIX,HP-UX,Linux,Mac OS X,Solaris,Tru64 Unix Operating Systems Installation and Configuration Requirements Quick Reference (8.0.5 to 11.2)
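These package verdicts amount to a version comparison between what rpm reports as installed and the required minimum; GNU `sort -V` can reproduce the ordering by hand. A sketch (the version strings below are taken from this report):

```shell
#!/bin/sh
# Sketch: reproduce the PASS/FAIL package verdict by comparing the
# installed version against the required minimum with GNU sort -V.
meets_minimum() {
  installed="$1"
  required="$2"
  lowest=$(printf '%s\n%s\n' "$installed" "$required" | sort -V | head -n 1)
  # If the required minimum sorts lowest (or equal), the installed
  # version meets or exceeds it.
  if [ "$lowest" = "$required" ]; then echo PASS; else echo FAIL; fi
}

meets_minimum 4.4.7-3.el6 4.4.4-13.el6   # installed libgcc vs required minimum
```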
 
Needs attention on -
Passed on d4jtfmcvurd02, d4jtfmcvurd01

Status on d4jtfmcvurd02:
PASS => Package libgcc-4.4.4-13.el6-x86_64 meets or exceeds recommendation

libgcc|4.4.7|3.el6|x86_64
libgcc|4.4.7|3.el6|i686

Status on d4jtfmcvurd01:
PASS => Package libgcc-4.4.4-13.el6-x86_64 meets or exceeds recommendation

libgcc|4.4.7|3.el6|x86_64
libgcc|4.4.7|3.el6|i686

ASM disk read write error

Recommendation
 Read errors can be the result of a loss of access to the entire disk or media corruptions on an otherwise healthy disk. ASM tries to recover from read errors on corrupted sectors on a disk. When a read error by the database or ASM triggers the ASM instance to attempt bad block remapping, ASM reads a good copy of the extent and copies it to the disk that had the read error.

If the write to the same location succeeds, then the underlying allocation unit (sector) is deemed healthy. This might be because the underlying disk did its own bad block reallocation.

If the write fails, ASM attempts to write the extent to a new allocation unit on the same disk. If this write succeeds, the original allocation unit is marked as unusable. If the write fails, the disk is taken offline.

One unique benefit of ASM-based mirroring is that the database instance is aware of the mirroring. For many types of logical corruptions such as a bad checksum or incorrect System Change Number (SCN), the database instance proceeds through the mirror side looking for valid content and proceeds without errors. If the process in the database that encountered the read error is in a position to obtain the appropriate locks to ensure data consistency, it writes the correct data to all mirror sides.

When encountering a write error, a database instance sends the ASM instance a disk offline message.

If the database can successfully complete a write to at least one extent copy and receive acknowledgment of the offline disk from ASM, the write is considered successful.

If the write to all mirror sides fails, the database takes the appropriate actions in response to a write error, such as taking the tablespace offline.

When the ASM instance receives a write error message from a database instance, or when an ASM instance encounters a write error itself, the ASM instance attempts to take the disk offline. ASM consults the Partner Status Table (PST) to see whether any of the disk's partners are offline. If too many partners are already offline, ASM forces the dismounting of the disk group. Otherwise, ASM takes the disk offline.

The ASMCMD remap command was introduced to address situations where a range of bad sectors exists on a disk and must be corrected before ASM or database I/O can proceed.
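The PASS verdict below reflects zero totals in the READ_ERRS/WRITE_ERRS columns of gv$asm_disk. A sketch totalling those columns from a captured query result (the disk names are invented):

```shell
#!/bin/sh
# Sketch: total the READ_ERRS/WRITE_ERRS columns from a captured result
# of: select name, read_errs, write_errs from gv$asm_disk;
# (disk names invented).
sample_errors() {
cat <<'EOF'
DATA01_0000 0 0
DATA01_0001 0 0
EOF
}

asm_error_totals() {
  sample_errors | awk '{ r += $2; w += $3 } END { print "read_errs=" r+0, "write_errs=" w+0 }'
}

asm_error_totals
```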
 
Needs attention on -
Passed on hbdbuat

Status on hbdbuat:
PASS => No read/write errors found for ASM disks

 0                  0                                            

Block Corruptions

Success Factor DATA CORRUPTION PREVENTION BEST PRACTICES
Recommendation
 The V$DATABASE_BLOCK_CORRUPTION view displays blocks marked corrupt by Oracle Database components such as RMAN commands, ANALYZE, dbv, SQL queries, and so on. Any process that encounters a corrupt block records the block corruption in this view. Repair techniques include block media recovery, restoring data files, recovering with incremental backups, and block newing. Block media recovery can repair physical corruptions, but not logical corruptions. It is also recommended to use the RMAN "CHECK LOGICAL" option to check for data block corruptions periodically. Please consult the Oracle® Database Backup and Recovery User's Guide for repair instructions.
 
Needs attention on -
Passed on hbdbuat

Status on hbdbuat:
PASS => No reported block corruptions in V$DATABASE_BLOCK_CORRUPTIONS

 0 block_corruptions found in v$database_block_corruptions                       

Check for parameter sysstat|9.0.4|11.el6|x86_64

Recommendation
 Please review MOS Note 169706.1 - Oracle Database on Unix AIX,HP-UX,Linux,Mac OS X,Solaris,Tru64 Unix Operating Systems Installation and Configuration Requirements Quick Reference (8.0.5 to 11.2)
 
Needs attention on -
Passed on d4jtfmcvurd02, d4jtfmcvurd01

Status on d4jtfmcvurd02:
PASS => Package sysstat-9.0.4-11.el6-x86_64 meets or exceeds recommendation

sysstat|9.0.4|20.el6|x86_64

Status on d4jtfmcvurd01:
PASS => Package sysstat-9.0.4-11.el6-x86_64 meets or exceeds recommendation

sysstat|9.0.4|20.el6|x86_64

Check for parameter libgcc|4.4.4|13.el6|i686

Recommendation
 Please review MOS Note 169706.1 - Oracle Database on Unix AIX,HP-UX,Linux,Mac OS X,Solaris,Tru64 Unix Operating Systems Installation and Configuration Requirements Quick Reference (8.0.5 to 11.2)
 
Needs attention on -
Passed on d4jtfmcvurd02, d4jtfmcvurd01

Status on d4jtfmcvurd02:
PASS => Package libgcc-4.4.4-13.el6-i686 meets or exceeds recommendation

libgcc|4.4.7|3.el6|x86_64
libgcc|4.4.7|3.el6|i686

Status on d4jtfmcvurd01:
PASS => Package libgcc-4.4.4-13.el6-i686 meets or exceeds recommendation

libgcc|4.4.7|3.el6|x86_64
libgcc|4.4.7|3.el6|i686

Check for parameter binutils|2.20.51.0.2|5.11.el6|x86_64

Recommendation
 Please review MOS Note 169706.1 - Oracle Database on Unix AIX,HP-UX,Linux,Mac OS X,Solaris,Tru64 Unix Operating Systems Installation and Configuration Requirements Quick Reference (8.0.5 to 11.2)
 
Needs attention on -
Passed on d4jtfmcvurd02, d4jtfmcvurd01

Status on d4jtfmcvurd02:
PASS => Package binutils-2.20.51.0.2-5.11.el6-x86_64 meets or exceeds recommendation

binutils-devel|2.20.51.0.2|5.36.el6|x86_64
binutils|2.20.51.0.2|5.36.el6|x86_64

Status on d4jtfmcvurd01:
PASS => Package binutils-2.20.51.0.2-5.11.el6-x86_64 meets or exceeds recommendation

binutils-devel|2.20.51.0.2|5.36.el6|x86_64
binutils|2.20.51.0.2|5.36.el6|x86_64

Check for parameter glibc|2.12|1.7.el6|x86_64

Recommendation
 Please review MOS Note 169706.1 - Oracle Database on Unix AIX,HP-UX,Linux,Mac OS X,Solaris,Tru64 Unix Operating Systems Installation and Configuration Requirements Quick Reference (8.0.5 to 11.2)
 
Needs attention on -
Passed on d4jtfmcvurd02, d4jtfmcvurd01

Status on d4jtfmcvurd02:
PASS => Package glibc-2.12-1.7.el6-x86_64 meets or exceeds recommendation

glibc-devel|2.12|1.107.el6|x86_64
glibc|2.12|1.107.el6|x86_64
glibc|2.12|1.107.el6|i686
glibc-headers|2.12|1.107.el6|x86_64
compat-glibc|2.5|46.2|x86_64
glibc-common|2.12|1.107.el6|x86_64
compat-glibc-headers|2.5|46.2|x86_64

Status on d4jtfmcvurd01:
PASS => Package glibc-2.12-1.7.el6-x86_64 meets or exceeds recommendation

glibc-devel|2.12|1.107.el6|x86_64
glibc|2.12|1.107.el6|x86_64
glibc|2.12|1.107.el6|i686
glibc-headers|2.12|1.107.el6|x86_64
compat-glibc|2.5|46.2|x86_64
glibc-common|2.12|1.107.el6|x86_64
compat-glibc-headers|2.5|46.2|x86_64

Check for parameter libstdc++|4.4.4|13.el6|x86_64

Recommendation
 Please review MOS Note 169706.1 - Oracle Database on Unix AIX,HP-UX,Linux,Mac OS X,Solaris,Tru64 Unix Operating Systems Installation and Configuration Requirements Quick Reference (8.0.5 to 11.2)
 
Needs attention on -
Passed on d4jtfmcvurd02, d4jtfmcvurd01

Status on d4jtfmcvurd02:
PASS => Package libstdc++-4.4.4-13.el6-x86_64 meets or exceeds recommendation

libstdc++|4.4.7|3.el6|x86_64
compat-libstdc++-296|2.96|144.el6|i686
libstdc++-devel|4.4.7|3.el6|x86_64
compat-libstdc++-33|3.2.3|69.el6|x86_64
libstdc++-docs|4.4.7|3.el6|x86_64

Status on d4jtfmcvurd01:
PASS => Package libstdc++-4.4.4-13.el6-x86_64 meets or exceeds recommendation

libstdc++|4.4.7|3.el6|x86_64
compat-libstdc++-296|2.96|144.el6|i686
libstdc++-devel|4.4.7|3.el6|x86_64
compat-libstdc++-33|3.2.3|69.el6|x86_64
libstdc++-docs|4.4.7|3.el6|x86_64

Check for parameter libstdc++|4.4.4|13.el6|i686

Recommendation
 Please review MOS Note 169706.1 - Oracle Database on Unix AIX,HP-UX,Linux,Mac OS X,Solaris,Tru64 Unix Operating Systems Installation and Configuration Requirements Quick Reference (8.0.5 to 11.2)
 
Needs attention on -
Passed on d4jtfmcvurd02, d4jtfmcvurd01

Status on d4jtfmcvurd02:
PASS => Package libstdc++-4.4.4-13.el6-i686 meets or exceeds recommendation

libstdc++|4.4.7|3.el6|x86_64
compat-libstdc++-296|2.96|144.el6|i686
libstdc++-devel|4.4.7|3.el6|x86_64
compat-libstdc++-33|3.2.3|69.el6|x86_64
libstdc++-docs|4.4.7|3.el6|x86_64

Status on d4jtfmcvurd01:
PASS => Package libstdc++-4.4.4-13.el6-i686 meets or exceeds recommendation

libstdc++|4.4.7|3.el6|x86_64
compat-libstdc++-296|2.96|144.el6|i686
libstdc++-devel|4.4.7|3.el6|x86_64
compat-libstdc++-33|3.2.3|69.el6|x86_64
libstdc++-docs|4.4.7|3.el6|x86_64

Check for parameter glibc|2.12|1.7.el6|i686

Recommendation
 Please review MOS Note 169706.1 - Oracle Database on Unix AIX,HP-UX,Linux,Mac OS X,Solaris,Tru64 Unix Operating Systems Installation and Configuration Requirements Quick Reference (8.0.5 to 11.2)
 
Needs attention on -
Passed on d4jtfmcvurd02, d4jtfmcvurd01

Status on d4jtfmcvurd02:
PASS => Package glibc-2.12-1.7.el6-i686 meets or exceeds recommendation

glibc-devel|2.12|1.107.el6|x86_64
glibc|2.12|1.107.el6|x86_64
glibc|2.12|1.107.el6|i686
glibc-headers|2.12|1.107.el6|x86_64
compat-glibc|2.5|46.2|x86_64
glibc-common|2.12|1.107.el6|x86_64
compat-glibc-headers|2.5|46.2|x86_64

Status on d4jtfmcvurd01:
PASS => Package glibc-2.12-1.7.el6-i686 meets or exceeds recommendation

glibc-devel|2.12|1.107.el6|x86_64
glibc|2.12|1.107.el6|x86_64
glibc|2.12|1.107.el6|i686
glibc-headers|2.12|1.107.el6|x86_64
compat-glibc|2.5|46.2|x86_64
glibc-common|2.12|1.107.el6|x86_64
compat-glibc-headers|2.5|46.2|x86_64

Check for parameter gcc|4.4.4|13.el6|x86_64

Recommendation
 Please review MOS Note 169706.1 - Oracle Database on Unix AIX,HP-UX,Linux,Mac OS X,Solaris,Tru64 Unix Operating Systems Installation and Configuration Requirements Quick Reference (8.0.5 to 11.2)
 
Needs attention on -
Passed on d4jtfmcvurd02, d4jtfmcvurd01

Status on d4jtfmcvurd02:
PASS => Package gcc-4.4.4-13.el6-x86_64 meets or exceeds recommendation

gcc-objc|4.4.7|3.el6|x86_64
gcc-objc++|4.4.7|3.el6|x86_64
compat-gcc-34|3.4.6|19.el6|x86_64
gcc-gfortran|4.4.7|3.el6|x86_64
gcc-c++|4.4.7|3.el6|x86_64
compat-gcc-34-c++|3.4.6|19.el6|x86_64
compat-gcc-34-g77|3.4.6|19.el6|x86_64
gcc-java|4.4.7|3.el6|x86_64
gcc-gnat|4.4.7|3.el6|x86_64
gcc|4.4.7|3.el6|x86_64

Status on d4jtfmcvurd01:
PASS => Package gcc-4.4.4-13.el6-x86_64 meets or exceeds recommendation

gcc-objc|4.4.7|3.el6|x86_64
gcc-objc++|4.4.7|3.el6|x86_64
compat-gcc-34|3.4.6|19.el6|x86_64
gcc-gfortran|4.4.7|3.el6|x86_64
gcc-c++|4.4.7|3.el6|x86_64
compat-gcc-34-c++|3.4.6|19.el6|x86_64
compat-gcc-34-g77|3.4.6|19.el6|x86_64
gcc-java|4.4.7|3.el6|x86_64
gcc-gnat|4.4.7|3.el6|x86_64
gcc|4.4.7|3.el6|x86_64

Check for parameter make|3.81|19.el6|

Recommendation
 Please review MOS Note 169706.1 - Oracle Database on Unix AIX,HP-UX,Linux,Mac OS X,Solaris,Tru64 Unix Operating Systems Installation and Configuration Requirements Quick Reference (8.0.5 to 11.2)
 
Needs attention on -
Passed on d4jtfmcvurd02, d4jtfmcvurd01

Status on d4jtfmcvurd02:
PASS => Package make-3.81-19.el6 meets or exceeds recommendation

make|3.81|20.el6|x86_64

Status on d4jtfmcvurd01:
PASS => Package make-3.81-19.el6 meets or exceeds recommendation

make|3.81|20.el6|x86_64

Check for parameter libstdc++-devel|4.4.4|13.el6|i686

Recommendation
 Please review MOS Note 169706.1 - Oracle Database on Unix AIX,HP-UX,Linux,Mac OS X,Solaris,Tru64 Unix Operating Systems Installation and Configuration Requirements Quick Reference (8.0.5 to 11.2)
 
Needs attention on -
Passed on d4jtfmcvurd02, d4jtfmcvurd01

Status on d4jtfmcvurd02:
PASS => Package libstdc++-devel-4.4.4-13.el6-i686 meets or exceeds recommendation

libstdc++-devel|4.4.7|3.el6|x86_64

Status on d4jtfmcvurd01:
PASS => Package libstdc++-devel-4.4.4-13.el6-i686 meets or exceeds recommendation

libstdc++-devel|4.4.7|3.el6|x86_64

Check for parameter libaio-devel|0.3.107|10.el6|x86_64

Recommendation
 Please review MOS Note 169706.1 - Oracle Database on Unix AIX,HP-UX,Linux,Mac OS X,Solaris,Tru64 Unix Operating Systems Installation and Configuration Requirements Quick Reference (8.0.5 to 11.2)
 
Needs attention on -
Passed on d4jtfmcvurd02, d4jtfmcvurd01

Status on d4jtfmcvurd02:
PASS => Package libaio-devel-0.3.107-10.el6-x86_64 meets or exceeds recommendation

libaio-devel|0.3.107|10.el6|x86_64
libaio-devel|0.3.107|10.el6|i686

Status on d4jtfmcvurd01:
PASS => Package libaio-devel-0.3.107-10.el6-x86_64 meets or exceeds recommendation

libaio-devel|0.3.107|10.el6|x86_64
libaio-devel|0.3.107|10.el6|i686

Check for parameter libaio|0.3.107|10.el6|x86_64

Recommendation
 Please review MOS Note 169706.1 - Oracle Database on Unix AIX,HP-UX,Linux,Mac OS X,Solaris,Tru64 Unix Operating Systems Installation and Configuration Requirements Quick Reference (8.0.5 to 11.2)
 
Needs attention on -
Passed on d4jtfmcvurd02, d4jtfmcvurd01

Status on d4jtfmcvurd02:
PASS => Package libaio-0.3.107-10.el6-x86_64 meets or exceeds recommendation

libaio-devel|0.3.107|10.el6|x86_64
libaio|0.3.107|10.el6|x86_64
libaio-devel|0.3.107|10.el6|i686
libaio|0.3.107|10.el6|i686

Status on d4jtfmcvurd01:
PASS => Package libaio-0.3.107-10.el6-x86_64 meets or exceeds recommendation

libaio-devel|0.3.107|10.el6|x86_64
libaio|0.3.107|10.el6|x86_64
libaio-devel|0.3.107|10.el6|i686
libaio|0.3.107|10.el6|i686

Check for parameter unixODBC-devel|2.2.14|11.el6|i686

Recommendation
 Please review MOS Note 169706.1 - Oracle Database on Unix AIX,HP-UX,Linux,Mac OS X,Solaris,Tru64 Unix Operating Systems Installation and Configuration Requirements Quick Reference (8.0.5 to 11.2)
 
Needs attention on -
Passed on d4jtfmcvurd02, d4jtfmcvurd01

Status on d4jtfmcvurd02:
PASS => Package unixODBC-devel-2.2.14-11.el6-i686 meets or exceeds recommendation

unixODBC-devel|2.2.14|12.el6_3|i686
unixODBC-devel|2.2.14|12.el6_3|x86_64

Status on d4jtfmcvurd01:
PASS => Package unixODBC-devel-2.2.14-11.el6-i686 meets or exceeds recommendation

unixODBC-devel|2.2.14|12.el6_3|i686
unixODBC-devel|2.2.14|12.el6_3|x86_64

Check for parameter compat-libstdc++-33|3.2.3|69.el6|i686

Recommendation
 Please review MOS Note 169706.1 - Oracle Database on Unix AIX,HP-UX,Linux,Mac OS X,Solaris,Tru64 Unix Operating Systems Installation and Configuration Requirements Quick Reference (8.0.5 to 11.2)
 
Needs attention on -
Passed on d4jtfmcvurd02, d4jtfmcvurd01

Status on d4jtfmcvurd02:
PASS => Package compat-libstdc++-33-3.2.3-69.el6-i686 meets or exceeds recommendation

compat-libstdc++-33|3.2.3|69.el6|x86_64

Status on d4jtfmcvurd01:
PASS => Package compat-libstdc++-33-3.2.3-69.el6-i686 meets or exceeds recommendation

compat-libstdc++-33|3.2.3|69.el6|x86_64
Top

Check for parameter glibc-devel|2.12|1.7.el6|x86_64

Recommendation
 Please review MOS Note 169706.1 - Oracle Database on Unix AIX,HP-UX,Linux,Mac OS X,Solaris,Tru64 Unix Operating Systems Installation and Configuration Requirements Quick Reference (8.0.5 to 11.2)
 
Needs attention on -
Passed on d4jtfmcvurd02, d4jtfmcvurd01

Status on d4jtfmcvurd02:
PASS => Package glibc-devel-2.12-1.7.el6-x86_64 meets or exceeds recommendation

glibc-devel|2.12|1.107.el6|x86_64

Status on d4jtfmcvurd01:
PASS => Package glibc-devel-2.12-1.7.el6-x86_64 meets or exceeds recommendation

glibc-devel|2.12|1.107.el6|x86_64
Top

Check for parameter glibc-devel|2.12|1.7.el6|i686

Recommendation
 Please review MOS Note 169706.1 - Oracle Database on Unix AIX,HP-UX,Linux,Mac OS X,Solaris,Tru64 Unix Operating Systems Installation and Configuration Requirements Quick Reference (8.0.5 to 11.2)
 
Needs attention on -
Passed on d4jtfmcvurd02, d4jtfmcvurd01

Status on d4jtfmcvurd02:
PASS => Package glibc-devel-2.12-1.7.el6-i686 meets or exceeds recommendation

glibc-devel|2.12|1.107.el6|x86_64

Status on d4jtfmcvurd01:
PASS => Package glibc-devel-2.12-1.7.el6-i686 meets or exceeds recommendation

glibc-devel|2.12|1.107.el6|x86_64
Top

Check for parameter compat-libcap1|1.10|1|x86_64

Recommendation
 Please review MOS Note 169706.1 - Oracle Database on Unix AIX,HP-UX,Linux,Mac OS X,Solaris,Tru64 Unix Operating Systems Installation and Configuration Requirements Quick Reference (8.0.5 to 11.2)
 
Needs attention on -
Passed on d4jtfmcvurd02, d4jtfmcvurd01

Status on d4jtfmcvurd02:
PASS => Package compat-libcap1-1.10-1-x86_64 meets or exceeds recommendation

compat-libcap1|1.10|1|x86_64

Status on d4jtfmcvurd01:
PASS => Package compat-libcap1-1.10-1-x86_64 meets or exceeds recommendation

compat-libcap1|1.10|1|x86_64
Top

Check for parameter ksh|20100621|12.el6|x86_64

Recommendation
 Please review MOS Note 169706.1 - Oracle Database on Unix AIX,HP-UX,Linux,Mac OS X,Solaris,Tru64 Unix Operating Systems Installation and Configuration Requirements Quick Reference (8.0.5 to 11.2)
 
Needs attention on d4jtfmcvurd02, d4jtfmcvurd01
Passed on -

Status on d4jtfmcvurd02:
WARNING => Package ksh-20100621-12.el6-x86_64 is recommended but NOT installed


				

Status on d4jtfmcvurd01:
WARNING => Package ksh-20100621-12.el6-x86_64 is recommended but NOT installed
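The fix for this warning is simply to install the package on every node. A hedged sketch, assuming the nodes use yum with a configured OL6/RHEL6 repository (run as root); this is a command fragment to adapt, not something to run blindly:

```shell
# Install ksh on each cluster node (assumes yum and a configured repository).
yum install -y ksh

# Confirm the package is now present:
rpm -q ksh
```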


				
Top

Check for parameter unixODBC|2.2.14|11.el6|i686

Recommendation
 Please review MOS Note 169706.1 - Oracle Database on Unix AIX,HP-UX,Linux,Mac OS X,Solaris,Tru64 Unix Operating Systems Installation and Configuration Requirements Quick Reference (8.0.5 to 11.2)
 
Needs attention on -
Passed on d4jtfmcvurd02, d4jtfmcvurd01

Status on d4jtfmcvurd02:
PASS => Package unixODBC-2.2.14-11.el6-i686 meets or exceeds recommendation

unixODBC|2.2.14|12.el6_3|x86_64
unixODBC-devel|2.2.14|12.el6_3|i686
unixODBC-devel|2.2.14|12.el6_3|x86_64
unixODBC|2.2.14|12.el6_3|i686

Status on d4jtfmcvurd01:
PASS => Package unixODBC-2.2.14-11.el6-i686 meets or exceeds recommendation

unixODBC|2.2.14|12.el6_3|x86_64
unixODBC-devel|2.2.14|12.el6_3|i686
unixODBC-devel|2.2.14|12.el6_3|x86_64
unixODBC|2.2.14|12.el6_3|i686
Top

Check for parameter libaio|0.3.107|10.el6|i686

Recommendation
 Please review MOS Note 169706.1 - Oracle Database on Unix AIX,HP-UX,Linux,Mac OS X,Solaris,Tru64 Unix Operating Systems Installation and Configuration Requirements Quick Reference (8.0.5 to 11.2)
 
Needs attention on -
Passed on d4jtfmcvurd02, d4jtfmcvurd01

Status on d4jtfmcvurd02:
PASS => Package libaio-0.3.107-10.el6-i686 meets or exceeds recommendation

libaio-devel|0.3.107|10.el6|x86_64
libaio|0.3.107|10.el6|x86_64
libaio-devel|0.3.107|10.el6|i686
libaio|0.3.107|10.el6|i686

Status on d4jtfmcvurd01:
PASS => Package libaio-0.3.107-10.el6-i686 meets or exceeds recommendation

libaio-devel|0.3.107|10.el6|x86_64
libaio|0.3.107|10.el6|x86_64
libaio-devel|0.3.107|10.el6|i686
libaio|0.3.107|10.el6|i686
Top

Check for parameter libstdc++-devel|4.4.4|13.el6|x86_64

Recommendation
 Please review MOS Note 169706.1 - Oracle Database on Unix AIX,HP-UX,Linux,Mac OS X,Solaris,Tru64 Unix Operating Systems Installation and Configuration Requirements Quick Reference (8.0.5 to 11.2)
 
Needs attention on -
Passed on d4jtfmcvurd02, d4jtfmcvurd01

Status on d4jtfmcvurd02:
PASS => Package libstdc++-devel-4.4.4-13.el6-x86_64 meets or exceeds recommendation

libstdc++-devel|4.4.7|3.el6|x86_64

Status on d4jtfmcvurd01:
PASS => Package libstdc++-devel-4.4.4-13.el6-x86_64 meets or exceeds recommendation

libstdc++-devel|4.4.7|3.el6|x86_64
Top

Check for parameter gcc-c++|4.4.4|13.el6|x86_64

Recommendation
 Please review MOS Note 169706.1 - Oracle Database on Unix AIX,HP-UX,Linux,Mac OS X,Solaris,Tru64 Unix Operating Systems Installation and Configuration Requirements Quick Reference (8.0.5 to 11.2)
 
Needs attention on -
Passed on d4jtfmcvurd02, d4jtfmcvurd01

Status on d4jtfmcvurd02:
PASS => Package gcc-c++-4.4.4-13.el6-x86_64 meets or exceeds recommendation

gcc-c++|4.4.7|3.el6|x86_64

Status on d4jtfmcvurd01:
PASS => Package gcc-c++-4.4.4-13.el6-x86_64 meets or exceeds recommendation

gcc-c++|4.4.7|3.el6|x86_64
Top

Check for parameter compat-libstdc++-33|3.2.3|69.el6|x86_64

Recommendation
 Please review MOS Note 169706.1 - Oracle Database on Unix AIX,HP-UX,Linux,Mac OS X,Solaris,Tru64 Unix Operating Systems Installation and Configuration Requirements Quick Reference (8.0.5 to 11.2)
 
Needs attention on -
Passed on d4jtfmcvurd02, d4jtfmcvurd01

Status on d4jtfmcvurd02:
PASS => Package compat-libstdc++-33-3.2.3-69.el6-x86_64 meets or exceeds recommendation

compat-libstdc++-33|3.2.3|69.el6|x86_64

Status on d4jtfmcvurd01:
PASS => Package compat-libstdc++-33-3.2.3-69.el6-x86_64 meets or exceeds recommendation

compat-libstdc++-33|3.2.3|69.el6|x86_64
Top

Check for parameter libaio-devel|0.3.107|10.el6|i686

Recommendation
 Please review MOS Note 169706.1 - Oracle Database on Unix AIX,HP-UX,Linux,Mac OS X,Solaris,Tru64 Unix Operating Systems Installation and Configuration Requirements Quick Reference (8.0.5 to 11.2)
 
Needs attention on -
Passed on d4jtfmcvurd02, d4jtfmcvurd01

Status on d4jtfmcvurd02:
PASS => Package libaio-devel-0.3.107-10.el6-i686 meets or exceeds recommendation

libaio-devel|0.3.107|10.el6|x86_64
libaio-devel|0.3.107|10.el6|i686

Status on d4jtfmcvurd01:
PASS => Package libaio-devel-0.3.107-10.el6-i686 meets or exceeds recommendation

libaio-devel|0.3.107|10.el6|x86_64
libaio-devel|0.3.107|10.el6|i686
Top

Remote listener set to scan name

Recommendation
 For Oracle Database 11g Release 2, the REMOTE_LISTENER parameter should be set to the SCAN. This allows the instances to register with the SCAN Listeners to provide information on what services are being provided by the instance, the current load, and a recommendation on how many incoming connections should be directed to the instance.
 
Links
Needs attention on -
Passed on d4jtfmcvurd02, d4jtfmcvurd01

Status on d4jtfmcvurd02:
PASS => Remote listener is set to SCAN name

 DATA FROM D4JTFMCVURD02 - HBDBUAT DATABASE - REMOTE LISTENER SET TO SCAN NAME 



remote listener name = d4jtfmcvurd 

scan name =  d4jtfmcvurd

Status on d4jtfmcvurd01:
PASS => Remote listener is set to SCAN name

 DATA FROM D4JTFMCVURD01 - HBDBUAT DATABASE - REMOTE LISTENER SET TO SCAN NAME 



remote listener name = d4jtfmcvurd 

scan name =  d4jtfmcvurd
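The check above reduces to comparing two strings. A minimal sketch with the values hard-coded from the report (on a live cluster the SCAN name would come from `srvctl config scan` and the other value from `show parameter remote_listener` in SQL*Plus):

```shell
# Sample values copied from the report output above.
scan_name="d4jtfmcvurd"
remote_listener="d4jtfmcvurd"

if [ "$remote_listener" = "$scan_name" ]; then
    result="PASS: remote listener is set to SCAN name"
else
    result="FAIL: remote listener does not match SCAN name"
fi
echo "$result"
```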
Top

tnsping to remote listener parameter

Recommendation
 If the value of the remote_listener parameter is set to a non-pingable TNS alias, instances will not be cross-registered and will not balance the load across the cluster. In case of node or instance failure, connections may not fail over to a surviving node. For more information about remote_listener, load balancing, and failover, see the links below.

 
Links
Needs attention on -
Passed on d4jtfmcvurd02, d4jtfmcvurd01

Status on d4jtfmcvurd02:
PASS => Value of remote_listener parameter is able to tnsping

 DATA FROM D4JTFMCVURD02 - HBDBUAT DATABASE - TNSPING TO REMOTE LISTENER PARAMETER 




TNS Ping Utility for Linux: Version 11.2.0.4.0 - Production on 17-SEP-2014 09:07:31

Copyright (c) 1997, 2013, Oracle.  All rights reserved.

Used parameter files:

Used HOSTNAME adapter to resolve the alias
Attempting to contact (DESCRIPTION=(CONNECT_DATA=(SERVICE_NAME=))(ADDRESS=(PROTOCOL=TCP)(HOST=10.0.61.98)(PORT=1526)))
OK (0 msec)

Status on d4jtfmcvurd01:
PASS => Value of remote_listener parameter is able to tnsping

 DATA FROM D4JTFMCVURD01 - HBDBUAT DATABASE - TNSPING TO REMOTE LISTENER PARAMETER 




TNS Ping Utility for Linux: Version 11.2.0.4.0 - Production on 17-SEP-2014 09:21:36

Copyright (c) 1997, 2013, Oracle.  All rights reserved.

Used parameter files:

Used HOSTNAME adapter to resolve the alias
Attempting to contact (DESCRIPTION=(CONNECT_DATA=(SERVICE_NAME=))(ADDRESS=(PROTOCOL=TCP)(HOST=10.0.61.98)(PORT=1526)))
OK (0 msec)
Top

SCAN TNSNAMES.ora resolution

Recommendation
 
Benefit / Impact:

Database instance SCAN listener registration, connection failover and load balancing implications



Risk:

Potential problems with database instances registering with SCAN listeners, connection load balancing and fail over issues

Action / Repair:

No server-side TNSNAMES.ora should have an alias that resolves using the SCAN EZ syntax, e.g.:
 
myscan.oracle.com:1521= (DESCRIPTION= (ADDRESS_LIST=(ADDRESS=…

If any such syntax exists, for whatever reason such as third party application configuration, it should be removed. 
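A quick way to scan a server-side tnsnames.ora for such aliases is a regular-expression match on the `host:port=` form at the start of an entry. A sketch against a throwaway sample file (on a real system, point the grep at `$ORACLE_HOME/network/admin/tnsnames.ora`):

```shell
# Build a sample tnsnames.ora containing an EZ-syntax alias (illustrative only).
tns_file=$(mktemp)
cat > "$tns_file" <<'EOF'
myscan.oracle.com:1521= (DESCRIPTION= (ADDRESS_LIST=(ADDRESS=...)))
EOF

# An alias of the form host:port= is the EZ syntax that should not appear
# in a server-side tnsnames.ora.
if grep -Eq '^[A-Za-z0-9._-]+:[0-9]+[[:space:]]*=' "$tns_file"; then
    ez_found=yes
else
    ez_found=no
fi
echo "EZ-syntax alias present: $ez_found"
rm -f "$tns_file"
```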
 
Needs attention on -
Passed on d4jtfmcvurd02, d4jtfmcvurd01

Status on d4jtfmcvurd02:
PASS => No server-side TNSNAMES.ora aliases resolve using SCAN EZ syntax

 DATA FROM D4JTFMCVURD02 - HBDBUAT DATABASE - SCAN TNSNAMES.ORA RESOLUTION 



scan name = d4jtfmcvurd

 /oracle/app/oracle/product/11.2.0/db_1/network/admin/tnsnames.ora file is 


HBDBUAT =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = d4jtfmcvurd)(PORT = 1526))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = hbdbuat)
    )
  ) (output truncated)

Status on d4jtfmcvurd01:
PASS => No server-side TNSNAMES.ora aliases resolve using SCAN EZ syntax

 DATA FROM D4JTFMCVURD01 - HBDBUAT DATABASE - SCAN TNSNAMES.ORA RESOLUTION 



scan name = d4jtfmcvurd

 /oracle/app/oracle/product/11.2.0/db_1/network/admin/tnsnames.ora file is 

HBDBUAT =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = d4jtfmcvurd)(PORT = 1526))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = hbdbuat)
    )
  )



HBDBUAT1 = (output truncated)
Top

ezconnect configuration in sqlnet.ora

Recommendation
 EZCONNECT eliminates the need for service name lookups in tnsnames.ora files when connecting to an Oracle database across a TCP/IP network; in fact, no naming or directory system is required when using this method. It extends the functionality of the host naming method by enabling clients to connect to a database with an optional port and service name in addition to the host name of the database.
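Verifying this amounts to checking that EZCONNECT appears in the NAMES.DIRECTORY_PATH line of sqlnet.ora. A sketch against a sample file whose content mirrors the report output (substitute the real `$GRID_HOME/network/admin/sqlnet.ora` on a live system):

```shell
# Sample sqlnet.ora content (illustrative, copied from the report output).
sqlnet_file=$(mktemp)
cat > "$sqlnet_file" <<'EOF'
NAMES.DIRECTORY_PATH= (TNSNAMES, EZCONNECT)
ADR_BASE = /oracle/app/grid
EOF

# PASS if EZCONNECT is listed in the naming method search path.
if grep -Ei '^NAMES\.DIRECTORY_PATH.*EZCONNECT' "$sqlnet_file" >/dev/null; then
    ez_ok=PASS
else
    ez_ok=FAIL
fi
echo "$ez_ok"
rm -f "$sqlnet_file"
```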
 
Links
Needs attention on -
Passed on d4jtfmcvurd02, d4jtfmcvurd01

Status on d4jtfmcvurd02:
PASS => ezconnect is configured in sqlnet.ora

 DATA FROM D4JTFMCVURD02 - EZCONNECT CONFIGURATION IN SQLNET.ORA 




NAMES.DIRECTORY_PATH= (TNSNAMES, EZCONNECT)

ADR_BASE = /oracle/app/grid


Status on d4jtfmcvurd01:
PASS => ezconnect is configured in sqlnet.ora

 DATA FROM D4JTFMCVURD01 - EZCONNECT CONFIGURATION IN SQLNET.ORA 




NAMES.DIRECTORY_PATH= (TNSNAMES, EZCONNECT)

ADR_BASE = /oracle/app/grid

Top

Local listener set to node VIP

Recommendation
 The LOCAL_LISTENER parameter should be set to the node VIP. If you need fully qualified domain names, ensure that LOCAL_LISTENER is set to the fully qualified domain name (node-vip.mycompany.com). By default, a local listener is created during cluster configuration that runs out of the grid infrastructure home and listens on the specified port (default is 1521) of the node VIP.
 
Links
Needs attention on -
Passed on d4jtfmcvurd02, d4jtfmcvurd01

Status on d4jtfmcvurd02:
PASS => Local listener init parameter is set to local node VIP

 DATA FROM D4JTFMCVURD02 - HBDBUAT DATABASE - LOCAL LISTENER SET TO NODE VIP 



Local Listener= (ADDRESS=(PROTOCOL=TCP)(HOST=10.0.61.97)(PORT=1526)) VIP Names=d4jtfmcvurd02-vip VIP IPs=10.0.61.97

Status on d4jtfmcvurd01:
PASS => Local listener init parameter is set to local node VIP

 DATA FROM D4JTFMCVURD01 - HBDBUAT DATABASE - LOCAL LISTENER SET TO NODE VIP 



Local Listener= (ADDRESS=(PROTOCOL=TCP)(HOST=10.0.61.96)(PORT=1526)) VIP Names=d4jtfmcvurd01-vip VIP IPs=10.0.61.96
Top

Check for parameter parallel_execution_message_size

Success Factor CONFIGURE PARALLEL_EXECUTION_MESSAGE_SIZE FOR BETTER PARALLELISM PERFORMANCE
Recommendation
 
Benefit / Impact: 

Experience and testing has shown that certain database initialization parameters should be set at specific values. These are the best practice values set at deployment time. By setting these database initialization parameters as recommended, known problems may be avoided and performance maximized.
The parameters are common to all database instances. The impact of setting these parameters is minimal.
The performance related settings provide guidance to maintain highest stability without sacrificing performance. Changing the default performance settings can be done after careful performance evaluation and clear understanding of the performance impact.

Risk: 

If the database initialization parameters are not set as recommended, a variety of issues may be encountered, depending upon which initialization parameter is not set as recommended, and the actual set value.


Action / Repair: 

Setting PARALLEL_EXECUTION_MESSAGE_SIZE = 16384 improves Parallel Query performance.
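A hedged repair sketch: parallel_execution_message_size is not dynamically modifiable in 11.2, so the assumption here is SCOPE=SPFILE plus a rolling instance restart. This is a command fragment to adapt, not a definitive procedure:

```shell
# Set the recommended value in the spfile; takes effect after instance restart.
sqlplus -s / as sysdba <<'EOF'
ALTER SYSTEM SET parallel_execution_message_size=16384 SCOPE=SPFILE SID='*';
EOF
```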
 
Links
Needs attention on -
Passed on hbdbuat2, hbdbuat1

Status on hbdbuat2:
PASS => Database Parameter parallel_execution_message_size is set to the recommended value

hbdbuat2.parallel_execution_message_size = 16384                                

Status on hbdbuat1:
PASS => Database Parameter parallel_execution_message_size is set to the recommended value

hbdbuat1.parallel_execution_message_size = 16384                                
Top

Hang and Deadlock material

Recommendation
 Ways to troubleshoot database hangs and deadlocks:

1. V$Wait_Chains - The database (the dia0 background process) samples local hanganalyze every 3 seconds and global hanganalyze every 10 seconds and stores the results in memory. V$Wait_Chains is an interface to this "hanganalyze cache": at any moment you can query v$wait_chains and see what hanganalyze knows about the current wait chains. In 11.2, with a live hang, this is the first thing to use to identify the blockers and final blockers. For more information, see NOTE:1428210.1 - Troubleshooting Database Contention With V$Wait_Chains.

2. Procwatcher - In version 11, this script samples v$wait_chains every 90 seconds and collects interesting information about the processes involved in wait chains (short stacks, current wait, current SQL, recent ASH data, locks held, locks waited for, latches held, etc.). The script works in both RAC and non-RAC environments and is a proactive way to trap hang data even if you cannot predict when the problem will happen. Some very large customers are proactively using, or planning to use, this script on hundreds of systems to catch session contention. For more information, see NOTE:459694.1 - Procwatcher: Script to Monitor and Examine Oracle DB and Clusterware Processes and NOTE:1352623.1 - How To Troubleshoot Database Contention With Procwatcher.

3. Hanganalyze Levels - Hanganalyze format and output is completely different starting in version 11.  In general we recommend getting hanganalyze dumps at level 3. Make sure you always get a global hanganalyze in RAC.

4. Systemstate Levels - With a large SGA and a large number of processes, systemstate dumps at level 266 or 267 can dump a HUGE amount of data and can take hours on large systems. That situation should be avoided. One lightweight alternative is a systemstate dump at level 258: basically a level 2 systemstate plus short stacks, it is much cheaper than level 266 or 267 while still capturing the most important information support engineers typically look at (process info, latch info, wait events, short stacks, and more) at a fraction of the cost.

Note that bugs 11800959 and 11827088 have significant impact on systemstate dumps.  If not on 11.2.0.3+ or a version that has both fixes applied, systemstate dumps at levels 10, 11, 266, and 267 can be VERY expensive in RAC.  In versions < 11.2.0.3 without these fixes applied, systemstate dumps at level 258 would typically be advised. 

See NOTE:1353073.1 - Exadata Diagnostic Collection Guide; although written for Exadata, many of its concepts for hang detection and analysis apply equally to normal RAC systems.

5. Hang Management and LMHB provide good proactive hang-related data. For Hang Management, see NOTE:1270563.1 - Hang Manager 11.2.0.2.
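For reference, a global hanganalyze at level 3 and the lightweight level 258 systemstate described above can be taken from a sysdba session roughly as follows (standard oradebug syntax; adjust to your version, and be aware of the RAC dump-cost caveats noted above):

```shell
sqlplus -s / as sysdba <<'EOF'
oradebug setmypid
-- -g all makes the dump global (all RAC instances)
oradebug -g all hanganalyze 3
oradebug -g all dump systemstate 258
EOF
```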
 
Links
Needs attention on d4jtfmcvurd02
Passed on -
Top

Check for parameter recyclebin

Success Factor LOGICAL CORRUPTION PREVENTION BEST PRACTICES
Recommendation
 
Benefit / Impact: 
  
Experience and testing has shown that certain database initialization parameters should be set at specific values. These are the best practice values set at deployment time. By setting these database initialization parameters as recommended, known problems may be avoided and performance maximized.
The parameters are common to all database instances. The impact of setting these parameters is minimal. The performance related settings provide guidance to maintain highest stability without sacrificing performance. Changing the default performance settings can be done after careful performance evaluation and clear understanding of the performance impact.
  


Risk: 
  
If the database initialization parameters are not set as recommended, a variety of issues may be encountered, depending upon which initialization parameter is not set as recommended, and the actual set value.
  

Action / Repair: 
  
"RECYCLEBIN = ON" provides higher availability by enabling the Flashback Drop feature. "ON" is the default value and should not be changed.

 
Needs attention on -
Passed on hbdbuat2, hbdbuat1

Status on hbdbuat2:
PASS => RECYCLEBIN on PRIMARY is set to the recommended value

hbdbuat2.recyclebin = on                                                        

Status on hbdbuat1:
PASS => RECYCLEBIN on PRIMARY is set to the recommended value

hbdbuat1.recyclebin = on                                                        
Top

Check ORACLE_PATCH 17478514 for RDBMS home

Recommendation
 Result: A - SCN Headroom is good.

Apply the latest recommended CPU or PSU patches based on your maintenance schedule.

For further information review the referenced MOS document.
 
Links
Needs attention on d4jtfmcvurd02:/oracle/app/oracle/product/11.2.0/db_1, d4jtfmcvurd01:/oracle/app/oracle/product/11.2.0/db_1
Passed on -

Status on d4jtfmcvurd02:/oracle/app/oracle/product/11.2.0/db_1:
INFO => IMPORTANT: Oracle Database Patch 17478514 PSU is NOT applied to RDBMS Home

 Oracle Interim Patch Installer version 11.2.0.3.4
Copyright (c) 2012, Oracle Corporation.  All rights reserved.


Oracle Home       : /oracle/app/oracle/product/11.2.0/db_1
Central Inventory : /oracle/app/oraInventory
   from           : /oracle/app/oracle/product/11.2.0/db_1/oraInst.loc
OPatch version    : 11.2.0.3.4
OUI version       : 11.2.0.4.0
Log file location : /oracle/app/oracle/product/11.2.0/db_1/cfgtoollogs/opatch/opatch2014-09-17_09-04-03AM_1.log

Lsinventory Output file location : /oracle/app/oracle/product/11.2.0/db_1/cfgtoollogs/opatch/lsinv/lsinventory2014-09-17_09-04-03AM.txt

------------------------------------------------------------------------------------------------------
Installed Top-level Products (1): 

Oracle Database 11g                                                  11.2.0.4.0
There are 1 products installed in this Oracle Home. (output truncated)

Status on d4jtfmcvurd01:/oracle/app/oracle/product/11.2.0/db_1:
INFO => IMPORTANT: Oracle Database Patch 17478514 PSU is NOT applied to RDBMS Home

 Oracle Interim Patch Installer version 11.2.0.3.4
Copyright (c) 2012, Oracle Corporation.  All rights reserved.


Oracle Home       : /oracle/app/oracle/product/11.2.0/db_1
Central Inventory : /oracle/app/oraInventory
   from           : /oracle/app/oracle/product/11.2.0/db_1/oraInst.loc
OPatch version    : 11.2.0.3.4
OUI version       : 11.2.0.4.0
Log file location : /oracle/app/oracle/product/11.2.0/db_1/cfgtoollogs/opatch/opatch2014-09-17_09-11-04AM_1.log

Lsinventory Output file location : /oracle/app/oracle/product/11.2.0/db_1/cfgtoollogs/opatch/lsinv/lsinventory2014-09-17_09-11-04AM.txt

------------------------------------------------------------------------------------------------------
Installed Top-level Products (1): 

Oracle Database 11g                                                  11.2.0.4.0
There are 1 products installed in this Oracle Home. (output truncated)
Top

Check for parameter fast_start_mttr_target

Success Factor COMPUTER FAILURE PREVENTION BEST PRACTICES
Recommendation
 
Benefit / Impact:

To optimize run time performance for write/redo generation intensive workloads.  Increasing fast_start_mttr_target from the default will reduce checkpoint writes from DBWR processes, making more room for LGWR IO.



Risk:

Performance implications if set too aggressively (a lower setting is more aggressive); this is a trade-off between performance and availability. The trade-off and the type of workload need to be evaluated, and a decision made whether the default is needed to meet RTO objectives. fast_start_mttr_target should be set to the desired RTO (Recovery Time Objective) while still maintaining performance SLAs, so this needs to be evaluated on a case-by-case basis.


Action / Repair:

Consider increasing fast_start_mttr_target to 300 (five minutes) from the default. The trade-off is that instance recovery will run longer, so if instance recovery is more important than performance, then keep fast_start_mttr_target at the default.

Keep in mind that an application with inadequately sized redo logs will likely not see an effect from this change due to frequent log switches, so follow best practices for sizing redo logs.

Considerations for direct writes in a data warehouse type of application: even though direct operations do not use the buffer cache, fast_start_mttr_target is very effective at controlling crash recovery time because it ensures adequate checkpointing for the few buffers that are resident (e.g., undo segment headers).
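fast_start_mttr_target is a dynamic parameter, so the recommendation can be applied online once the trade-off above has been evaluated. A hedged sketch (300 is the value suggested by this check; validate against your own RTO first):

```shell
sqlplus -s / as sysdba <<'EOF'
ALTER SYSTEM SET fast_start_mttr_target=300 SCOPE=BOTH SID='*';
EOF
```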
 
Needs attention on hbdbuat2, hbdbuat1
Passed on -

Status on hbdbuat2:
WARNING => fast_start_mttr_target should be greater than or equal to 300.

hbdbuat2.fast_start_mttr_target = 0                                             

Status on hbdbuat1:
WARNING => fast_start_mttr_target should be greater than or equal to 300.

hbdbuat1.fast_start_mttr_target = 0                                             
Top

Check for parameter undo_retention

Success Factor LOGICAL CORRUPTION PREVENTION BEST PRACTICES
Recommendation
 Oracle Flashback Technology enables fast logical failure repair. Oracle recommends that you use automatic undo management with sufficient space to attain your desired undo retention guarantee, enable Oracle Flashback Database, and allocate sufficient space and I/O bandwidth in the fast recovery area. Application monitoring is required for early detection. Effective and fast repair comes from leveraging and rehearsing the most common application-specific logical failures and using the different flashback features effectively (e.g., flashback query, flashback version query, flashback transaction query, flashback transaction, flashback drop, flashback table, and flashback database).

Key HA Benefits:

With application monitoring and rehearsed repair actions with flashback technologies, application downtime can reduce from hours and days to the time to detect the logical inconsistency.

Fast repair for logical failures caused by malicious or accidental DML or DDL operations.

Effect fast point-in-time repair at the appropriate level of granularity: transaction, table, or database.
 
Questions:

Can your application or monitoring infrastructure detect logical inconsistencies?

Is your operations team prepared to use various flashback technologies to repair quickly and efficiently?

Are security practices enforced to prevent unauthorized privileges that can result in logical inconsistencies?
 
Needs attention on -
Passed on hbdbuat2, hbdbuat1

Status on hbdbuat2:
PASS => Database parameter UNDO_RETENTION on PRIMARY is not null

hbdbuat2.undo_retention = 900                                                   

Status on hbdbuat1:
PASS => Database parameter UNDO_RETENTION on PRIMARY is not null

hbdbuat1.undo_retention = 900                                                   
Top

Verify all "BIGFILE" tablespaces have non-default "MAXBYTES" values set

Recommendation
 
Benefit / Impact:

"MAXBYTES" is the SQL attribute that expresses the "MAXSIZE" value that is used in the DDL command to set "AUTOEXTEND" to "ON". By default, for a bigfile tablespace, the value is "3.5184E+13", or "35184372064256". The benefit of having "MAXBYTES" set at a non-default value for "BIGFILE" tablespaces is that a runaway operation or heavy simultaneous use (e.g., temp tablespace) cannot take up all the space in a diskgroup.

The impact of verifying that "MAXBYTES" is set to a non-default value is minimal. The impact of setting the "MAXSIZE" attribute to a non-default value varies depending upon whether it is done during database creation, file addition to a tablespace, or added to an existing file.



Risk:

The risk of running out of space in a diskgroup varies by application and cannot be quantified here. A diskgroup running out of space may impact the entire database as well as ASM operations (e.g., rebalance operations).


Action / Repair:

To obtain a list of file numbers and bigfile tablespaces that have the "MAXBYTES" attribute at the default value, enter the following sqlplus command logged into the database as sysdba:
select file_id, a.tablespace_name, autoextensible, maxbytes
from (select file_id, tablespace_name, autoextensible, maxbytes from dba_data_files where autoextensible='YES' and maxbytes = 35184372064256) a, (select tablespace_name from dba_tablespaces where bigfile='YES') b
where a.tablespace_name = b.tablespace_name
union
select file_id,a.tablespace_name, autoextensible, maxbytes
from (select file_id, tablespace_name, autoextensible, maxbytes from dba_temp_files where autoextensible='YES' and maxbytes = 35184372064256) a, (select tablespace_name from dba_tablespaces where bigfile='YES') b
where a.tablespace_name = b.tablespace_name;

The output should be: no rows returned.

If you see output similar to:

   FILE_ID TABLESPACE_NAME                AUT   MAXBYTES
---------- ------------------------------ --- ----------
         1 TEMP                           YES 3.5184E+13
         3 UNDOTBS1                       YES 3.5184E+13
         4 UNDOTBS2                       YES 3.5184E+13

Investigate and correct the condition.
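If the query does return rows, one hedged repair sketch is to cap each offending tablespace with an explicit MAXSIZE. The tablespace name and the 8T cap below are purely illustrative; size the cap to your diskgroup capacity (for bigfile tablespaces the ALTER TABLESPACE form acts on the single underlying file):

```shell
sqlplus -s / as sysdba <<'EOF'
-- Illustrative only: cap autoextend growth for a bigfile tablespace.
ALTER TABLESPACE TEMP AUTOEXTEND ON MAXSIZE 8T;
EOF
```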
 
Needs attention on -
Passed on hbdbuat

Status on hbdbuat:
PASS => All bigfile tablespaces have non-default maxbytes values set

 Query returned no rows which is expected when the SQL check passes.

Top

Clusterware status

Success Factor CLIENT FAILOVER OPERATIONAL BEST PRACTICES
Recommendation
 Oracle Clusterware is required for complete client failover integration. Please consult the following whitepaper for further information.
 
Links
Needs attention on -
Passed on d4jtfmcvurd02, d4jtfmcvurd01

Status on d4jtfmcvurd02:
PASS => Clusterware is running

 DATA FROM D4JTFMCVURD02 - CLUSTERWARE STATUS 



--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS       
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA01.dg
               ONLINE  ONLINE       d4jtfmcvurd01                                
               ONLINE  ONLINE       d4jtfmcvurd02                                
ora.LISTENER.lsnr
               ONLINE  ONLINE       d4jtfmcvurd01                                
               ONLINE  ONLINE       d4jtfmcvurd02                                
ora.OCR.dg
               ONLINE  ONLINE       d4jtfmcvurd01                                
               ONLINE  ONLINE       d4jtfmcvurd02                                
ora.asm
               ONLINE  ONLINE       d4jtfmcvurd01            Started (output truncated)

Status on d4jtfmcvurd01:
PASS => Clusterware is running

 DATA FROM D4JTFMCVURD01 - CLUSTERWARE STATUS 



--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS       
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA01.dg
               ONLINE  ONLINE       d4jtfmcvurd01                                
               ONLINE  ONLINE       d4jtfmcvurd02                                
ora.LISTENER.lsnr
               ONLINE  ONLINE       d4jtfmcvurd01                                
               ONLINE  ONLINE       d4jtfmcvurd02                                
ora.OCR.dg
               ONLINE  ONLINE       d4jtfmcvurd01                                
               ONLINE  ONLINE       d4jtfmcvurd02                                
ora.asm
               ONLINE  ONLINE       d4jtfmcvurd01            Started (output truncated)
Top

Flashback database on primary

Success Factor LOGICAL CORRUPTION PREVENTION BEST PRACTICES
Recommendation
 Oracle Flashback Technology enables fast logical failure repair. Oracle recommends that you use automatic undo management with sufficient space to attain your desired undo retention guarantee, enable Oracle Flashback Database, and allocate sufficient space and I/O bandwidth in the fast recovery area. Application monitoring is required for early detection. Effective and fast repair comes from leveraging and rehearsing the most common application-specific logical failures and using the different flashback features effectively (e.g., flashback query, flashback version query, flashback transaction query, flashback transaction, flashback drop, flashback table, and flashback database).

Key HA Benefits:

With application monitoring and rehearsed repair actions with flashback technologies, application downtime can reduce from hours and days to the time to detect the logical inconsistency.

Fast repair for logical failures caused by malicious or accidental DML or DDL operations.

Effect fast point-in-time repair at the appropriate level of granularity: transaction, table, or database.
 
Questions:

Can your application or monitoring infrastructure detect logical inconsistencies?

Is your operations team prepared to use various flashback technologies to repair quickly and efficiently?

Are security practices enforced to prevent unauthorized privileges that can result in logical inconsistencies?
 
Links
Needs attention on hbdbuat
Passed on -

Status on hbdbuat:
FAIL => Flashback on PRIMARY is not configured

 Flashback status = NO                                                           
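To clear this FAIL: from 11.2.0.2 onward, Flashback Database can be enabled without a restart, provided a fast recovery area is configured. A hedged sketch; the FRA size and the '+DATA01' destination (the diskgroup visible in the clusterware status above) are illustrative values to adapt:

```shell
sqlplus -s / as sysdba <<'EOF'
-- A fast recovery area must exist before enabling flashback (sizes illustrative).
ALTER SYSTEM SET db_recovery_file_dest_size=100G SCOPE=BOTH SID='*';
ALTER SYSTEM SET db_recovery_file_dest='+DATA01' SCOPE=BOTH SID='*';
ALTER DATABASE FLASHBACK ON;
EOF
```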
Top

Database init parameter DB_BLOCK_CHECKING

Recommendation
 
Benefit / Impact:

Initially, db_block_checking is set to OFF due to its potential performance impact. Performance testing is particularly important given that overhead is incurred on every block change. Block checking typically causes 1% to 10% overhead, but for update- and insert-intensive applications (such as Redo Apply at a standby database) the overhead can be much higher. OLTP compressed tables also require additional checks that can result in higher overhead depending on the frequency of updates to those tables. Workload-specific testing is required to assess whether the performance overhead is acceptable.


Risk:

If the database initialization parameters are not set as recommended, a variety of issues may be encountered, depending upon which initialization parameter is not set as recommended, and the actual set value.


Action / Repair:

Based on performance testing results, set DB_BLOCK_CHECKING on the primary or standby database to either MEDIUM or FULL, depending on the impact. If performance concerns prevent setting DB_BLOCK_CHECKING to either FULL or MEDIUM at a primary database, then it becomes even more important to enable it at the standby database. This protects the standby database from logical corruption that would otherwise go undetected at the primary database.
For higher data corruption detection and prevention, enable this setting, but evaluate the performance impact, which varies per workload.
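The comparison behind this warning can be sketched as follows; the helper name is hypothetical, and the passing values (MEDIUM, FULL) come from the recommendation above:

```shell
#!/bin/sh
# Classify a DB_BLOCK_CHECKING value: MEDIUM or FULL pass, anything
# else (including OFF/FALSE) warns -- a hypothetical helper, not orachk code.
check_db_block_checking() {
  case "$(echo "$1" | tr '[:lower:]' '[:upper:]')" in
    MEDIUM|FULL) echo "PASS" ;;
    *)           echo "WARNING" ;;
  esac
}

check_db_block_checking FALSE   # prints WARNING
```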

 
Links
Needs attention on d4jtfmcvurd02, d4jtfmcvurd01
Passed on -

Status on d4jtfmcvurd02:
WARNING => Database parameter DB_BLOCK_CHECKING on PRIMARY is NOT set to the recommended value.

 DATA FROM D4JTFMCVURD02 - HBDBUAT DATABASE - DATABASE INIT PARAMETER DB_BLOCK_CHECKING 



DB_BLOCK_CHECKING = FALSE

Status on d4jtfmcvurd01:
WARNING => Database parameter DB_BLOCK_CHECKING on PRIMARY is NOT set to the recommended value.

 DATA FROM D4JTFMCVURD01 - HBDBUAT DATABASE - DATABASE INIT PARAMETER DB_BLOCK_CHECKING 



DB_BLOCK_CHECKING = FALSE
Top

umask setting for RDBMS owner

Recommendation
 
 
Links
Needs attention on -
Passed on d4jtfmcvurd02, d4jtfmcvurd01

Status on d4jtfmcvurd02:
PASS => umask for RDBMS owner is set to 0022

 DATA FROM D4JTFMCVURD02 - UMASK SETTING FOR RDBMS OWNER 



0022

Status on d4jtfmcvurd01:
PASS => umask for RDBMS owner is set to 0022

 DATA FROM D4JTFMCVURD01 - UMASK SETTING FOR RDBMS OWNER 



0022
Top

Manage ASM Audit File Directory Growth with cron

Recommendation
 
Benefit / Impact:

The audit file destination directories for an ASM instance can grow to contain a very large number of files if they are not regularly maintained. Use the Linux cron(8) utility and the find(1) command to manage the number of files in the audit file destination directories.

The impact of using cron(8) and find(1) to manage the number of files in the audit file destination directories is minimal.



Risk:

Having a very large number of files can cause the file system to run out of free disk space or inodes, or can cause Oracle to run very slowly due to file system directory scaling limits, which can have the appearance that the ASM instance is hanging on startup.



Action / Repair:

Refer to the referenced MOS Note
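A common repair, in the spirit of the cron(8)/find(1) approach described above, is a small pruning helper like the following; the path and the 30-day retention are illustrative assumptions, so adjust both per the referenced MOS note:

```shell
#!/bin/sh
# prune_audit_files DIR DAYS: delete *.aud files in DIR older than DAYS days.
# Hypothetical helper; the path and retention below are example values only.
prune_audit_files() {
  find "$1" -maxdepth 1 -name '*.aud' -mtime "+$2" -delete
}

# Example invocation and a matching daily crontab entry (02:30):
# prune_audit_files /oracle/app/11.2.0/grid/rdbms/audit 30
# 30 2 * * * /usr/bin/find /oracle/app/11.2.0/grid/rdbms/audit -maxdepth 1 -name '*.aud' -mtime +30 -delete
```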
 
Links
Needs attention on -
Passed on d4jtfmcvurd02, d4jtfmcvurd01

Status on d4jtfmcvurd02:
PASS => ASM Audit file destination file count <= 100,000

 DATA FROM D4JTFMCVURD02 - MANAGE ASM AUDIT FILE DIRECTORY GROWTH WITH CRON 



Number of audit files at /oracle/app/11.2.0/grid/rdbms/audit = 619

Status on d4jtfmcvurd01:
PASS => ASM Audit file destination file count <= 100,000

 DATA FROM D4JTFMCVURD01 - MANAGE ASM AUDIT FILE DIRECTORY GROWTH WITH CRON 



Number of audit files at /oracle/app/11.2.0/grid/rdbms/audit = 123
Top

GI shell limits hard stack

Recommendation
 The hard stack shell limit for the Oracle Grid Infrastructure software install owner should be >= 10240.

What's being checked here is the /etc/security/limits.conf file as documented in 11gR2 Grid Infrastructure Installation Guide, section 2.15.3 Setting Resource Limits for the Oracle Software Installation Users.  

If the /etc/security/limits.conf file is not configured as described in the documentation, check the hard stack configuration while logged into the software owner account (e.g. grid):

$ ulimit -Hs
10240

As long as the hard stack limit is 10240 or above, the configuration should be OK.
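A sketch of automating that comparison (the helper name is hypothetical; `ulimit -Hs` reports the hard stack limit in KB, or `unlimited`):

```shell
#!/bin/sh
# check_hard_stack VALUE: compare a hard stack limit (KB, or "unlimited")
# against the 10240 KB recommendation. Hypothetical helper.
check_hard_stack() {
  if [ "$1" = "unlimited" ] || [ "$1" -ge 10240 ] 2>/dev/null; then
    echo "PASS"
  else
    echo "FAIL"
  fi
}

check_hard_stack "$(ulimit -Hs)"   # evaluates the current shell's limit
```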

 
Links
Needs attention on -
Passed on d4jtfmcvurd02

Status on d4jtfmcvurd02:
PASS => Shell limit hard stack for GI is configured according to recommendation

 DATA FROM D4JTFMCVURD02 FOR GRID INFASTRUCTURE USER SHELL LIMITS CONFIGURATION 



Soft limits(ulimit -Sa) 

core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 62834
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1024
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 10240
cpu time               (seconds, -t) unlimited
max user processes              (-u) 62834 Click for more data 
Top

Check for parameter asm_power_limit

Recommendation
 ASM_POWER_LIMIT specifies the maximum power on an Automatic Storage Management instance for disk rebalancing. The higher the limit, the faster rebalancing will complete. Lower values will take longer, but consume fewer processing and I/O resources.

Syntax to specify the power limit while adding or dropping a disk is: alter diskgroup <diskgroup_name> add disk '/dev/raw/raw37' rebalance power 10;
 
Needs attention on -
Passed on +ASM2, +ASM1

Status on +ASM2:
PASS => asm_power_limit is set to recommended value of 1

+ASM2.asm_power_limit = 1                                                       

Status on +ASM1:
PASS => asm_power_limit is set to recommended value of 1

+ASM1.asm_power_limit = 1                                                       
Top

NTP with correct setting

Success Factor MAKE SURE MACHINE CLOCKS ARE SYNCHRONIZED ON ALL NODES USING NTP
Recommendation
 Make sure machine clocks are synchronized on all nodes to the same NTP source.
Implement NTP (Network Time Protocol) on all nodes.
Prevents evictions and helps to facilitate problem diagnosis.

Also use the -x option (i.e. ntpd -x, xntpd -x) if available to prevent time from moving backwards in large steps. Slewing spreads a large time change across many small adjustments so that they do not impact CRS. Enterprise Linux: see /etc/sysconfig/ntpd; Solaris: set "slewalways yes" and "disable pll" in /etc/inet/ntp.conf.
For example:
       # Drop root to id 'ntp:ntp' by default.
       OPTIONS="-x -u ntp:ntp -p /var/run/ntpd.pid"
       # Set to 'yes' to sync hw clock after successful ntpdate
       SYNC_HWCLOCK=no
       # Additional options for ntpdate
       NTPDATE_OPTIONS=""

The time servers operate in a pyramid structure: the top of the NTP stack is usually an external time source (such as a GPS clock), and time then trickles down through the network switch stack to the connected servers.
This NTP stack acts as the NTP server; ensuring that all the RAC nodes act as clients to this server, with slewing enabled, keeps time changes to a minute amount.

Changes in global time that reconcile atomic-clock accuracy with the Earth's rotational wobble are thus accounted for with minimal effect. This is sometimes referred to as the "leap second" epoch (for example, one second was inserted between UTC 12/31/2008 23:59:59 and 01/01/2009 00:00:00).

More information can be found in Note 759143.1
"NTP leap second event causing Oracle Clusterware node reboot"
Linked to this Success Factor.

RFC "NTP Slewing for RAC" has been created successfully. CCB ID 462 
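Whether a running ntpd was started with the -x slewing option (as in the OPTIONS line above) can be sketched as a simple pattern match over the ps output; the helper name is illustrative:

```shell
#!/bin/sh
# ntp_slewing_ok "CMDLINE": report PASS when an ntpd command line (as shown
# by ps) contains the standalone -x option. Hypothetical helper.
ntp_slewing_ok() {
  case " $1 " in
    *" -x "*) echo "PASS" ;;
    *)        echo "WARNING" ;;
  esac
}

ntp_slewing_ok "ntpd -u ntp:ntp -p /var/run/ntpd.pid -g -x"   # prints PASS
```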
 
Links
Needs attention on -
Passed on d4jtfmcvurd02, d4jtfmcvurd01

Status on d4jtfmcvurd02:
PASS => NTP is running with correct setting

 DATA FROM D4JTFMCVURD02 - NTP WITH CORRECT SETTING 



ntp      10143     1  0 Sep11 ?        00:00:21 ntpd -u ntp:ntp -p /var/run/ntpd.pid -g -x

Status on d4jtfmcvurd01:
PASS => NTP is running with correct setting

 DATA FROM D4JTFMCVURD01 - NTP WITH CORRECT SETTING 



ntp       5202     1  0 Sep11 ?        00:00:21 ntpd -u ntp:ntp -p /var/run/ntpd.pid -g -x
Top

Jumbo frames configuration for interconnect

Success Factor USE JUMBO FRAMES IF SUPPORTED AND POSSIBLE IN THE SYSTEM
Recommendation
 A performance improvement can be seen with an MTU frame size of approximately 9000.  Check with your SA and network admin first and, if possible, configure jumbo frames for the interconnect.  Depending upon your network gear, the supported frame sizes may vary between NICs and switches.  The highest setting supported by BOTH devices should be used.  Please see the below referenced notes for more detail specific to your platform.

To validate whether jumbo frames are configured correctly end to end (i.e., NICs and switches), run the following commands as root.  Invoking ping using a specific interface requires root.

Set CRS_HOME to your GI or clusterware home, for example: export CRS_HOME=/u01/app/12.1.0/grid

/bin/ping -s 8192 -c 2 -M do -I `$CRS_HOME/bin/oifcfg getif |grep cluster_interconnect|tail -1|awk '{print $1}'` hostname 

Substitute your frame size as required for 8192 in the above command.  The actual frame size varies from one networking vendor to another.

If you get errors similar to the following, then jumbo frames are not configured properly for your frame size.

From 192.168.122.186 icmp_seq=1 Frag needed and DF set (mtu = 1500)
From 192.168.122.186 icmp_seq=1 Frag needed and DF set (mtu = 1500)

--- hostname@company.com ping statistics ---
0 packets transmitted, 0 received, +2 errors


If jumbo frames are configured properly for your frame size, you should obtain output similar to the following:

8192 bytes from hostname (10.208.111.43): icmp_seq=1 ttl=64 time=0.683 ms
8192 bytes from hostname(10.208.111.43): icmp_seq=2 ttl=64 time=0.243 ms

--- hostname ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1000ms
rtt min/avg/max/mdev = 0.243/0.463/0.683/0.220 ms
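A minimal sketch of the MTU comparison underlying this check, parsing an ifconfig-style line; the 9000 threshold and the helper name are illustrative assumptions:

```shell
#!/bin/sh
# check_mtu "IFCONFIG_LINE": extract the MTU and classify it against a
# jumbo-frame threshold of 9000. Helper name and threshold are illustrative.
check_mtu() {
  mtu=$(echo "$1" | sed -n 's/.*MTU:\([0-9][0-9]*\).*/\1/p')
  if [ -n "$mtu" ] && [ "$mtu" -ge 9000 ]; then
    echo "jumbo"
  else
    echo "standard"
  fi
}

check_mtu "UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1"   # prints standard
```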
 
Links
Needs attention on d4jtfmcvurd02, d4jtfmcvurd01
Passed on -

Status on d4jtfmcvurd02:
INFO => Jumbo frames (MTU >= 8192) are not configured for interconnect

 DATA FROM D4JTFMCVURD02 - JUMBO FRAMES CONFIGURATION FOR INTERCONNECT 



eth0      Link encap:Ethernet  HWaddr 00:50:56:9B:45:5E  
          inet addr:192.168.61.95  Bcast:192.168.61.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:14744730 errors:0 dropped:0 overruns:0 frame:0
          TX packets:15077753 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:7744509687 (7.2 GiB)  TX bytes:11210000127 (10.4 GiB)


Status on d4jtfmcvurd01:
INFO => Jumbo frames (MTU >= 8192) are not configured for interconnect

 DATA FROM D4JTFMCVURD01 - JUMBO FRAMES CONFIGURATION FOR INTERCONNECT 



eth0      Link encap:Ethernet  HWaddr 00:50:56:9B:4A:A9  
          inet addr:192.168.61.94  Bcast:192.168.61.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:17480902 errors:0 dropped:0 overruns:0 frame:0
          TX packets:12328430 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:11399966752 (10.6 GiB)  TX bytes:7586335339 (7.0 GiB)

Top

OSWatcher status

Success Factor INSTALL AND RUN OSWATCHER PROACTIVELY FOR OS RESOURCE UTILIZATION DIAGNOSIBILITY
Recommendation
 Operating System Watcher  (OSW) is a collection of UNIX shell scripts intended to collect and archive operating system and network metrics to aid diagnosing performance issues. OSW is designed to run continuously and to write the metrics to ASCII files which are saved to an archive directory. The amount of archived data saved and frequency of collection are based on user parameters set when starting OSW.
 
Links
Needs attention on d4jtfmcvurd02, d4jtfmcvurd01
Passed on -

Status on d4jtfmcvurd02:
WARNING => OSWatcher is not running as is recommended.

 DATA FROM D4JTFMCVURD02 - OSWATCHER STATUS 



"ps -ef | grep -i osw|grep -v grep" returned no rows which means OSWatcher is not running

Status on d4jtfmcvurd01:
WARNING => OSWatcher is not running as is recommended.

 DATA FROM D4JTFMCVURD01 - OSWATCHER STATUS 



"ps -ef | grep -i osw|grep -v grep" returned no rows which means OSWatcher is not running
Top

CSS reboot time

Success Factor UNDERSTAND CSS TIMEOUT COMPUTATION IN ORACLE CLUSTERWARE
Recommendation
 Reboottime (default 3 seconds) is the amount of time allowed for a node to complete a reboot after the CSS daemon has been evicted.
 
Links
Needs attention on -
Passed on d4jtfmcvurd02

Status on d4jtfmcvurd02:
PASS => CSS reboottime is set to the default value of 3

 DATA FROM D4JTFMCVURD02 - CSS REBOOT TIME 



CRS-4678: Successful get reboottime 3 for Cluster Synchronization Services.
Top

CSS disktimeout

Success Factor UNDERSTAND CSS TIMEOUT COMPUTATION IN ORACLE CLUSTERWARE
Recommendation
 The maximum amount of time allowed for a voting file I/O to complete; if this time is exceeded the voting disk will be marked as offline.  Note that this is also the amount of time that will be required for initial cluster formation, i.e. when no nodes have previously been up and in a cluster.
 
Links
Needs attention on -
Passed on d4jtfmcvurd02

Status on d4jtfmcvurd02:
PASS => CSS disktimeout is set to the default value of 200

 DATA FROM D4JTFMCVURD02 - CSS DISKTIMEOUT 



CRS-4678: Successful get disktimeout 200 for Cluster Synchronization Services.
Top

ohasd Log File Ownership

Success Factor VERIFY OWNERSHIP OF IMPORTANT CLUSTERWARE LOG FILES NOT CHANGED INCORRECTLY
Recommendation
 Due to Bug 9837321, or if for any other reason the ownership of certain clusterware-related log files is changed incorrectly, important diagnostics may not be available when needed by Support.  These logs are rotated periodically to keep them from growing unmanageably large, and if the ownership of the files is incorrect when it is time to rotate the logs, that operation could fail.  While that doesn't affect the operation of the clusterware itself, it does affect the logging and therefore problem diagnostics.  So it would be wise to verify that the ownership of the following files is root:root:

$ls -l $GRID_HOME/log/`hostname`/crsd/*
$ls -l $GRID_HOME/log/`hostname`/ohasd/*
$ls -l $GRID_HOME/log/`hostname`/agent/crsd/orarootagent_root/*
$ls -l $GRID_HOME/log/`hostname`/agent/ohasd/orarootagent_root/*

If any of those files' ownership is NOT root:root then you should change the ownership of the files individually or as follows (as root):

# chown root:root $GRID_HOME/log/`hostname`/crsd/*
# chown root:root $GRID_HOME/log/`hostname`/ohasd/*
# chown root:root $GRID_HOME/log/`hostname`/agent/crsd/orarootagent_root/*
# chown root:root $GRID_HOME/log/`hostname`/agent/ohasd/orarootagent_root/*
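A sketch of automating the ownership verification above with stat(1); the helper name is hypothetical:

```shell
#!/bin/sh
# find_nonroot_logs DIR: print every regular file in DIR whose owner:group
# is not root:root. Hypothetical helper using GNU stat(1).
find_nonroot_logs() {
  for f in "$1"/*; do
    [ -f "$f" ] || continue
    [ "$(stat -c '%U:%G' "$f")" = "root:root" ] || echo "$f"
  done
}

# Example (documented log location; not verified here):
# find_nonroot_logs "$GRID_HOME/log/$(hostname)/ohasd"
```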
 
Links
Needs attention on -
Passed on d4jtfmcvurd02, d4jtfmcvurd01

Status on d4jtfmcvurd02:
PASS => ohasd Log Ownership is Correct (root root)

 DATA FROM D4JTFMCVURD02 - OHASD LOG FILE OWNERSHIP 



total 30384
-rw-r--r-- 1 root root 10551920 Sep 15 10:01 ohasd.l01
-rw-r--r-- 1 root root 10552544 Sep 13 08:13 ohasd.l02
-rw-r--r-- 1 root root  9984685 Sep 17 09:06 ohasd.log
-rw-r--r-- 1 root root      192 Sep 11 10:21 ohasdOUT.log

Status on d4jtfmcvurd01:
PASS => ohasd Log Ownership is Correct (root root)

 DATA FROM D4JTFMCVURD01 - OHASD LOG FILE OWNERSHIP 



total 30008
-rw-r--r-- 1 root root 10552422 Sep 15 11:29 ohasd.l01
-rw-r--r-- 1 root root 10553198 Sep 13 09:01 ohasd.l02
-rw-r--r-- 1 root root  9598054 Sep 17 09:20 ohasd.log
-rw-r--r-- 1 root root      192 Sep 11 10:13 ohasdOUT.log
Top

ohasd/orarootagent_root Log File Ownership

Success Factor VERIFY OWNERSHIP OF IMPORTANT CLUSTERWARE LOG FILES NOT CHANGED INCORRECTLY
Recommendation
 Due to Bug 9837321, or if for any other reason the ownership of certain clusterware-related log files is changed incorrectly, important diagnostics may not be available when needed by Support.  These logs are rotated periodically to keep them from growing unmanageably large, and if the ownership of the files is incorrect when it is time to rotate the logs, that operation could fail.  While that doesn't affect the operation of the clusterware itself, it does affect the logging and therefore problem diagnostics.  So it would be wise to verify that the ownership of the following files is root:root:

$ls -l $GRID_HOME/log/`hostname`/crsd/*
$ls -l $GRID_HOME/log/`hostname`/ohasd/*
$ls -l $GRID_HOME/log/`hostname`/agent/crsd/orarootagent_root/*
$ls -l $GRID_HOME/log/`hostname`/agent/ohasd/orarootagent_root/*

If any of those files' ownership is NOT root:root then you should change the ownership of the files individually or as follows (as root):

# chown root:root $GRID_HOME/log/`hostname`/crsd/*
# chown root:root $GRID_HOME/log/`hostname`/ohasd/*
# chown root:root $GRID_HOME/log/`hostname`/agent/crsd/orarootagent_root/*
# chown root:root $GRID_HOME/log/`hostname`/agent/ohasd/orarootagent_root/*
 
Links
  • Oracle Bug # 9837321 - OWNERSHIP OF CRSD TRACES GOT CHANGE FROM ROOT TO ORACLE BY PATCHING SCRIPT
Needs attention on -
Passed on d4jtfmcvurd02, d4jtfmcvurd01

Status on d4jtfmcvurd02:
PASS => ohasd/orarootagent_root Log Ownership is Correct (root root)

 DATA FROM D4JTFMCVURD02 - OHASD/ORAROOTAGENT_ROOT LOG FILE OWNERSHIP 



total 24160
-rw-r--r-- 1 root root 10568278 Sep 16 12:22 orarootagent_root.l01
-rw-r--r-- 1 root root 10566476 Sep 13 23:12 orarootagent_root.l02
-rw-r--r-- 1 root root  3581369 Sep 17 09:06 orarootagent_root.log
-rw-r--r-- 1 root root        6 Sep 11 10:24 orarootagent_root.pid
-rw-r--r-- 1 root root        0 Sep 11 10:24 orarootagent_rootOUT.log

Status on d4jtfmcvurd01:
PASS => ohasd/orarootagent_root Log Ownership is Correct (root root)

 DATA FROM D4JTFMCVURD01 - OHASD/ORAROOTAGENT_ROOT LOG FILE OWNERSHIP 



total 24888
-rw-r--r-- 1 root root 10566768 Sep 16 08:56 orarootagent_root.l01
-rw-r--r-- 1 root root 10567453 Sep 13 21:26 orarootagent_root.l02
-rw-r--r-- 1 root root  4333196 Sep 17 09:19 orarootagent_root.log
-rw-r--r-- 1 root root        5 Sep 11 10:15 orarootagent_root.pid
-rw-r--r-- 1 root root        0 Sep 11 10:15 orarootagent_rootOUT.log
Top

crsd/orarootagent_root Log File Ownership

Success Factor VERIFY OWNERSHIP OF IMPORTANT CLUSTERWARE LOG FILES NOT CHANGED INCORRECTLY
Recommendation
 Due to Bug 9837321, or if for any other reason the ownership of certain clusterware-related log files is changed incorrectly, important diagnostics may not be available when needed by Support.  These logs are rotated periodically to keep them from growing unmanageably large, and if the ownership of the files is incorrect when it is time to rotate the logs, that operation could fail.  While that doesn't affect the operation of the clusterware itself, it does affect the logging and therefore problem diagnostics.  So it would be wise to verify that the ownership of the following files is root:root:

$ls -l $GRID_HOME/log/`hostname`/crsd/*
$ls -l $GRID_HOME/log/`hostname`/ohasd/*
$ls -l $GRID_HOME/log/`hostname`/agent/crsd/orarootagent_root/*
$ls -l $GRID_HOME/log/`hostname`/agent/ohasd/orarootagent_root/*

If any of those files' ownership is NOT root:root then you should change the ownership of the files individually or as follows (as root):

# chown root:root $GRID_HOME/log/`hostname`/crsd/*
# chown root:root $GRID_HOME/log/`hostname`/ohasd/*
# chown root:root $GRID_HOME/log/`hostname`/agent/crsd/orarootagent_root/*
# chown root:root $GRID_HOME/log/`hostname`/agent/ohasd/orarootagent_root/*
 
Links
Needs attention on -
Passed on d4jtfmcvurd02, d4jtfmcvurd01

Status on d4jtfmcvurd02:
PASS => crsd/orarootagent_root Log Ownership is Correct (root root)

 DATA FROM D4JTFMCVURD02 - CRSD/ORAROOTAGENT_ROOT LOG FILE OWNERSHIP 



total 42764
-rw-r--r-- 1 root root 10558992 Sep 17 04:09 orarootagent_root.l01
-rw-r--r-- 1 root root 10557777 Sep 15 17:46 orarootagent_root.l02
-rw-r--r-- 1 root root 10558230 Sep 14 07:21 orarootagent_root.l03
-rw-r--r-- 1 root root 10557996 Sep 12 20:49 orarootagent_root.l04
-rw-r--r-- 1 root root  1524900 Sep 17 09:06 orarootagent_root.log
-rw-r--r-- 1 root root        6 Sep 11 10:25 orarootagent_root.pid

Status on d4jtfmcvurd01:
PASS => crsd/orarootagent_root Log Ownership is Correct (root root)

 DATA FROM D4JTFMCVURD01 - CRSD/ORAROOTAGENT_ROOT LOG FILE OWNERSHIP 



total 44440
-rw-r--r-- 1 root root 10555456 Sep 16 23:04 orarootagent_root.l01
-rw-r--r-- 1 root root 10554234 Sep 15 14:12 orarootagent_root.l02
-rw-r--r-- 1 root root 10555442 Sep 14 05:13 orarootagent_root.l03
-rw-r--r-- 1 root root 10555616 Sep 12 19:43 orarootagent_root.l04
-rw-r--r-- 1 root root  3244466 Sep 17 09:19 orarootagent_root.log
-rw-r--r-- 1 root root        5 Sep 11 10:19 orarootagent_root.pid
Top

crsd Log File Ownership

Success Factor VERIFY OWNERSHIP OF IMPORTANT CLUSTERWARE LOG FILES NOT CHANGED INCORRECTLY
Recommendation
 CRSD trace files should be owned by "root:root", but due to Bug 9837321, application of a patch may have resulted in the trace file ownership being changed for patching and not changed back.
 
Links
  • Oracle Bug # 9837321 - Ownership of crsd traces gets changed from root by patching script
Needs attention on -
Passed on d4jtfmcvurd02, d4jtfmcvurd01

Status on d4jtfmcvurd02:
PASS => crsd Log Ownership is Correct (root root)

 DATA FROM D4JTFMCVURD02 - CRSD LOG FILE OWNERSHIP 



total 3600
-rw-r--r-- 1 root root 3676524 Sep 17 09:06 crsd.log
-rw-r--r-- 1 root root     125 Sep 11 10:24 crsdOUT.log

Status on d4jtfmcvurd01:
PASS => crsd Log Ownership is Correct (root root)

 DATA FROM D4JTFMCVURD01 - CRSD LOG FILE OWNERSHIP 



total 6984
-rw-r--r-- 1 root root 7139730 Sep 17 09:19 crsd.log
-rw-r--r-- 1 root root     250 Sep 11 10:18 crsdOUT.log
Top

VIP NIC bonding config.

Success Factor CONFIGURE NIC BONDING FOR 10G VIP (LINUX)
Recommendation
 To avoid a single point of failure for VIPs, Oracle highly recommends configuring a redundant network for VIPs using NIC bonding.  Follow the below note for more information on how to configure bonding on Linux.
 
Links
Needs attention on d4jtfmcvurd02, d4jtfmcvurd01
Passed on -

Status on d4jtfmcvurd02:
WARNING => NIC bonding is NOT configured for public network (VIP)

 DATA FROM D4JTFMCVURD02 - VIP NIC BONDING CONFIG. 



eth1      Link encap:Ethernet  HWaddr 00:50:56:9B:3F:97  
          inet addr:10.0.61.95  Bcast:10.0.61.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:4835837 errors:0 dropped:0 overruns:0 frame:0
          TX packets:2089659 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:9433044758 (8.7 GiB)  TX bytes:374350646 (357.0 MiB)


Status on d4jtfmcvurd01:
WARNING => NIC bonding is NOT configured for public network (VIP)

 DATA FROM D4JTFMCVURD01 - VIP NIC BONDING CONFIG. 



eth1      Link encap:Ethernet  HWaddr 00:50:56:9B:7F:0A  
          inet addr:10.0.61.94  Bcast:10.0.61.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:4935865 errors:0 dropped:0 overruns:0 frame:0
          TX packets:2545312 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:729804518 (695.9 MiB)  TX bytes:10050171719 (9.3 GiB)

Top

Interconnect NIC bonding config.

Success Factor CONFIGURE NIC BONDING FOR 10G VIP (LINUX)
Recommendation
 To avoid a single point of failure for the interconnect, Oracle highly recommends configuring a redundant network for the interconnect using NIC bonding.  Follow the below note for more information on how to configure bonding on Linux.

NOTE: If the customer is on 11.2.0.2 or above and HAIP is in use with two or more interfaces, then this finding can be ignored.
 
Links
Needs attention on d4jtfmcvurd02, d4jtfmcvurd01
Passed on -

Status on d4jtfmcvurd02:
WARNING => NIC bonding is not configured for interconnect

 DATA FROM D4JTFMCVURD02 - INTERCONNECT NIC BONDING CONFIG. 



eth0  192.168.61.0  global  cluster_interconnect

Status on d4jtfmcvurd01:
WARNING => NIC bonding is not configured for interconnect

 DATA FROM D4JTFMCVURD01 - INTERCONNECT NIC BONDING CONFIG. 



eth0  192.168.61.0  global  cluster_interconnect
Top

Verify operating system hugepages count satisfies total SGA requirements

Recommendation
 
Benefit / Impact:

Properly configuring operating system hugepages on Linux and setting the database initialization parameter "use_large_pages" to "only" results in more efficient use of memory and reduced paging.
The impact of validating that the total current hugepages are greater than or equal to estimated requirements for all currently active SGAs is minimal. The impact of corrective actions will vary depending on the specific configuration, and may require a reboot of the database server.



Risk:

The risk of not correctly configuring operating system hugepages in advance of setting the database initialization parameter "use_large_pages" to "only" is that if not enough huge pages are configured, some databases will not start after you have set the parameter.


Action / Repair:

Pre-requisite: All database instances that are supposed to run concurrently on a database server must be up and running for this check to be accurate.

NOTE: Please refer to below referenced My Oracle Support notes for additional details on configuring hugepages.

NOTE: If you have not reviewed the below referenced My Oracle Support notes and followed their guidance BEFORE using the database parameter "use_large_pages=only", this check will pass the environment, but you will still not be able to start instances once the configured pool of operating system hugepages has been consumed by instance startups. If that should happen, you will need to change the "use_large_pages" initialization parameter to one of the other values, restart the instance, and follow the instructions in the below referenced My Oracle Support notes. The brute-force alternative is to increase the hugepage count until the newest instance will start, and then adjust the hugepage count once you can see the estimated requirements for all currently active SGAs.
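The sizing arithmetic behind the check can be sketched as ceil(total SGA bytes / hugepage size); the helper name is illustrative, and on a live system the page size comes from Hugepagesize in /proc/meminfo:

```shell
#!/bin/sh
# estimate_hugepages SGA_BYTES HUGEPAGE_KB: ceil(SGA bytes / hugepage size).
# Hypothetical helper; real sizing must sum all concurrently running SGAs.
estimate_hugepages() {
  sga_bytes=$1
  hugepage_kb=$2
  echo $(( (sga_bytes + hugepage_kb * 1024 - 1) / (hugepage_kb * 1024) ))
}

estimate_hugepages $((4 * 1024 * 1024 * 1024)) 2048   # 4 GB SGA, 2 MB pages: prints 2048
```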
 
Links
Needs attention on d4jtfmcvurd02, d4jtfmcvurd01
Passed on -

Status on d4jtfmcvurd02:
FAIL => Operating system hugepages count does not satisfy total SGA requirements

 DATA FROM D4JTFMCVURD02 - VERIFY OPERATING SYSTEM HUGEPAGES COUNT SATISFIES TOTAL SGA REQUIREMENTS 




Total current hugepages (0) are greater than or equal to
estimated requirements for all currently active SGAs (0).


Status on d4jtfmcvurd01:
FAIL => Operating system hugepages count does not satisfy total SGA requirements

 DATA FROM D4JTFMCVURD01 - VERIFY OPERATING SYSTEM HUGEPAGES COUNT SATISFIES TOTAL SGA REQUIREMENTS 




Total current hugepages (0) are greater than or equal to
estimated requirements for all currently active SGAs (0).

Top

Check for parameter memory_target

Recommendation
 It is recommended to use hugepages for efficient use of memory and reduced paging. Hugepages cannot be configured if the database is using automatic memory management. To benefit from hugepages, it is recommended to disable automatic memory management by unsetting the following init parameters:
MEMORY_TARGET
MEMORY_MAX_TARGET
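A sketch of how such a parameter value can be flagged (the helper name is illustrative; a nonzero memory_target means automatic memory management is in use and hugepages cannot be used):

```shell
#!/bin/sh
# check_memory_target VALUE: a nonzero memory_target means AMM is in use,
# which is incompatible with hugepages. Hypothetical helper.
check_memory_target() {
  if [ "${1:-0}" -gt 0 ] 2>/dev/null; then
    echo "WARNING => memory_target is set; hugepages cannot be used"
  else
    echo "PASS => memory_target is unset"
  fi
}

check_memory_target 3305111552   # the value reported below
```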
 
Needs attention on hbdbuat2, hbdbuat1
Passed on -

Status on hbdbuat2:
WARNING => Database Parameter memory_target is not set to the recommended value

hbdbuat2.memory_target = 3305111552                                             

Status on hbdbuat1:
WARNING => Database Parameter memory_target is not set to the recommended value

hbdbuat1.memory_target = 3305111552                                             
Top

CRS and ASM version comparison

Recommendation
 You should always run an equal or higher version of CRS than of ASM. Running a higher ASM version than CRS is an unsupported configuration and may run into issues.
 
Needs attention on -
Passed on d4jtfmcvurd02, d4jtfmcvurd01

Status on d4jtfmcvurd02:
PASS => CRS version is higher or equal to ASM version.

 DATA FROM D4JTFMCVURD02 - CRS AND ASM VERSION COMPARISON 



CRS_ACTIVE_VERSION = 112040 
ASM Version = 112040

Status on d4jtfmcvurd01:
PASS => CRS version is higher or equal to ASM version.

 DATA FROM D4JTFMCVURD01 - CRS AND ASM VERSION COMPARISON 



CRS_ACTIVE_VERSION = 112040 
ASM Version = 112040
Top

Number of SCAN listeners

Recommendation
 
Benefit / Impact:

Application scalability and/or availability



Risk:

Potential reduced scalability and/or availability of applications


Action / Repair:

The recommended number of SCAN listeners is 3.  See the referenced document for more details.
 
Links
Needs attention on d4jtfmcvurd02, d4jtfmcvurd01
Passed on -

Status on d4jtfmcvurd02:
INFO => Number of SCAN listeners is NOT equal to the recommended number of 3.

 DATA FROM D4JTFMCVURD02 - NUMBER OF SCAN LISTENERS 



SCAN Listener LISTENER_SCAN1 exists. Port: TCP:1526

Status on d4jtfmcvurd01:
INFO => Number of SCAN listeners is NOT equal to the recommended number of 3.

 DATA FROM D4JTFMCVURD01 - NUMBER OF SCAN LISTENERS 



SCAN Listener LISTENER_SCAN1 exists. Port: TCP:1526
Top

Voting disk status

Success Factor USE EXTERNAL OR ORACLE PROVIDED REDUNDANCY FOR OCR
Recommendation
 
Benefit / Impact:

Stability, Availability



Risk:

Cluster instability


Action / Repair:

Voting disks that are not online would indicate a problem with the clusterware
and should be investigated as soon as possible.  All voting disks are expected to be ONLINE.

Use the following command to list the status of the voting disks

$CRS_HOME/bin/crsctl query css votedisk|sed 's/^ //g'|grep ^[0-9]

The output should look similar to the following, one row per voting disk; all disks should indicate ONLINE:

1. ONLINE   192c8f030e5a4fb3bf77e43ad3b8479a (o/192.168.10.102/DBFS_DG_CD_02_sclcgcel01) [DBFS_DG]
2. ONLINE   2612d8a72d194fa4bf3ddff928351c41 (o/192.168.10.104/DBFS_DG_CD_02_sclcgcel03) [DBFS_DG]
3. ONLINE   1d3cceb9daeb4f0bbf23ee0218209f4c (o/192.168.10.103/DBFS_DG_CD_02_sclcgcel02) [DBFS_DG]
 
Needs attention on -
Passed on d4jtfmcvurd02

Status on d4jtfmcvurd02:
PASS => All voting disks are online

 DATA FROM D4JTFMCVURD02 - VOTING DISK STATUS 



##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   a96e821d2cfd4fdebf8c738edaf040df (/dev/asm-diskc) [OCR]
Located 1 voting disk(s).
Top

css misscount

Success Factor UNDERSTAND CSS TIMEOUT COMPUTATION IN ORACLE CLUSTERWARE
Recommendation
 The CSS misscount parameter represents the maximum time, in seconds, that a network heartbeat can be missed before entering into a cluster reconfiguration to evict the node
 
Links
Needs attention on -
Passed on d4jtfmcvurd02

Status on d4jtfmcvurd02:
PASS => CSS misscount is set to the default value of 30

 DATA FROM D4JTFMCVURD02 - CSS MISSCOUNT 



CRS-4678: Successful get misscount 30 for Cluster Synchronization Services.
Top

Same size of redo log files

Recommendation
 Having asymmetrically sized redo logs can lead to a database hang, and it is best practice to keep all redo log files the same size. Run the following query to find the size of each member:
column member format a50
select f.member,l.bytes/1024/1024 as "Size in MB" from v$log l,v$logfile f where l.group#=f.group#;
Resizing redo logs to make them the same size does not require database downtime.
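The uniformity test itself can be sketched as a comparison over the sizes returned by the query above; the helper name is illustrative:

```shell
#!/bin/sh
# redo_sizes_uniform SIZE...: PASS if every redo log member size matches the
# first one, FAIL otherwise. Hypothetical helper fed by the query above.
redo_sizes_uniform() {
  first=$1
  for s in "$@"; do
    [ "$s" = "$first" ] || { echo "FAIL"; return; }
  done
  echo "PASS"
}

redo_sizes_uniform .048828125 .048828125 .048828125 .048828125   # prints PASS
```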
 
Links
Needs attention on -
Passed on hbdbuat

Status on hbdbuat:
PASS => All redo log files are of same size

         1           .048828125                                                 
         2           .048828125                                                 
         3           .048828125                                                 
         4           .048828125                                                 
Top

SELinux status

Success Factor RPM THROWS ERROR WITH SELINUX ENABLED
Recommendation
 On RHEL4 U3 x86_64 (2.6.9-34.ELsmp kernel), when SELinux is enabled, rpm
installation gives the error:
'scriptlet failed, exit status 255'

The default selinux settings are used
# cat /etc/sysconfig/selinux
SELINUX=enforcing
SELINUXTYPE=targeted
e.g. on installing asm rpms:
# rpm -ivh *.rpm
Preparing...                ###########################################
[100%]
  1:oracleasm-support      ########################################### [33%]
  error: %post(oracleasm-support-2.0.2-1.x86_64) scriptlet failed, exit status 255
  2:oracleasm-2.6.9-34.ELsm########################################### [67%]
  error: %post(oracleasm-2.6.9-34.ELsmp-2.0.2-1.x86_64) scriptlet failed, exit status 255
   3:oracleasmlib           ###########################################  [100%]

However, the ASM RPMs do get installed:
# rpm -qa | grep asm
oracleasm-support-2.0.2-1
oracleasmlib-2.0.2-1
oracleasm-2.6.9-34.ELsmp-2.0.2-1

There is no error during oracleasm configure or createdisk. Also, oracleasm is able to start on reboot, and the tests done around RAC/ASM seem to be fine.

# rpm -q -a | grep -i selinux
selinux-policy-targeted-1.17.30-2.126
selinux-policy-targeted-sources-1.17.30-2.126
libselinux-1.19.1-7
libselinux-1.19.1-7

Solution
--
If the machine was installed with 'selinux --disabled', it is possible that the SELinux-related pre/post activities were not performed during installation, and as a result the extended attribute is not set for /bin/*sh.

1. Ensure that the kickstart config file does not have 'selinux --disabled'.
Also, not specifying selinux in the config file will default to 'selinux --enforcing' and the extended attribute will get set for /bin/*sh.
OR
2. If the machine has been installed with 'selinux --disabled', then perform the below step manually:
# setfattr -n security.selinux --value="system_u:object_r:shell_exec_t\000" /bin/sh
 
Links
Needs attention on -
Passed on d4jtfmcvurd02, d4jtfmcvurd01

Status on d4jtfmcvurd02:
PASS => SELinux is not being Enforced.

 DATA FROM D4JTFMCVURD02 - SELINUX STATUS 



Disabled

Status on d4jtfmcvurd01:
PASS => SELinux is not being Enforced.

 DATA FROM D4JTFMCVURD01 - SELINUX STATUS 



Disabled

Public interface existence

Recommendation
 It is important to ensure that your public interface is properly marked as public and not private. This can be checked with the oifcfg getif command. If it is inadvertently marked private, you can get errors such as "OS system dependent operation:bind failed with status" and "OS failure message: Cannot assign requested address". It can be corrected with a command like oifcfg setif -global eth0/:public
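
A minimal sketch of the check-and-fix sequence (the interface name and subnet below are taken from the data shown for these nodes and are illustrative):

```
# List how each interface is classified in the OCR
oifcfg getif
# If the public interface were misclassified, reclassify it, e.g.:
oifcfg setif -global eth1/10.0.61.0:public
```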
 
Needs attention on -
Passed on d4jtfmcvurd02, d4jtfmcvurd01

Status on d4jtfmcvurd02:
PASS => Public interface is configured and exists in OCR

 DATA FROM D4JTFMCVURD02 - PUBLIC INTERFACE EXISTENCE 



eth1  10.0.61.0  global  public
eth0  192.168.61.0  global  cluster_interconnect

Status on d4jtfmcvurd01:
PASS => Public interface is configured and exists in OCR

 DATA FROM D4JTFMCVURD01 - PUBLIC INTERFACE EXISTENCE 



eth1  10.0.61.0  global  public
eth0  192.168.61.0  global  cluster_interconnect

ip_local_port_range

Recommendation
 Starting with Oracle Clusterware 11gR1, ip_local_port_range should be between 9000 (minimum) and 65500 (maximum).
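
On Linux this corresponds to the net.ipv4.ip_local_port_range kernel parameter; a sketch of checking it and persisting the recommended range:

```
# Check the current ephemeral port range
sysctl net.ipv4.ip_local_port_range
# Persist the recommendation by adding this line to /etc/sysctl.conf:
#   net.ipv4.ip_local_port_range = 9000 65500
# then reload with:
sysctl -p
```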
 
Links
Needs attention on -
Passed on d4jtfmcvurd02, d4jtfmcvurd01

Status on d4jtfmcvurd02:
PASS => ip_local_port_range is configured according to recommendation

 DATA FROM D4JTFMCVURD02 - IP_LOCAL_PORT_RANGE 



minimum port range = 9000
maximum port range = 65500

Status on d4jtfmcvurd01:
PASS => ip_local_port_range is configured according to recommendation

 DATA FROM D4JTFMCVURD01 - IP_LOCAL_PORT_RANGE 



minimum port range = 9000
maximum port range = 65500

kernel.shmmax

Recommendation
 
Benefit / Impact:

Optimal system memory management.



Risk:

In an Oracle RDBMS application, setting kernel.shmmax too high is not needed and could enable configurations that may leave inadequate system memory for other necessary functions.


Action / Repair:

Oracle Support officially recommends a "minimum" for SHMMAX of 1/2 of physical RAM. However, many Oracle customers choose a higher fraction, at their discretion.  Setting the kernel.shmmax as recommended only causes a few more shared memory segments to be used for whatever total SGA that you subsequently configure in Oracle.
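
The arithmetic behind this check is easy to reproduce; a minimal sketch using the total-system-memory value reported for these nodes (8256352256 bytes):

```shell
# Recommended minimum kernel.shmmax = 1/2 of physical RAM, in bytes
total_mem_bytes=8256352256          # value reported for these nodes
shmmax_min=$((total_mem_bytes / 2))
echo "recommended minimum kernel.shmmax = $shmmax_min"
```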
 
Links
Needs attention on -
Passed on d4jtfmcvurd02, d4jtfmcvurd01

Status on d4jtfmcvurd02:
PASS => kernel.shmmax parameter is configured according to recommendation

 DATA FROM D4JTFMCVURD02 - KERNEL.SHMMAX 




NOTE: All results reported in bytes

kernel.shmmax actual = 4128176128
total system memory = 8256352256
1/2 total system memory = 4128176128

Status on d4jtfmcvurd01:
PASS => kernel.shmmax parameter is configured according to recommendation

 DATA FROM D4JTFMCVURD01 - KERNEL.SHMMAX 




NOTE: All results reported in bytes

kernel.shmmax actual = 4128176128
total system memory = 8256352256
1/2 total system memory = 4128176128

Check for parameter fs.file-max

Recommendation
 - In 11g we introduced automatic memory management which requires more file descriptors than previous versions.

- At a _MINIMUM_ we require 512*PROCESSES (init parameter) file descriptors per database instance + some for the OS and other non-oracle processes

- Since we cannot know at install time how many database instances the customer may run, how many PROCESSES they may configure for those instances, whether they will use automatic memory management, how many non-Oracle processes may be run, or how many file descriptors those will require, we recommend the file descriptor limit be set to a very high number (6553600) to minimize the potential for running out.

- Setting fs.file-max "too high" doesn't hurt anything because file descriptors are allocated dynamically as needed up to the limit of fs.file-max

- Oracle is not aware of any customers having problems from setting fs.file-max "too high" but we have had customers have problems from setting it too low.  A problem from having too few file descriptors is preventable.

- As for a formula, given 512*PROCESSES (as a minimum) fs.file-max should be a sufficiently high number to minimize the chance that ANY customer would suffer an outage from having fs.file-max set too low.  At a limit of 6553600 customers are likely to have other problems to worry about before they hit that limit. 

- If an individual customer wants to deviate from fs.file-max = 6553600 then they are free to do so based on their knowledge of their environment and implementation as long as they make sure they have enough file descriptors to cover all their database instances, other non-oracle processes and the OS.
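
The 512*PROCESSES minimum above can be computed directly; a sketch with a hypothetical PROCESSES setting of 1500 (the value is illustrative only):

```shell
# Minimum file descriptors needed per instance = 512 * PROCESSES (init parameter)
processes=1500                      # hypothetical PROCESSES value
min_fds=$((512 * processes))
echo "minimum descriptors for this instance: $min_fds"
```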
 
Links
Needs attention on -
Passed on d4jtfmcvurd02, d4jtfmcvurd01

Status on d4jtfmcvurd02:
PASS => Kernel Parameter fs.file-max configuration meets or exceeds recommendation

fs.file-max = 6815744

Status on d4jtfmcvurd01:
PASS => Kernel Parameter fs.file-max configuration meets or exceeds recommendation

fs.file-max = 6815744

DB shell limits hard stack

Recommendation
 The hard stack shell limit for the Oracle DB software install owner as defined in /etc/security/limits.conf should be >= 10240.

What's being checked here is the /etc/security/limits.conf file as documented in 11gR2 Grid Infrastructure  Installation Guide, section 2.15.3 Setting Resource Limits for the Oracle Software Installation Users.  

If the /etc/security/limits.conf file is not configured as described in the documentation, then check the hard stack configuration while logged into the software owner account (e.g. oracle):

$ ulimit -Hs
10240

As long as the hard stack limit is 10240 or above then the configuration should be ok.
 
Links
Needs attention on -
Passed on d4jtfmcvurd02, d4jtfmcvurd01

Status on d4jtfmcvurd02:
PASS => Shell limit hard stack for DB is configured according to recommendation

 DATA FROM D4JTFMCVURD02 - DB SHELL LIMITS HARD STACK 



oracle hard stack unlimited

Status on d4jtfmcvurd01:
PASS => Shell limit hard stack for DB is configured according to recommendation

 DATA FROM D4JTFMCVURD01 - DB SHELL LIMITS HARD STACK 



oracle hard stack unlimited

/tmp directory free space

Recommendation
 There should be a minimum of 1GB of free space in the /tmp directory
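
A quick way to verify this on a node is with df; a minimal sketch (1GB = 1048576 1K blocks):

```shell
# df -P prints POSIX-format output; column 4 is available space in 1K blocks
avail_kb=$(df -P /tmp | awk 'NR==2 {print $4}')
echo "available in /tmp: ${avail_kb} KB"
[ "$avail_kb" -ge 1048576 ] && echo "PASS: >= 1GB free" || echo "WARNING: < 1GB free"
```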
 
Links
Needs attention on -
Passed on d4jtfmcvurd02, d4jtfmcvurd01

Status on d4jtfmcvurd02:
PASS => Free space in /tmp directory meets or exceeds recommendation of minimum 1GB

 DATA FROM D4JTFMCVURD02 - /TMP DIRECTORY FREE SPACE 



Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/rootvg-rootlv
                       50G  9.0G   38G  20% /

Status on d4jtfmcvurd01:
PASS => Free space in /tmp directory meets or exceeds recommendation of minimum 1GB

 DATA FROM D4JTFMCVURD01 - /TMP DIRECTORY FREE SPACE 



Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/rootvg-rootlv
                       50G  9.0G   38G  20% /

GI shell limits hard nproc

Recommendation
 The hard nproc shell limit for the Oracle GI software install owner should be >= 16384.
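
These limits are typically set in /etc/security/limits.conf; a sketch of the relevant entry, assuming the GI software owner is named grid (the user name is an assumption, not taken from this report):

```
# /etc/security/limits.conf (user name "grid" is hypothetical)
grid hard nproc 16384
```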
 
Links
Needs attention on -
Passed on d4jtfmcvurd02

Status on d4jtfmcvurd02:
PASS => Shell limit hard nproc for GI is configured according to recommendation

 DATA FROM D4JTFMCVURD02 FOR GRID INFRASTRUCTURE USER SHELL LIMITS CONFIGURATION 



Soft limits(ulimit -Sa) 

core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 62834
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1024
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 10240
cpu time               (seconds, -t) unlimited
max user processes              (-u) 62834 Click for more data 

DB shell limits soft nofile

Recommendation
 The soft nofile shell limit for the Oracle DB software install owner as defined in /etc/security/limits.conf should be >= 1024.
 
Links
Needs attention on -
Passed on d4jtfmcvurd02, d4jtfmcvurd01

Status on d4jtfmcvurd02:
PASS => Shell limit soft nofile for DB is configured according to recommendation

 DATA FROM D4JTFMCVURD02 - DB SHELL LIMITS SOFT NOFILE 



oracle soft nofile 65536

Status on d4jtfmcvurd01:
PASS => Shell limit soft nofile for DB is configured according to recommendation

 DATA FROM D4JTFMCVURD01 - DB SHELL LIMITS SOFT NOFILE 



oracle soft nofile 65536

GI shell limits hard nofile

Recommendation
 The hard nofile shell limit for the Oracle GI software install owner should be >= 65536
 
Links
Needs attention on -
Passed on d4jtfmcvurd02

Status on d4jtfmcvurd02:
PASS => Shell limit hard nofile for GI is configured according to recommendation

 DATA FROM D4JTFMCVURD02 FOR GRID INFRASTRUCTURE USER SHELL LIMITS CONFIGURATION 



Soft limits(ulimit -Sa) 

core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 62834
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1024
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 10240
cpu time               (seconds, -t) unlimited
max user processes              (-u) 62834 Click for more data 

DB shell limits hard nproc

Recommendation
 The hard nproc shell limit for the Oracle DB software install owner as defined in /etc/security/limits.conf should be >= 16384.
 
Links
Needs attention on -
Passed on d4jtfmcvurd02, d4jtfmcvurd01

Status on d4jtfmcvurd02:
PASS => Shell limit hard nproc for DB is configured according to recommendation

 DATA FROM D4JTFMCVURD02 - DB SHELL LIMITS HARD NPROC 



oracle hard nproc 16384

Status on d4jtfmcvurd01:
PASS => Shell limit hard nproc for DB is configured according to recommendation

 DATA FROM D4JTFMCVURD01 - DB SHELL LIMITS HARD NPROC 



oracle hard nproc 16384

GI shell limits soft nofile

Recommendation
 The soft nofile shell limit for the Oracle GI software install owner should be >= 1024.
 
Links
Needs attention on -
Passed on d4jtfmcvurd02

Status on d4jtfmcvurd02:
PASS => Shell limit soft nofile for GI is configured according to recommendation

 DATA FROM D4JTFMCVURD02 FOR GRID INFRASTRUCTURE USER SHELL LIMITS CONFIGURATION 



Soft limits(ulimit -Sa) 

core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 62834
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1024
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 10240
cpu time               (seconds, -t) unlimited
max user processes              (-u) 62834 Click for more data 

GI shell limits soft nproc

Recommendation
 The soft nproc shell limit for the Oracle GI software install owner should be >= 2047.
 
Links
Needs attention on -
Passed on d4jtfmcvurd02

Status on d4jtfmcvurd02:
PASS => Shell limit soft nproc for GI is configured according to recommendation

 DATA FROM D4JTFMCVURD02 FOR GRID INFRASTRUCTURE USER SHELL LIMITS CONFIGURATION 



Soft limits(ulimit -Sa) 

core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 62834
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1024
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 10240
cpu time               (seconds, -t) unlimited
max user processes              (-u) 62834 Click for more data 

DB shell limits hard nofile

Recommendation
 The hard nofile shell limit for the Oracle DB software install owner as defined in /etc/security/limits.conf should be >= 65536.
 
Links
Needs attention on -
Passed on d4jtfmcvurd02, d4jtfmcvurd01

Status on d4jtfmcvurd02:
PASS => Shell limit hard nofile for DB is configured according to recommendation

 DATA FROM D4JTFMCVURD02 - DB SHELL LIMITS HARD NOFILE 



oracle hard nofile 65536

Status on d4jtfmcvurd01:
PASS => Shell limit hard nofile for DB is configured according to recommendation

 DATA FROM D4JTFMCVURD01 - DB SHELL LIMITS HARD NOFILE 



oracle hard nofile 65536

DB shell limits soft nproc

Recommendation
 This recommendation represents a change or deviation from the documented values and should be considered a temporary measure until the code addresses the problem in a more permanent way.

Problem Statement: 
------------------ 
The soft limit of nproc is not adjusted at runtime by the database. As a 
result, if that limit is reached, the database may become unstable since it 
will fail to fork additional processes. 

Workaround: 
----------- 
Ensure that the soft limit for nproc in /etc/security/limits.conf is set high 
enough to accommodate the maximum number of concurrent threads on the system 
for the given workload. If in doubt, set it to the hard limit. For example: 

oracle  soft    nproc   16384 
oracle  hard    nproc   16384

The soft nproc shell limit for the Oracle DB software install owner as defined in /etc/security/limits.conf should be >= 2047.  So the above advice of setting soft nproc = hard nproc = 16384 should be considered a temporary proactive measure to avoid the possibility of the database not being able to fork enough processes.
 
Links
Needs attention on -
Passed on d4jtfmcvurd02, d4jtfmcvurd01

Status on d4jtfmcvurd02:
PASS => Shell limit soft nproc for DB is configured according to recommendation

 DATA FROM D4JTFMCVURD02 - DB SHELL LIMITS SOFT NPROC 



oracle soft nproc 16384

Status on d4jtfmcvurd01:
PASS => Shell limit soft nproc for DB is configured according to recommendation

 DATA FROM D4JTFMCVURD01 - DB SHELL LIMITS SOFT NPROC 



oracle soft nproc 16384

Linux Swap Size

Success Factor CORRECTLY SIZE THE SWAP SPACE
Recommendation
 The following table describes the relationship between installed RAM and the configured swap space requirement:

Note:
On Linux, the Hugepages feature allocates non-swappable memory for large page tables using memory-mapped files. If you enable Hugepages, then you should deduct the memory allocated to Hugepages from the available RAM before calculating swap space.

RAM (minus memory allocated to Hugepages)    Recommended swap space
Between 1 GB and 2 GB                        1.5 times the size of RAM
Between 2 GB and 16 GB                       Equal to the size of RAM
More than 16 GB                              16 GB

In other words, the maximum swap size that Oracle recommends for Linux is 16GB.
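
The sizing rules above can be sketched as a small shell function (sizes in MB; this is an illustration of the table, not an Oracle-supplied script):

```shell
# Recommended swap (MB) from effective RAM (physical RAM minus Hugepages)
recommended_swap_mb() {
  ram_mb=$1
  if [ "$ram_mb" -le 2048 ]; then
    echo $(( ram_mb * 3 / 2 ))      # 1 GB - 2 GB RAM: 1.5 x RAM
  elif [ "$ram_mb" -le 16384 ]; then
    echo "$ram_mb"                  # 2 GB - 16 GB RAM: swap equal to RAM
  else
    echo 16384                      # more than 16 GB RAM: swap capped at 16 GB
  fi
}
recommended_swap_mb 8192            # an 8 GB host: recommends 8192 MB of swap
```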
 
Links
Needs attention on -
Passed on d4jtfmcvurd02, d4jtfmcvurd01

Status on d4jtfmcvurd02:
PASS => Linux Swap Configuration meets or exceeds Recommendation

 DATA FROM D4JTFMCVURD02 - LINUX SWAP SIZE 



Total memory on system (Physical RAM - Huge Pages Size) = 8062844
Swap memory found on system = 8454128
Recommended Swap = 8062844

Status on d4jtfmcvurd01:
PASS => Linux Swap Configuration meets or exceeds Recommendation

 DATA FROM D4JTFMCVURD01 - LINUX SWAP SIZE 



Total memory on system (Physical RAM - Huge Pages Size) = 8062844
Swap memory found on system = 8454128
Recommended Swap = 8062844

/tmp on dedicated filesystem

Recommendation
 It is a best practice to locate the /tmp directory on a dedicated filesystem; otherwise, accidentally filling up /tmp could also fill up the root (/) filesystem, which is already subject to other file growth (logs, traces, etc.), and lead to availability problems.  For example, Oracle creates socket files in /tmp.  Make sure 1GB of free space is maintained in /tmp.
 
Needs attention on d4jtfmcvurd02, d4jtfmcvurd01
Passed on -

Status on d4jtfmcvurd02:
WARNING => /tmp is NOT on a dedicated filesystem

 DATA FROM D4JTFMCVURD02 - /TMP ON DEDICATED FILESYSTEM 




Status on d4jtfmcvurd01:
WARNING => /tmp is NOT on a dedicated filesystem

 DATA FROM D4JTFMCVURD01 - /TMP ON DEDICATED FILESYSTEM 




ASMLIB check

Recommendation
 We recommend the use of ASMLIB on Linux.  ASMLIB handles device persistence for ASM disks across reboots, as well as many other features that make managing ASM disks more convenient.
 
Needs attention on d4jtfmcvurd02, d4jtfmcvurd01
Passed on -

Status on d4jtfmcvurd02:
INFO => oracleasm (asmlib) module is NOT loaded

 DATA FROM D4JTFMCVURD02 - ASMLIB CHECK 



	Look for oracleasm module 

Module                  Size  Used by
fuse                   69253  0 
oracleacfs           1990406  0 
oracleadvm            250040  0 
oracleoks             427672  2 oracleacfs,oracleadvm
hangcheck_timer         2526  0 
autofs4                26513  3 
sunrpc                260521  1 
vsock                  55454  0 
ipv6                  321422  0 
uinput                  7992  0 
ppdev                   8537  0 
parport_pc             22690  0 
parport                36209  2 ppdev,parport_pc Click for more data 

Status on d4jtfmcvurd01:
INFO => oracleasm (asmlib) module is NOT loaded

 DATA FROM D4JTFMCVURD01 - ASMLIB CHECK 



	Look for oracleasm module 

Module                  Size  Used by
oracleacfs           1990406  0 
oracleadvm            250040  1 
oracleoks             427672  2 oracleacfs,oracleadvm
fuse                   69253  0 
hangcheck_timer         2526  0 
autofs4                26513  3 
sunrpc                260521  1 
vsock                  55454  0 
ipv6                  321422  0 
uinput                  7992  0 
ppdev                   8537  0 
parport_pc             22690  0 
parport                36209  2 ppdev,parport_pc Click for more data 

Non-autoextensible data and temp files

Recommendation
 
Benefit / Impact:

The benefit of having "AUTOEXTEND" on is that applications may avoid out of space errors.
The impact of verifying that the "AUTOEXTEND" attribute is "ON" is minimal. The impact of setting "AUTOEXTEND" to "ON" varies depending upon if it is done during database creation, file addition to a tablespace, or added to an existing file.



Risk:

The risk of running out of space in either the tablespace or diskgroup varies by application and cannot be quantified here. A tablespace that runs out of space will interfere with an application, and a diskgroup running out of space could impact the entire database as well as ASM operations (e.g., rebalance operations).


Action / Repair:

To obtain a list of tablespaces that are not set to "AUTOEXTEND", enter the following sqlplus command logged into the database as sysdba:
select file_id, file_name, tablespace_name from dba_data_files where autoextensible <>'YES'
union
select file_id, file_name, tablespace_name from dba_temp_files where autoextensible <> 'YES'; 
The output should be:
no rows selected
If any rows are returned, investigate and correct the condition.
NOTE: Configuring "AUTOEXTEND" to "ON" requires comparing space utilization growth projections at the tablespace level to space available in the diskgroups to permit the expected projected growth while retaining sufficient storage space in reserve to account for ASM rebalance operations that occur either as a result of planned operations or component failure. The resulting growth targets are implemented with the "MAXSIZE" attribute that should always be used in conjunction with the "AUTOEXTEND" attribute. The "MAXSIZE" settings should allow for projected growth while minimizing the prospect of depleting a disk group. The "MAXSIZE" settings will vary by customer and a blanket recommendation cannot be given here.

NOTE: When configuring a file for "AUTOEXTEND" to "ON", the size specified for the "NEXT" attribute should cover all disks in the diskgroup to optimize balance. For example, with a 4MB AU size and 168 disks, the size of the "NEXT" attribute should be a multiple of 672M (4*168).
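
If the query does return rows, AUTOEXTEND can be enabled per file; a hypothetical example (the file number, NEXT and MAXSIZE values are illustrative and must come from your own growth projections, as the NOTEs above describe):

```
ALTER DATABASE DATAFILE 5 AUTOEXTEND ON NEXT 100M MAXSIZE 8G;
```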
 
Needs attention on -
Passed on hbdbuat

Status on hbdbuat:
PASS => All data and temporary files are autoextensible

 Query returned no rows which is expected when the SQL check passes.


Non-multiplexed redo logs

Recommendation
 The online redo logs of an Oracle database are critical to availability and recoverability and should always be multiplexed even in cases where fault tolerance is provided at the storage level.
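
Multiplexing is done by adding a second member to each group; a hypothetical example (the group number and diskgroup name are illustrative):

```
ALTER DATABASE ADD LOGFILE MEMBER '+FRA' TO GROUP 1;
```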
 
Needs attention on hbdbuat
Passed on -

Status on hbdbuat:
WARNING => One or more redo log groups are NOT multiplexed

         1          1
         2          1
         4          1
         3          1

Multiplexed controlfiles

Recommendation
 The controlfile of an Oracle database is critical to availability and recoverability and should always be multiplexed even in cases where fault tolerance is provided at the storage level.
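
On ASM, multiplexing the controlfile is a multi-step operation; an outline sketch only (the +FRA diskgroup is illustrative, and the exact steps should be validated against the Oracle documentation for your version):

```
-- 1. Add the new location(s) to the spfile:
ALTER SYSTEM SET control_files='+DATA01/hbdbuat/controlfile/current.256.857995095','+FRA' SCOPE=SPFILE;
-- 2. Shut down the instance and start it NOMOUNT
-- 3. From RMAN, restore the controlfile from the existing copy so it is
--    recreated in every listed location, then mount and open the database
```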
 
Needs attention on hbdbuat
Passed on -

Status on hbdbuat:
WARNING => Controlfile is NOT multiplexed

 +DATA01/hbdbuat/controlfile/current.256.857995095                               

Check audit_file_dest

Recommendation
 Old audit files should be cleaned out of audit_file_dest regularly; otherwise the ORACLE_BASE mount point may run out of space, and it may then be impossible to collect diagnostic information when a failure occurs.
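
A conservative cleanup sketch using find (the adump path is the one shown for this database; review the listed files before adding a delete action):

```
# List .aud files last modified more than 30 days ago
find /oracle/app/oracle/admin/hbdbuat/adump -name '*.aud' -mtime +30 -print
# Once the list has been reviewed, re-run with -delete to remove them
```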
 
Needs attention on -
Passed on d4jtfmcvurd02, d4jtfmcvurd01

Status on d4jtfmcvurd02:
PASS => audit_file_dest does not have any audit files older than 30 days

 DATA FROM D4JTFMCVURD02 - HBDBUAT DATABASE - CHECK AUDIT_FILE_DEST 



Number of audit files last modified over 30 days ago at /oracle/app/oracle/admin/hbdbuat/adump = 0

Status on d4jtfmcvurd01:
PASS => audit_file_dest does not have any audit files older than 30 days

 DATA FROM D4JTFMCVURD01 - HBDBUAT DATABASE - CHECK AUDIT_FILE_DEST 



Number of audit files last modified over 30 days ago at /oracle/app/oracle/admin/hbdbuat/adump = 0

oradism executable ownership

Success Factor VERIFY OWNERSHIP OF ORADISM EXECUTABLE IF LMS PROCESS NOT RUNNING IN REAL TIME
Recommendation
 
Benefit / Impact:

The oradism executable is invoked after database startup to change the scheduling priority of LMS and other database background processes to the realtime scheduling class in order to maximize the ability of these key processes to be scheduled on the CPU in a timely way at times of high CPU utilization.



Risk:

The oradism executable should be owned by root and the owner s-bit should be set, e.g. -rwsr-x---, where the s is the setuid bit (s-bit) for root in this case.  If the LMS process is not running at the proper scheduling priority, it can lead to instance evictions due to IPC send timeouts or ORA-29740 errors.  oradism must be owned by root with its s-bit set in order to be able to change the scheduling priority.  If oradism is not owned by root or the owner s-bit is not set, then something went wrong in the installation process, or the ownership or permission was changed afterwards.


Action / Repair:

Please check with Oracle Support to determine the best course to take for your platform to correct the problem.
 
Needs attention on -
Passed on d4jtfmcvurd02, d4jtfmcvurd01

Status on d4jtfmcvurd02:
PASS => $ORACLE_HOME/bin/oradism ownership is root

 DATA FROM D4JTFMCVURD02 - /ORACLE/APP/ORACLE/PRODUCT/11.2.0/DB_1 DATABASE_HOME - ORADISM EXECUTABLE OWNERSHIP 



-rwsr-x--- 1 root oinstall 71790 Sep 11 11:09 /oracle/app/oracle/product/11.2.0/db_1/bin/oradism

Status on d4jtfmcvurd01:
PASS => $ORACLE_HOME/bin/oradism ownership is root

 DATA FROM D4JTFMCVURD01 - /ORACLE/APP/ORACLE/PRODUCT/11.2.0/DB_1 DATABASE_HOME - ORADISM EXECUTABLE OWNERSHIP 



-rwsr-x--- 1 root oinstall 71790 Aug 24  2013 /oracle/app/oracle/product/11.2.0/db_1/bin/oradism

oradism executable permission

Success Factor VERIFY OWNERSHIP OF ORADISM EXECUTABLE IF LMS PROCESS NOT RUNNING IN REAL TIME
Recommendation
 
Benefit / Impact:

The oradism executable is invoked after database startup to change the scheduling priority of LMS and other database background processes to the realtime scheduling class in order to maximize the ability of these key processes to be scheduled on the CPU in a timely way at times of high CPU utilization.



Risk:

The oradism executable should be owned by root and the owner s-bit should be set, e.g. -rwsr-x---, where the s is the setuid bit (s-bit) for root in this case.  If the LMS process is not running at the proper scheduling priority, it can lead to instance evictions due to IPC send timeouts or ORA-29740 errors.  oradism must be owned by root with its s-bit set in order to be able to change the scheduling priority.  If oradism is not owned by root or the owner s-bit is not set, then something went wrong in the installation process, or the ownership or permission was changed afterwards.


Action / Repair:

Please check with Oracle Support to determine the best course to take for your platform to correct the problem.
 
Needs attention on -
Passed on d4jtfmcvurd02, d4jtfmcvurd01

Status on d4jtfmcvurd02:
PASS => $ORACLE_HOME/bin/oradism setuid bit is set

 DATA FROM D4JTFMCVURD02 - /ORACLE/APP/ORACLE/PRODUCT/11.2.0/DB_1 DATABASE_HOME - ORADISM EXECUTABLE PERMISSION 



-rwsr-x--- 1 root oinstall 71790 Sep 11 11:09 /oracle/app/oracle/product/11.2.0/db_1/bin/oradism

Status on d4jtfmcvurd01:
PASS => $ORACLE_HOME/bin/oradism setuid bit is set

 DATA FROM D4JTFMCVURD01 - /ORACLE/APP/ORACLE/PRODUCT/11.2.0/DB_1 DATABASE_HOME - ORADISM EXECUTABLE PERMISSION 



-rwsr-x--- 1 root oinstall 71790 Aug 24  2013 /oracle/app/oracle/product/11.2.0/db_1/bin/oradism

Avg message sent queue time on ksxp

Recommendation
 Avg message sent queue time on ksxp (ms) should be very low, average numbers are usually below 2 ms on most systems.  Higher averages usually mean the system is approaching interconnect or CPU capacity, or else there may be an interconnect problem.  The higher the average above 2ms the more severe the problem is likely to be.  

Interconnect performance should be investigated further by analysis using AWR and ASH reports and other network diagnostic tools.  
 
Needs attention on -
Passed on hbdbuat

Status on hbdbuat:
PASS => Avg message sent queue time on ksxp is <= recommended

 avg_message_sent_queue_time_on_ksxp_in_ms = 0                                   

Avg message sent queue time (ms)

Recommendation
 Avg message sent queue time (ms) as derived from AWR should be very low, average numbers are usually below 2 ms on most systems.  Higher averages usually mean the system is approaching interconnect or CPU capacity, or else there may be an interconnect problem.  The higher the average above 2ms the more severe the problem is likely to be.  

Interconnect performance should be investigated further by analysis using AWR and ASH reports and other network diagnostic tools. 
 
Needs attention on -
Passed on hbdbuat

Status on hbdbuat:
PASS => Avg message sent queue time is <= recommended

 avg_message_sent_queue_time_in_ms = 0                                           

Avg message received queue time

Recommendation
 Avg message receive queue time (ms) as derived from AWR should be very low, average numbers are usually below 2 ms on most systems.  Higher averages usually mean the system is approaching interconnect or CPU capacity, or else there may be an interconnect problem.  The higher the average above 2ms the more severe the problem is likely to be.  

Interconnect performance should be investigated further by analysis using AWR and ASH reports and other network diagnostic tools. 
 
Needs attention on -
Passed on hbdbuat

Status on hbdbuat:
PASS => Avg message received queue time is <= recommended

 avg_message_received_queue_time_in_ms = 0                                       

GC block lost

Success Factor GC LOST BLOCK DIAGNOSTIC GUIDE
Recommendation
 The RDBMS reports global cache lost blocks statistics ("gc cr block lost" and/or "gc current block lost") which could indicate a negative impact on interconnect performance and global cache processing. 

The vast majority of escalations attributed to RDBMS global cache lost blocks can be directly related to faulty or misconfigured interconnects. This guide serves as a starting point for evaluating common (and sometimes obvious) causes.

 1. Is Jumbo Frames configured? 

A jumbo frame is a packet size of around 9000 bytes (packets of about 5000 bytes are called mini jumbo frames).  All the servers, switches and routers in operation must be configured to support the same packet size.

Primary Benefit: performance
Secondary Benefit: cluster stability for IP overhead, less misses for network heartbeat checkins.

 2. What is the configured MTU size for each interconnect interface and interconnect switch ports? 

The MTU is the "Maximum Transmission Unit" or the frame size.  The default is 1500 bytes for Ethernet.

 3. Do you observe frame loss at the OS, NIC or switch layer?   netstat, ifconfig, ethtool, switch port stats would help you determine that.

Using netstat -s look for:
x fragments dropped after timeout
x packet reassembles failed

 4. Are the network cards forced to full duplex? 

 5. Are the network card speed and mode (autonegotiate, fixed full duplex, etc.) identical on all nodes and on the switch? 

 6. Are the PCI buses that the NICs (Network Interface Cards) use running at the same speed on all nodes?  

 7. Have you modified the ring buffers away from default for the interconnect NIC for all nodes? 

 8. Have you measured interconnect capacity and are you saturating available bandwidth? 

Remember that all network values are averaged over a time period.  Best to keep the average time period as small as possible so that spikes of activity are not masked out.

 9. Are the CPUs overloaded (i.e., load average > 20 on a recent Intel architecture) on the nodes that exhibit block loss?  The "uptime" command displays load average information on most platforms.

 10. Have you modified transmit and receive (tx/rx) UDP buffer queue size for the OS from recommended settings?  
          Send and receive queues should be the same size. 
          Queue max and default should be the same size. 
          Recommended queue size = 4194304 (4 megabytes). 
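On Linux these queues correspond to the net.core socket buffer parameters. A sketch of inspecting the current values via /proc (changing them requires root and sysctl -w, e.g. sysctl -w net.core.rmem_max=4194304):

```shell
# Show current socket buffer limits (Linux); per the guidance above,
# the max and default values of each pair should match.
for p in rmem_max rmem_default wmem_max wmem_default; do
    printf 'net.core.%s = %s\n' "$p" "$(cat /proc/sys/net/core/$p)"
done
```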
                  
 11. What is the NIC driver version and is it the same on all nodes? 

 12. Is the NIC driver NAPI (New Application Program Interface) enabled on all nodes (recommended)? 

 13. What is the % of block loss compared to total gc block processing for that node?  View AWR reports for peak load periods.

Total # of blocks lost:
SQL> select INST_ID, NAME, VALUE from gv$sysstat where name like 'global cache %lost%' and value > 0;

 14. Is flow control enabled (tx & rx) on both the switch and the NICs?  It is not just the servers that need to honor transmission pause (Xoff) frames, but also the network equipment.

 15. Using QoS (Quality of Service) is not advised on the network over which the RAC private interconnect communicates with the other nodes of the cluster.  This includes the server, switch and DNS (or any other device connected to this network segment).
In one case on AIX, QoS was enabled but left unconfigured on a Cisco 3750 switch, causing an excessive number of "gc cr block lost" and other GC waits; those waits caused application performance issues. 
 
Links
Needs attention on -
Passed on hbdbuat

Status on hbdbuat:
PASS => No Global Cache lost blocks detected

 No of GC lost block in last 24 hours = 0                                        
Top

Session Failover configuration

Success Factor CONFIGURE ORACLE NET SERVICES LOAD BALANCING PROPERLY TO DISTRIBUTE CONNECTIONS
Recommendation
 
Benefit / Impact:

Higher application availability



Risk:

Application availability problems in case of failed nodes or database instances


Action / Repair:

Application connection failover and load balancing is highly recommended for OLTP environments but may not apply for DSS workloads.  DSS application customers may want to ignore this warning.


The following query will identify the application user sessions that do not have basic connection failover configured:

select username, sid, serial#,process,failover_type,failover_method FROM gv$session where upper(failover_method) != 'BASIC' and upper(failover_type) !='SELECT' and upper(username) not in ('SYS','SYSTEM','SYSMAN','DBSNMP');

 
Links
Needs attention on -
Passed on hbdbuat

Status on hbdbuat:
PASS => Failover method (SELECT) and failover mode (BASIC) are configured properly

 Query returned no rows which is expected when the SQL check passes.

Top

Avg GC Current Block Receive Time

Recommendation
 The average gc current block receive time should typically be less than 15 milliseconds depending on your system configuration and volume.  This is the average latency of a current request round-trip from the requesting instance to the holding instance and back to the requesting instance.

Use the following query to determine the average gc current block receive time for each instance.

set numwidth 20 
column "AVG CURRENT BLOCK RECEIVE TIME (ms)" format 9999999.9 
select b1.inst_id, ((b1.value / decode(b2.value,0,1)) * 10) "AVG CURRENT BLOCK RECEIVE TIME (ms)" 
from gv$sysstat b1, gv$sysstat b2 
where b1.name = 'gc current block receive time' and 
b2.name = 'gc current blocks received' and b1.inst_id = b2.inst_id ;
 
Needs attention on -
Passed on hbdbuat

Status on hbdbuat:
PASS => Avg GC CURRENT Block Receive Time Within Acceptable Range

 avg_gc_current_block_receive_time_15ms_exceeded = 0                             
Top

Avg GC CR Block Receive Time

Recommendation
 The average gc cr block receive time should typically be less than 15 milliseconds depending on your system configuration and volume.  This is the average latency of a consistent-read request round-trip from the requesting instance to the holding instance and back to the requesting instance.

Use the following query to determine the average gc cr block receive time for each instance.

set numwidth 20 
column "AVG CR BLOCK RECEIVE TIME (ms)" format 9999999.9 
select b1.inst_id, ((b1.value / decode(b2.value,0,1)) * 10) "AVG CR BLOCK RECEIVE TIME (ms)" 
from gv$sysstat b1, gv$sysstat b2 
where b1.name = 'gc cr block receive time' and 
b2.name = 'gc cr blocks received' and b1.inst_id = b2.inst_id ;
 
Needs attention on -
Passed on hbdbuat

Status on hbdbuat:
PASS => Avg GC CR Block Receive Time Within Acceptable Range

 avg_gc_cr_block_receive_time_15ms_exceeded = 0                                  
Top

Tablespace allocation type

Recommendation
 It is recommended that for all locally managed tablespaces the allocation type specified be SYSTEM to allow Oracle to automatically determine extent size based on the data profile.
 
Links
Needs attention on -
Passed on hbdbuat

Status on hbdbuat:
PASS => Tablespace allocation type is SYSTEM for all appropriate tablespaces for hbdbuat

 Query returned no rows which is expected when the SQL check passes.

Top

Old trace files in background dump destination

Recommendation
 Old trace files should be cleaned from background_dump_destination regularly; otherwise you may run out of space on the ORACLE_BASE mount point and be unable to collect diagnostic information when a failure occurs.
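This housekeeping can be scripted with find. A sketch, demonstrated against a scratch directory (point DUMP_DEST at the real background dump destination in practice; "touch -d" is GNU syntax used here only to fabricate an old file):

```shell
# Remove trace files older than 30 days from a dump destination.
# A temporary directory stands in for the real trace directory here.
DUMP_DEST=$(mktemp -d)
touch "$DUMP_DEST/recent.trc"
touch -d '40 days ago' "$DUMP_DEST/old.trc"   # simulate an old trace file

find "$DUMP_DEST" -name '*.trc' -mtime +30 -print -delete

ls "$DUMP_DEST"    # only recent.trc remains
rm -rf "$DUMP_DEST"
```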
 
Needs attention on -
Passed on d4jtfmcvurd02, d4jtfmcvurd01

Status on d4jtfmcvurd02:
PASS => background_dump_dest does not have any files older than 30 days

 DATA FROM D4JTFMCVURD02 - HBDBUAT DATABASE - OLD TRACE FILES IN BACKGROUND DUMP DESTINATION 



files at background dump destination(/oracle/app/oracle/diag/rdbms/hbdbuat/hbdbuat2/trace) older than 30 days = 0

Status on d4jtfmcvurd01:
PASS => background_dump_dest does not have any files older than 30 days

 DATA FROM D4JTFMCVURD01 - HBDBUAT DATABASE - OLD TRACE FILES IN BACKGROUND DUMP DESTINATION 



files at background dump destination(/oracle/app/oracle/diag/rdbms/hbdbuat/hbdbuat1/trace) older than 30 days = 0
Top

Alert log file size

Recommendation
 If the alert log file is larger than 50 MB, it should be rolled over to a new file and the old file backed up.
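The rollover can be implemented as copy-then-truncate, which backs up the old contents while leaving the file open for the instance to keep writing. A sketch assuming GNU stat; the path is illustrative, not a real alert log location:

```shell
# Roll over the alert log when it exceeds 50 MB.
ALERT_LOG=/tmp/alert_demo.log     # illustrative path for the demo
LIMIT=$((50 * 1024 * 1024))       # 50 MB threshold from the check above

size=$(stat -c %s "$ALERT_LOG" 2>/dev/null || echo 0)
if [ "$size" -gt "$LIMIT" ]; then
    cp "$ALERT_LOG" "$ALERT_LOG.$(date +%Y%m%d%H%M%S)"  # keep a backup copy
    : > "$ALERT_LOG"                                    # truncate in place
fi
```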
 
Needs attention on -
Passed on d4jtfmcvurd02, d4jtfmcvurd01

Status on d4jtfmcvurd02:
PASS => Alert log is not too big

 DATA FROM D4JTFMCVURD02 - HBDBUAT DATABASE - ALERT LOG FILE SIZE 



-rw-r----- 1 oracle asmadmin 14372 Sep 17 02:02 /oracle/app/oracle/diag/rdbms/hbdbuat/hbdbuat2/trace/alert_hbdbuat2.log

Status on d4jtfmcvurd01:
PASS => Alert log is not too big

 DATA FROM D4JTFMCVURD01 - HBDBUAT DATABASE - ALERT LOG FILE SIZE 



-rw-r----- 1 oracle asmadmin 55071 Sep 17 03:00 /oracle/app/oracle/diag/rdbms/hbdbuat/hbdbuat1/trace/alert_hbdbuat1.log
Top

Check ORA-07445 errors

Recommendation
 ORA-07445 errors may indicate database block corruption or another serious issue. See the trace file referenced next to the ORA-07445 error in the alert log for more information. If you are not able to resolve the problem, please open a service request with Oracle Support.
 
Links
Needs attention on -
Passed on d4jtfmcvurd02, d4jtfmcvurd01

Status on d4jtfmcvurd02:
PASS => No ORA-07445 errors found in alert log

 DATA FROM D4JTFMCVURD02 - HBDBUAT DATABASE - CHECK ORA-07445 ERRORS 




Status on d4jtfmcvurd01:
PASS => No ORA-07445 errors found in alert log

 DATA FROM D4JTFMCVURD01 - HBDBUAT DATABASE - CHECK ORA-07445 ERRORS 



Top

Check ORA-00600 errors

Recommendation
 ORA-00600 errors may indicate database block corruption or another serious issue. See the trace file referenced next to the ORA-00600 error in the alert log for more information. If you are not able to resolve the problem, please open a service request with Oracle Support.
 
Links
Needs attention on -
Passed on d4jtfmcvurd02, d4jtfmcvurd01

Status on d4jtfmcvurd02:
PASS => No ORA-00600 errors found in alert log

 DATA FROM D4JTFMCVURD02 - HBDBUAT DATABASE - CHECK ORA-00600 ERRORS 




Status on d4jtfmcvurd01:
PASS => No ORA-00600 errors found in alert log

 DATA FROM D4JTFMCVURD01 - HBDBUAT DATABASE - CHECK ORA-00600 ERRORS 



Top

Check user_dump_destination

Recommendation
 Old trace files should be cleaned from user_dump_destination regularly; otherwise you may run out of space on the ORACLE_BASE mount point and be unable to collect diagnostic information when a failure occurs.
 
Needs attention on -
Passed on d4jtfmcvurd02, d4jtfmcvurd01

Status on d4jtfmcvurd02:
PASS => user_dump_dest does not have trace files older than 30 days

 DATA FROM D4JTFMCVURD02 - HBDBUAT DATABASE - CHECK USER_DUMP_DESTINATION 



0 files found at /oracle/app/oracle/diag/rdbms/hbdbuat/hbdbuat2/trace which are older than 30 days

Status on d4jtfmcvurd01:
PASS => user_dump_dest does not have trace files older than 30 days

 DATA FROM D4JTFMCVURD01 - HBDBUAT DATABASE - CHECK USER_DUMP_DESTINATION 



0 files found at /oracle/app/oracle/diag/rdbms/hbdbuat/hbdbuat1/trace which are older than 30 days
Top

Check core_dump_destination

Recommendation
 Old core files should be cleaned from core_dump_destination regularly; otherwise you may run out of space on the ORACLE_BASE mount point and be unable to collect diagnostic information when a failure occurs.

For this check to pass, you should not have core files older than 30 days in core_dump_destination.
 
Needs attention on -
Passed on d4jtfmcvurd02, d4jtfmcvurd01

Status on d4jtfmcvurd02:
PASS => core_dump_dest does not have too many older core dump files

 DATA FROM D4JTFMCVURD02 - HBDBUAT DATABASE - CHECK CORE_DUMP_DESTINATION 



0 files found at /oracle/app/oracle/diag/rdbms/hbdbuat/hbdbuat2/cdump which are older than 30 days

Status on d4jtfmcvurd01:
PASS => core_dump_dest does not have too many older core dump files

 DATA FROM D4JTFMCVURD01 - HBDBUAT DATABASE - CHECK CORE_DUMP_DESTINATION 



0 files found at /oracle/app/oracle/diag/rdbms/hbdbuat/hbdbuat1/cdump which are older than 30 days
Top

Check for parameter semmns

Recommendation
 SEMMNS should be set >= 32000
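On Linux, all four semaphore limits, including the SEMMNS value checked here, live in a single file, in the order SEMMSL SEMMNS SEMOPM SEMMNI. A sketch of reading them:

```shell
# Field order in /proc/sys/kernel/sem is: SEMMSL SEMMNS SEMOPM SEMMNI.
read semmsl semmns semopm semmni < /proc/sys/kernel/sem
echo "semmsl=$semmsl semmns=$semmns semopm=$semopm semmni=$semmni"
# To persist changes, set kernel.sem in /etc/sysctl.conf, e.g.:
#   kernel.sem = 250 32000 100 128
```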
 
Links
Needs attention on -
Passed on d4jtfmcvurd02, d4jtfmcvurd01

Status on d4jtfmcvurd02:
PASS => Kernel Parameter SEMMNS OK

semmns = 32000

Status on d4jtfmcvurd01:
PASS => Kernel Parameter SEMMNS OK

semmns = 32000
Top

Check for parameter kernel.shmmni

Recommendation
 kernel.shmmni  should be >= 4096
 
Links
Needs attention on -
Passed on d4jtfmcvurd02, d4jtfmcvurd01

Status on d4jtfmcvurd02:
PASS => Kernel Parameter kernel.shmmni OK

kernel.shmmni = 4096

Status on d4jtfmcvurd01:
PASS => Kernel Parameter kernel.shmmni OK

kernel.shmmni = 4096
Top

Check for parameter semmsl

Recommendation
 SEMMSL should be set >= 250
 
Links
Needs attention on -
Passed on d4jtfmcvurd02, d4jtfmcvurd01

Status on d4jtfmcvurd02:
PASS => Kernel Parameter SEMMSL OK

semmsl = 250

Status on d4jtfmcvurd01:
PASS => Kernel Parameter SEMMSL OK

semmsl = 250
Top

Check for parameter semmni

Recommendation
 SEMMNI should be set >= 128
 
Links
Needs attention on -
Passed on d4jtfmcvurd02, d4jtfmcvurd01

Status on d4jtfmcvurd02:
PASS => Kernel Parameter SEMMNI OK

semmni = 128

Status on d4jtfmcvurd01:
PASS => Kernel Parameter SEMMNI OK

semmni = 128
Top

Check for parameter semopm

Recommendation
 
Benefit / Impact:

SEMOPM should be set >= 100 

Risk:

Kernel Parameter SEMOPM Is Lower Than The Recommended Value

Action / Repair:

SEMOPM should be set >= 100 
 
Links
Needs attention on -
Passed on d4jtfmcvurd02, d4jtfmcvurd01

Status on d4jtfmcvurd02:
PASS => Kernel Parameter SEMOPM OK

semopm = 100

Status on d4jtfmcvurd01:
PASS => Kernel Parameter SEMOPM OK

semopm = 100
Top

Check for parameter kernel.shmall

Recommendation
 Starting with Oracle 10g, kernel.shmall should be set >= 2097152.
 
Links
Needs attention on -
Passed on d4jtfmcvurd02, d4jtfmcvurd01

Status on d4jtfmcvurd02:
PASS => Kernel Parameter kernel.shmall OK

kernel.shmall = 2097152

Status on d4jtfmcvurd01:
PASS => Kernel Parameter kernel.shmall OK

kernel.shmall = 2097152
Top

Verify sys and system users default tablespace is system

Success Factor DATABASE FAILURE PREVENTION BEST PRACTICES
Recommendation
 
Benefit / Impact:

It is recommended to keep the default tablespace for the SYS and SYSTEM schemas set to SYSTEM. All standard dictionary objects, as well as those added by installed options, will then be located in the same place, with no risk of recording dictionary data in other datafiles.



Risk:

If the default tablespace for SYS or SYSTEM is not set to SYSTEM, data dictionary objects can be created in other locations and cannot be controlled during database maintenance activities. This carries a potential risk of severe data dictionary corruption that may require time-consuming recovery steps.


Action / Repair:

If the SYS or SYSTEM schema has a default tablespace other than SYSTEM, it is recommended to follow the instructions given in Note 1111111.2.

SQL> SELECT username, default_tablespace
     FROM dba_users
     WHERE username in ('SYS','SYSTEM');

If DEFAULT_TABLESPACE is anything other than the SYSTEM tablespace, modify the default tablespace to SYSTEM using the following commands:

SQL> ALTER USER SYS DEFAULT TABLESPACE SYSTEM;
SQL> ALTER USER SYSTEM DEFAULT TABLESPACE SYSTEM;
 
Needs attention on -
Passed on hbdbuat

Status on hbdbuat:
PASS => The SYS and SYSTEM userids have a default tablespace of SYSTEM

 SYSTEM                                                                          
SYSTEM                                                                          
Top

maximum parallel asynch io

Recommendation
 A message in the alert.log similar to the one below indicates that /proc/sys/fs/aio-max-nr is too low. Set this parameter to 1048576 proactively, and increase it further if you still see a similar message.  A problem in this area could lead to availability issues.

Warning: OS async I/O limit 128 is lower than recovery batch 1024
 
Links
Needs attention on -
Passed on d4jtfmcvurd02, d4jtfmcvurd01

Status on d4jtfmcvurd02:
PASS => The number of async IO descriptors is sufficient (/proc/sys/fs/aio-max-nr)

 DATA FROM D4JTFMCVURD02 - MAXIMUM PARALLEL ASYNCH IO 



aio-max-nr = 1048576

Status on d4jtfmcvurd01:
PASS => The number of async IO descriptors is sufficient (/proc/sys/fs/aio-max-nr)

 DATA FROM D4JTFMCVURD01 - MAXIMUM PARALLEL ASYNCH IO 



aio-max-nr = 1048576
Top

Old log files in client directory in crs_home

Recommendation
 Having many old log files in the $CRS_HOME/log/hostname/client directory can cause CRS performance issues, so delete log files older than 15 days.
 
Needs attention on -
Passed on d4jtfmcvurd02, d4jtfmcvurd01

Status on d4jtfmcvurd02:
PASS => $CRS_HOME/log/hostname/client directory does not have too many older log files

 DATA FROM D4JTFMCVURD02 - OLD LOG FILES IN CLIENT DIRECTORY IN CRS_HOME 



0 files in /oracle/app/11.2.0/grid/log/d4jtfmcvurd02/client directory are older than 15 days

Status on d4jtfmcvurd01:
PASS => $CRS_HOME/log/hostname/client directory does not have too many older log files

 DATA FROM D4JTFMCVURD01 - OLD LOG FILES IN CLIENT DIRECTORY IN CRS_HOME 



0 files in /oracle/app/11.2.0/grid/log/d4jtfmcvurd01/client directory are older than 15 days
Top

OCR backup

Success Factor USE EXTERNAL OR ORACLE PROVIDED REDUNDANCY FOR OCR
Recommendation
 Oracle Clusterware automatically creates OCR backups every four hours and, at any one time, retains the last three backup copies of the OCR. The CRSD process that creates the backups also creates and retains an OCR backup for each full day and at the end of each week.
 
Needs attention on d4jtfmcvurd02
Passed on -

Status on d4jtfmcvurd02:
WARNING => OCR is NOT being backed up daily

 DATA FROM D4JTFMCVURD02 - OCR BACKUP 




d4jtfmcvurd01     2014/09/17 06:33:32     /oracle/app/11.2.0/grid/cdata/d4jtfmc-cluster/backup00.ocr

d4jtfmcvurd01     2014/09/17 02:33:31     /oracle/app/11.2.0/grid/cdata/d4jtfmc-cluster/backup01.ocr

d4jtfmcvurd01     2014/09/16 22:33:31     /oracle/app/11.2.0/grid/cdata/d4jtfmc-cluster/backup02.ocr

d4jtfmcvurd01     2014/09/16 02:33:29     /oracle/app/11.2.0/grid/cdata/d4jtfmc-cluster/day.ocr

d4jtfmcvurd01     2014/09/11 14:33:16     /oracle/app/11.2.0/grid/cdata/d4jtfmc-cluster/week.ocr
Top

Check for parameter net.core.rmem_max

Success Factor VALIDATE UDP BUFFER SIZE FOR RAC CLUSTER (LINUX)
Recommendation
 Summary of settings:

net.core.rmem_default =262144
net.core.rmem_max = 2097152 (10g)
net.core.rmem_max = 4194304 (11g and above)  

net.core.wmem_default =262144
net.core.wmem_max =1048576
 
Links
Needs attention on -
Passed on d4jtfmcvurd02, d4jtfmcvurd01

Status on d4jtfmcvurd02:
PASS => net.core.rmem_max is Configured Properly

net.core.rmem_max = 4194304

Status on d4jtfmcvurd01:
PASS => net.core.rmem_max is Configured Properly

net.core.rmem_max = 4194304
Top

Check for parameter spfile

Recommendation
 Oracle recommends using one spfile for all instances in a clustered database.  With an spfile, the DBA can change many parameters dynamically.
 
Links
Needs attention on -
Passed on hbdbuat2, hbdbuat1

Status on hbdbuat2:
PASS => Instance is using spfile

hbdbuat2.spfile = +DATA01/hbdbuat/spfilehbdbuat.ora                             

Status on hbdbuat1:
PASS => Instance is using spfile

hbdbuat1.spfile = +DATA01/hbdbuat/spfilehbdbuat.ora                             
Top

Non-routable network for interconnect

Success Factor USE NON-ROUTABLE NETWORK ADDRESSES FOR PRIVATE INTERCONNECT
Recommendation
 
Benefit / Impact:

Secure and efficient interconnect performance.

Risk:

Latency and security issues.

Action / Repair:

The cluster interconnect should be a completely private/isolated (layer 2 packet processing), non-routable network (the only nodes connected to it are the cluster members themselves).

The 10.0.0.0/8, 172.16.0.0/12 and 192.168.0.0/16 networks are defined as non-routable (private) address ranges by RFC 1918.  Customers who use other networks for the interconnect should ensure that they are not being routed, in which case this finding can be ignored.
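Whether an address falls in one of the private ranges can be checked with a small shell function; the sample addresses below include the interconnect and public subnets reported for this cluster, plus an invented routable one:

```shell
# Succeeds when an IPv4 address is in an RFC 1918 private range:
# 10.0.0.0/8, 172.16.0.0/12, or 192.168.0.0/16.
is_private() {
    case "$1" in
        10.*)      return 0 ;;
        192.168.*) return 0 ;;
        172.*)
            o2=${1#172.}; o2=${o2%%.*}          # second octet must be 16-31
            [ "$o2" -ge 16 ] && [ "$o2" -le 31 ] && return 0 ;;
    esac
    return 1
}

for ip in 192.168.61.0 10.0.61.0 172.20.1.5 8.8.8.8; do
    if is_private "$ip"; then echo "$ip: private"; else echo "$ip: routable"; fi
done
```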
 
Links
Needs attention on -
Passed on d4jtfmcvurd02, d4jtfmcvurd01

Status on d4jtfmcvurd02:
PASS => Interconnect is configured on non-routable network addresses

 DATA FROM D4JTFMCVURD02 - NON-ROUTABLE NETWORK FOR INTERCONNECT 



eth1  10.0.61.0  global  public
eth0  192.168.61.0  global  cluster_interconnect

Status on d4jtfmcvurd01:
PASS => Interconnect is configured on non-routable network addresses

 DATA FROM D4JTFMCVURD01 - NON-ROUTABLE NETWORK FOR INTERCONNECT 



eth1  10.0.61.0  global  public
eth0  192.168.61.0  global  cluster_interconnect
Top

Hostname Formatting

Success Factor DO NOT USE UNDERSCORE IN HOST OR DOMAIN NAME
Recommendation
 Underscores should not be used in a host or domain name.

 According to RFC 952 (the DoD Internet host table specification), the same restriction applies to net, host, gateway and domain names.
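The rule is easy to verify with a shell pattern match; a sketch using one hostname from this cluster and one invented invalid name:

```shell
# Flag names that contain an underscore, which RFC 952 disallows.
check_hostname() {
    case "$1" in
        *_*) echo "$1: contains underscore (invalid)" ;;
        *)   echo "$1: ok" ;;
    esac
}

check_hostname d4jtfmcvurd01   # ok
check_hostname bad_host_name   # contains underscore (invalid)
```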


 
Links
Needs attention on -
Passed on d4jtfmcvurd02, d4jtfmcvurd01

Status on d4jtfmcvurd02:
PASS => None of the hostnames contains an underscore character

 DATA FROM D4JTFMCVURD02 - HOSTNAME FORMATING 



d4jtfmcvurd01
d4jtfmcvurd02

Status on d4jtfmcvurd01:
PASS => None of the hostnames contains an underscore character

 DATA FROM D4JTFMCVURD01 - HOSTNAME FORMATING 



d4jtfmcvurd01
d4jtfmcvurd02
Top

Check for parameter net.core.rmem_default

Success Factor VALIDATE UDP BUFFER SIZE FOR RAC CLUSTER (LINUX)
Recommendation
 Summary of settings:

net.core.rmem_default =262144
net.core.rmem_max = 2097152 (10g)
net.core.rmem_max = 4194304 (11g and above)  

net.core.wmem_default =262144
net.core.wmem_max =1048576
 
Links
Needs attention on -
Passed on d4jtfmcvurd02, d4jtfmcvurd01

Status on d4jtfmcvurd02:
PASS => net.core.rmem_default Is Configured Properly

net.core.rmem_default = 262144

Status on d4jtfmcvurd01:
PASS => net.core.rmem_default Is Configured Properly

net.core.rmem_default = 262144
Top

Check for parameter net.core.wmem_max

Success Factor VALIDATE UDP BUFFER SIZE FOR RAC CLUSTER (LINUX)
Recommendation
 Summary of settings:

net.core.rmem_default =262144
net.core.rmem_max = 2097152 (10g)
net.core.rmem_max = 4194304 (11g and above)  

net.core.wmem_default =262144
net.core.wmem_max =1048576
 
Links
Needs attention on -
Passed on d4jtfmcvurd02, d4jtfmcvurd01

Status on d4jtfmcvurd02:
PASS => net.core.wmem_max Is Configured Properly

net.core.wmem_max = 1048586

Status on d4jtfmcvurd01:
PASS => net.core.wmem_max Is Configured Properly

net.core.wmem_max = 1048586
Top

Check for parameter net.core.wmem_default

Success Factor VALIDATE UDP BUFFER SIZE FOR RAC CLUSTER (LINUX)
Recommendation
 Summary of settings:

net.core.rmem_default =262144
net.core.rmem_max = 2097152 (10g)
net.core.rmem_max = 4194304 (11g and above)  

net.core.wmem_default =262144
net.core.wmem_max =1048576
 
Links
Needs attention on -
Passed on d4jtfmcvurd02, d4jtfmcvurd01

Status on d4jtfmcvurd02:
PASS => net.core.wmem_default Is Configured Properly

net.core.wmem_default = 262144

Status on d4jtfmcvurd01:
PASS => net.core.wmem_default Is Configured Properly

net.core.wmem_default = 262144
Top

Archive log mode

Success Factor MAA: ENABLE ARCHIVELOG MODE
Recommendation
 First MAA test
 
Needs attention on hbdbuat
Passed on -

Status on hbdbuat:
WARNING => Without ARCHIVELOG mode the database cannot be recovered from an online backup and Data Guard cannot be used.

 archive_log_mode_check = NOARCHIVELOG                                           
Top

CRS HOME env variable

Success Factor AVOID SETTING ORA_CRS_HOME ENVIRONMENT VARIABLE
Recommendation
 
Benefit / Impact:

Avoid unexpected results running various Oracle utilities



Risk:

Setting this variable can cause problems for various Oracle components, and it is never necessary for CRS programs because they all have wrapper scripts.


Action / Repair:

Unset ORA_CRS_HOME in the execution environment.  If a variable is needed for automation purposes or convenience, use a different variable name (e.g., GI_HOME).
 
Links
Needs attention on -
Passed on d4jtfmcvurd02, d4jtfmcvurd01

Status on d4jtfmcvurd02:
PASS => ORA_CRS_HOME environment variable is not set

 DATA FROM D4JTFMCVURD02 - CRS HOME ENV VARIABLE 




ORA_CRS_HOME environment variable not set


Status on d4jtfmcvurd01:
PASS => ORA_CRS_HOME environment variable is not set

 DATA FROM D4JTFMCVURD01 - CRS HOME ENV VARIABLE 




ORA_CRS_HOME environment variable not set

Top

AUDSES$ sequence cache size

Success Factor CACHE APPLICATION SEQUENCES AND SOME SYSTEM SEQUENCES FOR BETTER PERFORMANCE
Recommendation
 Use a large cache value, perhaps 10,000 or more. NOORDER is most effective but impacts strict ordering: sequence numbers might not be generated in strict time order.
There are reported problems with AUDSES$ and ORA_TQ_BASE$, which are both internal sequences. Caching matters particularly when the order of an application sequence is unimportant, or when the sequence is used during the login process and can therefore be involved in a login storm. Some sequences must be presented in a particular order, and caching those is not a good idea; where order does not matter, however, caching the sequence improves performance. Contention on uncached sequences also manifests itself as waits in the "row cache" for "dc_sequences", the row cache type for sequences. 


For Applications this can cause significant issues especially with Transactional Sequences.  
Please see note attached.

Oracle General Ledger - Version: 11.5.0 to 11.5.10
Oracle Payables - Version: 11.5.0 to 11.5.10
Oracle Receivables - Version: 11.5.10.2
Information in this document applies to any platform.
ARXTWAI,ARXRWMAI 

Increase IDGEN1$ to a value of 1000, see notes below.  This is the default as of 11.2.0.1.
 
Links
Needs attention on hbdbuat
Passed on -

Status on hbdbuat:
WARNING => SYS.AUDSES$ sequence cache size < 10,000

 audses$.cache_size = 10000                                                      
Top

IDGEN$ sequence cache size

Success Factor CACHE APPLICATION SEQUENCES AND SOME SYSTEM SEQUENCES FOR BETTER PERFORMANCE
Recommendation
 Sequence contention (SQ enqueue) can occur if SYS.IDGEN1$ sequence is not cached to 1000.  This condition can lead to performance issues in RAC.  1000 is the default starting in version 11.2.0.1.
 
Links
Needs attention on -
Passed on hbdbuat

Status on hbdbuat:
PASS => SYS.IDGEN1$ sequence cache size >= 1,000

 idgen1$.cache_size = 1000                                                       
Top

Skipped Checks

skipping GI shell limits soft nproc(checkid:-841C7DEB776DB4BBE040E50A1EC0782E) because o_crs_user_limits_d4jtfmcvurd01.out not found
skipping GI shell limits soft nofile(checkid:-841D87785594F263E040E50A1EC020D6) because o_crs_user_limits_d4jtfmcvurd01.out not found
skipping GI shell limits hard nofile(checkid:-841E706550995C68E040E50A1EC05EFB) because o_crs_user_limits_d4jtfmcvurd01.out not found
skipping GI shell limits hard nproc(checkid:-841F8C3E78906005E040E50A1EC00357) because o_crs_user_limits_d4jtfmcvurd01.out not found
skipping GI shell limits hard stack(checkid:-9DAFD1040CA9389FE040E50A1EC0307C) because o_crs_user_limits_d4jtfmcvurd01.out not found
skipping Broadcast Requirements for Networks(checkid:-D112D25A574F13DCE0431EC0E50A55CD) because o_arping_broadcast_d4jtfmcvurd01.out not found
skipping OLR Integrity(checkid:-E1500ADF060A3EA2E04313C0E50A3676) because o_olrintegrity_d4jtfmcvurd01.out not found
skipping root shell limits hard nproc(checkid:-ED3A26771AD14E74E04313C0E50AB17A) because o_root_user_limits_d4jtfmcvurd01.out not found

Top

Top 10 Time Consuming Checks

NOTE: This information is primarily used for helping Oracle optimize the run time of orachk.

These timings are not necessarily indicative of any problem and may vary widely from one system to another.

Name Type Target Execution Duration
Parallel Execution Health-Checks and Diagnostics Reports OS Check d4jtfmcvurd01:hbdbuat 24 secs
Parallel Execution Health-Checks and Diagnostics Reports OS Check d4jtfmcvurd02:hbdbuat 24 secs
Cluster interconnect (clusterware) OS Collection d4jtfmcvurd01 14 secs
Cluster interconnect (clusterware) OS Collection d4jtfmcvurd02 13 secs
VIP NIC bonding config. OS Check d4jtfmcvurd01 8 secs
NIC Bonding Mode for interconnect OS Check d4jtfmcvurd01 7 secs
More than one card for interconnect? OS Check d4jtfmcvurd01 7 secs
Subnet mask for cluster_interconnect NICS branch OS Check d4jtfmcvurd01 7 secs
Jumbo frames configuration for interconnect OS Check d4jtfmcvurd01 7 secs
Interconnect NIC bonding config. OS Check d4jtfmcvurd01 7 secs
