What is RHP (Rapid Home Provisioning)
With the growth of IT, data centers keep getting larger and the demands on administrators keep rising. At the same time, users expect fast access to always-on services, so for enterprises, deployment and maintenance must be efficient and non-disruptive to the business. To keep pace, operational complexity and the number of manual steps must be reduced. Oracle's RHP (Rapid Home Provisioning) solution standardizes and simplifies software distribution and management. Its hallmarks are automation and efficiency, and it minimizes the impact of maintenance on large-scale deployments.
Rapid Home Provisioning (RHP) represents a standard way to deploy, patch, upgrade and migrate, in a unified manner, across all architectural layers of the software infrastructure (Oracle Database as well as third-party and custom software). It makes operations such as deploying, upgrading, patching and migrating Oracle clusters and databases, and scaling cluster nodes in and out, very convenient.
[orgrid@ohs1 ~]$ rhpctl
Usage: rhpctl []
commands: add|addnode|allow|delete|deleteimage|deletenode|disallow|discover|export|grant|import|insertimage|instantiate|modify|move|promote|query|register|revoke|subscribe|uninstantiate|unregister|unsubscribe|upgrade|verify|enable|disable|collect|deploy
objects: audit|client|credentials|database|gihome|image|imagetype|job|node|osconfig|peerserver|role|series|server|user|useraction|workingcopy
For detailed help on each command and object and its options use:
rhpctl -help
[orgrid@ohs1 ~]$
The following records the detailed steps for setting up an RHP Server on a cluster that already has 18c (18.3) installed.
ohs1 and ohs2 form the cluster with 18c already installed; the RHP Client is optional here.
Stop the RHP Server
[orgrid@ohs1 ~]$ srvctl stop rhpserver
[orgrid@ohs1 ~]$ srvctl remove rhpserver
PRCN-2018 : Current user orgrid is not a privileged user
[orgrid@ohs1 ~]$ which srvctl
/pgold/orgrid/oracle/product/183/bin/srvctl
[orgrid@ohs1 ~]$ su -
Password:
Remove the RHP Server
[root@ohs1 ~]# /pgold/orgrid/oracle/product/183/bin/srvctl remove rhpserver
[root@ohs1 ~]#
Add the RHP Server
[root@ohs1 ~]# /pgold/orgrid/oracle/product/183/bin/srvctl add rhpserver -storage /rhpstorage -diskgroup DATA -verbose
ohs1.ohsdba.cn: Creating a new volume...
ohs1.ohsdba.cn: Checking for the existence of file system...
ohs1.ohsdba.cn: Creating a new ACFS file system...
ohs1.ohsdba.cn: Starting the ACFS file system...
ohs1.ohsdba.cn: Creating authentication keys...
[root@ohs1 ~]# su - orgrid
Note: /rhpstorage will be created automatically.
Start the RHP Server
[root@ohs1 ~]# /pgold/orgrid/oracle/product/183/bin/srvctl start rhpserver
[root@ohs1 ~]# /pgold/orgrid/oracle/product/183/bin/srvctl status rhpserver
Rapid Home Provisioning Server is enabled
Rapid Home Provisioning Server is running on node ohs1
Check the RHP Server configuration
[root@ohs1 ~]# /pgold/orgrid/oracle/product/183/bin/srvctl config rhpserver
Storage base path: /rhpstorage
Disk Groups: DATA
Port number: 23795
Transfer port range:
Rapid Home Provisioning Server is enabled
Rapid Home Provisioning Server is individually enabled on nodes:
Rapid Home Provisioning Server is individually disabled on nodes:
Email address:
Mail server address:
Mail server port:
Transport Level Security disabled
HTTP Secure is enabled
[orgrid@ohs1 ~]$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/vg_ohs-lv_root
50G 5.1G 42G 11% /
tmpfs 2.4G 1.1G 1.3G 46% /dev/shm
/dev/sda1 477M 84M 368M 19% /boot
/dev/mapper/vg_ohs-lv_pgold
537G 23G 487G 5% /pgold
/dev/asm/ghchkpt-33 5.5G 784M 4.8G 14% /rhpstorage/chkbase
/dev/asm/ghvol464715-33
12G 5.7G 6.4G 47% /rhpstorage/images/iDB112957258
/dev/asm/ghvol895499-33
22G 12G 11G 51% /rhpstorage/images/iDB183271079
[orgrid@ohs1 ~]$
Add the HAVIP
[root@ohs1 ~]# /pgold/orgrid/oracle/product/183/bin/srvctl add havip -id havip -address 192.168.56.6
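To verify the HAVIP afterwards, a quick check (a sketch; srvctl config havip is also shown later in this note):
[root@ohs1 ~]# /pgold/orgrid/oracle/product/183/bin/srvctl config havip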
Performs Rapid Home Provisioning operations and manages Rapid Home Provisioning Servers and Clients.
Usage:
rhpctl add Adds a resource, type or other entity.
rhpctl addnode Adds nodes or instances of specific resources.
rhpctl addpdb Adds a pluggable database to the specified multitenant container database.
rhpctl allow Allows access to the image, series or image type.
rhpctl collect Collects backup of operating system configuration for the cluster.
rhpctl compare Compares operating system configurations for the specified cluster.
rhpctl delete Deletes a resource, type or other entity.
rhpctl deleteimage Deletes an image from a series.
rhpctl deletenode Deletes nodes or instances of specific resources.
rhpctl deletepdb Removes a pluggable database from the specified multitenant container database.
rhpctl deploy Deploys OS image for the cluster.
rhpctl disable Disables the scheduled daily backup of operating system configuration for the cluster.
rhpctl disallow Disallows access to the image, series or image type.
rhpctl discover Validates and discovers parameters to generate a response file.
rhpctl enable Enables the scheduled daily backup of operating system configuration for the cluster.
rhpctl export Exports data from the repository to a client or server data file.
rhpctl grant Grants a role to a client user.
rhpctl import Creates a new image from the specified path.
rhpctl insertimage Inserts a new image into a series.
rhpctl instantiate Requests images from another server.
rhpctl modify Modifies a resource, type or other entity.
rhpctl move Moves a resource from a source path to a destination path.
rhpctl promote Promotes an image.
rhpctl query Gets information of a resource, type or other entity.
rhpctl recover Recovers a node after its failure.
rhpctl register Registers an image, user or server.
rhpctl replicate Replicate image from server to a specified client.
rhpctl revoke Revokes a role of a client user.
rhpctl subscribe Subscribes the specified user to an image series.
rhpctl uninstantiate Stops updates for previously requested images from another server.
rhpctl unregister Unregisters an image, user or server.
rhpctl unsubscribe Unsubscribes the specified user to an image series.
rhpctl upgrade Upgrades a resource.
rhpctl verify Validates and creates or completes a response file.
rhpctl zdtupgrade Performs zero downtime upgrade of a database.
For detailed help on each command use:
rhpctl -help
[root@ohs1 ~]#
Below is the information about the original environment. On the RHP Server ohs1 we import the 11204 ORACLE_HOME from ood and the 183 ORACLE_HOME from ohs1 itself. Then, from ohs1, we use RHP to deploy the 11204 ORACLE_HOME to ohs and create a database, then deploy the 183 ORACLE_HOME, and finally upgrade the 11204 database to 183.
OS Server        ohs1,ohs2 (RHP Server)              ood                              ohs
GI HOME          /pgold/orgrid/oracle/product/183    N/A                              N/A
Database HOME    /pgold/ordb/oracle/product/183      /u01/app/oracle/product/11204    N/A
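The import commands themselves are not captured above; a minimal sketch of importing the local 18.3 database home on ohs1 as a gold image, reusing the image name visible in the earlier df output (the name actually used may have differed):
[orgrid@ohs1 ~]$ rhpctl import image -image iDB183271079 -path /pgold/ordb/oracle/product/183 -imagetype ORACLEDBSOFTWARE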
Switching the ORACLE_HOME with rhpctl move in 18c local mode. Note: move is only suitable when the major release is the same and only the patch level differs; for example, here we switch from Oracle 18.2 to 18.3.
Check the current environment
[oracle@sdb09] /home/oracle> env |grep ORACLE
ORACLE_BASE=/u01/app/oracle
ORACLE_HOME=/u01/app/oracle/product/18.0.0/dbhome_1
ORACLE_HOSTNAME=sdb09
ORACLE_PATH=/home/oracle/scripts
ORACLE_SID=cdb2
ORACLE_UNQNAME=cdb2
[oracle@sdb09] /home/oracle> sqlplus "/as sysdba"
SQL*Plus: Release 18.0.0.0.0 - Production on Tue Jul 24 16:43:00 2018
Version 18.2.0.0.0
Copyright (c) 1982, 2018, Oracle. All rights reserved.
Connected to:
Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production
Version 18.2.0.0.0
CDB$ROOT@cdb2>show pdbs
CON_ID CON_NAME OPEN MODE RESTRICTED
---------- ------------------------------ ---------- ----------
2 PDB$SEED READ ONLY YES
3 PDB2 READ WRITE NO
CDB$ROOT@cdb2>exit
Disconnected from Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production
Version 18.2.0.0.0
[oracle@sdb09] /home/oracle>
Running RHPCTL for Stand Alone Home
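The move command itself is not captured in the output below; a minimal sketch of the local-mode form, assuming -sourcehome and -destinationhome are the applicable rhpctl move database options and reusing the homes and SID from this environment:
$ rhpctl move database -sourcehome /u01/app/oracle/product/18.0.0/dbhome_1 -destinationhome /u01/app/oracle/product/18.0.0/dbhome_3 -dbname cdb2
The database restart and datapatch run shown below are the result of such a move.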
SQL*Plus: Release 18.0.0.0.0 - Production on Tue Jul 24 16:46:23 2018
Version 18.2.0.0.0
Copyright (c) 1982, 2018, Oracle. All rights reserved.
Connected to:
Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production
Version 18.2.0.0.0
SQL> Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> Disconnected from Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production
Version 18.2.0.0.0
SQL*Plus: Release 18.0.0.0.0 - Production on Tue Jul 24 16:46:55 2018
Version 18.3.0.0.0
Copyright (c) 1982, 2018, Oracle. All rights reserved.
Connected to an idle instance.
SQL> ORACLE instance started.
Total System Global Area 1073741304 bytes
Fixed Size 8904184 bytes
Variable Size 771751936 bytes
Database Buffers 289406976 bytes
Redo Buffers 3678208 bytes
Database mounted.
Database opened.
SQL> Disconnected from Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production
Version 18.3.0.0.0
/u01/app/oracle/product/18.0.0/dbhome_3
cdb2
SQL Patching tool version 18.0.0.0.0 Production on Tue Jul 24 16:47:19 2018
Copyright (c) 2012, 2018, Oracle. All rights reserved.
Log file for this invocation: /u01/app/oracle/cfgtoollogs/sqlpatch/sqlpatch_533_2018_07_24_16_47_19/sqlpatch_invocation.log
Connecting to database...OK
Gathering database info...done
Note: Datapatch will only apply or rollback SQL fixes for PDBs
that are in an open state, no patches will be applied to closed PDBs.
Please refer to Note: Datapatch: Database 12c Post Patch SQL Automation
(Doc ID 1585822.1)
Bootstrapping registry and package to current versions...done
Determining current state...done
Current state of interim SQL patches:
Interim patch 27923415 (OJVM RELEASE UPDATE: 18.3.0.0.180717 (27923415)):
Binary registry: Installed
PDB CDB$ROOT: Rolled back with errors on 24-JUL-18 04.30.06.505223 PM
PDB PDB$SEED: Rolled back with errors on 24-JUL-18 04.30.06.873780 PM
PDB PDB2: Rolled back with errors on 24-JUL-18 04.30.07.119076 PM
Current state of release update SQL patches:
Binary registry:
18.3.0.0.0 Release_Update 1806280943: Installed
PDB CDB$ROOT:
Rolled back to 18.2.0.0.0 Release_Update 1804041635 successfully on 24-JUL-18 04.30.06.555634 PM
PDB PDB$SEED:
Rolled back to 18.2.0.0.0 Release_Update 1804041635 successfully on 24-JUL-18 04.30.06.951502 PM
PDB PDB2:
Rolled back to 18.2.0.0.0 Release_Update 1804041635 successfully on 24-JUL-18 04.30.07.155698 PM
Adding patches to installation queue and performing prereq checks...done
Installation queue:
For the following PDBs: CDB$ROOT PDB$SEED PDB2
No interim patches need to be rolled back
Patch 28090523 (Database Release Update : 18.3.0.0.180717 (28090523)):
Apply from 18.2.0.0.0 Release_Update 1804041635 to 18.3.0.0.0 Release_Update 1806280943
The following interim patches will be applied:
27923415 (OJVM RELEASE UPDATE: 18.3.0.0.180717 (27923415))
Installing patches...
Patch installation complete. Total patches installed: 6
Validating logfiles...done
Patch 28090523 apply (pdb CDB$ROOT): SUCCESS
logfile: /u01/app/oracle/cfgtoollogs/sqlpatch/28090523/22329768/28090523_apply_CDB2_CDBROOT_2018Jul24_16_49_17.log (no errors)
Patch 27923415 apply (pdb CDB$ROOT): SUCCESS
logfile: /u01/app/oracle/cfgtoollogs/sqlpatch/27923415/22239273/27923415_apply_CDB2_CDBROOT_2018Jul24_16_51_54.log (no errors)
Patch 28090523 apply (pdb PDB$SEED): SUCCESS
logfile: /u01/app/oracle/cfgtoollogs/sqlpatch/28090523/22329768/28090523_apply_CDB2_PDBSEED_2018Jul24_16_52_15.log (no errors)
Patch 27923415 apply (pdb PDB$SEED): SUCCESS
logfile: /u01/app/oracle/cfgtoollogs/sqlpatch/27923415/22239273/27923415_apply_CDB2_PDBSEED_2018Jul24_16_55_08.log (no errors)
Patch 28090523 apply (pdb PDB2): SUCCESS
logfile: /u01/app/oracle/cfgtoollogs/sqlpatch/28090523/22329768/28090523_apply_CDB2_PDB2_2018Jul24_16_52_16.log (no errors)
Patch 27923415 apply (pdb PDB2): SUCCESS
logfile: /u01/app/oracle/cfgtoollogs/sqlpatch/27923415/22239273/27923415_apply_CDB2_PDB2_2018Jul24_16_55_01.log (no errors)
SQL Patching tool complete on Tue Jul 24 16:55:24 2018
[oracle@sdb09] /home/oracle>
APPLIES TO:
Oracle Database - Enterprise Edition - Version 12.2.0.1 and later
Oracle Database Cloud Schema Service - Version N/A and later
Oracle Database Exadata Express Cloud Service - Version N/A and later
Oracle Database Exadata Cloud Machine - Version N/A and later
Oracle Cloud Infrastructure - Database Service - Version N/A and later
IBM AIX on POWER Systems (64-bit)
Linux x86-64
Oracle Solaris on SPARC (64-bit)
Oracle Solaris on x86-64 (64-bit)
GOAL
To provide the steps to setup a Rapid Home Provisioning (RHP) Server & Client
SOLUTION
Introduction
Rapid Home Provisioning (RHP) represents a standard way for provisioning, patching and update at organizational level, in a unified manner, across all architectural layers of software infrastructure – Oracle databases and custom software.
Rapid Home Provisioning is a method of deploying software homes from a single cluster where you create, store and manage templates of Oracle homes as images (called gold images) of Oracle software. The DBA can make a working copy of any gold image and then provision that working copy to any RHP Client in a data center.
RHP is installed as part of Grid Infrastructure. Oracle Clusterware manages the components that form the Rapid Home Provisioning Server. These components include the RHP server itself, Grid Naming Service (GNS) which is used to advertise the location of the RHP server, a VIP to support HA-NFS (required if there are any clients that you want to provision to - whether you use NFS storage for the workingcopies or not) and Oracle ASM Cluster File System (ACFS) which is used to store snapshots of the working copies.
The gold images represent an installed home, whether that is an Oracle Database software home or some custom software home. The gold image is stored in an Oracle Automatic Storage Management Cluster File System (Oracle ACFS).
Metadata describing an installed home is stored as an image series in the Management Repository. The Management Repository (or Management Database MGMTDB) is created when installing Oracle Grid Infrastructure.
Note: providing an Oracle Home (workingcopy) to an RHP Client takes roughly 60 minutes when LOCAL and about 30 minutes over NFS (storagetype NFS), because many operations are needed, such as cloning the DB home, relinking, etc.
Using the rhpctl utility, gold images can be imported from an installed home on any of the RHP clients or RHP Server.
An existing working copy can be promoted to a gold image.
A working copy is an instantiation of an Oracle database software home
Working copies are writable and independent of one another
Using RHP we can provision Oracle GI/DB software for the various versions
11.2.0.* (Oracle Database 11g)
12.1.0.* (Oracle Database 12c)
12.2.0.* (Oracle Database 12c)
It is recommended to use the latest GI (Grid Infrastructure) version on the RHP Server (at the time of writing this article, 12.2.0.1_180116).
Using Rapid Home Provisioning
Rapid Home Provisioning requires ASM, the Management DB and the Grid Naming Service (GNS). As part of the Grid Infrastructure deployment:
ASM is configured
The Management Repository (Management DB) is available. The Management DB is installed by default with Oracle Grid Infrastructure 12.2.0.1
# $GRID_HOME/bin/srvctl status mgmtdb
Database is enabled
Instance -MGMTDB is running on node oda101
As the Management DB (MGMTDB) plays a key role in RHP, you should consider setting up a backup for it, in particular for the 'GHSUSER' schema (for example using expdp).
Its size depends on the number of images, workingcopies, sites, users, etc.
In general, the size requirement is very low since only metadata is stored: for 100 images, 2000 workingcopies and 25 RHP Clients, our internal tests showed a size of only about 200 MB.
You need to configure the GNS such that a GNS VIP is provided. A GNS with Zone Delegation is not required by RHP.
Note: you could skip the manual steps to set up the RHP Server by using "setrhp", the "1-Click" utility described in Note:2124960.1 - Rapid Home Provisioning (RHP) setup in "1-Click".
GNS Setup
Execute the following command as root, providing a valid IP address (a domain is not needed for RHP).
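The command from the original note is not preserved here; a minimal sketch, reusing the GNS VIP address that appears in the config output below:
# $GRID_HOME/bin/srvctl add gns -vip 10.xxx.51.63
You can then verify the GNS configuration: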
# $GRID_HOME/bin/srvctl status gns
GNS is running on node rhps.
GNS is enabled on node rhps.
# $GRID_HOME/bin/srvctl config gns
GNS is enabled.
GNS VIP addresses: 10.xxx.51.63
Domain served by GNS: No domain is being forwarded.
Note: currently the GNS setup is required on RHP Server only. We advertise to GNS the exact host and port where RHP Server is running which is used by RHP Clients to connect to RHP Server.
Once the GNS is configured the Rapid Home Provisioning server can be configured.
Creating a Rapid Home Provisioning Server
The Rapid Home Provisioning Server uses an Oracle ACFS file system for Oracle database software homes that will be published to clients.
To configure the Rapid Home Provisioning Server you will need to:
Provide an ASM diskgroup (DATA/RECO). It is recommended that this diskgroup has at least 100 GB of free space (a quick free-space check is sketched after this list). The diskgroup is used to store gold images and RHP-managed NFS-provisioned workingcopies.
Provide a mount path that exists on all nodes of the RHP server. Oracle ACFS snapshots can be used to provision server-local workingcopies or NFS-mounted workingcopies on clients.
Configure an HA VIP. The NFS client, on the RHP client clusters will communicate with the NFS server on the RHP server over this IP address for NFS mounted homes. The VIP allows for the configuration of HANFS. Note that the HA VIP has to be configured in the same subnet as the default network configured on the RHP server.
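One quick way to confirm the disk group free space beforehand (a sketch, run as the Grid Infrastructure owner with the ASM environment set):
$ $GRID_HOME/bin/asmcmd lsdg DATA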
As HANFS requires NFS in order to run, it is assumed that NFS (and its associated services) will be started by init scripts at node boot time. NFS needs to be running on all RHP server nodes. We can check this on each node by issuing:
# service rpcbind status
rpcbind (pid 13256) is running...
# service nfslock status
rpc.statd (pid 23372) is running...
If one of these is not running, you could start it by using (as root):
/etc/init.d/<service_name> start
The 'chkconfig' command can be used to ensure that these services are started at boot time
chkconfig nfs on
chkconfig rpcbind on
chkconfig nfslock on
Note: on OL5, the "rpcbind" service is called "portmap"
NFS operations such as 'export' and 'mount' may fail due to misconfiguration or bugs.
You may verify the NFS functionality with the 'mnttest' tool; see note:2167541.1 - RHP (Rapid Home Provisioning): mnttest tool to test NFS 'export' and 'mount' functionalities
NFS Usage: NFS is not required for a 19c RHP Server (RHPS) managing a 19c RHP Client (RHPC) or an 18.4 (or later) RHPC; however, the RHPS does need NFS for managing a 12.2 RHPC.
Once the pre-requisites mentioned above have been met, the RHP server can be added to Grid Infrastructure and started.
RHP Server: Create the Rapid Home Provisioning Server resource
Starting with Oracle Grid Infrastructure 18.1.0.0.0, when you install Oracle Grid Infrastructure, the Rapid Home Provisioning Server is configured, by default, in the local mode to support the local switch home capability. If you must configure the general Rapid Home Provisioning Server product, then you must remove the current local-mode Rapid Home Provisioning Server, using the following commands, as root:
# srvctl stop rhpserver
Ignore a message similar to "Rapid Home Provisioning Server is not running".
# srvctl remove rhpserver
Assuming DATA diskgroup usage, and '/rhp_storage' as the ACFS mount point, issue the following command (as the 'root' user):
example:
# $GRID_HOME/bin/srvctl add rhpserver -storage /rhp_storage -diskgroup DATA
Note: The storage path "/rhp_storage" is automatically created by srvctl.
If, when adding the rhpserver with the above command, you get the warning message:
PRCT-1431 : The Oracle ASM Dynamic Volume Manager compatibility attribute is not set for disk group
you can set it with the following command (connecting to the ASM instance with SQL*Plus):
ALTER DISKGROUP data SET ATTRIBUTE 'compatible.advm' = '12.2';
You can check the status of the RHP server with:
# $GRID_HOME/bin/srvctl status rhpserver
Rapid Home Provisioning Server is enabled
Rapid Home Provisioning Server is not running
On all RHP Server nodes at this time, you have a new ACFS mount point:
[root@oda101 ~]# mount
(...)
/dev/asm/ on /rhp_storage/chkbase type acfs (rw)
RHP Server: Start the Rapid Home Provisioning Server resource
Issue the following command (as 'root' user):
# $GRID_HOME/bin/srvctl start rhpserver
example:
[root@oda101 ~]# $GRID_HOME/bin/srvctl start rhpserver
[root@oda101 ~]# $GRID_HOME/bin/srvctl status rhpserver
Rapid Home Provisioning Server is enabled
Rapid Home Provisioning Server is running on node oda102
RHP Server: HA-VIP setup
On the RHP server as the Grid Infrastructure owner determine if an HA-VIP has been created (as 'grid' user):
$ $GRID_HOME/bin/srvctl config havip
HAVIP exists: /rhphavip/10.xxx.51.64, network number 1
Description:
Home Node:
HAVIP is enabled.
HAVIP is individually enabled on nodes:
HAVIP is individually disabled on nodes:
If it's not configured, you need to issue the following command (as root):
# $GRID_HOME/bin/srvctl add havip -id id -address {host_name | ip_address}
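For example, to match the configuration shown above (a sketch; the id and address are taken from the earlier config output):
# $GRID_HOME/bin/srvctl add havip -id rhphavip -address 10.xxx.51.64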
On the RHP Server, create the RHP client configuration file (as the 'grid' user). RHPCTL creates the client data file in the directory path you specify after the -toclientdata flag. The name of the client data file is "client_cluster_name".xml:
example:
[grid@oda101 ~]$ $GRID_HOME/bin/rhpctl add client -client odaremote-c -toclientdata /home/grid
oda101.: Creating client data ...
oda101.: Client data created for user "odaremote-c".
[grid@oda101 ~]$ ls -al /home/grid/odaremote-c.xml
-rw-r--r-- 1 grid oinstall 3158 Oct 31 21:59 /home/grid/odaremote-c.xml
Note: to get the (RHP Client) cluster name (as grid user) issue the following command:
$GRID_HOME/bin/cemutlo -n
Move the RHP Client configuration file created above to the RHP Client cluster.
A 12.1.0.2 target cluster cannot be an RHP client of a 12.2 RHP Server. For a 12.2 RHP Server cluster, pre-12.2 targets can be accessed using one of the connectivity mechanisms: sudo, root, or ssh. This restriction applies only between 12.1 and 12.2.
Perform the following actions on the RHP client cluster (as 'root' user):
Create the RHP client using the XML wallet created for this cluster
example:
[root@odaremote1 ~]# $GRID_HOME/bin/srvctl add rhpclient -clientdata /home/grid/odaremote-c.xml
[root@odaremote1 ~]# $GRID_HOME/bin/srvctl status rhpclient
Rapid Home Provisioning Client is enabled
Rapid Home Provisioning Client is not running
[root@odaremote1 ~]# $GRID_HOME/bin/srvctl start rhpclient
The command "srvctl add rhpclient" has two optional arguments:
-diskgroup A disk group from which to create the ACFS file systems for image storage.
-storage A location which is available on every cluster node and is not necessarily shared.
"-diskgroup" and "-storage" are defined as a pair (you cannot define one without the other). They can be provided in "srvctl add rhpclient", and they may be modified or added with "srvctl modify rhpclient". The specified diskgroup and storage location are used in two scenarios:
1. When provisioning working copies of Oracle Database Homes into RHP_MANAGED_STORAGE. RHP_MANAGED_STORAGE is the recommended storage type for Oracle Database Homes because it leverages ACFS features to minimize the storage footprint.
2. If the client is a multi-node cluster. When a working copy of a software home (Grid, DB or generic) is provisioned to the client, the client will create a local cached copy in ACFS, and then copy it to the other nodes of the cluster locally. (Otherwise, the RHP Server itself must provision the working copy to each node.)
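As noted above, the pair can also be added to an existing client later; a minimal sketch (the /rhp_images path is a placeholder):
# $GRID_HOME/bin/srvctl modify rhpclient -diskgroup DATA -storage /rhp_images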
Scenarios to Use Rapid Home Provisioning
Adding Gold Images to the Rapid Home Provisioning Server
Rapid Home Provisioning Server stores and serves gold images of software homes. These gold images must be instantiated on the Rapid Home Provisioning Server. Gold images are read-only, preventing a client from running programs from them. Gold images are not used as software homes directly. Rather, the gold image is used to create working copies. These working copies are usable as software homes on RHP Client.
The DBA can import software to the Rapid Home Provisioning Server using any one of the following methods:
- Import a gold image from an installed home on the Rapid Home Provisioning Server.
- Import a gold image from an installed home on a Rapid Home Provisioning Client (run the command from the RHP Client).
In both cases the command takes the form:
rhpctl import image -image <image_name> -path <path_to_installed_home>
- Promote an existing workingcopy to a gold image using the following command:
rhpctl add image -image <image_name> -workingcopy <workingcopy_name>
As an example, let's suppose you have created several Oracle homes (on the RHP Server):
Oracle Home Name    Oracle Home version               Home Location
----------------    -------------------               -------------
OraDb11203_home1    11.2.0.3.14(20299017,17592127)
OraDb11204_home1    11.2.0.4.6(20299013)
OraDb12102_home1    12.1.0.2.4(20831110,20831113)
You could make them available as gold images by issuing the following commands (as the 'grid' user):
example:
$ $GRID_HOME/bin/rhpctl import image -image OraDb11203_home1 -path -imagetype ORACLEDBSOFTWARE
slcac464.: Creating a new ACFS file system for image "OraDb11203_home1" ...
slcac464.: Copying files...
slcac464.: Copying home contents...
slcac464.: Changing the home ownership to user oracle...
slcac464.: Transferring data to 1 nodes
slcac464.: 10% complete
slcac464.: 20% complete
slcac464.: 30% complete
slcac464.: 40% complete
slcac464.: 50% complete
slcac464.: 60% complete
slcac464.: 70% complete
slcac464.: 80% complete
slcac464.: 90% complete
slcac464.: 100% complete
slcac464.: Changing the home ownership to user grid...
$ $GRID_HOME/bin/rhpctl import image -image OraDb11204_home1 -path -imagetype ORACLEDBSOFTWARE
slcac464.: Creating a new ACFS file system for image "OraDb11204_home1" ...
slcac464.: Copying files...
slcac464.: Copying home contents...
slcac464.: Changing the home ownership to user oracle...
slcac464.: Transferring data to 1 nodes
slcac464.: 10% complete
slcac464.: 20% complete
slcac464.: 30% complete
slcac464.: 40% complete
slcac464.: 50% complete
slcac464.: 60% complete
slcac464.: 70% complete
slcac464.: 80% complete
slcac464.: 90% complete
slcac464.: 100% complete
slcac464.: Changing the home ownership to user grid...
$ $GRID_HOME/bin/rhpctl import image -image OraDb12102_home1 -path -imagetype ORACLEDBSOFTWARE
slcac464.: Creating a new ACFS file system for image "OraDb12102_home1" ...
slcac464.: Copying files...
slcac464.: Copying home contents...
slcac464.: Changing the home ownership to user oracle...
slcac464.: Transferring data to 1 nodes
slcac464.: 10% complete
slcac464.: 20% complete
slcac464.: 30% complete
slcac464.: 40% complete
slcac464.: 50% complete
slcac464.: 60% complete
slcac464.: 70% complete
slcac464.: 80% complete
slcac464.: 90% complete
slcac464.: 100% complete
slcac464.: Changing the home ownership to user grid...
Note that at this point you will have three new ACFS file systems (for every single rhpctl import command, a new ACFS mount point is created).
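You can list the registered gold images at any time with rhpctl query (a sketch; query and image both appear in the command summary earlier in this document):
$ $GRID_HOME/bin/rhpctl query image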
The command rhpctl add workingcopy is used to provision an ORACLE_HOME for use by an Oracle database. This command can be run on the RHP Server or on the RHP Client. If run on the RHP Server, the command can provision an Oracle database either locally on the RHP Server or remotely on any RHP Client. When using the rhpctl command on the RHP Server, use the -client option to specify the remote cluster.
- Create the mount point for the working copy on all nodes of the RHP Client
-help [REMOTEPROVISIONING|STORAGETYPE|ADMINDB|POLICYDB|DBWITHPQPOOLS|DBTEMPLATE|PDB|GRIDHOMEPROV|SWONLYGRIDHOMEPROV|STANDALONEPROVISIONING] Typical options (context sensitive) for various use cases
-workingcopy Name of the working copy to be created
-image Image name from the configured images
-oraclebase ORACLE_BASE path for provisioning Oracle database home or Oracle Grid Infrastructure home
-inventory Location of Oracle Inventory
-path The absolute path for provisioning software home (For database images, this will be the ORACLE_HOME)
-storagetype {NFS | LOCAL | RHP_MANAGED} Type of storage for the home
-user Name of the user for whom the software home is being provisioned
-dbname Name of database (DB_UNIQUE_NAME) to be provisioned
-dbtype {RACONENODE | RAC | SINGLE} Type of database: RAC One Node or RAC or Single Instance
-datafileDestination Data file destination location or ASM disk group name
-dbtemplate | : Absolute file path for the template file or relative path to the image home directory on Rapid Home Provisioning Server
-node Comma-separated list of nodes on which database will be created
-serverpool Existing server pool name
-newpool Server pool name for pool to be created
-cardinality Cardinality for new server pool
-pqpool Existing PQ pool name
-newpqpool PQ pool name for pool to be created
-pqcardinality Cardinality for new PQ pool
-cdb To create database as container database
-pdbName The pdbName prefix if one or more PDBs need to be created
-numberOfPDBs Number of PDBs to be created
-client Client cluster name
-ignoreprereq To ignore the CVU pre-requisite checks
-fixup Execute fixup script. Valid only for Grid Infrastructure provisioning.
-responsefile response file to be used to perform Oracle Grid Infrastructure provisioning
-clusternodes :[:][,:[:]...] Comma-separated list of node information on which Oracle Clusterware will be provisioned
-groups "OSDBA|OSOPER|OSASM|OSBACKUP|OSDG|OSKM|OSRAC=[,...]" Comma-separated list of Oracle groups to be configured in the working copy.
-root Use root credentials to access the remote node
-sudouser perform super user operations as sudo user name
-sudopath location of sudo binary
-notify Send email notification
-cc List of users to whom email notifications will be sent, in addition to owner of working copy
-asmclientdata File that contains the ASM client data
-gnsclientdata File that contains the GNS client data
-clustermanifest Location of Cluster Manifest File
-local Perform Grid Infrastructure software-only provisioning on the local node.
-softwareonly Perform Grid Infrastructure software-only provisioning.
-targetnode Name of a node in a remote cluster with no Rapid Home Provisioning Client
-agpath Read-write path for OLFS-based Oracle home.
-aupath Gold image path for OLFS-based Oracle home
-setupssh sets up passwordless SSH user equivalence on the remote nodes for the provisioning user
-useractiondata Value to be passed to useractiondata parameter of useraction script
Provide an Oracle Home to the RHP Client for the "oracle" user, making it available "locally" (as the 'grid' user from the RHP Server).
Using 'rhpctl add workingcopy', you can also provision a database (RAC, RAC One Node, or Single Instance; non-CDB or CDB), for example as sketched below.
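A minimal sketch of such a call, adding the database options from the list above (the database name and disk group are placeholders):
$ $GRID_HOME/bin/rhpctl add workingcopy -workingcopy WC_db12102 -image OraDb12102_home1 -oraclebase /u01/app/oracle -user oracle -storagetype RHP_MANAGED -client odaremote-c -dbname mydb -dbtype SINGLE -datafileDestination DATA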
Note: In case your RHP Client is an ODA (Oracle Database Appliance), it is not recommended to provision the database using "rhpctl add workingcopy -dbname":
- there is no flexibility in having the redo log files on the REDO diskgroup (SSD) and the recovery files on the RECO diskgroup
- the DB template used is a generic template, not one specialized for ODA
- special enhancements are done on ODA to store the database on an ACFS filesystem
Switch to Managed Homes
A DBA can switch databases from an Oracle home that was not provisioned using Rapid Home Provisioning (unmanaged Oracle home) to an Oracle home provisioned and managed by Rapid Home Provisioning Server.
Assume the following:
Oracle Home installed at /u01/app/product/12.1.0/dbhome (currently an unmanaged home) on an RHP client cluster named odaremote. This home has one or more databases created from it.
A gold image named ORACLEDB12 is managed by the RHP server.
(Optionally) the gold image ORACLEDB12 has a working copy named myDB12HOME1 created from it on the client cluster odaremote.
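The move command for this switch is not shown in the original note; a sketch of its likely form, run from the RHP Server and assuming -sourcehome, -patchedwc and -client are the applicable rhpctl move database options:
$ $GRID_HOME/bin/rhpctl move database -sourcehome /u01/app/product/12.1.0/dbhome -patchedwc myDB12HOME1 -client odaremote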
Example:
RHP Client
Name Type Storage HomeName HomeLocation Version
----- ------ -------- -------------- ---------------- ----------
odasdb SINGLE ACFS OraDb12102_home2 12.1.0.2.4(20831110,20831113)
----------------------------------------------------------------------------
RHP Client
Name Type Storage HomeName HomeLocation Version
----- ------ -------- -------------- ---------------- ----------
odasdb SINGLE ACFS WC_OraDb12102_home1 12.1.0.2.4(20831110,20831113)
Note: Moving a RAC database from an unmanaged Oracle Home to an RHP-managed Oracle Home is, by default, done in rolling mode.
Database Patching
Similar to the above scenario, patching involves moving a database from one workingcopy to a new patched workingcopy.
Workingcopies are independent and multiple workingcopies can be created from the same gold image.
A typical scenario would involve creating an initial workingcopy from a base release. As new patches are released (PSUs are a good example), create a new workingcopy from your current gold image and apply the PSU.
The latest workingcopy would be promoted as a new gold image from which new databases are created. Existing databases can then be moved to this latest gold image (which contains the current PSU).
This maintains a lineage of homes, allows for reverting to an older home if necessary, and keeps databases up to date, with regard to PSU application.
Assume you are moving all databases from a workingcopy named wcDB12PSU1 to a workingcopy named wcDB12PSU2; you would issue the move command below.
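A minimal sketch of its likely form (the exact command is not preserved in this copy; -sourcewc and -patchedwc are assumed to be the applicable rhpctl move database options):
$ $GRID_HOME/bin/rhpctl move database -sourcewc wcDB12PSU1 -patchedwc wcDB12PSU2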
This command creates a new ORACLE_HOME based on the patched image, if it does not exist, and then switches all Oracle databases from their current ORACLE_HOME location to the new ORACLE_HOME.
By default, patching is performed in rolling mode. Use the -nonrolling option to perform patching in non-rolling mode; the database is then completely stopped on the old ORACLE_HOME and restarted using the newly patched ORACLE_HOME.
For database versions 12.1.0.1 or higher, the command rhpctl move database also executes any SQL commands required for database patching. For database versions earlier than Oracle Database 12c Release 1, a message is displayed asking the user to run the SQL commands for database patching manually. If only a specific database is to have its ORACLE_HOME moved, include the -dbname switch.
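For example (a sketch; the database name is a placeholder):
$ $GRID_HOME/bin/rhpctl move database -sourcewc wcDB12PSU1 -patchedwc wcDB12PSU2 -dbname mydb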
Similar to the above scenario, standby database patching involves moving a database oracle home from one workingcopy to a new patched workingcopy without applying any dictionary patch.
On RHP Client a standby database is running under 12.1.0.2.0:
Name Type Storage HomeName HomeLocation Version
----- ------ -------- -------------- ---------------- ----------
STDBY SINGLE ACFS OraDb12102_home1 12.1.0.2.0
From the RHP Server provide a new OH 12.1.0.2.3 (as 'grid' user):
Oracle Home Name Oracle Home version Home Location
---------------- ------------------- ------------
OraDb12102_home1     12.1.0.2.0
WC_OraDb121023_home1 12.1.0.2.3(20299023)   /rhp_oracle_home/product/12.1.0.2/WC_OraDb121023_home1
At this point we can move the remote standby from the 12.1.0.2.0 binaries to 12.1.0.2.3 (patch the standby OH), as the 'grid' user.
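The command is not shown in the original note; a sketch, assuming the standby's current home OraDb12102_home1 is itself an RHP working copy and that the command is run on the client cluster (from the RHP Server a -client option would name the target cluster; if the current home is unmanaged, -sourcehome with its path would be used instead):
$ $GRID_HOME/bin/rhpctl move database -sourcewc OraDb12102_home1 -patchedwc WC_OraDb121023_home1 -dbname STDBY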
The STDBY database is now attached to the new OH provided above:
Name Type Storage HomeName HomeLocation Version
----- ------ -------- -------------- ---------------- ----------
STDBY SINGLE ACFS WC_OraDb121023_home1 /rhp_oracle_home/product/12.1.0.2/WC_OraDb121023_home1 12.1.0.2.3(20299023)
Note: Rapid Home Provisioning is standby-database aware. The standby database will be left in mount state and no database dictionary patches (part of the PSU) are applied to it; they will arrive via redo shipping from the primary database.
Troubleshooting
- The RHP Server logs are located at the following location:
/crsdata//rhp
- The RHP Client logs are located at the following location:
/oc4j/j2ee/home/log/gh*.log
- In order to investigate what an srvctl command is doing:
Depending on the nature of the error, enable additional tracing by defining SRVM_NATIVE_TRACE=TRUE and SRVM_JNI_TRACE=TRUE. This produces additional JNI trace information and helps isolate issues in the JNI layer.
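A minimal sketch of enabling this tracing before re-running the failing command (the variable names are those mentioned above):
$ export SRVM_NATIVE_TRACE=TRUE
$ export SRVM_JNI_TRACE=TRUE
$ $GRID_HOME/bin/srvctl status rhpserver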