How to Upgrade RAC Grid from 12.2 to 19.16

Hello Friends,

In this post we will walk through upgrading RAC Grid Infrastructure from 12.2.0.1 (patch 33583921) to 19.16 (patch 34130714).

So let's get started.

Summary of steps to upgrade Grid:

1) Review the pre-upgrade checklist.

2) Download the 19c Grid software.

3) Apply the mandatory 19c patches.

4) Run a dry-run upgrade.

5) Upgrade Grid.

6) Verify the Grid upgrade.
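
As a quick sanity check before starting, parts of the pre-upgrade checklist can be sketched in a small shell script. The version list and the free-space threshold below are illustrative, not taken from this environment; always verify supported source releases against the 19c Grid upgrade guide.

```shell
#!/bin/sh
# Hypothetical pre-upgrade sanity checks -- adjust paths and thresholds to your site.

# Commonly cited supported source releases for a direct upgrade to 19c Grid
# (verify against the official upgrade guide for your platform).
is_supported_source() {
    case "$1" in
        11.2.0.4*|12.1.0.2*|12.2.0.1*|18.*) echo yes ;;
        *)                                  echo no  ;;
    esac
}

# Free space in GB on the mount point that will hold the new 19c home.
free_gb() {
    df -P "$1" | awk 'NR==2 { printf "%d\n", $4 / 1024 / 1024 }'
}

is_supported_source "12.2.0.1"   # -> yes (our source release)
is_supported_source "11.2.0.1"   # -> no
free_gb /tmp
```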

Environment


RAC nodes               : dm01db01, dm01db02
DB Name                 : CDBFINDB
DB Instances            : CDBFINDB1, CDBFINDB2
Current DB version      : 12.2.0.1 (33583921)
Target Grid version     : 19.16 (patch 34130714)
Cluster Storage used    : ASM
Platform                : OEL 7.8
Current Grid HOME       : /Grid/app/gridwork/gr_12.2
New 19c Grid HOME       : /Grid/app/gridwork/gr_19.16
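
For reference, the grid user's profile for this layout might look like the sketch below once the new home is in use. This is an assumption based on the paths above (ORACLE_BASE is inferred from the log locations), not a capture from the actual system; adjust to your environment.

```shell
# Sketch of the grid user's environment for the new 19c home.
# ORACLE_SID would be +ASM1 on node 1 and +ASM2 on node 2.
export ORACLE_BASE=/Grid/app/gridwork/gr_base
export ORACLE_HOME=/Grid/app/gridwork/gr_19.16
export ORACLE_SID=+ASM1
export PATH=$ORACLE_HOME/bin:$PATH
```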

Download AHF, which includes both ORAchk and EXAchk (MOS Doc ID 2550798.1), and run the pre-upgrade checks.

Apply the patch on the new 19c home before the install:


[grid@dm01db01 gr_19.16]$ ./gridSetup.sh -applyPSU /oracle/GR_Patch/34130714 -applyOneOffs /oracle/GR_Patch/34130714
Preparing the home to patch...
Applying the patch /oracle/GR_Patch/34130714...
Successfully applied the patch.
Applying the patch /oracle/GR_Patch/34130714...
Successfully applied the patch.
The log can be found at: /Grid/app/gridwork/oraInventory/logs/GridSetupActions2022-10-20_06-02-48PM/installerPatchActions_2022-10-20_06-02-48PM.log
Launching Oracle Grid Infrastructure Setup Wizard...

Start the dry run for the upgrade:


[grid@dm01db01 gr_19.16]$ ./gridSetup.sh -dryRunForUpgrade
Launching Oracle Grid Infrastructure Setup Wizard...

The file copy is already completed during the dry run, so there is one less task during the actual upgrade.

After the dry run, rootupgrade.sh is run only on the local (first) node:


[root@dm01db01 oracle]# /Grid/app/gridwork/gr_19.16/rootupgrade.sh
Performing root user operation.

The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /Grid/app/gridwork/gr_19.16

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The file "oraenv" already exists in /usr/local/bin.  Overwrite it? (y/n)
[n]: y
   Copying oraenv to /usr/local/bin ...
The file "coraenv" already exists in /usr/local/bin.  Overwrite it? (y/n)
[n]: y
   Copying coraenv to /usr/local/bin ...

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Relinking oracle with rac_on option
Performing Dry run of the Grid Infrastructure upgrade.
Using configuration parameter file: /Grid/app/gridwork/gr_19.16/crs/install/crsconfig_params
The log of current session can be found at:
  /Grid/app/gridwork/gr_base/crsdata/dm01db01/crsconfig/crsupgrade_dryrun_dm01db01_2022-10-20_06-57-22PM.log
2022/10/20 18:57:48 CLSRSC-729: Checking whether CRS entities are ready for upgrade, cluster upgrade will not be attempted now. This operation may take a few minutes.
2022/10/20 18:59:55 CLSRSC-693: CRS entities validation completed successfully.

Now start the actual upgrade. First check the current active version:


[grid@dm01db01 bin]$ ./crsctl query crs activeversion -f
Oracle Clusterware active version on the cluster is [12.2.0.1.0]. The cluster upgrade state is [NORMAL]. The cluster active patch level is [3975995681].

Run rootupgrade.sh on node 1:


[root@dm01db01 ~]# /Grid/app/gridwork/gr_19.16/rootupgrade.sh
Performing root user operation.

The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /Grid/app/gridwork/gr_19.16

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Relinking oracle with rac_on option
Using configuration parameter file: /Grid/app/gridwork/gr_19.16/crs/install/crsconfig_params
The log of current session can be found at:
  /Grid/app/gridwork/gr_base/crsdata/dm01db01/crsconfig/crsupgrade_dm01db01_2022-10-20_07-23-22PM.log
2022/10/20 19:23:34 CLSRSC-595: Executing upgrade step 1 of 18: 'UpgradeTFA'.
2022/10/20 19:23:34 CLSRSC-4015: Performing install or upgrade action for Oracle Trace File Analyzer (TFA) Collector.
2022/10/20 19:23:34 CLSRSC-595: Executing upgrade step 2 of 18: 'ValidateEnv'.
2022/10/20 19:23:36 CLSRSC-595: Executing upgrade step 3 of 18: 'GetOldConfig'.
2022/10/20 19:23:36 CLSRSC-692: Checking whether CRS entities are ready for upgrade. This operation may take a few minutes.
2022/10/20 19:24:47 CLSRSC-4003: Successfully patched Oracle Trace File Analyzer (TFA) Collector.
2022/10/20 19:27:33 CLSRSC-693: CRS entities validation completed successfully.
2022/10/20 19:27:37 CLSRSC-464: Starting retrieval of the cluster configuration data
PRCT-1470 : failed to reset the Rapid Home Provisioning (RHP) repository
 PRCR-1172 : Failed to execute "srvmhelper" for -getmgmtdbnodename
CRS-4672: Successfully backed up the Voting File for Cluster Synchronization Service.
2022/10/20 19:27:59 CLSRSC-515: Starting OCR manual backup.
2022/10/20 19:28:08 CLSRSC-516: OCR manual backup successful.
2022/10/20 19:28:15 CLSRSC-486:
 At this stage of upgrade, the OCR has changed.
 Any attempt to downgrade the cluster after this point will require a complete cluster outage to restore the OCR.
2022/10/20 19:28:15 CLSRSC-541:
 To downgrade the cluster:
 1. All nodes that have been upgraded must be downgraded.
2022/10/20 19:28:15 CLSRSC-542:
 2. Before downgrading the last node, the Grid Infrastructure stack on all other cluster nodes must be down.
2022/10/20 19:28:24 CLSRSC-465: Retrieval of the cluster configuration data has successfully completed.
2022/10/20 19:28:24 CLSRSC-595: Executing upgrade step 4 of 18: 'GenSiteGUIDs'.
2022/10/20 19:28:24 CLSRSC-595: Executing upgrade step 5 of 18: 'UpgPrechecks'.
2022/10/20 19:28:43 CLSRSC-595: Executing upgrade step 6 of 18: 'SetupOSD'.
Redirecting to /bin/systemctl restart rsyslog.service
2022/10/20 19:28:43 CLSRSC-595: Executing upgrade step 7 of 18: 'PreUpgrade'.
2022/10/20 19:29:34 CLSRSC-468: Setting Oracle Clusterware and ASM to rolling migration mode
2022/10/20 19:29:34 CLSRSC-482: Running command: '/Grid/app/gridwork/gr_12.2/bin/crsctl start rollingupgrade 19.0.0.0.0'
CRS-1131: The cluster was successfully set to rolling upgrade mode.
2022/10/20 19:29:38 CLSRSC-482: Running command: '/Grid/app/gridwork/gr_19.16/bin/asmca -silent -upgradeNodeASM -nonRolling false -oldCRSHome /Grid/app/gridwork/gr_12.2 -oldCRSVersion 12.2.0.1.0 -firstNode true -startRolling false '

ASM configuration upgraded in local node successfully.

2022/10/20 19:29:44 CLSRSC-469: Successfully set Oracle Clusterware and ASM to rolling migration mode
2022/10/20 19:29:51 CLSRSC-466: Starting shutdown of the current Oracle Grid Infrastructure stack
2022/10/20 19:30:30 CLSRSC-467: Shutdown of the current Oracle Grid Infrastructure stack has successfully completed.
2022/10/20 19:30:36 CLSRSC-595: Executing upgrade step 8 of 18: 'CheckCRSConfig'.
2022/10/20 19:30:37 CLSRSC-595: Executing upgrade step 9 of 18: 'UpgradeOLR'.
2022/10/20 19:30:52 CLSRSC-595: Executing upgrade step 10 of 18: 'ConfigCHMOS'.
2022/10/20 19:31:23 CLSRSC-595: Executing upgrade step 11 of 18: 'UpgradeAFD'.
2022/10/20 19:31:32 CLSRSC-595: Executing upgrade step 12 of 18: 'createOHASD'.
2022/10/20 19:31:43 CLSRSC-595: Executing upgrade step 13 of 18: 'ConfigOHASD'.
2022/10/20 19:31:43 CLSRSC-329: Replacing Clusterware entries in file 'oracle-ohasd.service'
2022/10/20 19:32:20 CLSRSC-595: Executing upgrade step 14 of 18: 'InstallACFS'.
2022/10/20 19:33:22 CLSRSC-595: Executing upgrade step 15 of 18: 'InstallKA'.
2022/10/20 19:33:32 CLSRSC-595: Executing upgrade step 16 of 18: 'UpgradeCluster'.
2022/10/20 19:36:33 CLSRSC-343: Successfully started Oracle Clusterware stack
clscfg: EXISTING configuration version 5 detected.
Successfully taken the backup of node specific configuration in OCR.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
2022/10/20 19:37:55 CLSRSC-595: Executing upgrade step 17 of 18: 'UpgradeNode'.
2022/10/20 19:38:10 CLSRSC-474: Initiating upgrade of resource types
2022/10/20 19:41:13 CLSRSC-475: Upgrade of resource types successfully initiated.
2022/10/20 19:41:53 CLSRSC-595: Executing upgrade step 18 of 18: 'PostUpgrade'.
2022/10/20 19:42:23 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded

ASM alert log on node 1:


ALTER SYSTEM START ROLLING MIGRATION TO 19.0.0.0.0
Starting background process RMON
2022-10-20T19:29:35.051252+05:30
RMON started with pid=47, OS id=84999
2022-10-20T19:29:37.520247+05:30
NOTE: Cluster is in Rolling Migration from 12.2.0.1.0 to 19.0.0.0.0
2022-10-20T19:29:40.485134+05:30
Restarting dead background process PING
Starting background process PING
2022-10-20T19:29:40.660171+05:30
PING started with pid=11, OS id=85082
2022-10-20T19:29:57.772870+05:30
NOTE: ASM client CDBFINDB1:CDBFINDB:dm01clust disconnected unexpectedly.

---

SUCCESS: ALTER DISKGROUP ALL DISMOUNT /* asm agent *//* {0:0:2591} */
Shutting down archive processes
Archiving is disabled
2022-10-20T19:30:19.241666+05:30
Shutting down archive processes
2022-10-20T19:30:19.244806+05:30
Stopping background process VKTM
Archiving is disabled
2022-10-20T19:30:22.642494+05:30
freeing rdom 3
freeing rdom 2
freeing rdom 1
freeing rdom 0
2022-10-20T19:30:24.535501+05:30
Instance shutdown complete (OS id: 85819)

Node 1 went down because rootupgrade.sh was running on it, but the database stayed up on node 2, since this is a rolling upgrade.
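
During the rolling window you can confirm from the database home that the instances behave as expected with srvctl status database -d CDBFINDB. A small sketch of checking that output follows; the here-doc text is illustrative of the typical srvctl format, not captured from this system.

```shell
# Illustrative: count running instances from "srvctl status database" style
# output. In real use, replace the sample with:
#   status=$($ORACLE_HOME/bin/srvctl status database -d CDBFINDB)
status='Instance CDBFINDB1 is not running on node dm01db01
Instance CDBFINDB2 is running on node dm01db02'

# "is not running" does not match the pattern "is running", so only
# genuinely running instances are counted.
running=$(printf '%s\n' "$status" | grep -c 'is running')
printf 'Running instances: %s\n' "$running"
```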




ASM alert log on node 1 after rootupgrade.sh completed:


NOTE: Flex client CDBFINDB1:CDBFINDB:dm01clust registered, osid 98095, mbr 0x0, asmb 98012 (reg:107951299)
2022-10-20T19:34:53.440707+05:30
NOTE: found stale ownerid 0x10001 for client CDBFINDB1:CDBFINDB:dm01clust
WARNING: giving up on client id 0x10001 [CDBFINDB1:CDBFINDB:dm01clust] which has not reconnected for 0 seconds (originally from ASM inst +ASM1, reg:3424240868) [stale]
NOTE: removing stale ownerid 0x10001 for client CDBFINDB1:CDBFINDB:dm01clust (reg:3424240868)
NOTE: removing stale cgid 0x10001 for client CDBFINDB1:CDBFINDB:dm01clust (clientid:0x10001 gn:0 startid:0)
NOTE: released resources held for CGID 0x10001 (gn: 0)
NOTE: released resources held for client id 0x10001 (reg:3424240868)
2022-10-20T19:34:54.360450+05:30
NOTE: client CDBFINDB1:CDBFINDB:dm01clust mounted group 1 (DATA)
NOTE: client CDBFINDB1:CDBFINDB:dm01clust mounted group 3 (RECO)
2022-10-20T19:35:45.408688+05:30
ALTER SYSTEM SET local_listener=' (ADDRESS=(PROTOCOL=TCP)(HOST=10.10.4.51)(PORT=1521))' SCOPE=MEMORY SID='+ASM1';
2022-10-20T19:37:17.597671+05:30
NOTE: [clscfg.bin@dm01db01.database.com (TNS V1-V3) 99853] opening OCR file +OCR_VOTE.255.4294967295, osid 99936
NOTE: [clscfg.bin@dm01db01.database.com (TNS V1-V3) 99853] opened OCR file +OCR_VOTE.255.4294967295, osid 99936
2022-10-20T19:41:00.914879+05:30

Run rootupgrade.sh on node 2:


[root@dm01db02 ~]# /Grid/app/gridwork/gr_19.16/rootupgrade.sh
Performing root user operation.

The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /Grid/app/gridwork/gr_19.16

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The file "oraenv" already exists in /usr/local/bin.  Overwrite it? (y/n)
[n]:
The file "coraenv" already exists in /usr/local/bin.  Overwrite it? (y/n)
[n]:

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Relinking oracle with rac_on option
Using configuration parameter file: /Grid/app/gridwork/gr_19.16/crs/install/crsconfig_params
The log of current session can be found at:
  /Grid/app/gridwork/gr_base/crsdata/dm01db02/crsconfig/crsupgrade_dm01db02_2022-10-20_07-46-54PM.log
2022/10/20 19:47:04 CLSRSC-595: Executing upgrade step 1 of 18: 'UpgradeTFA'.
2022/10/20 19:47:04 CLSRSC-4015: Performing install or upgrade action for Oracle Trace File Analyzer (TFA) Collector.
2022/10/20 19:47:05 CLSRSC-595: Executing upgrade step 2 of 18: 'ValidateEnv'.
2022/10/20 19:47:07 CLSRSC-595: Executing upgrade step 3 of 18: 'GetOldConfig'.
2022/10/20 19:48:20 CLSRSC-4003: Successfully patched Oracle Trace File Analyzer (TFA) Collector.
2022/10/20 19:48:53 CLSRSC-464: Starting retrieval of the cluster configuration data
2022/10/20 19:49:12 CLSRSC-465: Retrieval of the cluster configuration data has successfully completed.
2022/10/20 19:49:13 CLSRSC-595: Executing upgrade step 4 of 18: 'GenSiteGUIDs'.
2022/10/20 19:49:13 CLSRSC-595: Executing upgrade step 5 of 18: 'UpgPrechecks'.
2022/10/20 19:49:17 CLSRSC-595: Executing upgrade step 6 of 18: 'SetupOSD'.
Redirecting to /bin/systemctl restart rsyslog.service
2022/10/20 19:49:18 CLSRSC-595: Executing upgrade step 7 of 18: 'PreUpgrade'.

ASM configuration upgraded in local node successfully.

2022/10/20 19:49:31 CLSRSC-466: Starting shutdown of the current Oracle Grid Infrastructure stack
2022/10/20 19:50:02 CLSRSC-467: Shutdown of the current Oracle Grid Infrastructure stack has successfully completed.
2022/10/20 19:50:54 CLSRSC-595: Executing upgrade step 8 of 18: 'CheckCRSConfig'.
2022/10/20 19:50:54 CLSRSC-595: Executing upgrade step 9 of 18: 'UpgradeOLR'.
2022/10/20 19:51:04 CLSRSC-595: Executing upgrade step 10 of 18: 'ConfigCHMOS'.
2022/10/20 19:51:35 CLSRSC-595: Executing upgrade step 11 of 18: 'UpgradeAFD'.
2022/10/20 19:51:38 CLSRSC-595: Executing upgrade step 12 of 18: 'createOHASD'.
2022/10/20 19:51:39 CLSRSC-595: Executing upgrade step 13 of 18: 'ConfigOHASD'.
2022/10/20 19:51:40 CLSRSC-329: Replacing Clusterware entries in file 'oracle-ohasd.service'
2022/10/20 19:52:01 CLSRSC-595: Executing upgrade step 14 of 18: 'InstallACFS'.
2022/10/20 19:52:36 CLSRSC-595: Executing upgrade step 15 of 18: 'InstallKA'.
2022/10/20 19:52:38 CLSRSC-595: Executing upgrade step 16 of 18: 'UpgradeCluster'.
2022/10/20 19:55:04 CLSRSC-343: Successfully started Oracle Clusterware stack
clscfg: EXISTING configuration version 19 detected.
Successfully taken the backup of node specific configuration in OCR.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
2022/10/20 19:56:01 CLSRSC-595: Executing upgrade step 17 of 18: 'UpgradeNode'.
Start upgrade invoked..
2022/10/20 19:56:11 CLSRSC-478: Setting Oracle Clusterware active version on the last node to be upgraded
2022/10/20 19:56:11 CLSRSC-482: Running command: '/Grid/app/gridwork/gr_19.16/bin/crsctl set crs activeversion'
Started to upgrade the active version of Oracle Clusterware. This operation may take a few minutes.
Started to upgrade CSS.
CSS was successfully upgraded.
Started to upgrade Oracle ASM.
Started to upgrade CRS.
CRS was successfully upgraded.
Started to upgrade Oracle ACFS.
Oracle ACFS was successfully upgraded.
Successfully upgraded the active version of Oracle Clusterware.
Oracle Clusterware active version was successfully set to 19.0.0.0.0.
2022/10/20 19:57:21 CLSRSC-479: Successfully set Oracle Clusterware active version
2022/10/20 19:57:21 CLSRSC-476: Finishing upgrade of resource types
2022/10/20 19:57:37 CLSRSC-477: Successfully completed upgrade of resource types
2022/10/20 19:58:42 CLSRSC-595: Executing upgrade step 18 of 18: 'PostUpgrade'.
Successfully updated XAG resources.
2022/10/20 20:00:54 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded

ASM alert log on node 2:


Starting background process RMON
2022-10-20T19:29:35.255629+05:30
RMON started with pid=41, OS id=71631
2022-10-20T19:29:37.738194+05:30
NOTE: Cluster is in Rolling Migration from 12.2.0.1.0 to 19.0.0.0.0
2022-10-20T19:29:41.572453+05:30
Restarting dead background process PING
Starting background process PING
2022-10-20T19:29:41.597769+05:30
PING started with pid=11, OS id=71702

ASM alert log on node 2 while rootupgrade.sh runs:


2022-10-20T19:49:34.993340+05:30
NOTE: ASM client CDBFINDB2:CDBFINDB:dm01clust disconnected unexpectedly.
NOTE: check client alert log.
2022-10-20T19:49:36.226842+05:30
NOTE: cleaned up ASM client CDBFINDB2:CDBFINDB:dm01clust connection state (reg:2243621334)
2022-10-20T19:49:36.521693+05:30
Dumping diagnostic data in directory=[cdmp_20221020194936], requested by (instance=0, osid=6049), summary=[trace bucket dump request (kfnclDelete0)].
2022-10-20T19:49:37.922173+05:30
NOTE: detected orphaned client id 0x10000.
2022-10-20T19:49:43.938339+05:30
NOTE: client CDBFINDB2:CDBFINDB:dm01clust id 0x10000 has reconnected to ASM inst +ASM2 (reg:2243621334), or has been fenced
2022-10-20T19:49:45.515463+05:30
NOTE: client exited [4773]

-----

SUCCESS: ALTER DISKGROUP ALL DISMOUNT /* asm agent *//* {0:0:1607} */
Shutting down archive processes
Archiving is disabled
2022-10-20T19:49:53.614360+05:30
Shutting down archive processes
Archiving is disabled
2022-10-20T19:49:53.633293+05:30
Stopping background process VKTM
2022-10-20T19:49:55.549489+05:30
freeing rdom 3
freeing rdom 2
freeing rdom 1
freeing rdom 0
2022-10-20T19:49:56.954726+05:30
Instance shutdown complete (OS id: 93778)

Check the active version as the upgrade progresses; the cluster upgrade state moves from ROLLING UPGRADE through UPGRADE AV UPDATED and UPGRADE FINAL back to NORMAL:


[root@dm01db01 ~]# crsctl query crs activeversion -f
Oracle Clusterware active version on the cluster is [12.2.0.1.0]. The cluster upgrade state is [ROLLING UPGRADE]. The cluster active patch level is [3975995681].
[root@dm01db01 ~]# crsctl query crs activeversion -f
Oracle Clusterware active version on the cluster is [19.0.0.0.0]. The cluster upgrade state is [UPGRADE AV UPDATED]. The cluster active patch level is [896235792].
[root@dm01db01 ~]# crsctl query crs activeversion -f
Oracle Clusterware active version on the cluster is [19.0.0.0.0]. The cluster upgrade state is [UPGRADE FINAL]. The cluster active patch level is [896235792].
[root@dm01db01 ~]# crsctl query crs activeversion -f
Oracle Clusterware active version on the cluster is [19.0.0.0.0]. The cluster upgrade state is [NORMAL]. The cluster active patch level is [896235792].
[root@dm01db01 ~]#
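
The upgrade state can be pulled out of the crsctl output with a little shell, which is handy if you want to script a wait loop. This is a sketch using the sample line above; in real use you would capture the live command output instead.

```shell
# Extract the cluster upgrade state from "crsctl query crs activeversion -f"
# output. In real use: out=$($ORACLE_HOME/bin/crsctl query crs activeversion -f)
out='Oracle Clusterware active version on the cluster is [19.0.0.0.0]. The cluster upgrade state is [NORMAL]. The cluster active patch level is [896235792].'

state=$(printf '%s\n' "$out" | sed -n 's/.*upgrade state is \[\([^]]*\)\].*/\1/p')
echo "Upgrade state: $state"   # prints "Upgrade state: NORMAL"
```

A wait loop would simply re-run the query and sleep until $state becomes NORMAL.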

Check the software version on all nodes:


[root@dm01db01 ~]# crsctl query crs softwareversion -all
Oracle Clusterware version on node [dm01db01] is [19.0.0.0.0]
Oracle Clusterware version on node [dm01db02] is [19.0.0.0.0]
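
To verify that every node reports the same software version without eyeballing the list, the output can be reduced to its unique versions. This sketch parses the sample lines above; in real use you would capture the live crsctl output.

```shell
# Confirm every node reports the same Clusterware software version.
# In real use: out=$($ORACLE_HOME/bin/crsctl query crs softwareversion -all)
out='Oracle Clusterware version on node [dm01db01] is [19.0.0.0.0]
Oracle Clusterware version on node [dm01db02] is [19.0.0.0.0]'

# Pull the version from each line and keep only distinct values.
versions=$(printf '%s\n' "$out" | sed -n 's/.* is \[\([^]]*\)\]$/\1/p' | sort -u)
count=$(printf '%s\n' "$versions" | wc -l)
if [ "$count" -eq 1 ]; then
    echo "All nodes at $versions"
else
    echo "Version mismatch across nodes:"
    echo "$versions"
fi
```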

Version after the upgrade: as the queries above show, the active version is now 19.0.0.0.0 with active patch level 896235792, and both nodes are on the 19c software.
Hope this helps!

Regards

Sultan Khan
