
I would like to share my experience with a new scenario I ran into while patching a two-node test cluster in Oracle RAC.
I had earlier attempted an upgrade to 23ai that was backed out, and it is good to know why the patching left the cluster in this state.
After extensive troubleshooting, I was able to resolve the issue successfully.
The symptom: both of my nodes are at the same patch level, yet the cluster upgrade state is [ROLLING PATCH DURING ROLLING UPGRADE]:
Node1
[root@Node1 grid]# bin/kfod op=patches
List of Patches
34672698
36758186
37260974
37266638
37268031
37461387
[root@Node1 grid]# bin/kfod op=patchlvl
Current Patch level
1697270236
[root@Node1 grid]# bin/crsctl query crs activeversion
Oracle Clusterware active version on the cluster is [19.0.0.0.0]
[root@Node1 grid]# bin/crsctl query crs activeversion -f
Oracle Clusterware active version on the cluster is [19.0.0.0.0]. The cluster upgrade state is [ROLLING PATCH DURING ROLLING UPGRADE]. The cluster active patch level is [1287911304].
[root@Node1 grid]# bin/crsctl query crs softwareversion
Oracle Clusterware version on node [Node1] is [19.0.0.0.0]
[root@Node1 grid]# bin/crsctl query crs releaseversion
Oracle High Availability Services release version on the local node is [19.0.0.0.0]
[root@Node1 grid]# bin/crsctl query crs softwarepatch
Oracle Clusterware patch level on node Node1 is [1697270236].
[+ASM1] grid@Node1:~$ opatch lspatches -oh /u01/app/19.0.0/grid/
34672698;ORA-00800 SOFT EXTERNAL ERROR, ARGUMENTS [SET PRIORITY FAILED], [VKTM] , DISM(16)
37461387;TOMCAT RELEASE UPDATE 19.0.0.0.0 (37461387)
37268031;OCW RELEASE UPDATE 19.26.0.0.0 (37268031)
37266638;ACFS RELEASE UPDATE 19.26.0.0.0 (37266638)
37260974;Database Release Update : 19.26.0.0.250121 (37260974)
36758186;DBWLM RELEASE UPDATE 19.0.0.0.0 (36758186)
Node2
[root@Node2 grid]# bin/kfod op=patches
List of Patches
34672698
36758186
37260974
37266638
37268031
37461387
[root@Node2 grid]# bin/kfod op=patchlvl
Current Patch level
1697270236
[root@Node2 grid]# bin/crsctl query crs activeversion
Oracle Clusterware active version on the cluster is [19.0.0.0.0]
[root@Node2 grid]# bin/crsctl query crs activeversion -f
Oracle Clusterware active version on the cluster is [19.0.0.0.0]. The cluster upgrade state is [ROLLING PATCH DURING ROLLING UPGRADE]. The cluster active patch level is [1287911304].
[root@Node2 grid]# bin/crsctl query crs softwareversion
Oracle Clusterware version on node [Node2] is [19.0.0.0.0]
[root@Node2 grid]# bin/crsctl query crs releaseversion
Oracle High Availability Services release version on the local node is [19.0.0.0.0]
[root@Node2 grid]# bin/crsctl query crs softwarepatch
Oracle Clusterware patch level on node Node2 is [1697270236].
[+ASM2] grid@Node2:~$ opatch lspatches -oh /u01/app/19.0.0/grid/
34672698;ORA-00800 SOFT EXTERNAL ERROR, ARGUMENTS [SET PRIORITY FAILED], [VKTM] , DISM(16)
37461387;TOMCAT RELEASE UPDATE 19.0.0.0.0 (37461387)
37268031;OCW RELEASE UPDATE 19.26.0.0.0 (37268031)
37266638;ACFS RELEASE UPDATE 19.26.0.0.0 (37266638)
37260974;Database Release Update : 19.26.0.0.250121 (37260974)
36758186;DBWLM RELEASE UPDATE 19.0.0.0.0 (36758186)
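So both nodes report identical patch lists and the same software patch level (1697270236), while the cluster active patch level shown by activeversion -f is still the old value (1287911304). As a convenience, the per-node values can be gathered in one pass with a small loop like the sketch below; this is only an illustration and assumes passwordless SSH for the grid user and the grid home path shown above.
for n in Node1 Node2; do
  echo "#### $n"
  ssh $n /u01/app/19.0.0/grid/bin/crsctl query crs softwarepatch
done
/u01/app/19.0.0/grid/bin/crsctl query crs activeversion -f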
Cause:
On checking the OCR dump, I found the leftover 23.0.0.0.0 version entry:
[SYSTEM.version.activeversion.state.toversion]
ORATEXT : 23.0.0.0.0◄▬▬▬▬▬▬
SECURITY : {USER_PERMISSION : PROCR_ALL_ACCESS, GROUP_PERMISSION : PROCR_READ, OTHER_PERMISSION : PROCR_READ, USER_NAME : root, GROUP_NAME : root}
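For completeness, the dump can be generated with the ocrdump utility as root and then searched for this key. A minimal sketch; the output file name is only an example.
$GRID_HOME/bin/ocrdump /tmp/ocr_dump.txt
grep -A 2 "SYSTEM.version.activeversion.state.toversion" /tmp/ocr_dump.txt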
Solution:
1) Check for an OCR backup taken before the failed upgrade to use for the restore (the chosen backup can optionally be inspected first, as shown after the listing below).
$GRID_HOME/bin/ocrconfig -showbackup
2025/02/16 06:06:48 +DATA:/cluster_new/OCRBACKUP/backup00.ocr.753.2245765666 2233233444
2025/02/16 02:06:42 +DATA:/cluster_new/OCRBACKUP/backup01.ocr.754.2245765666 2233233444
2025/02/15 22:06:36 +DATA:/cluster_new/OCRBACKUP/backup02.ocr.725.2245765666 2233233444
2025/02/15 06:06:11 +DATA:/cluster_new/OCRBACKUP/day.ocr.764.2245765666 2233233444
2025/01/29 22:03:53 +DATA:/cluster_new/OCRBACKUP/week.ocr.765.2245765666 2233233444 ◄◄◄◄◄
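Before restoring, the contents of the candidate backup can be inspected to confirm it predates the failed upgrade and does not carry the 23.0.0.0.0 toversion key. A minimal sketch, run as root; the dump file name is only an example, and this assumes ocrdump can read the backup directly from the ASM disk group.
$GRID_HOME/bin/ocrdump /tmp/ocr_week_backup.txt -backupfile +DATA:/cluster_new/OCRBACKUP/week.ocr.765.2245765666
grep "SYSTEM.version.activeversion.state.toversion" /tmp/ocr_week_backup.txt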
2) Execute the steps below to restore the OCR from the backup.
a) As the root user, stop CRS on all nodes.
$GRID_HOME/bin/crsctl stop crs -f
b) As the root user, start CRS in exclusive mode (without the CRS daemon) on node1.
$GRID_HOME/bin/crsctl start crs -excl -nocrs
$GRID_HOME/bin/crsctl stat res -t -init
c) As the root user, restore the OCR from the backup.
$GRID_HOME/bin/ocrconfig -restore +DATA:/cluster_new/OCRBACKUP/week.ocr.765.2245765666
d) As the root user, stop CRS again with the -f flag.
$GRID_HOME/bin/crsctl stop crs -f
e) As the root user, start CRS on node1 and then on the other node(s).
$GRID_HOME/bin/crsctl start crs
On checking the active version again, I found that the cluster upgrade state is back to [NORMAL].
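These are the commands I would use for that final check on each node (outputs omitted); the activeversion -f output should now report the cluster upgrade state as [NORMAL].
$GRID_HOME/bin/crsctl query crs activeversion -f
$GRID_HOME/bin/crsctl query crs softwarepatch
$GRID_HOME/bin/ocrcheck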