“ORA-15137: The ASM cluster is in rolling patch state” after applying the 19c GI RU

PROBLEM

Just after applying the latest 19c GI July 2020 RU, any operation on a diskgroup fails with the following errors:

ORA-15032: not all alterations performed
ORA-15137: The ASM cluster is in rolling patch state.

Check that the cluster status is back to Normal

— racnode1:

[grid@racnode1 ~]$ kfod op=patches
List of Patches
31281355
31304218
31305087
31335188
[grid@racnode1 ~]$
ASMCMD> showclusterstate
Normal
ASMCMD>
[grid@racnode1 ~]$ crsctl query crs softwarepatch
Oracle Clusterware patch level on node racnode1 is [441346801].
[grid@racnode1 ~]$
[grid@racnode1 ~]$ crsctl query crs releasepatch
Oracle Clusterware release patch level is [441346801] and the complete list of patches [31281355 31304218 31305087 31335188 ] have been applied on the local node. The release patch string is [19.8.0.0.0].
[grid@racnode1 ~]$

— racnode2:

[grid@racnode2 ~]$ kfod op=patches
List of Patches
31281355
31304218
31305087
31335188
[grid@racnode2 ~]$
ASMCMD> showclusterstate
Normal
ASMCMD>
[grid@racnode2 ~]$ crsctl query crs softwarepatch
Oracle Clusterware patch level on node racnode2 is [441346801].
[grid@racnode2 ~]$
[grid@racnode2 ~]$ crsctl query crs releasepatch
Oracle Clusterware release patch level is [441346801] and the complete list of patches [31281355 31304218 31305087 31335188 ] have been applied on the local node. The release patch string is [19.8.0.0.0].
[grid@racnode2 ~]$
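
An additional quick check is “crsctl query crs activeversion -f”, which also reports whether the cluster is in rolling patch mode (the output below is illustrative for this cluster):

[grid@racnode1 ~]$ crsctl query crs activeversion -f
Oracle Clusterware active version on the cluster is [19.0.0.0.0]. The cluster upgrade state is [NORMAL]. The cluster active patch level is [441346801].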

ORA-15137: The ASM cluster is in rolling patch state

SYMPTOM

While adding a new disk to an existing diskgroup, the following errors occurred:

ORA-15032: not all alterations performed
ORA-15137: The ASM cluster is in rolling patch state.

INVESTIGATION

1) Both nodes show the cluster state as “In Rolling Patch”, and the patch levels are identical.

SQL> SELECT SYS_CONTEXT('SYS_CLUSTER_PROPERTIES', 'CLUSTER_STATE') 
     FROM DUAL;

SYS_CONTEXT('SYS_CLUSTER_PROPERTIES','CLUSTER_STATE')
--------------------------------------------------------------------
In Rolling Patch

SQL> SELECT SYS_CONTEXT('SYS_CLUSTER_PROPERTIES', 'CURRENT_PATCHLVL') 
     FROM DUAL;

SYS_CONTEXT('SYS_CLUSTER_PROPERTIES','CURRENT_PATCHLVL')
--------------------------------------------------------------------
3628626982

$ asmcmd
ASMCMD> showclusterstate
In Rolling Patch

ASMCMD> showpatches
---------------
List of Patches
===============
26609817
26609966
26839277
27105253
27128906
27144050
27335416
27458609
27464465
27674384

ASMCMD> showversion
ASM version         : 12.2.0.1.0

2) “crsctl query crs softwarepatch” shows the same results on both nodes:

$ crsctl query crs softwarepatch
Oracle Clusterware patch level on node racnode1 is [3628626982].

$ crsctl query crs softwarepatch
Oracle Clusterware patch level on node racnode2 is [3628626982].

3) “crsctl query crs releasepatch” shows the same results on both nodes:

— racnode1:

$crsctl query crs releasepatch
Oracle Clusterware release patch level is [3628626982] and the complete 
list of patches [26609817 26609966 26839277 27105253 27128906 27144050 
27335416 27458609 27464465 27674384 ] have been applied on the local node.

— racnode2:

$crsctl query crs releasepatch
Oracle Clusterware release patch level is [3628626982] and the complete 
list of patches [26609817 26609966 26839277 27105253 27128906 27144050 
27335416 27458609 27464465 27674384 ] have been applied on the local node.

4) kfod command shows the same results on both nodes:

— racnode1:

$ $ORACLE_HOME/bin/kfod op=patchlvl
-------------------
Current Patch level
===================
3628626982

$ $ORACLE_HOME/bin/kfod op=patches
---------------
List of Patches
===============
26609817
26609966
26839277
27105253
27128906
27144050
27335416
27458609
27464465
27674384

— racnode2:

$ $ORACLE_HOME/bin/kfod op=patchlvl
-------------------
Current Patch level
===================
3628626982

$ $ORACLE_HOME/bin/kfod op=patches
---------------
List of Patches
===============
26609817
26609966
26839277
27105253
27128906
27144050
27335416
27458609
27464465
27674384

5) lsinventory shows the same results on both nodes:

$ $ORACLE_HOME/OPatch/opatch lsinventory | grep -i desc
ARU platform description:: Linux x86-64
Patch description: "Database Apr 2018 Release Update : 12.2.0.1.180417 (27674384)"
Patch description: "OCW APR 2018 RELEASE UPDATE 12.2.0.1.0(180129) (27464465)"
Patch description: "ACFS APR 2018 RELEASE UPDATE 12.2.0.1.0(180129) (27458609)"
Patch description: "Tomcat Release Update 12.2.0.1.0(ID:171023.0830) (27144050)"
Patch description: "DBWLM RELEASE UPDATE 12.2.0.1.0(ID:170913) (26839277)"

SOLUTIONS

$crsctl stop rollingpatch
CRS-1161: The cluster was successfully patched to patch level [3628626982].

Recheck with the commands above; the cluster status has now changed from “In Rolling Patch” to “Normal”.

SQL> SELECT SYS_CONTEXT('SYS_CLUSTER_PROPERTIES', 'CLUSTER_STATE') 
     FROM DUAL;

SYS_CONTEXT('SYS_CLUSTER_PROPERTIES','CLUSTER_STATE')
--------------------------------------------------------------------------------
Normal

Generally speaking, for the “ORA-15137: The ASM cluster is in rolling patch state” issue, the steps below can be followed one after another until the issue is resolved:

a) Stop the rolling patch state, node by node:

SQL> ALTER SYSTEM STOP ROLLING PATCH;

b) Stop the rolling patch state for the whole cluster:

$crsctl stop rollingpatch

c) In case postpatch did not complete successfully for some reason (which can also cause this issue), rerun prepatch and postpatch:

— As the root user:

 $GRID_HOME/crs/install/rootcrs.sh -prepatch 
 $GRID_HOME/crs/install/rootcrs.sh -postpatch

d) If, for some reason, the OCR was not updated with the right patch level, run as root:

$GRID_HOME/crs/install/rootcrs.sh -prepatch 
$GRID_HOME/bin/clscfg -patch
$GRID_HOME/crs/install/rootcrs.sh -postpatch

e) If, for some reason, patches show up in “opatch lsinventory” but are missing from the kfod output:

-- as super user
$GRID_HOME/crs/install/rootcrs.sh -prepatch 

-- as grid owner,
$GRID_HOME/bin/patchgen commit -pi 12345678 
$GRID_HOME/bin/patchgen commit -pi 23456789 

-- as super user
$GRID_HOME/crs/install/rootcrs.sh -postpatch
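
Whichever of the steps above resolves it, re-verify with the same checks used earlier in this post (a quick sanity check, not an exhaustive one):

$ crsctl query crs softwarepatch
$ crsctl query crs releasepatch
$ $ORACLE_HOME/bin/kfod op=patches
$ asmcmd showclusterstate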

CRS-6706: Oracle Clusterware Release patch level (‘nnnnnn’) does not match Software patch level (‘nnnnnn’)

“opatchauto” failed in the middle; rerunning it returns the “CRS-6706” error:

# /u01/app/12.1.0.2/grid/OPatch/opatchauto apply /tmp/12.1.0.2/27010872 -oh /u01/app/12.1.0.2/grid
...
..
.
Using configuration parameter file: /u01/app/12.1.0.2/grid/OPatch/auto/dbtmp/bootstrap_racnode1/patchwork/crs/install/crsconfig_params
CRS-6706: Oracle Clusterware Release patch level ('173535486') does not match Software patch level ('2039526626'). Oracle Clusterware cannot be started.
CRS-4000: Command Start failed, or completed with errors.
2018/01/23 16:29:02 CLSRSC-117: Failed to start Oracle Clusterware stack


After fixing the cause of failure Run opatchauto resume

OPATCHAUTO-68061: The orchestration engine failed.
OPATCHAUTO-68061: The orchestration engine failed with return code 1
OPATCHAUTO-68061: Check the log for more details.
OPatchAuto failed.

OPatchauto session completed at Tue Jan 23 16:29:04 2018
Time taken to complete the session 1 minute, 39 seconds

opatchauto failed with error code 42

On racnode1:

$/u01/app/12.1.0.2/grid/bin/kfod op=patches
---------------
List of Patches
===============
19941482
19941477
19694308
19245012
26925218   <---- Does not exist on node2

$/u01/app/12.1.0.2/grid/bin/kfod op=patchlvl
2039526626

On racnode2:

$/u01/app/12.1.0.2/grid/bin/kfod op=patches
---------------
List of Patches
===============
19941482
19941477
19694308
19245012

$/u01/app/12.1.0.2/grid/bin/kfod op=patchlvl
2039526626

We can see that patch 26925218 has been applied only on racnode1. The solution is one of the following:

1) Roll back this patch (26925218) on racnode1, then run “opatchauto” again to finish the patching successfully (see the sketch below). After that, everything should be fine.

OR

2) Manually apply the remaining patches to the GI home on the other node. After this, everything is fine.

In 12c, the GI homes must have identical patches on all nodes for the clusterware to start, except while a rolling patch is in progress.
After applying the same patches on all nodes, GI started fine.
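
For option 1, a minimal sketch (paths and patch numbers are taken from this example; in practice follow the patch README, and prepatch/postpatch steps may be needed if the stack is up on that node):

-- on racnode1, as the grid owner, roll back the extra patch
$ /u01/app/12.1.0.2/grid/OPatch/opatch rollback -id 26925218 -oh /u01/app/12.1.0.2/grid

-- then, as root, rerun (or resume) opatchauto
# /u01/app/12.1.0.2/grid/OPatch/opatchauto apply /tmp/12.1.0.2/27010872 -oh /u01/app/12.1.0.2/grid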

———-

Another situation you might encounter is that all nodes have the same patches, but the patch levels differ between nodes:

For example, on racnode1:

$ /u01/app/12.1.0.2/grid/bin/kfod op=patches
---------------
List of Patches
===============
11111111
22222222
33333333

$ /u01/app/12.1.0.2/grid/bin/kfod op=patchlvl
-------------------
Current Patch level
===================
8888888888

On racnode2:

$ /u01/app/12.1.0.2/grid/bin/kfod op=patches
---------------
List of Patches
===============
11111111
22222222
33333333

$ /u01/app/12.1.0.2/grid/bin/kfod op=patchlvl
-------------------
Current Patch level
===================
9999999999

“opatch lsinventory” also reports the mismatched patch levels:

Patch level status of Cluster nodes :

Patching Level Nodes
-------------- -----
8888888888 node1                    ====>> different patch level
9999999999 node2

For 12.1.0.1/2:

Execute “/u01/app/12.1.0.2/grid/crs/install/rootcrs.sh -patch” as the root user on the problematic node, and the patch level should be corrected.
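
For example (path taken from the example above; run on the problematic node only):

-- as root
# /u01/app/12.1.0.2/grid/crs/install/rootcrs.sh -patch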

For 12.2:

Execute “<GI_HOME>/crs/install/rootcrs.pl -prepatch” and then “<GI_HOME>/crs/install/rootcrs.pl -postpatch” as the root user on the problematic node, and the patch level should be corrected, as in the example below.
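
For example, on the problematic node (a sketch based on the commands above; <GI_HOME> is your Grid Infrastructure home):

-- as root
# <GI_HOME>/crs/install/rootcrs.pl -prepatch
# <GI_HOME>/crs/install/rootcrs.pl -postpatch

-- then re-check the patch level
$ <GI_HOME>/bin/crsctl query crs softwarepatch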

REFERENCES:

CRS-6706: Oracle Clusterware Release patch level (‘nnn’) does not match Software patch level (‘mmm’) (Doc ID 1639285.1)