Blog

CRS-6706: Oracle Clusterware Release patch level (‘nnnnnn’) does not match Software patch level (‘nnnnnn’)

“opatchauto” failed partway through; rerunning it produced a “CRS-6706” error.

# /u01/app/12.1.0.2/grid/OPatch/opatchauto apply /tmp/12.1.0.2/27010872 -oh /u01/app/12.1.0.2/grid
...
..
.
Using configuration parameter file: /u01/app/12.1.0.2/grid/OPatch/auto/dbtmp/bootstrap_racnode1/patchwork/crs/install/crsconfig_params
CRS-6706: Oracle Clusterware Release patch level ('173535486') does not match Software patch level ('2039526626'). Oracle Clusterware cannot be started.
CRS-4000: Command Start failed, or completed with errors.
2018/01/23 16:29:02 CLSRSC-117: Failed to start Oracle Clusterware stack


After fixing the cause of failure Run opatchauto resume

]
OPATCHAUTO-68061: The orchestration engine failed.
OPATCHAUTO-68061: The orchestration engine failed with return code 1
OPATCHAUTO-68061: Check the log for more details.
OPatchAuto failed.

OPatchauto session completed at Tue Jan 23 16:29:04 2018
Time taken to complete the session 1 minute, 39 seconds

opatchauto failed with error code 42

On racnode1:

$/u01/app/12.1.0.2/grid/bin/kfod op=patches
---------------
List of Patches
===============
19941482
19941477
19694308
19245012
26925218   <---- Does not exist on node2

$/u01/app/12.1.0.2/grid/bin/kfod op=patchlvl
2039526626

On racnode2:

$/u01/app/12.1.0.2/grid/bin/kfod op=patches
---------------
List of Patches
===============
19941482
19941477
19694308
19245012

$/u01/app/12.1.0.2/grid/bin/kfod op=patchlvl
2039526626

We can see that patch 26925218 has been applied on racnode1 but not on racnode2. The solution is either:

1) Roll back this patch (26925218) on racnode1, then run “opatchauto” again to finish the patching.

OR

2) Manually apply the remaining patches to the GI home on the other node.

In 12c, all GI homes must have identical patches for the Clusterware to start, except during rolling patching. After the same patches were applied on all nodes, GI started fine.
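The node-by-node comparison above can be scripted: capture the `kfod op=patches` output from each node into a file, then diff the sorted lists. A minimal sketch, using saved sample files that mirror the case above (a live run would require a working GI environment, so the `kfod` call is shown only in a comment):

```shell
# Sketch: find patches present on node1 but missing on node2.
# On a real cluster you would generate these files on each node with e.g.
#   $GRID_HOME/bin/kfod op=patches > /tmp/patches_$(hostname).txt
# The sample data below mirrors the case above.
cat > /tmp/patches_node1.txt <<'EOF'
19941482
19941477
19694308
19245012
26925218
EOF
cat > /tmp/patches_node2.txt <<'EOF'
19941482
19941477
19694308
19245012
EOF

# comm requires sorted input
sort /tmp/patches_node1.txt > /tmp/patches_node1.sorted
sort /tmp/patches_node2.txt > /tmp/patches_node2.sorted

# comm -23 prints lines only in the first file: patches missing on node2
comm -23 /tmp/patches_node1.sorted /tmp/patches_node2.sorted   # → 26925218
```

Empty output means the patch lists match; any printed patch number is a candidate for rollback or for manual application on the lagging node.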

———-

Another situation you might encounter is that all nodes have the same patches, but “opatch lsinventory” reports different patch levels:

For example , on racnode1:

$ /u01/app/12.1.0.2/grid/bin/kfod op=patches
---------------
List of Patches
===============
11111111
22222222
33333333

$ /u01/app/12.1.0.2/grid/bin/kfod op=patchlvl
-------------------
Current Patch level
===================
8888888888

Node2

$ /u01/app/12.1.0.2/grid/bin/kfod op=patches
---------------
List of Patches
===============
11111111
22222222
33333333

$ /u01/app/12.1.0.2/grid/bin/kfod op=patchlvl
-------------------
Current Patch level
===================
9999999999

However, “opatch lsinventory” shows different patch levels across the nodes:

Patch level status of Cluster nodes :

Patching Level Nodes
-------------- -----
8888888888 node1                    ====>> different patch level
9999999999 node2

For 12.1.0.1/2:

Execute “/u01/app/12.1.0.2/grid/crs/install/rootcrs.sh -patch” as the root user on the problematic node, and the patch level should be corrected.

For 12.2:

Execute “<GI_HOME>/crs/install/rootcrs.pl -prepatch” followed by “<GI_HOME>/crs/install/rootcrs.pl -postpatch” as the root user on the problematic node, and the patch level should be corrected.

REFERENCES:

CRS-6706: Oracle Clusterware Release patch level (‘nnn’) does not match Software patch level (‘mmm’) (Doc ID 1639285.1)

12c opatchauto : Prerequisite check “CheckApplicable” failed

It is good practice to copy and unzip GI patches as the grid user.

Using “opatchauto” to apply the Jan 2018 GI PSU to a 12cR1 GI home failed with “Prerequisite check "CheckApplicable" failed” errors.

 # /u01/app/12.1.0.2/grid/OPatch/opatchauto apply /tmp/27010872 
   -oh /u01/app/12.1.0.2/grid
...
..
.
Bringing down CRS service on home /u01/app/12.1.0.2/grid
Prepatch operation log file location: /u01/app/12.1.0.2/grid/
 cfgtoollogs/crsconfig/crspatch_racnode1_2018-01-23_03-52-31PM.log
CRS service brought down successfully on home /u01/app/12.1.0.2/grid

Start applying binary patch on home /u01/app/12.1.0.2/grid
Failed while applying binary patches on home /u01/app/12.1.0.2/grid

Execution of [OPatchAutoBinaryAction] patch action failed, check log 
for more details. Failures:
Patch Target : racnode1->/u01/app/12.1.0.2/grid Type[crs]
Details: [
---------------------------Patching Failed--------------------------
Command execution failed during patching in home: /u01/app/12.1.0.2/
grid, host: racnode1.
Command failed: /u01/app/12.1.0.2/grid/OPatch/opatchauto apply 
 /tmp/27010872 -oh /u01/app/12.1.0.2/grid -target_type cluster 
 -binary -invPtrLoc /u01/app/12.1.0.2/grid/oraInst.loc -jre 
/u01/app/12.1.0.2/grid/OPatch/jre -persistresult /u01/app/12.1.0.2/
grid/OPatch/auto/dbsessioninfo/sessionresult_racnode1_crs.ser 
-analyzedresult /u01/app/12.1.0.2/grid/OPatch/auto/dbsessioninfo/
sessionresult_analyze_racnode1_crs.ser

Command failure output:
==Following patches FAILED in apply:

Patch: /tmp/27010872/26925218
Log: /u01/app/12.1.0.2/grid/cfgtoollogs/opatchauto/core/opatch/
opatch2018-01-23_15-53-43PM_1.log
Reason:Failed during Patching: oracle.opatch.opatchsdk.OPatchException:
Prerequisite check "CheckApplicable" failed.

After fixing the cause of failure Run opatchauto resume

]
OPATCHAUTO-68061: The orchestration engine failed.
OPATCHAUTO-68061: The orchestration engine failed with return code 1
OPATCHAUTO-68061: Check the log for more details.
OPatchAuto failed.

OPatchauto session completed at Tue Jan 23 15:56:55 2018
Time taken to complete the session 6 minutes, 39 seconds

Check the opatch log file:

[Jan 23, 2018 3:56:55 PM] [INFO] Space Needed : 3191.113MB
[Jan 23, 2018 3:56:55 PM] [INFO] Prereq checkPatchApplicableOnCurrentPlatform 
Passed for patch : 26925218
[Jan 23, 2018 3:56:55 PM] [INFO] Patch 26925218:
 onewaycopyAction : Source File "/tmp/27010872/26925218/files/crs/install
/dropdb.pl" does not exists or is not readable
 'oracle.crs, 12.1.0.2.0': Cannot copy file from 'dropdb.pl' to 
'/u01/app/12.1.0.2/grid/crs/install/dropdb.pl'
[Jan 23, 2018 3:56:55 PM] [INFO] Prerequisite check "CheckApplicable" failed.
 The details are:

Patch 26925218:
 onewaycopyAction : Source File "/tmp/27010872/26925218/files/crs/
install/dropdb.pl" does not exists or is not readable
 'oracle.crs, 12.1.0.2.0': Cannot copy file from 'dropdb.pl' to 
'/u01/app/12.1.0.2/grid/crs/install/dropdb.pl'
[Jan 23, 2018 3:56:55 PM] [SEVERE] OUI-67073:UtilSession failed:
 Prerequisite check "CheckApplicable" failed.
[Jan 23, 2018 3:56:55 PM] [INFO] Finishing UtilSession at Tue Jan 23 15:56:55 AEDT 2018
[Jan 23, 2018 3:56:55 PM] [INFO] Log file location: /u01/app/12.1.0.2
/grid/cfgtoollogs/opatchauto/core/opatch/opatch2018-01-23_15-53-43PM_1.log

Check the “dropdb.pl” file in the unzipped patch; the owner is oracle:

# ls -ltr /tmp/27010872/26925218/files/crs/install/dropdb.pl
-rwx------ 1 oracle oinstall 3541 Jan 6 07:48 /tmp/27010872/26925218/
                                          files/crs/install/dropdb.pl

The patch files were unzipped by the RAC user ‘oracle’ instead of the GI owner ‘grid’. Change the file owner to grid, then run “opatchauto resume” to continue the patching.

# chown grid /tmp/27010872/26925218/files/crs/install/dropdb.pl
# /u01/app/12.1.0.2/grid/OPatch/opatchauto resume
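To catch this before patching starts, you can scan the unzipped patch directory for any files not owned by the expected GI owner. A minimal sketch; `check_owner`, the demo directory, and the use of the current user are all illustrative (in practice you would run it against the real patch location with owner `grid`):

```shell
# Sketch: flag files under an unzipped patch directory whose owner is not
# the expected GI owner. Real-world call would be: check_owner /tmp/27010872 grid
# The demo below uses a temp dir and the current user so it runs anywhere.
check_owner() {
    # prints files NOT owned by $2; empty output means ownership is correct
    find "$1" ! -user "$2" -print
}

demo_dir=$(mktemp -d)
touch "$demo_dir/dropdb.pl"
bad=$(check_owner "$demo_dir" "$(id -un)")
if [ -z "$bad" ]; then
    echo "ownership OK"        # → ownership OK (demo files belong to us)
else
    printf 'wrong owner:\n%s\n' "$bad"
fi
rm -rf "$demo_dir"
```

Any path printed by the check would need a `chown grid` before running “opatchauto”.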

REFERENCES:

12c opatchauto: Prerequisite check “CheckApplicable” failed (Doc ID 1937982.1)

High CPU By /usr/bin/ssh -o StrictHostKeyChecking=no -o BatchMode=yes -o ConnectTimeout=5 /sbin/ifconfig -a

SYMPTOM

In 12.1.0.2 GI/RAC environment, there are a couple of processes consuming high CPU.

$ ps -ef|grep ifconfig
root 18941 1 0 06:25 ? 00:00:00 sh -c /bin/su -l grid -c "/usr/bin/ssh -o StrictHostKeyChecking=no -o BatchMode=yes -o ConnectTimeout=5 RACTEST2 /sbin/ifconfig -a" 2>&1
root 18942 18941 99 06:25 ? 06:07:08 /bin/su -l grid -c /usr/bin/ssh -o StrictHostKeyChecking=no -o BatchMode=yes -o ConnectTimeout=5 RACTEST2 /sbin/ifconfig -a
grid 26928 23166 0 12:32 pts/1 00:00:00 grep ifconfig
root 62153 1 0 Jan23 ? 00:00:00 sh -c /bin/su -l grid -c "/usr/bin/ssh -o StrictHostKeyChecking=no -o BatchMode=yes -o ConnectTimeout=5 RACTEST2 /sbin/ifconfig -a" 2>&1
root 62154 62153 99 Jan23 ? 14:29:31 /bin/su -l grid -c /usr/bin/ssh -o StrictHostKeyChecking=no -o BatchMode=yes -o ConnectTimeout=5 RACTEST2 /sbin/ifconfig -a
root 77170 1 0 10:30 ? 00:00:00 sh -c /bin/su -l grid -c "/usr/bin/ssh -o StrictHostKeyChecking=no -o BatchMode=yes -o ConnectTimeout=5 RACTEST2 /sbin/ifconfig -a" 2>&1
root 77171 77170 99 10:30 ? 02:02:37 /bin/su -l grid -c /usr/bin/ssh -o StrictHostKeyChecking=no -o BatchMode=yes -o ConnectTimeout=5 RACTEST2 /sbin/ifconfig -a

$top
..
.
 PID   USER PR NI VIRT RES  SHR  S %CPU %MEM TIME+ COMMAND
 62154 root 25 0 98.8m 1392 1104 R 100.0 0.0 851:33.36 su
 18942 root 25 0 98.8m 1400 1104 R 99.9  0.0 349:10.86 su
 77171 root 25 0 98.8m 1404 1104 R 99.9  0.0 104:39.33 su
 ..
 .

CAUSES

As per Oracle Doc ID 2340905.1, this is Bug 24692439: LNX64-12.2-DIAGSNAP: AUXILIARY CMDS GENERATED BY DIAGSNAP WOULD HOG CPU FOREVER.

It is fixed in 18.1.

WORKAROUND

1) As the GI owner, disable diagsnap:

$ oclumon manage -disable diagsnap
Diagsnap option is successfully Disabled on RACTEST1
Diagsnap option is successfully Disabled on RACTEST2
Successfully Disabled diagsnap

2) Kill the existing “su” processes and their parent shells:

# kill -9 77170
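When there are several runaway command chains, the PIDs can be pulled out of the `ps` output instead of killed one by one. A hedged sketch, parsing a saved sample that mirrors the `ps -ef` listing above (the `...` in the sample stands in for the full ssh options; on a live system you would pipe `ps -ef` directly and, as root, feed the PIDs to `kill -9`):

```shell
# Sketch: collect the PIDs (column 2) of the stuck "ifconfig -a" command
# chains. On a live system: ps -ef | grep '[i]fconfig -a' | awk '{print $2}'
# (the [i] trick keeps grep from matching its own process).
ps_sample='root 18941 1 0 06:25 ? 00:00:00 sh -c /bin/su -l grid -c "/usr/bin/ssh ... RACTEST2 /sbin/ifconfig -a" 2>&1
root 62153 1 0 Jan23 ? 00:00:00 sh -c /bin/su -l grid -c "/usr/bin/ssh ... RACTEST2 /sbin/ifconfig -a" 2>&1
root 77170 1 0 10:30 ? 00:00:00 sh -c /bin/su -l grid -c "/usr/bin/ssh ... RACTEST2 /sbin/ifconfig -a" 2>&1'

pids=$(printf '%s\n' "$ps_sample" | grep 'ifconfig -a' | awk '{print $2}')
printf '%s\n' "$pids"
# As root on the real system you would then run:  echo "$pids" | xargs kill -9
```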

....

datapatch -verbose Fails with Error “patch xxxxxxxx: Archived patch directory is empty”

Always keep the applied patches consistent across ORACLE_HOMEs in situations like cloning an ORACLE_HOME or switching database roles in Data Guard.

SYMPTOM

“datapatch -verbose” fails on a 12.1.0.2 database with the following errors:

$ ./datapatch -verbose -skip_upgrade_check
...
..
.
Error: prereq checks failed!
 patch 22139226: Archived patch directory is empty
Prereq check failed, exiting without installing any patches.
...
..
.

Check $ORACLE_HOME/sqlpatch; there are no files for patch 22139226 to be used for rollback:

$ ls -ltr $ORACLE_HOME/sqlpatch|grep 22139226
$

CAUSE

  1. The database was just migrated from an old ORACLE_HOME to a new ORACLE_HOME.
  2. The database was just switched over or failed over, and the applied patches differ between the primary and standby ORACLE_HOMEs.

SOLUTION

Copy the missing patch files from the old ORACLE_HOME (or from the standby ORACLE_HOME) to the primary ORACLE_HOME.
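The copy can be done per missing patch directory under sqlpatch. A minimal sketch; the demo uses temp directories and the document's placeholder patch numbers so it runs anywhere, but in practice SRC would be the old or standby `$ORACLE_HOME/sqlpatch` and DST the current one:

```shell
# Sketch: copy sqlpatch subdirectories present in the source home but
# missing from the destination. In practice, e.g.:
#   SRC=/u01/app/oracle/product/old_home/sqlpatch  DST=$ORACLE_HOME/sqlpatch
SRC=$(mktemp -d)   # stands in for old_home/sqlpatch
DST=$(mktemp -d)   # stands in for new_home/sqlpatch
mkdir -p "$SRC/22139226" "$SRC/11111111" "$DST/11111111"

for p in "$SRC"/*/; do
    name=$(basename "$p")
    if [ ! -d "$DST/$name" ]; then
        # only 22139226 is missing in the demo, so only it gets copied
        cp -r "$p" "$DST/$name"
        echo "copied $name"
    fi
done
ls "$DST"
rm -rf "$SRC" "$DST"
```

After the copy, rerun “datapatch -verbose”; the “Archived patch directory is empty” prereq check should pass.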

REFERENCE

datapatch -verbose Fails with Error: “Patch xxxxxx: Archived Patch Directory Is Empty” (Doc ID 2235541.1)

How to Remove PDB From Restricted Mode

Running “datapatch” can take a PDB out of restricted mode.

1) Check PDB status.

SQL> select INST_ID,NAME,OPEN_MODE,RESTRICTED 
     from gv$pdbs order by 1,2;

INST_ID   NAME        OPEN_MODE  RES
-------- ------------ ---------- ---
 1       RACTESTPDB   READ WRITE YES
 1       PDB$SEED     READ ONLY  NO
 2       RACTESTPDB   READ WRITE YES
 2       PDB$SEED     READ ONLY  NO

2) Check PDB_PLUG_IN_VIOLATIONS.

SQL> select status, message, action 
     from   pdb_plug_in_violations 
     where  status !='RESOLVED';

STATUS      MESSAGE       ACTION
---------   ------------ ------------
PENDING    PSU bundle    Call datapatch.
           patch 180116 
           (DATABASE PATCH
           SET UPDATE
           12.1.0.2.180116):
           Installed in the 
           CDB but not in 
           the PDB.
PENDING     ...
            ..
            .

3) Rerun “datapatch” as per the “run datapatch” procedure.

4) Bounce all instances and PDBs:

$ srvctl stop database -d RACTEST

$ srvctl start database -d RACTEST

SQL> alter pluggable database RACTESTPDB open instances=all;

Pluggable database altered.

SQL> select INST_ID,NAME,OPEN_MODE,RESTRICTED
     from gv$pdbs order by 1,2;

INST_ID   NAME        OPEN_MODE  RES
-------- ------------ ---------- ---
 1       RACTESTPDB   READ WRITE NO
 1       PDB$SEED     READ ONLY  NO
 2       RACTESTPDB   READ WRITE NO
 2       PDB$SEED     READ ONLY  NO

SQL> alter pluggable database RACTESTPDB save state instances=all;

Pluggable database altered.

SQL> select CON_ID, CON_NAME, INSTANCE_NAME, STATE, RESTRICTED from DBA_PDB_SAVED_STATES;

-- make sure the PDBs show RESTRICTED=NO