root.sh Failed with “CLSRSC-670: No msg for has:clsrsc-670” When Adding a Node

While adding a node to a 12.1.0.2 cluster, running ‘root.sh’ failed with the following errors:

# /u01/app/12.1.0.2/grid/root.sh
Check /u01/app/12.1.0.2/grid/install/root_racnode2_2018-01-26_23-56-01.log for the output of root script

# cat /u01/app/12.1.0.2/grid/install/root_racnode2_2018-01-26_23-56-01.log
Performing root user operation.

The following environment variables are set as:
 ORACLE_OWNER= grid
 ORACLE_HOME= /u01/app/12.1.0.2/grid
 Copying dbhome to /usr/local/bin ...
 Copying oraenv to /usr/local/bin ...
 Copying coraenv to /usr/local/bin ...

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Relinking oracle with rac_on option
Using configuration parameter file: /u01/app/12.1.0.2/grid/crs/install/crsconfig_params
2018/01/26 23:56:02 CLSRSC-4001: Installing Oracle Trace File Analyzer (TFA) Collector.

2018/01/26 23:56:02 CLSRSC-4002: Successfully installed Oracle Trace File Analyzer (TFA) Collector.

2018/01/26 23:56:02 CLSRSC-670: No msg for has:clsrsc-670
Died at /u01/app/12.1.0.2/grid/crs/install/crsinstall.pm line 3800.

The command '/u01/app/12.1.0.2/grid/perl/bin/perl -I/u01/app/12.1.0.2/grid/perl/lib -I/u01/app/12.1.0.2/grid/crs/install /u01/app/12.1.0.2/grid/crs/install/rootcrs.pl ' execution failed
#

Check the rootcrs log under GI_HOME/cfgtoollogs/crsconfig:

$ cd /u01/app/12.1.0.2/grid/cfgtoollogs/crsconfig
$ tail -30 rootcrs_racnode2_2018-01-26_11-56-02PM.log
...
> Installed Build Version: 122120 Build Date: 201708280807
> TFA-00022: TFA is already running latest version. No need to patch.
>End Command output
2018-01-26 23:56:02: Executing cmd: /u01/app/12.1.0.2/grid/bin/clsecho -p has -f clsrsc -m 4002 
2018-01-26 23:56:02: Executing cmd: /u01/app/12.1.0.2/grid/bin/clsecho -p has -f clsrsc -m 4002 
2018-01-26 23:56:02: Command output:
> CLSRSC-4002: Successfully installed Oracle Trace File Analyzer (TFA) Collector.
>End Command output
2018-01-26 23:56:02: CLSRSC-4002: Successfully installed Oracle Trace File Analyzer (TFA) Collector. 
2018-01-26 23:56:02: The install script root.sh was run on an upgraded node. 
2018-01-26 23:56:02: Executing cmd: /u01/app/12.1.0.2/grid/bin/clsecho -p has -f clsrsc -m 670 
2018-01-26 23:56:02: Executing cmd: /u01/app/12.1.0.2/grid/bin/clsecho -p has -f clsrsc -m 670 
2018-01-26 23:56:02: Command output:
> CLSRSC-670: No msg for has:clsrsc-670
>End Command output
2018-01-26 23:56:02: CLSRSC-670: No msg for has:clsrsc-670 
2018-01-26 23:56:02: ###### Begin DIE Stack Trace ######
2018-01-26 23:56:02: Package         File                 Line Calling
2018-01-26 23:56:02: --------------- -------------------- ---- ----------
2018-01-26 23:56:02: 1: main         rootcrs.pl            267 crsutils::dietrap
2018-01-26 23:56:02: 2: crsinstall   crsinstall.pm        3800 main::__ANON__
2018-01-26 23:56:02: 3: crsinstall   crsinstall.pm         380 crsinstall::preInstallChecks
2018-01-26 23:56:02: 4: crsinstall   crsinstall.pm         318 crsinstall::CRSInstall
2018-01-26 23:56:02: 5: main         rootcrs.pl            410 crsinstall::new
2018-01-26 23:56:02: ####### End DIE Stack Trace #######
2018-01-26 23:56:02: checkpoint has failed

An SR was raised with Oracle Support, who advised that this is a known bug: “Bug 26200970 : LNX64-12202-UD:FAILED TO ADD NODE AFTER UPGRADING FROM 11204 TO 12202”.

Bug 26200970 is fixed in release 18.1.

WORKAROUND

Take a backup of the crsconfig_params file under GI_HOME/crs/install, change the value of ASM_UPGRADE to false, then re-run root.sh; it should now complete successfully.
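A minimal sketch of the workaround, assuming the GI home path from the example above and that ASM_UPGRADE is currently set to true:

$ cd /u01/app/12.1.0.2/grid/crs/install
$ cp crsconfig_params crsconfig_params.bak
$ sed -i 's/^ASM_UPGRADE=true/ASM_UPGRADE=false/' crsconfig_params
$ grep '^ASM_UPGRADE' crsconfig_params
ASM_UPGRADE=false

-- re-run root.sh as root
# /u01/app/12.1.0.2/grid/root.sh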

How to Apply Grid Infrastructure Patches Before Upgrading or Installing a Cluster

While upgrading GI from 12.1.0.2 to 12.2.0.1, PSUs or one-off patches can be applied to the 12.2.0.1 image before the 12.2.0.1 GI is set up, in order to minimize downtime and reduce the impact on the production environment.

Here is an example of applying the GI JAN 2018 Release Update (RU) 12.2.0.1.180116.

1) As the owner of the GI home, unzip the gold image into the new 12.2.0.1 GI home.

$ cd /u01/app/12.2.0.1/grid
$ unzip <SW-LOCATION>/linuxx64_12201_grid_home.zip -d ./

2) As the owner of the GI home, unzip the GI JAN 2018 RU patch.

$ cd /tmp
$ unzip <SW-LOCATION>/p27100009_122010_GI_RU_180116_Linux-x86-64.zip

3) Apply the patch using the “-applyPSU” option, and confirm the patching succeeds.

$ cd /u01/app/12.2.0.1/grid
$ ./gridSetup.sh -silent -applyPSU /tmp/27100009/
Preparing the home to patch...
Applying the patch /tmp/27100009/...
Successfully applied the patch.
The log can be found at: /u01/app/oraInventory/logs/GridSetupActions2018-02-01_03-05-11PM/installerPatchActions_2018-02-01_03-05-11PM.log

Check the log to confirm the patches are applied successfully.

$ cd /u01/app/12.2.0.1/grid/cfgtoollogs/opatch
$ grep -i "successfully applied" *
opatch2018-02-01_15-05-36PM_1.log:[Feb 1, 2018 3:05:41 PM] [INFO] Patch 27144050 successfully applied.
opatch2018-02-01_15-05-41PM_1.log:[Feb 1, 2018 3:06:33 PM] [INFO] Patch 27128906 successfully applied.
opatch2018-02-01_15-06-34PM_1.log:[Feb 1, 2018 3:07:20 PM] [INFO] Patch 27335416 successfully applied.
opatch2018-02-01_15-07-20PM_1.log:[Feb 1, 2018 3:07:49 PM] [INFO] Patch 27105253 successfully applied.
opatch2018-02-01_15-07-49PM_1.log:[Feb 1, 2018 3:07:54 PM] [INFO] Patch 26839277 successfully applied.

Or apply the patches one by one using the “-applyOneOffs” option:

$ cd /u01/app/12.2.0.1/grid
$ ./gridSetup.sh -silent -applyOneOffs /tmp/27100009/27144050
$ ./gridSetup.sh -silent -applyOneOffs /tmp/27100009/27128906
$ ./gridSetup.sh -silent -applyOneOffs /tmp/27100009/27335416
$ ./gridSetup.sh -silent -applyOneOffs /tmp/27100009/27105253
$ ./gridSetup.sh -silent -applyOneOffs /tmp/27100009/26839277
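Depending on the gridSetup.sh version, “-applyOneOffs” may also accept a comma-separated list of patch locations, so the five invocations above could be collapsed into one (a sketch; check your installer’s help output first):

$ ./gridSetup.sh -silent -applyOneOffs /tmp/27100009/27144050,/tmp/27100009/27128906,/tmp/27100009/27335416,/tmp/27100009/27105253,/tmp/27100009/26839277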

4) After the GI is upgraded, we can confirm the patches are applied and listed successfully:

$ opatch lspatches
26839277;DBWLM RELEASE UPDATE 12.2.0.1.0(ID:170913) (26839277)
27105253;Database Release Update : 12.2.0.1.180116 (27105253)
27335416;OCW JAN 2018 RELEASE UPDATE 12.2.0.1.180116 (27335416)
27128906;ACFS Release Update : 12.2.0.1.0 (27128906)
27144050;Tomcat Release Update 12.2.0.1.0(ID:171023.0830) (27144050)

OPatch succeeded.
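Once the upgraded clusterware is up, the patches can also be cross-checked cluster-wide with kfod (the same tool used in the CRS-6706 section below), assuming the upgraded home path:

$ /u01/app/12.2.0.1/grid/bin/kfod op=patches
$ /u01/app/12.2.0.1/grid/bin/kfod op=patchlvl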

Reference:

How to Apply a Grid Infrastructure Patch Before root script (root.sh or rootupgrade.sh) is Executed? (Doc ID 1410202.1)

Deinstall 12.1.0.2 Grid Infrastructure Home After Being Upgraded to 12.2.0.1

Grid Infrastructure 12.1.0.2 has been upgraded to 12.2.0.1 successfully, so the 12.1.0.2 GI_HOME needs to be deinstalled. There are two ways to remove the old GI_HOME (12.1.0.2).

Detach the GI_HOME and Remove It Manually

$ export ORACLE_HOME=/u01/app/grid/12.1.0.2
$ $ORACLE_HOME/OPatch/opatch lsinventory -all
$ cd $ORACLE_HOME/oui/bin
$ ./runInstaller -silent -detachHome ORACLE_HOME="/u01/app/grid/12.1.0.2"
$ unset ORACLE_HOME

-- as root user

# cd /u01/app/grid
# rm -fr 12.1.0.2

If the above command fails for any reason, then on every node:

$ export ORACLE_HOME=/u01/app/grid/12.1.0.2
$ $ORACLE_HOME/OPatch/opatch lsinventory -all
$ cd $ORACLE_HOME/oui/bin
$ ./runInstaller -silent -local -detachHome ORACLE_HOME="/u01/app/grid/12.1.0.2"
$ unset ORACLE_HOME

-- as root user

# cd /u01/app/grid
# rm -fr 12.1.0.2

Deinstall the Old GI_HOME Using the “deinstall” Tool

a) Log in as root, and change the permissions and ownership of the old GI_HOME (12.1.0.2):

# chmod -R 755 /u01/app/12.1.0.2/grid
# chown -R grid /u01/app/12.1.0.2/grid
# chown grid /u01/app/12.1.0.2

b) Run “deinstall” under the GI_HOME to be deleted (/u01/app/12.1.0.2/grid).

$ /u01/app/12.1.0.2/grid/deinstall/deinstall
Checking for required files and bootstrapping ...
Please wait ...
Location of logs /u01/app/oraInventory/logs/

############ ORACLE DECONFIG TOOL START ############
...
..
.

c) Check the logs:

Log of Deinstall 12.1.0.2 GI_HOME
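The most recent deinstall logs can be reviewed from the location the tool printed above, for example:

$ ls -lt /u01/app/oraInventory/logs/ | head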

CRS-6706: Oracle Clusterware Release patch level (‘nnnnnn’) does not match Software patch level (‘nnnnnn’)

“opatchauto” failed in the middle; rerunning it produced the “CRS-6706” error.

# /u01/app/12.1.0.2/grid/OPatch/opatchauto apply /tmp/12.1.0.2/27010872 -oh /u01/app/12.1.0.2/grid
...
..
.
Using configuration parameter file: /u01/app/12.1.0.2/grid/OPatch/auto/dbtmp/bootstrap_racnode1/patchwork/crs/install/crsconfig_params
CRS-6706: Oracle Clusterware Release patch level ('173535486') does not match Software patch level ('2039526626'). Oracle Clusterware cannot be started.
CRS-4000: Command Start failed, or completed with errors.
2018/01/23 16:29:02 CLSRSC-117: Failed to start Oracle Clusterware stack


After fixing the cause of failure Run opatchauto resume

]
OPATCHAUTO-68061: The orchestration engine failed.
OPATCHAUTO-68061: The orchestration engine failed with return code 1
OPATCHAUTO-68061: Check the log for more details.
OPatchAuto failed.

OPatchauto session completed at Tue Jan 23 16:29:04 2018
Time taken to complete the session 1 minute, 39 seconds

opatchauto failed with error code 42

On racnode1:

$ /u01/app/12.1.0.2/grid/bin/kfod op=patches
---------------
List of Patches
===============
19941482
19941477
19694308
19245012
26925218   <---- Does not exist on node2

$ /u01/app/12.1.0.2/grid/bin/kfod op=patchlvl
2039526626

On racnode2:

$ /u01/app/12.1.0.2/grid/bin/kfod op=patches
---------------
List of Patches
===============
19941482
19941477
19694308
19245012

$ /u01/app/12.1.0.2/grid/bin/kfod op=patchlvl
2039526626

We can see that patch 26925218 has been applied on racnode1 but not on racnode2. The solution is either:

1) Roll back this patch (26925218) on racnode1, then run “opatchauto” again to finish the patching successfully (see the sketch below); everything should then be fine.

OR

2) Manually apply the remaining patches to the GI home on the other nodes; after this everything is fine.

In 12c, the GI home must have identical patches on all nodes for the clusterware to start, except while a rolling patch is in progress. After the same patches were applied on all nodes, GI started fine.
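A sketch of option 1, assuming the staging location from the opatchauto command above (verify the exact syntax against your OPatch version):

-- as root on racnode1: roll back the extra patch, then re-apply the full patch
# /u01/app/12.1.0.2/grid/OPatch/opatchauto rollback /tmp/12.1.0.2/27010872/26925218 -oh /u01/app/12.1.0.2/grid
# /u01/app/12.1.0.2/grid/OPatch/opatchauto apply /tmp/12.1.0.2/27010872 -oh /u01/app/12.1.0.2/grid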

———-

Another situation you might encounter is when all nodes have the same patches but ‘opatch lsinventory’ shows different patch levels:

For example, on racnode1:

$ /u01/app/12.1.0.2/grid/bin/kfod op=patches
---------------
List of Patches
===============
11111111
22222222
33333333

$ /u01/app/12.1.0.2/grid/bin/kfod op=patchlvl
-------------------
Current Patch level
===================
8888888888

On racnode2:

$ /u01/app/12.1.0.2/grid/bin/kfod op=patches
---------------
List of Patches
===============
11111111
22222222
33333333

$ /u01/app/12.1.0.2/grid/bin/kfod op=patchlvl
-------------------
Current Patch level
===================
9999999999

However, ‘opatch lsinventory’ shows different patch levels:

Patch level status of Cluster nodes :

Patching Level Nodes
-------------- -----
8888888888 node1                    ====>> different patch level
9999999999 node2

For 12.1.0.1/2:

Execute “/u01/app/12.1.0.2/grid/crs/install/rootcrs.sh -patch” as the root user on the problematic node, and the patch level should be corrected.

For 12.2:

Execute “<GI_HOME>/crs/install/rootcrs.pl -prepatch” and “<GI_HOME>/crs/install/rootcrs.pl -postpatch” as the root user on the problematic node, and the patch level should be corrected.
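A sketch of the 12.2 sequence on the problematic node (run as root; <GI_HOME> stands for the Grid home path), with a kfod check to confirm the level now matches the other nodes:

# <GI_HOME>/crs/install/rootcrs.pl -prepatch
# <GI_HOME>/crs/install/rootcrs.pl -postpatch
$ <GI_HOME>/bin/kfod op=patchlvl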

REFERENCES:

CRS-6706: Oracle Clusterware Release patch level (‘nnn’) does not match Software patch level (‘mmm’) (Doc ID 1639285.1)

12c opatchauto: Prerequisite check “CheckApplicable” failed

It is good practice to copy and unzip GI patches as the grid user.
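Before launching opatchauto, staged patch ownership can be verified in one pass. A minimal sketch, assuming the patch was unzipped to /tmp/27010872 as in the example below and the GI owner is ‘grid’:

-- list any staged patch files not owned by grid; no output means ownership is clean
$ find /tmp/27010872 ! -user grid -ls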

Using “opatchauto” to apply the JAN 2018 GI PSU to a 12.1 GI home failed with “Prerequisite check CheckApplicable failed” errors.

# /u01/app/12.1.0.2/grid/OPatch/opatchauto apply /tmp/27010872 -oh /u01/app/12.1.0.2/grid
...
..
.
Bringing down CRS service on home /u01/app/12.1.0.2/grid
Prepatch operation log file location: /u01/app/12.1.0.2/grid/cfgtoollogs/crsconfig/crspatch_racnode1_2018-01-23_03-52-31PM.log
CRS service brought down successfully on home /u01/app/12.1.0.2/grid

Start applying binary patch on home /u01/app/12.1.0.2/grid
Failed while applying binary patches on home /u01/app/12.1.0.2/grid

Execution of [OPatchAutoBinaryAction] patch action failed, check log for more details. Failures:
Patch Target : racnode1->/u01/app/12.1.0.2/grid Type[crs]
Details: [
---------------------------Patching Failed---------------------------
Command execution failed during patching in home: /u01/app/12.1.0.2/grid, host: racnode1.
Command failed: /u01/app/12.1.0.2/grid/OPatch/opatchauto apply /tmp/27010872 -oh /u01/app/12.1.0.2/grid -target_type cluster -binary -invPtrLoc /u01/app/12.1.0.2/grid/oraInst.loc -jre /u01/app/12.1.0.2/grid/OPatch/jre -persistresult /u01/app/12.1.0.2/grid/OPatch/auto/dbsessioninfo/sessionresult_racnode1_crs.ser -analyzedresult /u01/app/12.1.0.2/grid/OPatch/auto/dbsessioninfo/sessionresult_analyze_racnode1_crs.ser

Command failure output:
==Following patches FAILED in apply:

Patch: /tmp/27010872/26925218
Log: /u01/app/12.1.0.2/grid/cfgtoollogs/opatchauto/core/opatch/opatch2018-01-23_15-53-43PM_1.log
Reason: Failed during Patching: oracle.opatch.opatchsdk.OPatchException:
Prerequisite check "CheckApplicable" failed.

After fixing the cause of failure Run opatchauto resume

]
OPATCHAUTO-68061: The orchestration engine failed.
OPATCHAUTO-68061: The orchestration engine failed with return code 1
OPATCHAUTO-68061: Check the log for more details.
OPatchAuto failed.

OPatchauto session completed at Tue Jan 23 15:56:55 2018
Time taken to complete the session 6 minutes, 39 seconds

Check the opatch log file:

[Jan 23, 2018 3:56:55 PM] [INFO] Space Needed : 3191.113MB
[Jan 23, 2018 3:56:55 PM] [INFO] Prereq checkPatchApplicableOnCurrentPlatform Passed for patch : 26925218
[Jan 23, 2018 3:56:55 PM] [INFO] Patch 26925218:
 onewaycopyAction : Source File "/tmp/27010872/26925218/files/crs/install/dropdb.pl" does not exists or is not readable
 'oracle.crs, 12.1.0.2.0': Cannot copy file from 'dropdb.pl' to '/u01/app/12.1.0.2/grid/crs/install/dropdb.pl'
[Jan 23, 2018 3:56:55 PM] [INFO] Prerequisite check "CheckApplicable" failed.
 The details are:

Patch 26925218:
 onewaycopyAction : Source File "/tmp/27010872/26925218/files/crs/install/dropdb.pl" does not exists or is not readable
 'oracle.crs, 12.1.0.2.0': Cannot copy file from 'dropdb.pl' to '/u01/app/12.1.0.2/grid/crs/install/dropdb.pl'
[Jan 23, 2018 3:56:55 PM] [SEVERE] OUI-67073:UtilSession failed:
 Prerequisite check "CheckApplicable" failed.
[Jan 23, 2018 3:56:55 PM] [INFO] Finishing UtilSession at Tue Jan 23 15:56:55 AEDT 2018
[Jan 23, 2018 3:56:55 PM] [INFO] Log file location: /u01/app/12.1.0.2/grid/cfgtoollogs/opatchauto/core/opatch/opatch2018-01-23_15-53-43PM_1.log

Check the “dropdb.pl” file from the unzipped patch; the owner is oracle.

# ls -ltr /tmp/27010872/26925218/files/crs/install/dropdb.pl
-rwx------ 1 oracle oinstall 3541 Jan 6 07:48 /tmp/27010872/26925218/files/crs/install/dropdb.pl

The patch files were unzipped by the RAC user ‘oracle’ instead of the GI owner ‘grid’. Change the file’s owner to grid, then run “opatchauto resume” to continue the patching successfully.

# chown grid /tmp/27010872/26925218/files/crs/install/dropdb.pl
# /u01/app/12.1.0.2/grid/OPatch/opatchauto resume
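If multiple files are affected, the whole staged patch tree can be re-owned in one command instead (a sketch; assumes the GI owner’s group is oinstall):

# chown -R grid:oinstall /tmp/27010872
# /u01/app/12.1.0.2/grid/OPatch/opatchauto resume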

REFERENCES:

12c opatchauto: Prerequisite check “CheckApplicable” failed (Doc ID 1937982.1)