The size of in-memory file system mounted at /dev/shm is “259072” megabytes which does not match the size in /etc/fstab as “0” megabytes

This is a bug and the warning can be ignored. However, pay attention when "runcluvfy.sh" and "cluvfy" return different results.

The "runcluvfy.sh" pre-check shows a WARNING about /dev/shm, while "cluvfy" does not complain:

$ runcluvfy.sh stage  -pre crsinst -n racnode1,racnode2 -verbose
....
...
..
Daemon not running check passed for process "avahi-daemon"

Starting check for /dev/shm mounted as temporary file system ...

WARNING:

The size of in-memory file system mounted at /dev/shm is "259072" megabytes which does not match the size in /etc/fstab as "0" megabytes
The size of in-memory file system mounted at /dev/shm is "259072" megabytes which does not match the size in /etc/fstab as "0" megabytes

Check for /dev/shm mounted as temporary file system passed
....
...
..
.
Pre-check for cluster services setup was unsuccessful on all the nodes.

$cluvfy stage  -pre crsinst -n racnode1,racnode2 -verbose
....
...
..
.
Daemon not running check passed for process "avahi-daemon"
Starting check for /dev/shm mounted as temporary file system ...
Check for /dev/shm mounted as temporary file system passed
Starting check for /boot mount ...
Check for /boot mount passed
Starting check for zeroconf check ...
Check for zeroconf check passed
Pre-check for cluster services setup was successful on all the nodes.

According to MOS Doc ID 1918620.1, this WARNING can be ignored:

The size of in-memory file system mounted at /dev/shm is "24576" megabytes which does not match the size in /etc/fstab as "0" megabytes (Doc ID 1918620.1)

APPLIES TO:

Oracle Database - Enterprise Edition - Version 12.1.0.2 to 12.1.0.2 [Release 12.1]
Oracle Database - Enterprise Edition - Version 11.2.0.3 to 11.2.0.3 [Release 11.2]
Information in this document applies to any platform.

SYMPTOMS

12.1.0.2 OUI/CVU (cluvfy or runcluvfy.sh) reports the following warning:

WARNING:

The size of in-memory file system mounted at /dev/shm is "24576" megabytes which does not match the size in /etc/fstab as "0" megabytes

OR

PRVE-0426 : The size of in-memory file system mounted as /dev/shm is "74158080k" megabytes which is less than the required size of "2048" megabytes on node ""

 

The /dev/shm setting is the default, so no size is specified in /etc/fstab:

$ grep shm /etc/fstab
tmpfs /dev/shm tmpfs defaults 0 0

 

/dev/shm has the OS default setting:

$ df -m | grep shm
tmpfs 7975 646 7330 9% /dev/shm

 

CAUSE

This is due to unpublished Bug 19031737.

SOLUTION

Since the size of /dev/shm is larger than 2 GB, the warning can be ignored.
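
If you would rather make the fstab entry match the mounted size instead of ignoring the warning (this is not required by the note), one option is to give the tmpfs mount an explicit size. A minimal sketch, assuming an 8 GB target; size it to at least the combined MEMORY_TARGET/SGA of the instances on the node:

tmpfs /dev/shm tmpfs defaults,size=8g 0 0

Then, as root, remount and verify:

mount -o remount /dev/shm
df -h /dev/shm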

PRVE-10077 : NOZEROCONF parameter was not specified or was not set to 'yes' in file "/etc/sysconfig/network"

For any GI/RAC deployment, it is good practice to run cluster pre-installation checks with cluvfy or runcluvfy.sh.

In this case, the pre-installation health check returned the following error messages:

grid@racnode1:/u01/app/software/CVU/bin$ ./cluvfy stage -pre crsinst -n racnode1,racnode2 -verbose
Performing pre-checks for cluster services setup
Checking node reachability...
Node reachability check passed from node "racnode1"
Checking user equivalence...
User equivalence check passed for user "grid"
Checking node connectivity...
Checking hosts config file...
Verification of the hosts config file successful
....
...
..
Starting check for zeroconf check ...

ERROR:
PRVE-10077 : NOZEROCONF parameter was not specified or was not set to 'yes' in file "/etc/sysconfig/network" on node "racnode2"
PRVE-10077 : NOZEROCONF parameter was not specified or was not set to 'yes' in file "/etc/sysconfig/network" on node "racnode1"
Check for zeroconf check failed
Pre-check for cluster services setup was unsuccessful on all the nodes.

After adding "NOZEROCONF=yes" to "/etc/sysconfig/network" on both servers, the pre-check was rerun and completed without errors:

grid@racnode1:/u01/app/software/grid$ ./runcluvfy.sh stage -pre crsinst -n racnode1,racnode2 -verbose
Performing pre-checks for cluster services setup
Checking node reachability...
Node reachability check passed from node "racnode1"
Checking user equivalence...
User equivalence check passed for user "grid"
....
...
..
Starting check for /boot mount ...
Check for /boot mount passed
Starting check for zeroconf check ...
Check for zeroconf check passed
Pre-check for cluster services setup was successful.
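
For reference, a minimal sketch of adding the parameter on each node (run as root; the exact commands are an illustration, not taken from the original fix):

echo "NOZEROCONF=yes" >> /etc/sysconfig/network
grep NOZEROCONF /etc/sysconfig/network

If the parameter already exists with a different value, edit that line to NOZEROCONF=yes rather than appending a duplicate.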

The pre-check tool can be obtained either from the GI installation media, where it is called "runcluvfy.sh", or by downloading "cvupack_Linux_x86_64.zip", where it is called "cluvfy".

So the command will be either:

$runcluvfy.sh stage  -pre crsinst -n racnode1,racnode2 -verbose

$cluvfy stage  -pre crsinst -n racnode1,racnode2 -verbose

How to Enable or Disable CRS Auto Start Up

It is good practice to temporarily disable CRS auto start when upgrading or patching the OS.
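
A minimal sketch of the usual commands, run as root and assuming $GRID_HOME points to the Grid Infrastructure home (this is an outline, not the full procedure from the post):

# Disable Clusterware auto start before the OS work:
$GRID_HOME/bin/crsctl disable crs

# Confirm the current setting:
$GRID_HOME/bin/crsctl config crs

# Re-enable auto start once the OS upgrade or patching is done:
$GRID_HOME/bin/crsctl enable crs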


CLSU-00100, CLSU-00101 and CLSU-00103 Errors When Running "srvctl status"

In a GI/RAC environment, directory and file permissions should be identical on all nodes.

SYMPTOMS

When a non-oracle user ran "srvctl status service -d RACTEST" on the second node, the following errors were returned. The same command worked on the first node.

$ srvctl status service -d RACTEST
sclsdglnam failed
path: /u01/app/oracle/product/11.2.0/dbhome_2/log/racnode2/client
filename: clsc
2016-08-31 11:57:42.820:
CLSD: An error occurred while attempting to generate a full name. Logging may not be active for this process
Additional diagnostics: CLSU-00100: Operating System function: access failed failed with error data: 13
CLSU-00101: Operating System error message: Permission denied
CLSU-00103: error location: SlfAccess
(:CLSD00183:)
category: -3
operation name: access failed
location: SlfAccess
dependent info: 13

CAUSE

There is a file/directory permission issue on the second node.

$cd $ORACLE_HOME/log
$ls -ltr
..
..
drwxr----T 3 oracle oinstall 4096 Aug 16 16:34 racnode2
$cd racnode2
$ls -l
drwxr----T 2 oracle oinstall 4096 Aug 31 09:58 client

On node 1, the directory permissions are different:

$cd $ORACLE_HOME/log
$ls -ltr
drwxr-xr-t 3 oracle oinstall 4096 Aug 16 14:13 racnode1
drwxr-xr-x 3 oracle oinstall 4096 Aug 20 17:10 diag 
$cd racnode1
$ls -ltr
total 4
drwxr-xr-t 2 oracle oinstall 4096 Aug 26 12:46 client

The solution is to change the directory permissions on node 2 to match those on node 1:

$cd $ORACLE_HOME/log
$chmod 1755 racnode2
$cd racnode2
$ls -ltr
total 4
drwxr----T 2 oracle oinstall 4096 Aug 31 09:58 client
$chmod 1755 client
$ ls -ltr
total 4
drwxr-xr-t 2 oracle oinstall 4096 Aug 31 09:58 client
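
A quick way to compare the permissions on both nodes side by side (a sketch; it assumes passwordless ssh between the nodes and that ORACLE_HOME is set in the remote login environment):

$ for node in racnode1 racnode2; do ssh $node 'ls -ld $ORACLE_HOME/log/$(hostname -s)/client'; done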

How to Relink 12c Oracle GI / RAC Binaries after OS Upgrade

It is recommended to relink the GI and RAC home binaries after an OS upgrade or patching.

This post demonstrates how to relink the Oracle Grid Infrastructure (GI) and RAC home binaries after an OS upgrade or patching.
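
A minimal outline of the commonly documented 12c GI relink flow, as a sketch only (it assumes $GRID_HOME is the Grid Infrastructure home and that the clusterware stack on the node can be taken down; it is not the post's exact procedure):

# As root, stop the stack and unlock the GI home:
$GRID_HOME/crs/install/rootcrs.sh -unlock

# As the grid owner, relink the binaries:
export ORACLE_HOME=$GRID_HOME
$GRID_HOME/bin/relink all

# As root, relock the GI home and restart the stack if it does not come up:
$GRID_HOME/rdbms/install/rootadd_rdbms.sh
$GRID_HOME/crs/install/rootcrs.sh -lock
$GRID_HOME/bin/crsctl start crs

For the RAC database home, stop the instance on the node and run "relink all" as the oracle user from that home.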
