The size of in-memory file system mounted at /dev/shm is “259072” megabytes which does not match the size in /etc/fstab as “0” megabytes

It is a bug which can be ignored, but attention should be paid when “runcluvfy.sh” and “cluvfy” have different results.

Running the “runcluvfy.sh” pre-check shows a WARNING about /dev/shm, but “cluvfy” does not complain.

$ runcluvfy.sh stage -pre crsinst -n racnode1,racnode2 -verbose
....
...
..
Daemon not running check passed for process "avahi-daemon"

Starting check for /dev/shm mounted as temporary file system ...

WARNING:

The size of in-memory file system mounted at /dev/shm is "259072" megabytes which does not match the size in /etc/fstab as "0" megabytes
The size of in-memory file system mounted at /dev/shm is "259072" megabytes which does not match the size in /etc/fstab as "0" megabytes

Check for /dev/shm mounted as temporary file system passed
....
...
..
.
Pre-check for cluster services setup was unsuccessful on all the nodes.
$ cluvfy stage -pre crsinst -n racnode1,racnode2 -verbose
....
...
..
.
Daemon not running check passed for process "avahi-daemon"
Starting check for /dev/shm mounted as temporary file system ...
Check for /dev/shm mounted as temporary file system passed
Starting check for /boot mount ...
Check for /boot mount passed
Starting check for zeroconf check ...
Check for zeroconf check passed
Pre-check for cluster services setup was successful on all the nodes.

According to MOS Doc ID 1918620.1, this WARNING can be ignored.

The size of in-memory file system mounted at /dev/shm is “24576” megabytes which does not match the size in /etc/fstab as “0” megabytes (Doc ID 1918620.1)

 

APPLIES TO:

Oracle Database – Enterprise Edition – Version 12.1.0.2 to 12.1.0.2 [Release 12.1]
Oracle Database – Enterprise Edition – Version 11.2.0.3 to 11.2.0.3 [Release 11.2]
Information in this document applies to any platform.

SYMPTOMS

12.1.0.2 OUI/CVU (cluvfy or runcluvfy.sh) reports the following warning:

WARNING:

The size of in-memory file system mounted at /dev/shm is “24576” megabytes which does not match the size in /etc/fstab as “0” megabytes

 

OR

 

PRVE-0426 : The size of in-memory file system mounted as /dev/shm is “74158080k” megabytes which is less than the required size of “2048” megabytes on node “”

 

The /dev/shm setting is the default, so no size is specified in /etc/fstab:

$ grep shm /etc/fstab
tmpfs /dev/shm tmpfs defaults 0 0

 

/dev/shm has the OS default size:

$ df -m | grep shm
tmpfs 7975 646 7330 9% /dev/shm

 

CAUSE

This is due to unpublished Bug 19031737.

SOLUTION

Since the size of /dev/shm is larger than 2 GB, the warning can be ignored.
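
If you prefer that the mounted size and the /etc/fstab entry agree, a common approach is to give the tmpfs line an explicit size and remount; the "size=8g" value below is purely illustrative and should be chosen for your own system (the remount is run as root):

tmpfs /dev/shm tmpfs defaults,size=8g 0 0

$ mount -o remount /dev/shm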

PRVE-10077 : NOZEROCONF parameter was not specified or was not set to ‘yes’ in file “/etc/sysconfig/network”

For any GI/RAC deployment, it is good practice to run cluster pre-installation checks with cluvfy or runcluvfy.sh.

While doing the cluster pre-installation health checks, the following error messages appeared:

grid@racnode1:/u01/app/software/CVU/bin$ ./cluvfy stage -pre crsinst -n racnode1,racnode2 -verbose
Performing pre-checks for cluster services setup
Checking node reachability...
Node reachability check passed from node "racnode1"
Checking user equivalence...
User equivalence check passed for user "grid"
Checking node connectivity...
Checking hosts config file...
Verification of the hosts config file successful
....
...
..
Starting check for zeroconf check ...

ERROR:
PRVE-10077 : NOZEROCONF parameter was not specified or was not set to 'yes' in file "/etc/sysconfig/network" on node "racnode2"
PRVE-10077 : NOZEROCONF parameter was not specified or was not set to 'yes' in file "/etc/sysconfig/network" on node "racnode1"
Check for zeroconf check failed
Pre-check for cluster services setup was unsuccessful on all the nodes.

After adding “NOZEROCONF=yes” to “/etc/sysconfig/network” on both servers, the check was rerun and completed successfully without any errors.
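
For reference, the parameter can be appended and then verified on each node as root, for example:

$ echo 'NOZEROCONF=yes' >> /etc/sysconfig/network
$ grep NOZEROCONF /etc/sysconfig/network
NOZEROCONF=yes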

grid@racnode1:/u01/app/software/grid$ ./runcluvfy.sh stage -pre crsinst -n racnode1,racnode2 -verbose
Performing pre-checks for cluster services setup
Checking node reachability...
Node reachability check passed from node "racnode1"
Checking user equivalence...
User equivalence check passed for user "grid"
....
...
..
Starting check for /boot mount ...
Check for /boot mount passed
Starting check for zeroconf check ...
Check for zeroconf check passed
Pre-check for cluster services setup was successful.

The pre-check tool can be obtained either from the GI installation media, where it is called “runcluvfy.sh”, or by downloading “cvupack_Linux_x86_64.zip”, where it is called “cluvfy”.

So the command is either:

$ runcluvfy.sh stage -pre crsinst -n racnode1,racnode2 -verbose

$ cluvfy stage -pre crsinst -n racnode1,racnode2 -verbose
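
For instance, the standalone pack can simply be unzipped and cluvfy run from its bin directory; the target directory below matches the path used in the earlier example:

$ unzip cvupack_Linux_x86_64.zip -d /u01/app/software/CVU
$ /u01/app/software/CVU/bin/cluvfy stage -pre crsinst -n racnode1,racnode2 -verbose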

OEM Cluster Database and Database System Targets are in Pending Status

In Oracle Enterprise Manager (OEM) 12c and 13c, a couple of newly added cluster database targets and database system targets are in “PENDING” status, even though all of the cluster database instances are showing up.


What Is the File “_rm_dup_$ORACLE_SID.dat” Under ORACLE_HOME/dbs of Auxiliary Host

The “_rm_dup_$ORACLE_SID.dat” file exists for “restore optimization”: it records which datafiles have already been restored/copied onto the auxiliary host when running RMAN “duplicate target database for standby from active database”.

There is a file named “_rm_dup_$ORACLE_SID.dat” under $ORACLE_HOME/dbs of the clone database. This file is created by executing “duplicate database ….”.
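
A quick way to see whether a previous duplicate left this file behind on the auxiliary host is, for example:

$ ls -l $ORACLE_HOME/dbs/_rm_dup_*.dat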

“_rm_dup_<dup_db>.dat” stores the names of the datafile copies already created by the duplicate. From this file, RMAN can find the names of the datafiles already copied to the auxiliary host. If the duplicate fails for some reason, the next duplicate checks this file. If the datafile copy has already been created on the auxiliary host, its vital information (file number, database id, creation SCN, database name) matches, and its checkpoint SCN is behind the until SCN, then the datafile copy can be used by the new duplicate and the restore/copy is not necessary.
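
As an illustration, a failed duplicate is simply rerun with the same command, and RMAN consults this file to skip the datafiles that are already in place; the connect strings below are placeholders:

$ rman target sys@primdb auxiliary sys@stbydb
RMAN> duplicate target database for standby from active database;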

So this file serves a “restore optimization” purpose when running RMAN “duplicate database …..”.

How to Enable or Disable CRS Auto Start Up

It is good practice to temporarily disable CRS auto start-up when upgrading or patching the OS.
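
The commands typically used for this are run as root on each cluster node from the Grid Infrastructure home; GRID_HOME below is simply shorthand for that home:

# before OS maintenance: stop the stack and disable auto start
$GRID_HOME/bin/crsctl stop crs
$GRID_HOME/bin/crsctl disable crs

# after maintenance: re-enable auto start and bring the stack back up
$GRID_HOME/bin/crsctl enable crs
$GRID_HOME/bin/crsctl start crs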
