
PRVG-11551 : Required version of package “cvuqdisk” was not found

Running runcluvfy.sh for the pre-upgrade check of GI from 12.1.0.2 to 12.2.0.1 returns the warning “PRVG-11551 : Required version of package “cvuqdisk” was not found”.

$ pwd
/u01/app/12.2.0.1/grid

$ ./runcluvfy.sh stage -pre crsinst -upgrade -rolling \ 
              -src_crshome /u01/app/12.1.0/grid     \
              -dest_crshome /u01/app/12.2.0.1/grid  \
              -dest_version 12.2.0.1 -fixup -verbose


Verifying Package: cvuqdisk-1.0.10-1 ...FAILED
racnode2: PRVG-11551 : Required version of package "cvuqdisk" was 
not found on node "racnode2" [Required = "cvuqdisk-1.0.10-1" ; Found =
"cvuqdisk-1.0.9-1"].

racnode1: PRVG-11551 : Required version of package "cvuqdisk" was not found on
node "racnode1" [Required = "cvuqdisk-1.0.10-1" ; Found =
"cvuqdisk-1.0.9-1"].

The old version of the package is installed:

[root@racnode2 grid]# rpm -qa | grep -i cvu
cvuqdisk-1.0.9-1.x86_64

Find the new cvuqdisk package under the new GI home, then install (-i) or upgrade (-U) it.

[root@racnode2 ~]# cd /u01/app/12.2.0.1
[root@racnode2 12.2.0.1]# cd grid

[root@racnode2 grid]# pwd
/u01/app/12.2.0.1/grid

[root@racnode2 grid]# find ./ -name cvuqdisk*
./cv/rpm/cvuqdisk-1.0.10-1.rpm
./cv/remenv/cvuqdisk-1.0.10-1.rpm

[root@racnode2 grid]# rpm -Uv ./cv/rpm/cvuqdisk-1.0.10-1.rpm
Preparing packages...
cvuqdisk-1.0.10-1.x86_64
cvuqdisk-1.0.9-1.x86_64
[root@racnode2 grid]# rpm -qa | grep cvuqdisk
cvuqdisk-1.0.10-1.x86_64
[root@racnode2 grid]#
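
The same upgrade is needed on the other node (racnode1 here), since cluvfy reported the old version on both nodes. A minimal sketch, assuming root SSH access between the nodes; the CVUQDISK_GRP variable (default oinstall) only matters for a fresh install rather than an upgrade:

# copy the RPM from the new GI home and upgrade it on the other node
scp /u01/app/12.2.0.1/grid/cv/rpm/cvuqdisk-1.0.10-1.rpm root@racnode1:/tmp/
ssh root@racnode1 'CVUQDISK_GRP=oinstall rpm -Uv /tmp/cvuqdisk-1.0.10-1.rpm'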

How to Get and Install 32-bit Linux Packages

For Oracle 12c, both the 32-bit and 64-bit versions of some packages are required to be installed, but the 32-bit glibc-devel appears to be missing:

[root@racnode2 ~]# rpm -qa | grep glibc-devel
glibc-devel-2.17-292.0.1.el7.x86_64
[root@racnode2 ~]#

When trying to install it with yum, yum reports that the package is already installed:

[root@racnode2 ~]# yum install glibc-devel
Package glibc-devel-2.17-292.0.1.el7.x86_64 already installed and latest version
Nothing to do
[root@racnode2 ~]#

Search the repository to see which architectures are available.

[root@racnode2 ~]# yum search glibc-devel
Loaded plugins: ulninfo
=================== N/S matched: glibc-devel ====================
glibc-devel.i686 : Object files for development using standard C libraries.
glibc-devel.x86_64 : Object files for development using standard C libraries.

Name and summary matches only, use "search all" for everything.

Install the 32-bit package by specifying the architecture explicitly.

[root@racnode2 ~]# yum install glibc-devel.i686
Loaded plugins: ulninfo
Resolving Dependencies
--> Running transaction check
---> Package glibc-devel.i686 0:2.17-292.0.1.el7 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

==================================================================
Package Arch Version Repository Size
==================================================================
Installing:
glibc-devel i686 2.17-292.0.1.el7 ol7_latest 1.1 M

Transaction Summary
=================================================================
Install 1 Package

Total download size: 1.1 M
Installed size: 1.0 M
Is this ok [y/d/N]: y
Downloading packages:
glibc-devel-2.17-292.0.1.el7.i686.rpm | 1.1 MB 00:00:02
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Installing : glibc-devel-2.17-292.0.1.el7.i686 1/1
Verifying : glibc-devel-2.17-292.0.1.el7.i686 1/1

Installed:
glibc-devel.i686 0:2.17-292.0.1.el7

Complete!
[root@racnode2 ~]#
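
Re-running the earlier query should now show both architectures (the output below is a sketch based on the package just installed):

[root@racnode2 ~]# rpm -qa | grep glibc-devel
glibc-devel-2.17-292.0.1.el7.x86_64
glibc-devel-2.17-292.0.1.el7.i686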

Oracle GI 12.1.0.2 and Chronyd Service

After installing or upgrading to Oracle GI 12.1.0.2, chronyd is not detected as vendor time-synchronization software, so CTSS stays in active mode even though chronyd is running:

[grid@racnode2 bin]$ systemctl status chronyd

● chronyd.service - NTP client/server
   Loaded: loaded (/usr/lib/systemd/system/chronyd.service; enabled; 
                                             vendor preset: enabled)
   Active: active (running) since Fri 2020-01-10 16:28:40 AEDT; 
                                                         1h 19min ago
     Docs: man:chronyd(8)
           man:chrony.conf(5)
  Process: 1184 ExecStartPost=/usr/libexec/chrony-helper 
                        update-daemon (code=exited, status=0/SUCCESS)
  Process: 1152 ExecStart=/usr/sbin/chronyd $OPTIONS 
                                    (code=exited, status=0/SUCCESS)
 Main PID: 1166 (chronyd)
   CGroup: /system.slice/chronyd.service
           └─1166 /usr/sbin/chronyd
[root@racnode2 bin]# ./crsctl check css
CRS-4529: Cluster Synchronization Services is online
[root@racnode2 bin]# ./crsctl check ctss
CRS-4701: The Cluster Time Synchronization Service is in Active mode.
CRS-4702: Offset (in msec): 0
[grid@racnode2 bin]$ crsctl stat res -t -init
-------------------------------------------------------------------
Name           Target  State        Server             State details
-------------------------------------------------------------------
Cluster Resources
-------------------------------------------------------------------
ora.asm
      1        ONLINE  ONLINE       racnode2          Started,STABLE
ora.cluster_interconnect.haip
      1        ONLINE  ONLINE       racnode2                 STABLE
ora.crf
      1        ONLINE  ONLINE       racnode2                 STABLE
ora.crsd
      1        ONLINE  ONLINE       racnode2                 STABLE
ora.cssd
      1        ONLINE  ONLINE       racnode2                 STABLE
ora.cssdmonitor
      1        ONLINE  ONLINE       racnode2                 STABLE
ora.ctssd
      1        ONLINE  ONLINE       racnode2          ACTIVE:0,STABLE
...
..
.

octssd.trc

2020-01-10 18:47:45.647193 : CTSS:1717679872: sclsctss_gvss2: 
                                      NTP default pid file not found
2020-01-10 18:47:45.647198 : CTSS:1717679872: ctss_check_vendor_sw: 
              Vendor time sync software is not detected. status [1].
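
For reference, a sketch of where this trace usually lives on GI 12.1; the exact ADR path under the Grid infrastructure owner's ORACLE_BASE can vary by environment:

# assumed location of the CTSS trace file
tail -f $ORACLE_BASE/diag/crs/$(hostname -s)/crs/trace/octssd.trc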

CAUSE

According to Oracle, this is a known bug.

Workaround

Apply patch 20936562, which is included in most GI PSUs from GI PSU 160719 (July 2016) onwards.

[grid@racnode2 tmp]$ opatch lsinventory | grep 20936562
19471836, 24445255, 20936562, 28805158, 25037011, 22144696, 18750781

[grid@racnode2 tmp]$ $ORACLE_HOME/OPatch/opatch lsinventory
...
..
.
Unique Patch ID: 22886676
Patch description: "OCW PATCH SET UPDATE 12.1.0.2.190716 (29509318)"
Created on 27 Jun 2019, 08:01:08 hrs PST8PDT
...
..
.
[grid@racnode2 trace]$ crsctl check css
CRS-4529: Cluster Synchronization Services is online

[grid@racnode2 trace]$ crsctl check ctss
CRS-4700:The Cluster Time Synchronization Service is in Observer mode
[grid@racnode2 trace]$ crsctl stat res -t -init
-------------------------------------------------------------------
Name           Target  State        Server             State details
-------------------------------------------------------------------
Cluster Resources
------------------------------------------------------------------
ora.asm
      1        ONLINE  ONLINE       racnode2           Started,STABLE
ora.cluster_interconnect.haip
      1        ONLINE  ONLINE       racnode2                 STABLE
ora.crf
      1        ONLINE  ONLINE       racnode2                 STABLE
ora.crsd
      1        ONLINE  ONLINE       racnode2                 STABLE
ora.cssd
      1        ONLINE  ONLINE       racnode2                 STABLE
ora.cssdmonitor
      1        ONLINE  ONLINE       racnode2                 STABLE
ora.ctssd
      1        ONLINE  ONLINE       racnode2         OBSERVER,STABLE
...
..
.

octssd.trc

2020-01-10 20:52:58.219654 :  CTSS:1530910464: sclsctss_gvss5: 
                              Chrony active, forcing observer mode
2020-01-10 20:52:58.219657 :  CTSS:1530910464: ctss_check_vendor_sw: 
                   Vendor time sync software is detected. status [2].

Just keep in mind that cluvfy still checks the NTPD configuration file “/etc/ntp.conf”, so the clock synchronization check reports failures when CTSS is in Observer mode with chrony. These errors can be safely ignored.

[grid@racnode2]$ cluvfy comp clocksync -n all -verbose

Verifying Clock Synchronization across the cluster nodes

Checking if Clusterware is installed on all nodes...
Oracle Clusterware is installed on all nodes.

Checking if CTSS Resource is running on all nodes...
Check: CTSS Resource running on all nodes
  Node Name                             Status
  ------------------------------------  ------------------------
  racnode2                              passed
  racnode1                              passed
CTSS resource check passed

Querying CTSS for time offset on all nodes...
Query of CTSS for time offset passed

Check CTSS state started...
Check: CTSS state
  Node Name                             State
  ------------------------------------  ------------------------
  racnode2                              Observer
  racnode1                              Observer
CTSS is in Observer state. Switching over to clock synchronization 
                                                  checks using NTP


Starting Clock synchronization checks using Network Time Protocol(NTP)...

Checking existence of NTP configuration file "/etc/ntp.conf" across 
nodes:
  Node Name                             File exists?
  ------------------------------------  ------------------------
  racnode2                              no
  racnode1                              no
PRVG-1019 : The NTP configuration file "/etc/ntp.conf" does not exist 
            on nodes "racnode2,racnode1"
PRVF-5414 : Check of NTP Config file failed on all nodes. 
            Cannot proceed further for the NTP tests

Checking daemon liveness...

Check: Liveness for "ntpd"
  Node Name                             Running?
  ------------------------------------  ------------------------
  racnode2                              no
  racnode1                              no
PRVF-7590 : "ntpd" is not running on node "racnode2"
PRVF-7590 : "ntpd" is not running on node "racnode1"
PRVG-1024 : The NTP Daemon or Service was not running on any of the 
            cluster nodes.
PRVF-5415 : Check to see if NTP daemon or service is running failed
Result: Clock synchronization check using Network Time Protocol(NTP) 
        failed

PRVF-9652 : Cluster Time Synchronization Services check failed

Verification of Clock Synchronization across the cluster nodes was 
unsuccessful on all the specified nodes.
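
Despite those messages, chrony itself can be checked directly on each node to confirm the time really is in sync, for example with the standard chronyc commands (a quick sketch):

chronyc tracking
chronyc sources -v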

xfs_growfs: /dev/ol/root is not a mounted XFS filesystem

On Oracle Linux 7.7, running “xfs_growfs” against the logical volume path returns an error.

[root@racnode1 ~]# xfs_growfs /dev/ol/root
xfs_growfs: /dev/ol/root is not a mounted XFS filesystem

It seems the syntax has changed in this release: xfs_growfs now expects the mount point rather than the logical volume device such as /dev/ol/root.

[root@racnode1 ~]# df -h
Filesystem           Size  Used Avail Use% Mounted on
devtmpfs             4.8G     0  4.8G   0% /dev
tmpfs                2.0G  4.0K  2.0G   1% /dev/shm
tmpfs                4.8G  8.7M  4.8G   1% /run
tmpfs                4.8G     0  4.8G   0% /sys/fs/cgroup
/dev/mapper/ol-root   27G   17G   11G  62% /
/dev/sda1            497M  120M  377M  25% /boot
tmpfs                973M     0  973M   0% /run/user/0
tmpfs                973M     0  973M   0% /run/user/54321
[root@racnode1 ~]# xfs_info /
meta-data=/dev/mapper/ol-root    isize=256    agcount=4, agsize=1734656 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=0        finobt=0 spinodes=0 rmapbt=0
         =                       reflink=0
data     =                       bsize=4096   blocks=6938624, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
log      =internal               bsize=4096   blocks=3388, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[root@racnode1 ~]#  xfs_growfs /
meta-data=/dev/mapper/ol-root    isize=256    agcount=4, agsize=1734656 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=0        finobt=0 spinodes=0 rmapbt=0
         =                       reflink=0
data     =                       bsize=4096   blocks=6938624, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
log      =internal               bsize=4096   blocks=3388, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
data blocks changed from 6938624 to 15974400

We can confirm the root file system (/) has been increased from 27 GB to 61 GB.

[root@racnode1 ~]# df -h
Filesystem           Size  Used Avail Use% Mounted on
devtmpfs             4.8G     0  4.8G   0% /dev
tmpfs                2.0G  4.0K  2.0G   1% /dev/shm
tmpfs                4.8G  8.7M  4.8G   1% /run
tmpfs                4.8G     0  4.8G   0% /sys/fs/cgroup
/dev/mapper/ol-root   61G   17G   45G  27% /
/dev/sda1            497M  120M  377M  25% /boot
tmpfs                973M     0  973M   0% /run/user/0
tmpfs                973M     0  973M   0% /run/user/54321
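
For completeness, the logical volume itself has to be extended before xfs_growfs has anything to grow. A sketch of the usual sequence; the +34G size is only an example matching the growth above, and -r/--resizefs needs a reasonably recent lvm2:

# extend the LV and grow the mounted XFS filesystem in one step
lvextend -r -L +34G /dev/ol/root

# or the equivalent two-step form
lvextend -L +34G /dev/ol/root
xfs_growfs /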

How to Increase Linux Swap Logical Volume (LVM) Size

Check which volume group (VG) the swap LV belongs to.

[root@racnode1 ~]# lvdisplay
--- Logical volume ---
LV Path /dev/ol/swap
LV Name swap
VG Name ol
LV UUID epJhR9-0sM6-bK6L-qubv-lrJb-0P0X-2Ab5fD
LV Write Access read/write
LV Creation host, time localhost.localdomain, 2015-11-24 00:34:14 +1100
LV Status available
# open 2
LV Size 3.00 GiB
Current LE 768
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 8192
Block device 252:1

Make sure the VG “ol” has enough free space to extend the swap LV.

[root@racnode1 ~]# vgdisplay
--- Volume group ---
VG Name ol
System ID
Format lvm2
Metadata Areas 2
Metadata Sequence No 5
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 2
Open LV 2
Max PV 0
Cur PV 2
Act PV 2
VG Size 95.50 GiB
PE Size 4.00 MiB
Total PE 24449
Alloc PE / Size 7544 / <29.47 GiB
Free PE / Size 16905 / <66.04 GiB
VG UUID 55lr8l-8d38-GGIX-j0Cm-InKP-Jiyb-refLmt
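
As a quick sanity check, with the 4.00 MiB PE size shown above the extra 5 GiB needs only 1280 extents, well within the 16905 free:

echo $(( 5 * 1024 / 4 ))    # 1280 PEs required for +5 GiB at a 4 MiB PE size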

Disable swapping for the associated logical volume.

[root@racnode1 ~]# swapoff -v  /dev/ol/swap
swapoff /dev/ol/swap

[root@racnode1 ~]# free -g
              total   used      free   shared  buff/cache   available
Mem:              9      0         9        0           0           9
Swap:             0      0         0
[root@racnode1 ~]#

Resize the logical volume. I’m going to increase the swap volume from 3 GB to 8 GB.

[root@racnode1 ~]# lvresize /dev/ol/swap -L +5G --test
  TEST MODE: Metadata will NOT be updated and volumes will not be (de)activated.
  Size of logical volume ol/swap changed from 3.00 GiB (768 extents) to 8.00 GiB (2048 extents).
  Logical volume ol/swap successfully resized.
[root@racnode1 ~]# lvresize /dev/ol/swap -L +5G
  Size of logical volume ol/swap changed from 3.00 GiB (768 extents) to 8.00 GiB (2048 extents).
  Logical volume ol/swap successfully resized.

Format the new swap space.

[root@racnode1 ~]# mkswap /dev/ol/swap
mkswap: /dev/ol/swap: warning: wiping old swap signature.
Setting up swapspace version 1, size = 8388604 KiB
no label, UUID=3bfcff72-20c3-4e1e-ad38-20de7cca2050
[root@racnode1 ~]#
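
Note that mkswap has written a new UUID. If /etc/fstab references the swap space by UUID, that entry must be updated to the new value; entries that use /dev/mapper/ol-swap or /dev/ol/swap are unaffected. A quick check:

grep swap /etc/fstab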

Enable swapping on the extended logical volume.

[root@racnode1 ~]# swapon -va
swapon /dev/mapper/ol-swap
swapon: /dev/mapper/ol-swap: found swap signature: version 1, page-size 4, same byte order
swapon: /dev/mapper/ol-swap: pagesize=4096, swapsize=8589934592, devsize=8589934592
[root@racnode1 ~]#

Verify that the swap space has been extended properly.

[root@racnode1 ~]# free -g
              total        used        free      shared  buff/cache   available
Mem:              9           0           9           0           0           9
Swap:             7           0           7
[root@racnode1 ~]# cat /proc/swaps
Filename                  Type            Size    Used    Priority
/dev/dm-1                 partition       8388604 0       -2
[root@racnode1 ~]#

There is only one VG (ol).

[root@racnode1 ~]# vgdisplay
--- Volume group ---
VG Name ol
System ID
Format lvm2
Metadata Areas 2
Metadata Sequence No 6
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 2
Open LV 2
Max PV 0
Cur PV 2
Act PV 2
VG Size 95.50 GiB
PE Size 4.00 MiB
Total PE 24449
Alloc PE / Size 8824 / <34.47 GiB
Free PE / Size 15625 / <61.04 GiB
VG UUID 55lr8l-8d38-GGIX-j0Cm-InKP-Jiyb-refLmt

There are two LVs (swap and root).

[root@racnode1 ~]# lvdisplay

--- Logical volume ---
LV Path /dev/ol/swap
LV Name swap
VG Name ol
LV UUID epJhR9-0sM6-bK6L-qubv-lrJb-0P0X-2Ab5fD
LV Write Access read/write
LV Creation host, time localhost.localdomain, 2015-11-24 00:34:14 +1100
LV Status available
# open 2
LV Size 8.00 GiB
Current LE 2048
Segments 3
Allocation inherit
Read ahead sectors auto
- currently set to 8192
Block device 252:1

--- Logical volume ---
LV Path /dev/ol/root
LV Name root
VG Name ol
LV UUID N0D1Hr-FbhE-R7rt-WuUX-9PQU-jVfH-eF5UtL
LV Write Access read/write
LV Creation host, time localhost.localdomain, 2015-11-24 00:34:14 +1100
LV Status available
# open 1
LV Size <26.47 GiB
Current LE 6776
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 8192
Block device 252:0

The VG (ol) is made up of two PVs (/dev/sda2 and /dev/sda3).

[root@racnode1 ~]# pvdisplay
--- Physical volume ---
PV Name /dev/sda2
VG Name ol
PV Size 29.51 GiB / not usable 3.00 MiB
Allocatable yes (but full)
PE Size 4.00 MiB
Total PE 7554
Free PE 0
Allocated PE 7554
PV UUID ifffmk-GSXU-NcKC-vww1-HMF6-Nc36-A6sewK

--- Physical volume ---
PV Name /dev/sda3
VG Name ol
PV Size 66.00 GiB / not usable 4.00 MiB
Allocatable yes
PE Size 4.00 MiB
Total PE 16895
Free PE 15625
Allocated PE 1270
PV UUID DG6gQe-VQki-1oT2-V1wk-1etX-6qkp-7QP3cX

[root@racnode1 ~]#
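
If the VG ever ran out of free extents, the same resize would first require growing the VG. A hypothetical sketch only; /dev/sda4 is an assumed example partition, not part of this system:

# initialise a new partition as a PV and add it to the ol volume group
pvcreate /dev/sda4
vgextend ol /dev/sda4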