Drop ASM Diskgroup

Here is an example of how to drop an ASM disk group.

On racnode1:

SQL> connect / as sysasm
Connected.
SQL> drop diskgroup ocr;
drop diskgroup ocr
*
ERROR at line 1:
ORA-15039: diskgroup not dropped
ORA-15073: diskgroup OCR is mounted by another ASM instance
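ORA-15073 indicates that another ASM instance still has the disk group mounted. One way to see which instances have it mounted is to query GV$ASM_DISKGROUP (a sketch; the disk group name is taken from this example):

```sql
-- Show which ASM instances have the OCR disk group mounted,
-- and its state on each instance.
SELECT inst_id, name, state
FROM   gv$asm_diskgroup
WHERE  name = 'OCR';
```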

On the racnode2 ASM instance, dismount the disk group. The first attempt fails with ORA-15260 because the session is not connected as SYSASM:

SQL> alter diskgroup ocr dismount;

alter diskgroup OCR dismount
*
ERROR at line 1:
ORA-15032: not all alterations performed
ORA-15260: permission denied on ASM disk group

$ sqlplus / as sysasm

SQL> alter diskgroup ocr dismount;

Diskgroup altered.

Back on the racnode1 ASM instance, drop the disk group OCR, which is no longer used or required. DROP DISKGROUP accepts the following options:

INCLUDING CONTENTS: You must specify this clause if the disk group contains any files.
EXCLUDING CONTENTS: Ensures that Oracle ASM drops the disk group only when it is empty. This is the default; if the disk group is not empty, an error is returned.
FORCE: Clears the headers on the disks belonging to a disk group that cannot be mounted by the Oracle ASM instance. The disk group must not be mounted by any instance.

SQL> drop diskgroup ocr;

drop diskgroup ocr
*
ERROR at line 1:
ORA-15039: diskgroup not dropped
ORA-15053: diskgroup "OCR" contains existing files
SQL>  drop diskgroup OCR INCLUDING CONTENTS;

Diskgroup dropped.

Find the ASM disk label:

[root@racnode2 ~]# /sbin/blkid
...
..
.
/dev/sdi1: LABEL="ASM_OCR_VOTE" TYPE="oracleasm"
...
..
..
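When several disks are stamped, the labels can be pulled out of the blkid output programmatically. A throwaway helper (hypothetical, not part of ASMLib) that filters for the oracleasm type:

```shell
# asm_labels: print the LABEL of every partition whose blkid TYPE is
# "oracleasm". Pipe `blkid` output into it (run blkid as root).
asm_labels() {
  grep 'TYPE="oracleasm"' | sed -E 's/.*LABEL="([^"]+)".*/\1/'
}

# usage: /sbin/blkid | asm_labels
```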

Delete the ASM disk "ASM_OCR_VOTE" on one node only:

[root@racnode1 ~]# oracleasm deletedisk ASM_OCR_VOTE
Clearing disk header: done
Dropping disk: done
[root@racnode1 ~]#

[root@racnode1 bin]# oracleasm scandisks
Reloading disk partitions: done
Cleaning any stale ASM disks...
Cleaning disk "ASM_OCR_VOTE"
Scanning system for ASM disks...

Remove the vdi file from VirtualBox:

  1. Go to each VM –> Settings –> Storage, choose the correct vdi file and remove the selected storage attachment.
  2. Go to Virtual Media Manager –> Hard disks tab –> choose the correct file and click 'Remove'.

PRVG-11551 : Required version of package “cvuqdisk” was not found

Run runcluvfy.sh as part of the pre-upgrade checks for upgrading GI from 12.1.0.2 to 12.2.0.1, and the warning "PRVG-11551 : Required version of package "cvuqdisk" was not found" is returned.

$pwd
/u01/app/12.2.0.1/grid

$ ./runcluvfy.sh stage -pre crsinst -upgrade -rolling \
              -src_crshome /u01/app/12.1.0/grid     \
              -dest_crshome /u01/app/12.2.0.1/grid  \
              -dest_version 12.2.0.1 -fixup -verbose


Verifying Package: cvuqdisk-1.0.10-1 ...FAILED
racnode2: PRVG-11551 : Required version of package "cvuqdisk" was 
not found on node "racnode2" [Required = "cvuqdisk-1.0.10-1" ; Found =
"cvuqdisk-1.0.9-1"].

racnode1: PRVG-11551 : Required version of package "cvuqdisk" was not found on
node "racnode1" [Required = "cvuqdisk-1.0.10-1" ; Found =
"cvuqdisk-1.0.9-1"].

The old version of the package is installed:

[root@racnode2 grid]# rpm -qa | grep -i cvu
cvuqdisk-1.0.9-1.x86_64

Find the new cvuqdisk package under the new GI home, and install (-i) or upgrade (-U) it:

[root@racnode2 ~]# cd /u01/app/12.2.0.1
[root@racnode2 12.2.0.1]# cd grid

[root@racnode2 grid]# pwd
/u01/app/12.2.0.1/grid

[root@racnode2 grid]# find ./ -name cvuqdisk*
./cv/rpm/cvuqdisk-1.0.10-1.rpm
./cv/remenv/cvuqdisk-1.0.10-1.rpm

[root@racnode2 grid]# rpm -Uv ./cv/rpm/cvuqdisk-1.0.10-1.rpm
Preparing packages...
cvuqdisk-1.0.10-1.x86_64
cvuqdisk-1.0.9-1.x86_64
[root@racnode2 grid]# rpm -qa | grep cvuqdisk
cvuqdisk-1.0.10-1.x86_64
[root@racnode2 grid]#

Oracle GI 12.1.0.2 and Chronyd Service

After installing or upgrading to Oracle GI 12.1.0.2, the running chronyd service is not detected as vendor time synchronization software, so CTSS stays in Active mode:

[grid@racnode2 bin]$ systemctl status chronyd

● chronyd.service - NTP client/server
   Loaded: loaded (/usr/lib/systemd/system/chronyd.service; enabled; 
                                             vendor preset: enabled)
   Active: active (running) since Fri 2020-01-10 16:28:40 AEDT; 
                                                         1h 19min ago
     Docs: man:chronyd(8)
           man:chrony.conf(5)
  Process: 1184 ExecStartPost=/usr/libexec/chrony-helper 
                        update-daemon (code=exited, status=0/SUCCESS)
  Process: 1152 ExecStart=/usr/sbin/chronyd $OPTIONS 
                                    (code=exited, status=0/SUCCESS)
 Main PID: 1166 (chronyd)
   CGroup: /system.slice/chronyd.service
           └─1166 /usr/sbin/chronyd
[root@racnode2 bin]# ./crsctl check css
CRS-4529: Cluster Synchronization Services is online
[root@racnode2 bin]# ./crsctl check ctss
CRS-4701: The Cluster Time Synchronization Service is in Active mode.
CRS-4702: Offset (in msec): 0
[grid@racnode2 bin]$ crsctl stat res -t -init
-------------------------------------------------------------------
Name           Target  State        Server             State details
-------------------------------------------------------------------
Cluster Resources
-------------------------------------------------------------------
ora.asm
      1        ONLINE  ONLINE       racnode2          Started,STABLE
ora.cluster_interconnect.haip
      1        ONLINE  ONLINE       racnode2                 STABLE
ora.crf
      1        ONLINE  ONLINE       racnode2                 STABLE
ora.crsd
      1        ONLINE  ONLINE       racnode2                 STABLE
ora.cssd
      1        ONLINE  ONLINE       racnode2                 STABLE
ora.cssdmonitor
      1        ONLINE  ONLINE       racnode2                 STABLE
ora.ctssd
      1        ONLINE  ONLINE       racnode2          ACTIVE:0,STABLE
...
..
.

octssd.trc

2020-01-10 18:47:45.647193 : CTSS:1717679872: sclsctss_gvss2: 
                                      NTP default pid file not found
2020-01-10 18:47:45.647198 : CTSS:1717679872: ctss_check_vendor_sw: 
              Vendor time sync software is not detected. status [1].

CAUSE

According to Oracle, it is a bug.

Workaround

Apply patch 20936562, which is included in the majority of GI PSUs since GI PSU 160719 (July 2016):

[grid@racnode2 tmp]$ opatch lsinventory | grep 20936562
19471836, 24445255, 20936562, 28805158, 25037011, 22144696, 18750781

[grid@racnode2 tmp]$ $ORACLE_HOME/OPatch/opatch lsinventory
...
..
.
Unique Patch ID: 22886676
Patch description: "OCW PATCH SET UPDATE 12.1.0.2.190716 (29509318)"
Created on 27 Jun 2019, 08:01:08 hrs PST8PDT
...
..
.
[grid@racnode2 trace]$ crsctl check css
CRS-4529: Cluster Synchronization Services is online

[grid@racnode2 trace]$ crsctl check ctss
CRS-4700:The Cluster Time Synchronization Service is in Observer mode
[grid@racnode2 trace]$ crsctl stat res -t -init
-------------------------------------------------------------------
Name           Target  State        Server             State details
-------------------------------------------------------------------
Cluster Resources
------------------------------------------------------------------
ora.asm
      1        ONLINE  ONLINE       racnode2           Started,STABLE
ora.cluster_interconnect.haip
      1        ONLINE  ONLINE       racnode2                 STABLE
ora.crf
      1        ONLINE  ONLINE       racnode2                 STABLE
ora.crsd
      1        ONLINE  ONLINE       racnode2                 STABLE
ora.cssd
      1        ONLINE  ONLINE       racnode2                 STABLE
ora.cssdmonitor
      1        ONLINE  ONLINE       racnode2                 STABLE
ora.ctssd
      1        ONLINE  ONLINE       racnode2         OBSERVER,STABLE
...
..
.

octssd.trc

2020-01-10 20:52:58.219654 :  CTSS:1530910464: sclsctss_gvss5: 
                              Chrony active, forcing observer mode
2020-01-10 20:52:58.219657 :  CTSS:1530910464: ctss_check_vendor_sw: 
                   Vendor time sync software is detected. status [2].

Just keep in mind that cluvfy still checks the NTPD configuration file "/etc/ntp.conf", so the checks below fail; this can be safely ignored.

[grid@racnode2]$ cluvfy comp clocksync -n all -verbose

Verifying Clock Synchronization across the cluster nodes

Checking if Clusterware is installed on all nodes...
Oracle Clusterware is installed on all nodes.

Checking if CTSS Resource is running on all nodes...
Check: CTSS Resource running on all nodes
  Node Name                             Status
  ------------------------------------  ------------------------
  racnode2                              passed
  racnode1                              passed
CTSS resource check passed

Querying CTSS for time offset on all nodes...
Query of CTSS for time offset passed

Check CTSS state started...
Check: CTSS state
  Node Name                             State
  ------------------------------------  ------------------------
  racnode2                              Observer
  racnode1                              Observer
CTSS is in Observer state. Switching over to clock synchronization 
                                                  checks using NTP


Starting Clock synchronization checks using Network Time Protocol(NTP)...

Checking existence of NTP configuration file "/etc/ntp.conf" across 
nodes:
  Node Name                             File exists?
  ------------------------------------  ------------------------
  racnode2                              no
  racnode1                              no
PRVG-1019 : The NTP configuration file "/etc/ntp.conf" does not exist 
            on nodes "racnode2,racnode1"
PRVF-5414 : Check of NTP Config file failed on all nodes. 
            Cannot proceed further for the NTP tests

Checking daemon liveness...

Check: Liveness for "ntpd"
  Node Name                             Running?
  ------------------------------------  ------------------------
  racnode2                              no
  racnode1                              no
PRVF-7590 : "ntpd" is not running on node "racnode2"
PRVF-7590 : "ntpd" is not running on node "racnode1"
PRVG-1024 : The NTP Daemon or Service was not running on any of the 
            cluster nodes.
PRVF-5415 : Check to see if NTP daemon or service is running failed
Result: Clock synchronization check using Network Time Protocol(NTP) 
        failed

PRVF-9652 : Cluster Time Synchronization Services check failed

Verification of Clock Synchronization across the cluster nodes was 
unsuccessful on all the specified nodes.

Reconfigure ASMLIB after Unbreakable Enterprise Kernel (UEK) Upgraded from UEKR3 to UEKR5 on Oracle Linux 7

After the Unbreakable Enterprise Kernel (UEK) is upgraded from release 3 to release 5 on Oracle Linux 7, Oracle ASMLib stops working:

[root@racnode1 ~]# oracleasm listdisks
[root@racnode1 ~]# oracleasm scandisks
Reloading disk partitions: done
Cleaning any stale ASM disks...
Scanning system for ASM disks...
Instantiating disk "ASM_DISK1"
Unable to instantiate disk "ASM_DISK1"
Instantiating disk "ASM_DISK2"
Unable to instantiate disk "ASM_DISK2"
Instantiating disk "ASM_DISK3"
Unable to instantiate disk "ASM_DISK3"
Instantiating disk "ASM_DISK4"
Unable to instantiate disk "ASM_DISK4"
Instantiating disk "ASM_DISK5"
Unable to instantiate disk "ASM_DISK5"
Instantiating disk "ASM_DISK6"
Unable to instantiate disk "ASM_DISK6"
Instantiating disk "ASM_DISK7"
Unable to instantiate disk "ASM_DISK7"
Instantiating disk "ASM_OCR_VOTE"
Unable to instantiate disk "ASM_OCR_VOTE"
[root@racnode1 ~]# oracleasm listdisks

Check the oracleasm packages; one or two of them are missing:

[root@racnode1 ~]# rpm -qa|grep -i oracleasm
oracleasm-support-2.1.11-2.el7.x86_64
[root@racnode1 ~]#

The Oracle ASMLib kernel driver is now included in the Unbreakable Enterprise Kernel. No driver package needs to be installed when using this kernel.

Check that the Oracle ASMLib kernel driver is installed and loaded:

[root@racnode1 ~]# lsmod|grep asm
oracleasm              61440  1

[root@racnode1 ~]# modinfo oracleasm
filename:       /lib/modules/4.14.35-1902.8.4.el7uek.x86_64/kernel/drivers/block/oracleasm/oracleasm.ko.xz
description:    Kernel driver backing the Generic Linux ASM Library.
author:         Joel Becker, Martin K. Petersen <martin.petersen@oracle.com>
version:        2.0.8
license:        GPL
srcversion:     DF8809442FB655948FEC227
depends:
retpoline:      Y
intree:         Y
name:           oracleasm
vermagic:       4.14.35-1902.8.4.el7uek.x86_64 SMP mod_unload modversions
signat:         PKCS#7
signer:
sig_key:
sig_hashalgo:   md4
parm:           use_logical_block_size:Prefer logical block size over physical (Y=logical, N=physical [default]) (bool)
[root@racnode1 ~]#

For RHEL, install the kmod-oracleasm package:

[root@racnode1 ~]# yum install kmod-oracleasm

If the oracleasmlib package is missing, download oracleasmlib-2.0.12-1.el7.x86_64.rpm from the URL below and install it:

https://www.oracle.com/linux/downloads/linux-asmlib-v7-downloads.html

Check that the required packages are all present now:

[root@racnode1 ~]# rpm -qa|grep -i oracleasm
oracleasmlib-2.0.12-1.el7.x86_64
oracleasm-support-2.1.11-2.el7.x86_64
kmod-oracleasm-2.0.8-26.0.1.el7.x86_64 ( for RHEL )

Configure ASMLib:

[root@racnode1 ~]# oracleasm configure -i
Configuring the Oracle ASM library driver.

This will configure the on-boot properties of the Oracle ASM library
driver. The following questions will determine whether the driver is
loaded on boot and what permissions it will have. The current values
will be shown in brackets ('[]'). Hitting <ENTER> without typing an
answer will keep that current value. Ctrl-C will abort.

Default user to own the driver interface [grid]:
Default group to own the driver interface [oinstall]:
Start Oracle ASM library driver on boot (y/n) [y]:
Scan for Oracle ASM disks on boot (y/n) [y]:
Writing Oracle ASM library driver configuration: done
[root@racnode1 ~]#

Re-initialize oracleasm:

[root@racnode1 sysconfig]# oracleasm init
Creating /dev/oracleasm mount point: /dev/oracleasm
Loading module "oracleasm": oracleasm
Configuring "oracleasm" to use device physical block size
Mounting ASMlib driver filesystem: /dev/oracleasm

Scan and list ASM disks:

[root@racnode1 sysconfig]# oracleasm status
Checking if ASM is loaded: yes
Checking if /dev/oracleasm is mounted: yes


[root@racnode1 sysconfig]# oracleasm scandisks
Reloading disk partitions: done
Cleaning any stale ASM disks...
Scanning system for ASM disks...
Instantiating disk "ASM_DISK1"
Instantiating disk "ASM_DISK2"
Instantiating disk "ASM_DISK3"
Instantiating disk "ASM_DISK4"
Instantiating disk "ASM_DISK5"
Instantiating disk "ASM_DISK6"
Instantiating disk "ASM_DISK7"
Instantiating disk "ASM_OCR_VOTE"


[root@racnode1 sysconfig]# oracleasm listdisks
ASM_DISK1
ASM_DISK2
ASM_DISK3
ASM_DISK4
ASM_DISK5
ASM_DISK6
ASM_DISK7
ASM_OCR_VOTE

PRVF-7590 PRVG-1024 PRVF-5415 PRVF-9652 While Running cluvfy comp clocksync

$ cluvfy comp clocksync -n all -verbose

The NTP configuration file "/etc/ntp.conf" is available on all nodes
NTP configuration file "/etc/ntp.conf" existence check passed

Checking daemon liveness...

Check: Liveness for "ntpd"
Node Name   Running?
----------  ------------------------
racnode2    no
racnode1    no
PRVF-7590 : "ntpd" is not running on node "racnode2"
PRVF-7590 : "ntpd" is not running on node "racnode1"
PRVG-1024 : The NTP Daemon or Service was not running on any of the 
            cluster nodes.
PRVF-5415 : Check to see if NTP daemon or service is running failed
Result: Clock synchronization check using Network Time Protocol(NTP) 
        failed

PRVF-9652 : Cluster Time Synchronization Services check failed

Verification of Clock Synchronization across the cluster nodes was 
unsuccessful on all the specified nodes.

But the ntpd daemon process is running:

#systemctl status ntpd
● ntpd.service - Network Time Service
Loaded: loaded (/usr/lib/systemd/system/ntpd.service; disabled; vendor preset: disabled)
Active: active (running) since Sun 2019-09-08 21:06:46 AEST; 58min ago
Process: 2755 ExecStart=/usr/sbin/ntpd -u ntp:ntp $OPTIONS (code=exited, status=0/SUCCESS)
Main PID: 2756 (ntpd)
CGroup: /system.slice/ntpd.service
└─2756 /usr/sbin/ntpd -u ntp:ntp -g

To debug "cluvfy" or "runcluvfy.sh", enable CVU tracing:

$ rm -rf /tmp/cvutrace
$ mkdir /tmp/cvutrace
$ export CV_TRACELOC=/tmp/cvutrace
$ export SRVM_TRACE=true
$ export SRVM_TRACE_LEVEL=1

$ cluvfy comp clocksync -n all -verbose

$ ls -ltr  /tmp/cvutrace
total 1960
-rw-r--r-- 1 grid oinstall       0 Sep  8 21:46 cvutrace.log.0.lck
-rw-r--r-- 1 grid oinstall       0 Sep  8 21:47 cvuhelper.log.0.lck
-rw-r--r-- 1 grid oinstall    1586 Sep  8 21:47 cvuhelper.log.0
-rw-r--r-- 1 grid oinstall 2000962 Sep  8 21:47 cvutrace.log.0

The trace file complains "file check failed" for the file "/var/run/ntpd.pid".

$ tail -20  /tmp/cvutrace/cvutrace.log.0
[main] [ 2019-09-08 21:47:03.312 EST ] [VerificationLogData.traceLogData:251]  FINE: [Task.perform:514]
m_nodeList='racnode2,racnode1'
[main] [ 2019-09-08 21:47:03.312 EST ] [VerificationLogData.traceLogData:251]  INFO: [sVerificationUtil.getUniqueDistributionID:494]  DistributionID[0]:7.2
[main] [ 2019-09-08 21:47:03.312 EST ] [VerificationLogData.traceLogData:251]  INFO: [sVerificationUtil.getUniqueDistributionID:559]  ==== Distribution Id determined to be OL7
[main] [ 2019-09-08 21:47:03.312 EST ] [VerificationLogData.traceLogData:251]  FINE: [VerificationCommand.execute:297]
Output: '<CV_VRES>1</CV_VRES><CV_LOG>Exectask: file check failed</CV_LOG><CV_CMDLOG><CV_INITCMD>/tmp/CVU_12.1.0.2.0_grid/exectask -chkfile /var/run/ntpd.pid </CV_INITCMD><CV_CMD>access() /var/run/ntpd.pid F_OK</CV_CMD><CV_CMDOUT></CV_CMDOUT><CV_CMDSTAT>2</CV_CMDSTAT></CV_CMDLOG><CV_ERES>0</CV_ERES>'
[main] [ 2019-09-08 21:47:03.313 EST ] [VerificationLogData.traceLogData:251]  FINE: [VerificationCommand.execute:297]
Output: '<CV_VRES>1</CV_VRES><CV_LOG>Exectask: file check failed</CV_LOG><CV_CMDLOG><CV_INITCMD>/tmp/CVU_12.1.0.2.0_grid/exectask -chkfile /var/run/ntpd.pid </CV_INITCMD><CV_CMD>access() /var/run/ntpd.pid F_OK</CV_CMD><CV_CMDOUT></CV_CMDOUT><CV_CMDSTAT>2</CV_CMDSTAT></CV_CMDLOG><CV_ERES>0</CV_ERES>'
[main] [ 2019-09-08 21:47:03.313 EST ] [VerificationLogData.traceLogData:251]  ERROR: [Result.addErrorDescription:624]  PRVF-7590 : "ntpd" is not running on node "racnode2"
[main] [ 2019-09-08 21:47:03.313 EST ] [VerificationLogData.traceLogData:251]  ERROR: [Result.addErrorDescription:624]  PRVF-7590 : "ntpd" is not running on node "racnode1"
[main] [ 2019-09-08 21:47:03.313 EST ] [VerificationLogData.traceLogData:251]  FINE: [Task.perform:594]
TaskDaemonLiveliness:Daemon Liveness[TASKDAEMONLIVELINESS]:TASK_SUMMARY:FAILED:CRITICAL:VERIFICATION_FAILED
          ERRORMSG(racnode2): PRVF-7590 : "ntpd" is not running on node "racnode2"
          ERRORMSG(racnode1): PRVF-7590 : "ntpd" is not running on node "racnode1"
[main] [ 2019-09-08 21:47:03.313 EST ] [VerificationLogData.traceLogData:251]  ERROR: [ResultSet.addErrorDescription:1102]  PRVG-1024 : The NTP Daemon or Service was not running on any of the cluster nodes.
[main] [ 2019-09-08 21:47:03.313 EST ] [VerificationLogData.traceLogData:251]  ERROR: [ResultSet.addErrorDescription:1102]  PRVF-5415 : Check to see if NTP daemon or service is running failed
[main] [ 2019-09-08 21:47:03.313 EST ] [VerificationLogData.traceLogData:251]  FINE: [Task.perform:594]
TaskCTSSIntegrity:Clock Synchronization[TASKCTSSINTEGRITY]:TASK_SUMMARY:FAILED:CRITICAL:VERIFICATION_FAILED
          ERRORMSG(GLOBAL): PRVF-5415 : Check to see if NTP daemon or service is running failed
[main] [ 2019-09-08 21:47:03.313 EST ] [CluvfyDriver.main:360]  ==== cluvfy exiting normally.
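The telling field in each exectask result above is CV_CMDSTAT: status 2 means the access() check on /var/run/ntpd.pid failed because the file does not exist. A small helper (hypothetical, for eyeballing large traces) to pull that status out:

```shell
# cv_cmdstat: extract the CV_CMDSTAT value from exectask result lines
# in cvutrace.log.0 (pipe the relevant trace lines into it).
cv_cmdstat() {
  sed -nE 's|.*<CV_CMDSTAT>([0-9]+)</CV_CMDSTAT>.*|\1|p'
}

# usage: grep chkfile /tmp/cvutrace/cvutrace.log.0 | cv_cmdstat
```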

SOLUTION

As per the Oracle GI installation documentation, configure the ntpd service to start with a pid file. Edit "/etc/sysconfig/ntpd" and change the line

OPTIONS="-g"

to

OPTIONS="-g -p /var/run/ntpd.pid"
# systemctl restart ntpd
# ls -l /var/run/ntpd*
-rw-r--r-- 1 root root 4 Sep 8 22:21 /var/run/ntpd.pid
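The edit above can also be scripted, for example when fixing several nodes. A sketch as a shell function (the function name is made up; point it at /etc/sysconfig/ntpd as root):

```shell
# add_ntpd_pidfile FILE: append the -p pidfile option to the ntpd
# OPTIONS line so that ntpd writes /var/run/ntpd.pid on startup.
add_ntpd_pidfile() {
  sed -i 's|^OPTIONS="-g"|OPTIONS="-g -p /var/run/ntpd.pid"|' "$1"
}

# usage (as root): add_ntpd_pidfile /etc/sysconfig/ntpd && systemctl restart ntpd
```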

$ cluvfy comp clocksync -n all -verbose

Verifying Clock Synchronization across the cluster nodes

Checking if Clusterware is installed on all nodes...
Oracle Clusterware is installed on all nodes.

Checking if CTSS Resource is running on all nodes...
Check: CTSS Resource running on all nodes
Node Name  Status
---------  ------------------------
racnode1   passed
racnode2   passed
CTSS resource check passed

Querying CTSS for time offset on all nodes...
Query of CTSS for time offset passed

Check CTSS state started...
Check: CTSS state
Node Name    State
------------ ------------------------
racnode2     Observer
racnode1     Observer
CTSS is in Observer state. Switching over to clock synchronization 
     checks using NTP


Starting Clock synchronization checks using Network Time Protocol(NTP)

Checking existence of NTP configuration file "/etc/ntp.conf" across 
nodes

Node Name File exists?
--------- -----------------------
racnode2  yes
racnode1  yes
The NTP configuration file "/etc/ntp.conf" is available on all nodes
NTP configuration file "/etc/ntp.conf" existence check passed

Checking daemon liveness...

Check: Liveness for "ntpd"
Node Name   Running?
---------- ------------------------
racnode2    yes
racnode1    yes
Result: Liveness check passed for "ntpd"
Check for NTP daemon or service alive passed on all nodes

Checking whether NTP daemon or service is using UDP port 123 
on all nodes

Check for NTP daemon or service using UDP port 123
Node Name  Port Open?
---------- -----------------------
racnode2   yes
racnode1   yes
Check for synchronization of NTP daemon with at least one external 
time source passed on all nodes.

Result: Clock synchronization check using Network Time Protocol(NTP) 
        passed


Oracle Cluster Time Synchronization Services check passed

Verification of Clock Synchronization across the cluster nodes was 
successful.