ORA-15014: path ‘/dev/oracleasm/disks/ASM_FRA01’ is not in the discovery set

Setting the asm_diskstring parameter correctly is the key to creating the ASM diskgroup successfully.

When trying to create diskgroup FRA, the following errors were raised:

SQL> CREATE DISKGROUP FRA EXTERNAL REDUNDANCY DISK '/dev/oracleasm/disks/ASM_FRA01' NAME ASM_FRA01;
CREATE DISKGROUP FRA EXTERNAL REDUNDANCY DISK '/dev/oracleasm/disks/ASM_FRA01' NAME ASM_FRA01
*
ERROR at line 1:
ORA-15018: diskgroup cannot be created
ORA-15031: disk specification '/dev/oracleasm/disks/ASM_FRA01' matches no disks
ORA-15014: path '/dev/oracleasm/disks/ASM_FRA01' is not in the discovery set

Checked that the ASM disk already existed:

# oracleasm listdisks
...
...
ASM_FRA01
OCR_VOTE01
...
...

SQL> ! ls -ltr /dev/oracleasm/disks/ASM_FRA01
brw-rw---- 1 grid dba 253, 2 Sep 26 15:32 /dev/oracleasm/disks/ASM_FRA01

Checked that the parameter asm_diskstring was empty:

SQL> show parameter string

NAME             TYPE      VALUE
---------------- --------- -----
asm_diskstring   string

Could not change the parameter asm_diskstring='/dev/oracleasm/disks/*' dynamically:

SQL> alter system set asm_diskstring='/dev/oracleasm/disks/*';
 alter system set asm_diskstring='/dev/oracleasm/disks/*'
*
ERROR at line 1:
ORA-02097: parameter cannot be modified because specified value is invalid
ORA-15014: path 'ORCL:OCR_VOTE01' is not in the discovery set
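
The dynamic change fails because, with asm_diskstring empty, ASM had discovered the in-use OCR_VOTE01 disk through its ASMLib label path 'ORCL:OCR_VOTE01', and the new string would drop that mounted disk from the discovery set. One possible alternative (a sketch, not what was done here; it assumes the ASMLib labels are still in place) is to include both patterns so that no in-use disk disappears from discovery, which may allow the change to be made dynamically:

SQL> alter system set asm_diskstring='/dev/oracleasm/disks/*','ORCL:*';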

Changed the parameter asm_diskstring='/dev/oracleasm/disks/*' in the spfile only:

SQL> alter system set asm_diskstring='/dev/oracleasm/disks/*' scope=spfile;

System altered.

Stopped and then started CRS again:

# /u01/app/12.1.0.2/grid/bin/crsctl stop crs
...
...
...
# /u01/app/12.1.0.2/grid/bin/crsctl start crs
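
Before checking the parameter, it may be worth confirming that the stack is fully up again (a quick check with the standard crsctl command):

# /u01/app/12.1.0.2/grid/bin/crsctl check crs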

Checked the current parameter asm_diskstring:

SQL> show parameter asm

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
asm_diskgroups                       string
asm_diskstring                       string      /dev/oracleasm/disks/*
asm_power_limit                      integer     10
asm_preferred_read_failure_groups    string

Checked the disk paths:

SQL> col PATH format a60

SQL>  select NAME,LABEL,PATH from v$asm_disk;

NAME                           LABEL                           PATH
------------------------------ ------------------------------- ------------------------------------------------------------
                                                               /dev/oracleasm/disks/ASM_disk09
                                                               /dev/oracleasm/disks/ASM_DISK05
                                                               /dev/oracleasm/disks/ASM_DISK08
                                                               /dev/oracleasm/disks/ASM_DISK01
                                                               /dev/oracleasm/disks/ASM_DISK06
                                                               /dev/oracleasm/disks/ASM_DISK07
                                                               /dev/oracleasm/disks/ASM_FRA01
                                                               /dev/oracleasm/disks/ASM_DISK04
                                                               /dev/oracleasm/disks/ASM_DISK02
                                                               /dev/oracleasm/disks/ASM_DISK03
OCR_VOTE01                                                     /dev/oracleasm/disks/OCR_VOTE01

11 rows selected.
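
Before creating the diskgroup, the header status of the candidate disk can also be checked (an optional extra check: CANDIDATE or PROVISIONED means the disk is free to use, MEMBER means it already belongs to a diskgroup):

SQL> select path, header_status from v$asm_disk where path like '%FRA%';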

Now the diskgroup FRA could be created successfully:

SQL> CREATE DISKGROUP FRA EXTERNAL REDUNDANCY DISK '/dev/oracleasm/disks/ASM_FRA01' NAME ASM_FRA01;

Diskgroup created.
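
Note that on a RAC cluster a newly created diskgroup is mounted only on the ASM instance where the CREATE DISKGROUP was issued. A possible follow-up on the other nodes (a sketch, assuming a second ASM instance) is to mount it there and confirm the state:

SQL> alter diskgroup FRA mount;
SQL> select name, state from v$asm_diskgroup;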

Is MGMTDB Patched While Applying GI PSU/RU/RUR?

MGMTDB is patched at the same time as GI whenever GI is upgraded or patched (PSU, RU/RUR, etc.).

Yes, while applying a PSU onto GI, we can see that the GIMR database “-MGMTDB” is patched as well. This can be confirmed from the logs.

a) The logs from “opatchauto apply” show that MGMTDB is patched while the GI PSU is applied:

System initialization log file is /u01/app/12.1.0.2/grid/cfgtoollogs/opatchautodb/systemconfig2016-09-19_12-46-41PM.log.

Session log file is /u01/app/12.1.0.2/grid/cfgtoollogs/opatchauto/opatchauto2016-09-19_12-46-55PM.log
The id for this session is 2G1F
[init:init] Executing OPatchAutoBinaryAction action on home /u01/app/12.1.0.2/grid

Executing OPatch prereq operations to verify patch applicability on CRS Home........

[init:init] OPatchAutoBinaryAction action completed on home /u01/app/12.1.0.2/grid successfully
[init:init] Executing GIRACPrereqAction action on home /u01/app/12.1.0.2/grid

Executing prereq operations before applying on CRS Home........

[init:init] GIRACPrereqAction action completed on home /u01/app/12.1.0.2/grid successfully
[shutdown:shutdown] Executing GIShutDownAction action on home /u01/app/12.1.0.2/grid

Performing prepatch operations on CRS Home........

Prepatch operation log file location: /u01/app/12.1.0.2/grid/cfgtoollogs/crsconfig/crspatch_racnode2_2016-09-19_12-47-24AM.log

[shutdown:shutdown] GIShutDownAction action completed on home /u01/app/12.1.0.2/grid successfully
[offline:binary-patching] Executing OPatchAutoBinaryAction action on home /u01/app/12.1.0.2/grid

Start applying binary patches on CRS Home........

[offline:binary-patching] OPatchAutoBinaryAction action completed on home /u01/app/12.1.0.2/grid successfully
[startup:startup] Executing GIStartupAction action on home /u01/app/12.1.0.2/grid

Performing postpatch operations on CRS Home........

Postpatch operation log file location: /u01/app/12.1.0.2/grid/cfgtoollogs/crsconfig/crspatch_racnode2_2016-09-19_12-50-36AM.log

[startup:startup] GIStartupAction action completed on home /u01/app/12.1.0.2/grid successfully
[finalize:finalize] Executing OracleHomeLSInventoryGrepAction action on home /u01/app/12.1.0.2/grid

Verifying patches applied on CRS Home.

[finalize:finalize] OracleHomeLSInventoryGrepAction action completed on home /u01/app/12.1.0.2/grid successfully
OPatchAuto successful.

--------------------------------Summary--------------------------------

Patching is completed successfully. Please find the summary as follows:

Host:racnode2
CRS Home:/u01/app/12.1.0.2/grid
Summary:

==Following patches were SKIPPED:

Patch: /u01/app/software/23615308/23177536
Reason: This patch is not applicable to this specified target type - "cluster"


==Following patches were SUCCESSFULLY applied:

Patch: /u01/app/software/23615308/21436941
Log: /u01/app/12.1.0.2/grid/cfgtoollogs/opatchauto/core/opatch/opatch2016-09-19_12-47-55PM_1.log

Patch: /u01/app/software/23615308/23054246
Log: /u01/app/12.1.0.2/grid/cfgtoollogs/opatchauto/core/opatch/opatch2016-09-19_12-47-55PM_1.log

Patch: /u01/app/software/23615308/23054327
Log: /u01/app/12.1.0.2/grid/cfgtoollogs/opatchauto/core/opatch/opatch2016-09-19_12-47-55PM_1.log

Patch: /u01/app/software/23615308/23054341
Log: /u01/app/12.1.0.2/grid/cfgtoollogs/opatchauto/core/opatch/opatch2016-09-19_12-47-55PM_1.log
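
The binary patches can also be double-checked against the grid home afterwards (a quick check using the standard OPatch command):

$ /u01/app/12.1.0.2/grid/OPatch/opatch lspatches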

b) Check the postpatch operation log file /u01/app/12.1.0.2/grid/cfgtoollogs/crsconfig/crspatch_racnode2_2016-09-19_12-50-36AM.log:

$ tail -120 /u01/app/12.1.0.2/grid/cfgtoollogs/crsconfig/crspatch_racnode2_2016-09-19_12-50-36AM.log
...

....


2016-09-19 12:52:30: Executing cmd: /u01/app/12.1.0.2/grid/bin/srvctl config mgmtdb
2016-09-19 12:52:31: Command output:
> Database unique name: _mgmtdb
> Database name:
> Oracle home: <CRS home>
> Oracle user: grid
> Spfile: +OCR_VOTE/_MGMTDB/PARAMETERFILE/spfile.268.922884353
> Password file:
> Domain:
> Start options: open
> Stop options: immediate
> Database role: PRIMARY
> Management policy: AUTOMATIC
> Type: Management
> PDB name: racnode-cluster
> PDB service: racnode-cluster
> Cluster name: racnode-cluster
> Database instance: -MGMTDB
>End Command output
2016-09-19 12:52:31: isMgmtdbConfigured: 1
2016-09-19 12:52:31: setting ORAASM_UPGRADE to 1
2016-09-19 12:52:31: Executing cmd: /u01/app/12.1.0.2/grid/bin/crsctl query crs softwarepatch racnode2
2016-09-19 12:52:31: Command output:
> Oracle Clusterware patch level on node racnode2 is [3696455212].
>End Command output
2016-09-19 12:52:31: Oracle Clusterware patch level on node 'racnode2' is [3696455212]
2016-09-19 12:52:31: setting ORAASM_UPGRADE to 1
2016-09-19 12:52:31: Executing cmd: /u01/app/12.1.0.2/grid/bin/crsctl query crs softwarepatch racnode1
2016-09-19 12:52:31: Command output:
> Oracle Clusterware patch level on node racnode1 is [3696455212].
>End Command output
2016-09-19 12:52:31: Oracle Clusterware patch level on node 'racnode1' is [3696455212]
2016-09-19 12:52:31: The local node has the same software patch level [3696455212] as remote node 'racnode1'
2016-09-19 12:52:31: Executing cmd: /u01/app/12.1.0.2/grid/bin/crsctl query crs releasepatch
2016-09-19 12:52:31: Command output:
> Oracle Clusterware release patch level is [3696455212] and the complete list of patches [19769480 20299023 20831110 21359755 21436941 21948354 22291127 23054246 23054327 23054341 ] have been applied on the local node.
>End Command output
2016-09-19 12:52:31: Oracle Clusterware release patch level is [3696455212]
2016-09-19 12:52:31: setting ORAASM_UPGRADE to 1
2016-09-19 12:52:31: Executing cmd: /u01/app/12.1.0.2/grid/bin/crsctl query crs activeversion -f
2016-09-19 12:52:31: Command output:
> Oracle Clusterware active version on the cluster is [12.1.0.2.0]. The cluster upgrade state is [NORMAL]. The cluster active patch level is [3696455212].
>End Command output
2016-09-19 12:52:31: Oracle Clusterware active patch level is [3696455212]
2016-09-19 12:52:31: The Clusterware active patch level [3696455212] has been updated to [3696455212].
2016-09-19 12:52:31: Postpatch: isLastNode is 1
2016-09-19 12:52:31: Last node: enable Mgmt DB globally
2016-09-19 12:52:31: Invoking "/u01/app/12.1.0.2/grid/bin/srvctl enable mgmtdb"
2016-09-19 12:52:31: trace file=/u01/app/12.1.0.2/grid/cfgtoollogs/crsconfig/srvmcfg1.log
2016-09-19 12:52:31: Running as user grid: /u01/app/12.1.0.2/grid/bin/srvctl enable mgmtdb
2016-09-19 12:52:31: Invoking "/u01/app/12.1.0.2/grid/bin/srvctl enable mgmtdb" as user "grid"
2016-09-19 12:52:31: Executing /bin/su grid -c "/u01/app/12.1.0.2/grid/bin/srvctl enable mgmtdb"
2016-09-19 12:52:31: Executing cmd: /bin/su grid -c "/u01/app/12.1.0.2/grid/bin/srvctl enable mgmtdb"
2016-09-19 12:52:32: Modifying 10.2 resources
2016-09-19 12:52:32: Invoking "/u01/app/12.1.0.2/grid/bin/srvctl upgrade model -pretb"
2016-09-19 12:52:32: trace file=/u01/app/12.1.0.2/grid/cfgtoollogs/crsconfig/srvmcfg2.log
2016-09-19 12:52:32: Executing /u01/app/12.1.0.2/grid/bin/srvctl upgrade model -pretb
2016-09-19 12:52:32: Executing cmd: /u01/app/12.1.0.2/grid/bin/srvctl upgrade model -pretb
2016-09-19 12:52:32: 'srvctl upgrade model -pretb' ... success
2016-09-19 12:52:32: Executing cmd: /u01/app/12.1.0.2/grid/bin/srvctl status mgmtdb -S 1
2016-09-19 12:52:32: Command output:
> #@=result[0]: dbunique_name={_mgmtdb} inst_name={-MGMTDB} node_name={racnode2} up={true} state_details={Open} internal_state={STABLE}
>End Command output
2016-09-19 12:52:32: Mgmtdb is running on node: racnode2; local node: racnode2
2016-09-19 12:52:32: Mgmtdb is running on the local node
2016-09-19 12:52:32: Starting to patch Mgmt DB ...
2016-09-19 12:52:32: Invoking "/u01/app/12.1.0.2/grid/sqlpatch/sqlpatch -db -MGMTDB"
2016-09-19 12:52:32: Running as user grid: /u01/app/12.1.0.2/grid/sqlpatch/sqlpatch -db -MGMTDB
2016-09-19 12:52:32: Invoking "/u01/app/12.1.0.2/grid/sqlpatch/sqlpatch -db -MGMTDB" as user "grid"
2016-09-19 12:52:32: Executing /bin/su grid -c "/u01/app/12.1.0.2/grid/sqlpatch/sqlpatch -db -MGMTDB"
2016-09-19 12:52:32: Executing cmd: /bin/su grid -c "/u01/app/12.1.0.2/grid/sqlpatch/sqlpatch -db -MGMTDB"
2016-09-19 12:53:39: Command output:
> SQL Patching tool version 12.1.0.2.0 on Mon Sep 19 12:52:32 2016
> Copyright (c) 2016, Oracle. All rights reserved.
>
> Connecting to database...OK
> Note: Datapatch will only apply or rollback SQL fixes for PDBs
> that are in an open state, no patches will be applied to closed PDBs.
> Please refer to Note: Datapatch: Database 12c Post Patch SQL Automation
> (Doc ID 1585822.1)
> Determining current state...done
> Adding patches to installation queue and performing prereq checks...done
> Installation queue:
> For the following PDBs: CDB$ROOT PDB$SEED RACNODE-CLUSTER
> Nothing to roll back
> The following patches will be applied:
> 23054246 (Database Patch Set Update : 12.1.0.2.160719 (23054246))
>
> Installing patches...
> Patch installation complete. Total patches installed: 3
>
> Validating logfiles...done
> SQL Patching tool complete on Mon Sep 19 12:53:39 2016
>End Command output
2016-09-19 12:53:39: Successfully patched Mgmt DB
2016-09-19 12:53:39: Invoking "/u01/app/12.1.0.2/grid/bin/cluutil -ckpt -oraclebase /u01/app/grid -writeckpt -name ROOTCRS_POSTPATCH -state SUCCESS"
2016-09-19 12:53:39: trace file=/u01/app/grid/crsdata/racnode2/crsconfig/cluutil7.log
2016-09-19 12:53:39: Running as user grid: /u01/app/12.1.0.2/grid/bin/cluutil -ckpt -oraclebase /u01/app/grid -writeckpt -name ROOTCRS_POSTPATCH -state SUCCESS
2016-09-19 12:53:39: s_run_as_user2: Running /bin/su grid -c ' echo CLSRSC_START; /u01/app/12.1.0.2/grid/bin/cluutil -ckpt -oraclebase /u01/app/grid -writeckpt -name ROOTCRS_POSTPATCH -state SUCCESS '
2016-09-19 12:53:39: Removing file /tmp/filei2kicY
2016-09-19 12:53:39: Successfully removed file: /tmp/filei2kicY
2016-09-19 12:53:39: pipe exit code: 0
2016-09-19 12:53:39: /bin/su successfully executed

2016-09-19 12:53:39: Succeeded in writing the checkpoint:'ROOTCRS_POSTPATCH' with status:SUCCESS
[+ASM2] grid@racnode2:~$

c) Querying dba_registry_sqlpatch in MGMTDB (both CDB$ROOT and the PDB) shows that MGMTDB has been patched as well.
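
To run the queries, connect to the GIMR instance from the grid home (a minimal sketch; note that the instance name “-MGMTDB” starts with a dash):

$ export ORACLE_HOME=/u01/app/12.1.0.2/grid
$ export ORACLE_SID=-MGMTDB
$ $ORACLE_HOME/bin/sqlplus / as sysdba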

-- check CDB$ROOT

SQL> set pagesize 120
SQL> set linesize 180
SQL> select * from dba_registry_sqlpatch;

PATCH_ID PATCH_UID VERSION FLAGS ACTION STATUS ACTION_TIME
---------- ---------- -------------------- ---------- --------------- --------------- ---------------------------------------------------------------------------
DESCRIPTION BUNDLE_SERIES BUNDLE_ID
---------------------------------------------------------------------------------------------------- ------------------------------ ----------
BUNDLE_DATA
--------------------------------------------------------------------------------
LOGFILE
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

23054246 20213895 12.1.0.2 NB APPLY SUCCESS 19-SEP-16 12.53.38.034518 PM
Database Patch Set Update : 12.1.0.2.160719 (23054246) PSU 160719
<bundledata version="12.1.0.2.1" series="Patch Set Update">
 <bundle id="1" des
/u01/app/grid/cfgtoollogs/sqlpatch/23054246/20213895/23054246_apply__MGMTDB_CDBROOT_2016Sep19_12_53_18.log



-- check PDB RACNODE-CLUSTER

SQL> alter session set container=racnode-cluster;

SQL> select * from dba_registry_sqlpatch;

PATCH_ID PATCH_UID VERSION FLAGS ACTION STATUS ACTION_TIME
---------- ---------- -------------------- ---------- --------------- --------------- ---------------------------------------------------------------------------
DESCRIPTION BUNDLE_SERIES BUNDLE_ID
---------------------------------------------------------------------------------------------------- ------------------------------ ----------
BUNDLE_DATA
--------------------------------------------------------------------------------
LOGFILE
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
23054246 20213895 12.1.0.2 NB APPLY SUCCESS 19-SEP-16 12.53.38.457190 PM
Database Patch Set Update : 12.1.0.2.160719 (23054246) PSU 160719
<bundledata version="12.1.0.2.1" series="Patch Set Update">
 <bundle id="1" des
/u01/app/grid/cfgtoollogs/sqlpatch/23054246/20213895/23054246_apply__MGMTDB_RACNODE-CLUSTER_2016Sep19_12_53_32.log

Is MGMTDB Using HugePages?

The GIMR database “-MGMTDB” uses HugePages if the Linux server is configured with HugePages.

HugePages was configured on the newly built Linux servers, and we did not change any configuration of the GIMR database “-MGMTDB”. According to the alert log, MGMTDB started using HugePages automatically once HugePages was configured.
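
HugePages usage can also be confirmed at the OS level (a standard Linux check, independent of this environment):

$ grep Huge /proc/meminfo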

**********************************************************************
Mon Sep 19 12:52:15 2016
Dump of system resources acquired for SHARED GLOBAL AREA (SGA)

Mon Sep 19 12:52:15 2016
 Per process system memlock (soft) limit = UNLIMITED
Mon Sep 19 12:52:15 2016
 Expected per process system memlock (soft) limit to lock
 SHARED GLOBAL AREA (SGA) into memory: 754M
Mon Sep 19 12:52:15 2016
 Available system pagesizes:
 4K, 2048K
Mon Sep 19 12:52:15 2016
 Supported system pagesize(s):
Mon Sep 19 12:52:15 2016
 PAGESIZE AVAILABLE_PAGES EXPECTED_PAGES ALLOCATED_PAGES ERROR(s)
Mon Sep 19 12:52:15 2016
 4K Configured 4 4 NONE
Mon Sep 19 12:52:15 2016
 2048K 59398 377 377 NONE
Mon Sep 19 12:52:15 2016
**********************************************************************
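
The numbers are consistent: the expected SGA lock size of 754M divided by the 2048K huge page size gives 754 MB / 2 MB = 377 pages, exactly the EXPECTED_PAGES and ALLOCATED_PAGES reported above for the 2048K page size.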

System parameters with non-default values:
 processes = 300
 cpu_count = 2
 sga_target = 752M
 control_files = "+OCR_VOTE/_MGMTDB/CONTROLFILE/current.260.922884271"
 db_block_size = 8192
 compatible = "12.1.0.2.0"
 cluster_database = FALSE
 db_create_file_dest = "+OCR_VOTE"
 _catalog_foreign_restore = FALSE
 undo_tablespace = "UNDOTBS1"
 _disable_txn_alert = 3
 _partition_large_extents = "false"
 remote_login_passwordfile= "EXCLUSIVE"
 db_domain = ""
 dispatchers = "(PROTOCOL=TCP) (SERVICE=-MGMTDBXDB)"
 job_queue_processes = 0
 _kolfuseslf = TRUE
 parallel_min_servers = 0
 audit_trail = "NONE"
 db_name = "_mgmtdb"
 open_cursors = 100
 pga_aggregate_target = 325M
 statistics_level = "TYPICAL"
 diagnostic_dest = "/u01/app/grid"
 enable_pluggable_database= TRUE

Interface (‘VirtualBox Host-Only Ethernet Adapter’) is not a Host-Only Adapter interface (VERR_INTERNAL_ERROR).

Refreshing the adapter's MAC address, i.e., generating a new random MAC address, fixes this Host-Only Adapter interface issue.

The laptop was upgraded from Windows 8 to Windows 10. After that, VirtualBox 5.0 failed to start up: there was no response at all, and the software neither came up nor showed any error message.

Then I uninstalled VirtualBox 5.0 and installed the latest VirtualBox 5.1.6. It was better this time, but when I tried to start up one Linux virtual server, I got this message:

Interface ('VirtualBox Host-Only Ethernet Adapter') is not a Host-Only Adapter interface (VERR_INTERNAL_ERROR).


Result Code:  E_FAIL (0x80004005)
Component:    ConsoleWrap
Interface:    IConsole {872da645-4a9b-1727-bee2-5585105b9eed}

Checked a couple of items; all looked fine:

a) Ran “VBoxManage list hostonlyifs”:

c:\Program Files\Oracle\VirtualBox>VBoxManage list hostonlyifs
Name: VirtualBox Host-Only Ethernet Adapter #2
GUID: be24ca57-3d51-4b39-bb51-5675a36f571b
DHCP: Disabled
IPAddress: 192.168.56.1
NetworkMask: 255.255.255.0
IPV6Address: fe80:0000:0000:0000:2561:106d:eb4c:7410
IPV6NetworkMaskPrefixLength: 64
HardwareAddress: 0a:00:27:00:00:18
MediumType: Ethernet
Status: Up
VBoxNetworkName: HostInterfaceNetworking-VirtualBox Host-Only Ethernet Adapter #2
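
For reference, if the interface itself were corrupted, it could also be removed and recreated from the command line (a sketch using the standard VBoxManage hostonlyif commands; this was not needed here):

c:\Program Files\Oracle\VirtualBox>VBoxManage hostonlyif remove "VirtualBox Host-Only Ethernet Adapter #2"
c:\Program Files\Oracle\VirtualBox>VBoxManage hostonlyif create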

b) Went to File->Preferences->Network->Host-only Network details; everything looked good.

c) The “VirtualBox NDIS6 Bridged Networking Driver” is checked in the adapter properties.

d) Clicked the refresh button to get a new “MAC address”.

e) With the new “MAC address” in place, everything worked fine; the Linux virtual machine started up successfully.

So the issue was fixed simply by giving the adapter a new MAC address. Amazing!