When trying to create diskgroup FRA, I got the following errors:
SQL> CREATE DISKGROUP FRA EXTERNAL REDUNDANCY
DISK '/dev/oracleasm/disks/ASM_FRA01' NAME ASM_FRA01;
CREATE DISKGROUP FRA EXTERNAL REDUNDANCY DISK '/dev/oracleasm/disks/ASM_FRA01' NAME ASM_FRA01
*
ERROR at line 1:
ORA-15018: diskgroup cannot be created
ORA-15031: disk specification '/dev/oracleasm/disks/ASM_FRA01' matches no disks
ORA-15014: path '/dev/oracleasm/disks/ASM_FRA01' is not in the discovery set
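ORA-15014 usually means the path is not covered by the ASM instance's ASM_DISKSTRING discovery parameter (check it with `show parameter asm_diskstring` in the ASM instance). As a sketch of the glob matching that disk discovery performs, the snippet below tests a disk path against a discovery string; the discovery strings used here are assumed examples, not values from the original session:

```shell
#!/bin/sh
# Sketch: reproduce the glob matching that ASM disk discovery performs,
# to see why ORA-15014 is raised for a given path.
matches_diskstring() {
    disk="$1"
    pattern="$2"
    case "$disk" in
        $pattern) echo "MATCH" ;;
        *)        echo "NO MATCH" ;;
    esac
}

# If asm_diskstring is '/dev/oracleasm/disks/*', the FRA disk is discoverable:
matches_diskstring '/dev/oracleasm/disks/ASM_FRA01' '/dev/oracleasm/disks/*'
# If it points elsewhere (e.g. 'ORCL:*'), the same path is not in the discovery set:
matches_diskstring '/dev/oracleasm/disks/ASM_FRA01' 'ORCL:*'
```

If the path turns out to be missing from the discovery set, the usual fix is to widen the parameter in the ASM instance, e.g. `alter system set asm_diskstring='/dev/oracleasm/disks/*';` (an assumed value; keep any existing discovery paths in the list as well).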
MGMTDB is patched at the same time as GI when GI is upgraded or patched (PSU, RU/RUR, etc.).
Yes, while applying a PSU patch to GI, we can see that the GIMR database "-MGMTDB" is patched as well. This can be confirmed from the logs.
a) The log from "opatchauto apply" shows MGMTDB is patched while applying the GI PSU:
System initialization log file is /u01/app/12.1.0.2/grid/cfgtoollogs/opatchautodb/systemconfig2016-09-19_12-46-41PM.log.
Session log file is /u01/app/12.1.0.2/grid/cfgtoollogs/opatchauto/opatchauto2016-09-19_12-46-55PM.log
The id for this session is 2G1F
[init:init] Executing OPatchAutoBinaryAction action on home /u01/app/12.1.0.2/grid
Executing OPatch prereq operations to verify patch applicability on CRS Home........
[init:init] OPatchAutoBinaryAction action completed on home /u01/app/12.1.0.2/grid successfully
[init:init] Executing GIRACPrereqAction action on home /u01/app/12.1.0.2/grid
Executing prereq operations before applying on CRS Home........
[init:init] GIRACPrereqAction action completed on home /u01/app/12.1.0.2/grid successfully
[shutdown:shutdown] Executing GIShutDownAction action on home /u01/app/12.1.0.2/grid
Performing prepatch operations on CRS Home........
Prepatch operation log file location: /u01/app/12.1.0.2/grid/cfgtoollogs/crsconfig/crspatch_racnode2_2016-09-19_12-47-24AM.log
[shutdown:shutdown] GIShutDownAction action completed on home /u01/app/12.1.0.2/grid successfully
[offline:binary-patching] Executing OPatchAutoBinaryAction action on home /u01/app/12.1.0.2/grid
Start applying binary patches on CRS Home........
[offline:binary-patching] OPatchAutoBinaryAction action completed on home /u01/app/12.1.0.2/grid successfully
[startup:startup] Executing GIStartupAction action on home /u01/app/12.1.0.2/grid
Performing postpatch operations on CRS Home........
Postpatch operation log file location: /u01/app/12.1.0.2/grid/cfgtoollogs/crsconfig/crspatch_racnode2_2016-09-19_12-50-36AM.log
[startup:startup] GIStartupAction action completed on home /u01/app/12.1.0.2/grid successfully
[finalize:finalize] Executing OracleHomeLSInventoryGrepAction action on home /u01/app/12.1.0.2/grid
Verifying patches applied on CRS Home.
[finalize:finalize] OracleHomeLSInventoryGrepAction action completed on home /u01/app/12.1.0.2/grid successfully
OPatchAuto successful.
--------------------------------Summary--------------------------------
Patching is completed successfully. Please find the summary as follows:
Host:racnode2
CRS Home:/u01/app/12.1.0.2/grid
Summary:
==Following patches were SKIPPED:
Patch: /u01/app/software/23615308/23177536
Reason: This patch is not applicable to this specified target type - "cluster"
==Following patches were SUCCESSFULLY applied:
Patch: /u01/app/software/23615308/21436941
Log: /u01/app/12.1.0.2/grid/cfgtoollogs/opatchauto/core/opatch/opatch2016-09-19_12-47-55PM_1.log
Patch: /u01/app/software/23615308/23054246
Log: /u01/app/12.1.0.2/grid/cfgtoollogs/opatchauto/core/opatch/opatch2016-09-19_12-47-55PM_1.log
Patch: /u01/app/software/23615308/23054327
Log: /u01/app/12.1.0.2/grid/cfgtoollogs/opatchauto/core/opatch/opatch2016-09-19_12-47-55PM_1.log
Patch: /u01/app/software/23615308/23054341
Log: /u01/app/12.1.0.2/grid/cfgtoollogs/opatchauto/core/opatch/opatch2016-09-19_12-47-55PM_1.log
b) Check the postpatch operation log file /u01/app/12.1.0.2/grid/cfgtoollogs/crsconfig/crspatch_racnode2_2016-09-19_12-50-36AM.log:
$ tail -120 /u01/app/12.1.0.2/grid/cfgtoollogs/crsconfig/crspatch_racnode2_2016-09-19_12-50-36AM.log
...
....
2016-09-19 12:52:30: Executing cmd: /u01/app/12.1.0.2/grid/bin/srvctl config mgmtdb
2016-09-19 12:52:31: Command output:
> Database unique name: _mgmtdb
> Database name:
> Oracle home: <CRS home>
> Oracle user: grid
> Spfile: +OCR_VOTE/_MGMTDB/PARAMETERFILE/spfile.268.922884353
> Password file:
> Domain:
> Start options: open
> Stop options: immediate
> Database role: PRIMARY
> Management policy: AUTOMATIC
> Type: Management
> PDB name: racnode-cluster
> PDB service: racnode-cluster
> Cluster name: racnode-cluster
> Database instance: -MGMTDB
>End Command output
2016-09-19 12:52:31: isMgmtdbConfigured: 1
2016-09-19 12:52:31: setting ORAASM_UPGRADE to 1
2016-09-19 12:52:31: Executing cmd: /u01/app/12.1.0.2/grid/bin/crsctl query crs softwarepatch racnode2
2016-09-19 12:52:31: Command output:
> Oracle Clusterware patch level on node racnode2 is [3696455212].
>End Command output
2016-09-19 12:52:31: Oracle Clusterware patch level on node 'racnode2' is [3696455212]
2016-09-19 12:52:31: setting ORAASM_UPGRADE to 1
2016-09-19 12:52:31: Executing cmd: /u01/app/12.1.0.2/grid/bin/crsctl query crs softwarepatch racnode1
2016-09-19 12:52:31: Command output:
> Oracle Clusterware patch level on node racnode1 is [3696455212].
>End Command output
2016-09-19 12:52:31: Oracle Clusterware patch level on node 'racnode1' is [3696455212]
2016-09-19 12:52:31: The local node has the same software patch level [3696455212] as remote node 'racnode1'
2016-09-19 12:52:31: Executing cmd: /u01/app/12.1.0.2/grid/bin/crsctl query crs releasepatch
2016-09-19 12:52:31: Command output:
> Oracle Clusterware release patch level is [3696455212] and the complete list of patches [19769480 20299023 20831110 21359755 21436941 21948354 22291127 23054246 23054327 23054341 ] have been applied on the local node.
>End Command output
2016-09-19 12:52:31: Oracle Clusterware release patch level is [3696455212]
2016-09-19 12:52:31: setting ORAASM_UPGRADE to 1
2016-09-19 12:52:31: Executing cmd: /u01/app/12.1.0.2/grid/bin/crsctl query crs activeversion -f
2016-09-19 12:52:31: Command output:
> Oracle Clusterware active version on the cluster is [12.1.0.2.0]. The cluster upgrade state is [NORMAL]. The cluster active patch level is [3696455212].
>End Command output
2016-09-19 12:52:31: Oracle Clusterware active patch level is [3696455212]
2016-09-19 12:52:31: The Clusterware active patch level [3696455212] has been updated to [3696455212].
2016-09-19 12:52:31: Postpatch: isLastNode is 1
2016-09-19 12:52:31: Last node: enable Mgmt DB globally
2016-09-19 12:52:31: Invoking "/u01/app/12.1.0.2/grid/bin/srvctl enable mgmtdb"
2016-09-19 12:52:31: trace file=/u01/app/12.1.0.2/grid/cfgtoollogs/crsconfig/srvmcfg1.log
2016-09-19 12:52:31: Running as user grid: /u01/app/12.1.0.2/grid/bin/srvctl enable mgmtdb
2016-09-19 12:52:31: Invoking "/u01/app/12.1.0.2/grid/bin/srvctl enable mgmtdb" as user "grid"
2016-09-19 12:52:31: Executing /bin/su grid -c "/u01/app/12.1.0.2/grid/bin/srvctl enable mgmtdb"
2016-09-19 12:52:31: Executing cmd: /bin/su grid -c "/u01/app/12.1.0.2/grid/bin/srvctl enable mgmtdb"
2016-09-19 12:52:32: Modifying 10.2 resources
2016-09-19 12:52:32: Invoking "/u01/app/12.1.0.2/grid/bin/srvctl upgrade model -pretb"
2016-09-19 12:52:32: trace file=/u01/app/12.1.0.2/grid/cfgtoollogs/crsconfig/srvmcfg2.log
2016-09-19 12:52:32: Executing /u01/app/12.1.0.2/grid/bin/srvctl upgrade model -pretb
2016-09-19 12:52:32: Executing cmd: /u01/app/12.1.0.2/grid/bin/srvctl upgrade model -pretb
2016-09-19 12:52:32: 'srvctl upgrade model -pretb' ... success
2016-09-19 12:52:32: Executing cmd: /u01/app/12.1.0.2/grid/bin/srvctl status mgmtdb -S 1
2016-09-19 12:52:32: Command output:
> #@=result[0]: dbunique_name={_mgmtdb} inst_name={-MGMTDB} node_name={racnode2} up={true} state_details={Open} internal_state={STABLE}
>End Command output
2016-09-19 12:52:32: Mgmtdb is running on node: racnode2; local node: racnode2
2016-09-19 12:52:32: Mgmtdb is running on the local node
2016-09-19 12:52:32: Starting to patch Mgmt DB ...
2016-09-19 12:52:32: Invoking "/u01/app/12.1.0.2/grid/sqlpatch/sqlpatch -db -MGMTDB"
2016-09-19 12:52:32: Running as user grid: /u01/app/12.1.0.2/grid/sqlpatch/sqlpatch -db -MGMTDB
2016-09-19 12:52:32: Invoking "/u01/app/12.1.0.2/grid/sqlpatch/sqlpatch -db -MGMTDB" as user "grid"
2016-09-19 12:52:32: Executing /bin/su grid -c "/u01/app/12.1.0.2/grid/sqlpatch/sqlpatch -db -MGMTDB"
2016-09-19 12:52:32: Executing cmd: /bin/su grid -c "/u01/app/12.1.0.2/grid/sqlpatch/sqlpatch -db -MGMTDB"
2016-09-19 12:53:39: Command output:
> SQL Patching tool version 12.1.0.2.0 on Mon Sep 19 12:52:32 2016
> Copyright (c) 2016, Oracle. All rights reserved.
>
> Connecting to database...OK
> Note:  Datapatch will only apply or rollback SQL fixes for PDBs
>        that are in an open state, no patches will be applied to closed PDBs.
>        Please refer to Note: Datapatch: Database 12c Post Patch SQL Automation
>        (Doc ID 1585822.1)
> Determining current state...done
> Adding patches to installation queue and performing prereq checks...done
> Installation queue:
>   For the following PDBs: CDB$ROOT PDB$SEED RACNODE-CLUSTER
>     Nothing to roll back
>     The following patches will be applied:
>       23054246 (Database Patch Set Update : 12.1.0.2.160719 (23054246))
>
> Installing patches...
> Patch installation complete.  Total patches installed: 3
>
> Validating logfiles...done
> SQL Patching tool complete on Mon Sep 19 12:53:39 2016
>End Command output
2016-09-19 12:53:39: Successfully patched Mgmt DB
2016-09-19 12:53:39: Invoking "/u01/app/12.1.0.2/grid/bin/cluutil -ckpt -oraclebase /u01/app/grid -writeckpt -name ROOTCRS_POSTPATCH -state SUCCESS"
2016-09-19 12:53:39: trace file=/u01/app/grid/crsdata/racnode2/crsconfig/cluutil7.log
2016-09-19 12:53:39: Running as user grid: /u01/app/12.1.0.2/grid/bin/cluutil -ckpt -oraclebase /u01/app/grid -writeckpt -name ROOTCRS_POSTPATCH -state SUCCESS
2016-09-19 12:53:39: s_run_as_user2: Running /bin/su grid -c ' echo CLSRSC_START; /u01/app/12.1.0.2/grid/bin/cluutil -ckpt -oraclebase /u01/app/grid -writeckpt -name ROOTCRS_POSTPATCH -state SUCCESS '
2016-09-19 12:53:39: Removing file /tmp/filei2kicY
2016-09-19 12:53:39: Successfully removed file: /tmp/filei2kicY
2016-09-19 12:53:39: pipe exit code: 0
2016-09-19 12:53:39: /bin/su successfully executed
2016-09-19 12:53:39: Succeeded in writing the checkpoint:'ROOTCRS_POSTPATCH' with status:SUCCESS
[+ASM2] grid@racnode2:~$
c) Querying dba_registry_sqlpatch in the CDB root and in the PDB shows MGMTDB has been patched as well.
-- check CDB$ROOT
SQL> set pagesize 120
SQL> set linesize 180
SQL> select * from dba_registry_sqlpatch;
PATCH_ID PATCH_UID VERSION FLAGS ACTION STATUS ACTION_TIME
---------- ---------- -------------------- ---------- --------------- --------------- ---------------------------------------------------------------------------
DESCRIPTION BUNDLE_SERIES BUNDLE_ID
---------------------------------------------------------------------------------------------------- ------------------------------ ----------
BUNDLE_DATA
--------------------------------------------------------------------------------
LOGFILE
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
23054246 20213895 12.1.0.2 NB APPLY SUCCESS 19-SEP-16 12.53.38.034518 PM
Database Patch Set Update : 12.1.0.2.160719 (23054246) PSU 160719
<bundledata version="12.1.0.2.1" series="Patch Set Update">
<bundle id="1" des
/u01/app/grid/cfgtoollogs/sqlpatch/23054246/20213895/23054246_apply__MGMTDB_CDBROOT_2016Sep19_12_53_18.log
-- check PDB RACNODE-CLUSTER
SQL> alter session set container=racnode-cluster;
SQL> select * from dba_registry_sqlpatch;
PATCH_ID PATCH_UID VERSION FLAGS ACTION STATUS ACTION_TIME
---------- ---------- -------------------- ---------- --------------- --------------- ---------------------------------------------------------------------------
DESCRIPTION BUNDLE_SERIES BUNDLE_ID
---------------------------------------------------------------------------------------------------- ------------------------------ ----------
BUNDLE_DATA
--------------------------------------------------------------------------------
LOGFILE
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
23054246 20213895 12.1.0.2 NB APPLY SUCCESS 19-SEP-16 12.53.38.457190 PM
Database Patch Set Update : 12.1.0.2.160719 (23054246) PSU 160719
<bundledata version="12.1.0.2.1" series="Patch Set Update">
<bundle id="1" des
/u01/app/grid/cfgtoollogs/sqlpatch/23054246/20213895/23054246_apply__MGMTDB_RACNODE-CLUSTER_2016Sep19_12_53_32.log
The GIMR database "-MGMTDB" uses HugePages if the Linux server is configured with HugePages.
HugePages was configured on the newly built Linux servers, and we did not change any configuration of the GIMR database "-MGMTDB". According to the alert log, MGMTDB simply starts using HugePages automatically once HugePages is configured.
**********************************************************************
Mon Sep 19 12:52:15 2016
Dump of system resources acquired for SHARED GLOBAL AREA (SGA)
Mon Sep 19 12:52:15 2016
Per process system memlock (soft) limit = UNLIMITED
Mon Sep 19 12:52:15 2016
Expected per process system memlock (soft) limit to lock
SHARED GLOBAL AREA (SGA) into memory: 754M
Mon Sep 19 12:52:15 2016
Available system pagesizes:
4K, 2048K
Mon Sep 19 12:52:15 2016
Supported system pagesize(s):
Mon Sep 19 12:52:15 2016
PAGESIZE AVAILABLE_PAGES EXPECTED_PAGES ALLOCATED_PAGES ERROR(s)
Mon Sep 19 12:52:15 2016
4K Configured 4 4 NONE
Mon Sep 19 12:52:15 2016
2048K 59398 377 377 NONE
Mon Sep 19 12:52:15 2016
**********************************************************************
System parameters with non-default values:
processes = 300
cpu_count = 2
sga_target = 752M
control_files = "+OCR_VOTE/_MGMTDB/CONTROLFILE/current.260.922884271"
db_block_size = 8192
compatible = "12.1.0.2.0"
cluster_database = FALSE
db_create_file_dest = "+OCR_VOTE"
_catalog_foreign_restore = FALSE
undo_tablespace = "UNDOTBS1"
_disable_txn_alert = 3
_partition_large_extents = "false"
remote_login_passwordfile= "EXCLUSIVE"
db_domain = ""
dispatchers = "(PROTOCOL=TCP) (SERVICE=-MGMTDBXDB)"
job_queue_processes = 0
_kolfuseslf = TRUE
parallel_min_servers = 0
audit_trail = "NONE"
db_name = "_mgmtdb"
open_cursors = 100
pga_aggregate_target = 325M
statistics_level = "TYPICAL"
diagnostic_dest = "/u01/app/grid"
enable_pluggable_database= TRUE
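The 2048K line in the alert log above (377 pages expected and allocated) follows directly from the SGA size. A small sketch of that arithmetic, using the ~754 MB figure from the memlock line of this alert log; on a real system you would sum the SGAs of all instances (Oracle's hugepages_settings.sh script from MOS Doc ID 401749.1 automates this):

```shell
#!/bin/sh
# Sketch: estimate how many 2 MB HugePages an SGA of a given size needs.
# 754 MB is the value reported in the alert log above.
sga_mb=754
hugepage_kb=2048
# Round up: pages = ceil(sga_mb * 1024 / hugepage_kb)
pages=$(( (sga_mb * 1024 + hugepage_kb - 1) / hugepage_kb ))
echo "$pages"
```

This yields 377, matching the ALLOCATED_PAGES column for the 2048K page size; vm.nr_hugepages on the host must be at least the sum of such estimates across all instances that should use HugePages.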
Refreshing the MAC address (getting a new random MAC address) fixes this Host-Only Adapter interface issue.
The laptop was upgraded from Windows 8 to Windows 10. After that, VirtualBox 5.0 failed to start and there was no response at all: the software neither came up nor showed any error message.
I then uninstalled VirtualBox 5.0 and installed the latest VirtualBox 5.1.6, which was better. But when I tried to start one Linux virtual server, I got this message:
Interface ('VirtualBox Host-Only Ethernet Adapter') is not a Host-Only Adapter interface (VERR_INTERNAL_ERROR).
Result Code:
E_FAIL (0x80004005)
Component:
ConsoleWrap
Interface:
IConsole {872da645-4a9b-1727-bee2-5585105b9eed}
It is good practice to recycle listener logs, which grow in size day by day.
The listener logs, including those of the SCAN listeners, grow very fast and become large, so we need to purge them periodically. We could stop the listener and then rename/delete/zip the logs, but stopping the listener at an arbitrary time is not good practice because it impacts the business.
Here is a simple way to purge listener logs online:
$ lsnrctl
LSNRCTL for Linux: Version 12.1.0.2.0 - Production on 21-SEP-2016 10:17:10
Copyright (c) 1991, 2014, Oracle. All rights reserved.
Welcome to LSNRCTL, type "help" for information.
LSNRCTL> set
The following operations are available after set
An asterisk (*) denotes a modifier or extended command:
rawmode displaymode
trc_file trc_directory
trc_level log_file
log_directory log_status
current_listener inbound_connect_timeout
startup_waittime save_config_on_stop
dynamic_registration enable_global_dynamic_endpoint
connection_rate_limit valid_node_checking_registration
registration_invited_nodes registration_excluded_nodes
LSNRCTL> set current_listener listener
Current Listener is listener
LSNRCTL> show log_status
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER)))
listener parameter "log_status" set to ON
The command completed successfully
LSNRCTL> set log_status off
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER)))
listener parameter "log_status" set to OFF
The command completed successfully
LSNRCTL> exit
$ cd /u01/app/grid/diag/tnslsnr/racnode1/listener/trace
$ ls -ltr
total 223388
...
..
.
-rw-r----- 1 grid oinstall 146333508 Sep 21 10:17 listener.log
...
..
.
$ cp listener.log listener_old.log
$ cat /dev/null > listener.log
$ lsnrctl
LSNRCTL for Linux: Version 12.1.0.2.0 - Production on 21-SEP-2016 10:17:52
Copyright (c) 1991, 2014, Oracle. All rights reserved.
Welcome to LSNRCTL, type "help" for information.
LSNRCTL> set current_listener listener
Current Listener is listener
LSNRCTL> show log_status
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER)))
listener parameter "log_status" set to OFF
The command completed successfully
LSNRCTL> set log_status on
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER)))
listener parameter "log_status" set to ON
The command completed successfully
LSNRCTL> exit
$ ls -ltr
total 224816
...
..
.
-rw-r----- 1 grid oinstall 146333508 Sep 21 10:17 listener_old.log
-rw-r----- 1 grid oinstall 2834 Sep 21 10:18 listener.log
...
..
.
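The interactive steps above can be wrapped into a small script. This is a sketch, not from the original post: the function archives the current log and then truncates it in place (truncating, rather than removing, keeps the inode the listener has open); pausing logging with `set log_status off`/`set log_status on` around the copy, as shown above, is still recommended.

```shell
#!/bin/sh
# Sketch: archive and truncate a listener log in place.
rotate_listener_log() {
    logfile="$1"
    stamp=$(date +%Y%m%d_%H%M%S)
    cp "$logfile" "${logfile%.log}_${stamp}.log"   # keep a timestamped copy
    : > "$logfile"                                 # truncate; the listener keeps writing to the same inode
}

# Example (path from the walkthrough above):
# rotate_listener_log /u01/app/grid/diag/tnslsnr/racnode1/listener/trace/listener.log
```

The archived copies can then be compressed or removed on a schedule (e.g. from cron) without ever stopping the listener.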