SQL> select object_type, object_name
from dba_objects
where owner='SYS' and status!='VALID';
OBJECT_TYPE OBJECT_NAME
------------------- --------------------------
RULE SET SYS$SERVICE_METRICS_N
Recompile the invalid objects.
SQL> @$ORACLE_HOME/rdbms/admin/utlrp.sql
SQL> select object_type, object_name from dba_objects where owner='SYS' and status!='VALID';
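Alternatively, a minimal sketch assuming only SYS objects are invalid: the UTL_RECOMP package, which utlrp.sql relies on internally, can recompile a single schema.
SQL> exec UTL_RECOMP.RECOMP_SERIAL('SYS');
Re-running the query above should then confirm that no invalid SYS objects remain.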
-- get all index names of table testuser.test
testdb=> \d testuser.test
...
..
.
-- get size of index
testdb=> select pg_size_pretty( pg_relation_size('testuser.ix_test_id'));
pg_size_pretty
----------------
942 MB
(1 row)
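The two steps can also be combined; a sketch assuming the same testuser.test table, using the pg_indexes catalog view to list every index on the table together with its size:
testdb=> select indexname,
                pg_size_pretty(pg_relation_size(
                    (quote_ident(schemaname) || '.' || quote_ident(indexname))::regclass))
         from pg_indexes
         where schemaname = 'testuser' and tablename = 'test';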
In Oracle Enterprise Manager 13c (13.2.0.0.0), the auto-extend tablespace corrective action does not work as expected, specifically when calculating space usage.
*** BEGIN Parameters ***
Increase Bigfile Size (%): 5
Maximum Disk Usage (%): 95
How to Increase Space: Increase by %
...
..
.
ASM disk group is: DATA
Last created datafile: +DATA/RACTEST/3F9C860784456287E053530F040ADB20/DATAFILE/test.372.1018794017
Tablespace total size (MB): 899964.00
Largest numeric suffix: 0
Datafile filename: test.372
Datafile directory: +DATA/RACTEST/3F9C860784456287E053530F040ADB20/DATAFILE/
Datafile suffix: .1018794017
ASM space usage for disk group DATA is free(MB): 5817030.000, total(MB): 28311415.000, required for mirroring(MB): 0.000, redundancy: 1
ASM projected safely usable free space (MB) is: 1317210.000
ASM projected Space Used (%) is: 95.35
Not enough disk space left in disk group DATA to extend datafile, 95.35% > 95%
Disconnected from database
From the CA job trace file, we can see:
Increase Bigfile Size (%): 5
Maximum Disk Usage (%): 95
How to Increase Space: Increase by %
ASM space usage for disk group DATA is free(MB): 5817030.000, total(MB): 28311415.000
Current big tablespace size: 899964.00
ASM projected safely usable free space (MB) is: 1317210.000
ASM projected Space Used (%) is: 95.35
Not enough disk space left in disk group DATA to extend datafile, 95.35% > 95%
Disconnected from database
ASM projected safely usable free space (MB) should be:
Current disk group free space - Current tablespace size * Increase % / 100
= 5817030.000 - 899964 * 5/100 = 5772031.8 MB
However, the Corrective Action trace file shows the following ASM projected safely usable free space (MB):
ASM projected safely usable free space (MB) is: 1317210.000
Let's check the source code shipped with the Oracle Enterprise Manager Agent: the code misses the division by 100 when applying the tablespace increase percentage. Indeed, 5817030.000 - 899964 * 5 = 1317210.000, exactly the value reported in the trace.
$ cd /u01/app/oracle/product/agent/agent_13.2.0.0.0/plugins/oracle.sysman.db.agent.plugin_13.2.1.0.0/scripts/db
$ ls -l dbAddSpaceTS.pl
...
..
.
# Case 1: Increase existing datafile by %
if ($bIncrByPct)
{
$diskFreeMB = $dirAvailMB - ($tsSizeMB * $incrPct);
}
...
..
.
# Case 1: Increase by %
if ($bIncrByPct)
{
$safelyUsableMB = ($dirAvailMB - ($tsSizeMB * $incrPct) - $dgReqMirror) / $dgRedundancy;
}
...
..
.
So the corrected code should be:
# Case 1: Increase existing datafile by %
if ($bIncrByPct)
{
$diskFreeMB = $dirAvailMB - ($tsSizeMB * $incrPct)/100;
}
...
..
.
# Case 1: Increase by %
if ($bIncrByPct)
{
$safelyUsableMB = ($dirAvailMB - ($tsSizeMB * $incrPct)/100 - $dgReqMirror) / $dgRedundancy;
}
...
..
.
After modifying the code, the Corrective Action job works as expected: with the values above, the corrected formula projects 5817030.000 - 899964 * 5/100 = 5772031.8 MB of safely usable free space, roughly 79.6% projected space used, well below the 95% threshold.
$ cluvfy comp clocksync -n all -verbose
The NTP configuration file "/etc/ntp.conf" is available on all nodes
NTP configuration file "/etc/ntp.conf" existence check passed
Checking daemon liveness...
Check: Liveness for "ntpd"
Node Name Running?
---------- ------------------------
racnode2 no
racnode1 no
PRVF-7590 : "ntpd" is not running on node "racnode2"
PRVF-7590 : "ntpd" is not running on node "racnode1"
PRVG-1024 : The NTP Daemon or Service was not running on any of the
cluster nodes.
PRVF-5415 : Check to see if NTP daemon or service is running failed
Result: Clock synchronization check using Network Time Protocol(NTP)
failed
PRVF-9652 : Cluster Time Synchronization Services check failed
Verification of Clock Synchronization across the cluster nodes was
unsuccessful on all the specified nodes.
But the ntpd daemon process is running:
# systemctl status ntpd
● ntpd.service - Network Time Service
Loaded: loaded (/usr/lib/systemd/system/ntpd.service; disabled; vendor preset: disabled)
Active: active (running) since Sun 2019-09-08 21:06:46 AEST; 58min ago
Process: 2755 ExecStart=/usr/sbin/ntpd -u ntp:ntp $OPTIONS (code=exited, status=0/SUCCESS)
Main PID: 2756 (ntpd)
CGroup: /system.slice/ntpd.service
└─2756 /usr/sbin/ntpd -u ntp:ntp -g
From the trace file, we can see that it complains "file check failed" for the file "/var/run/ntpd.pid".
$ tail -20 /tmp/cvutrace/cvutrace.log.0
[main] [ 2019-09-08 21:47:03.312 EST ] [VerificationLogData.traceLogData:251] FINE: [Task.perform:514]
m_nodeList='racnode2,racnode1'
[main] [ 2019-09-08 21:47:03.312 EST ] [VerificationLogData.traceLogData:251] INFO: [sVerificationUtil.getUniqueDistributionID:494] DistributionID[0]:7.2
[main] [ 2019-09-08 21:47:03.312 EST ] [VerificationLogData.traceLogData:251] INFO: [sVerificationUtil.getUniqueDistributionID:559] ==== Distribution Id determined to be OL7
[main] [ 2019-09-08 21:47:03.312 EST ] [VerificationLogData.traceLogData:251] FINE: [VerificationCommand.execute:297]
Output: '<CV_VRES>1</CV_VRES><CV_LOG>Exectask: file check failed</CV_LOG><CV_CMDLOG><CV_INITCMD>/tmp/CVU_12.1.0.2.0_grid/exectask -chkfile /var/run/ntpd.pid</CV_INITCMD><CV_CMD>access() /var/run/ntpd.pid F_OK</CV_CMD><CV_CMDOUT></CV_CMDOUT><CV_CMDSTAT>2</CV_CMDSTAT></CV_CMDLOG><CV_ERES>0</CV_ERES>'
[main] [ 2019-09-08 21:47:03.313 EST ] [VerificationLogData.traceLogData:251] FINE: [VerificationCommand.execute:297]
Output: '<CV_VRES>1</CV_VRES><CV_LOG>Exectask: file check failed</CV_LOG><CV_CMDLOG><CV_INITCMD>/tmp/CVU_12.1.0.2.0_grid/exectask -chkfile /var/run/ntpd.pid </CV_INITCMD><CV_CMD>access() /var/run/ntpd.pid F_OK</CV_CMD><CV_CMDOUT></CV_CMDOUT><CV_CMDSTAT>2</CV_CMDSTAT></CV_CMDLOG><CV_ERES>0</CV_ERES>'
[main] [ 2019-09-08 21:47:03.313 EST ] [VerificationLogData.traceLogData:251] ERROR: [Result.addErrorDescription:624] PRVF-7590 : "ntpd" is not running on node "racnode2"
[main] [ 2019-09-08 21:47:03.313 EST ] [VerificationLogData.traceLogData:251] ERROR: [Result.addErrorDescription:624] PRVF-7590 : "ntpd" is not running on node "racnode1"
[main] [ 2019-09-08 21:47:03.313 EST ] [VerificationLogData.traceLogData:251] FINE: [Task.perform:594]
TaskDaemonLiveliness:Daemon Liveness[TASKDAEMONLIVELINESS]:TASK_SUMMARY:FAILED:CRITICAL:VERIFICATION_FAILED
ERRORMSG(racnode2): PRVF-7590 : "ntpd" is not running on node "racnode2"
ERRORMSG(racnode1): PRVF-7590 : "ntpd" is not running on node "racnode1"
[main] [ 2019-09-08 21:47:03.313 EST ] [VerificationLogData.traceLogData:251] ERROR: [ResultSet.addErrorDescription:1102] PRVG-1024 : The NTP Daemon or Service was not running on any of the cluster nodes.
[main] [ 2019-09-08 21:47:03.313 EST ] [VerificationLogData.traceLogData:251] ERROR: [ResultSet.addErrorDescription:1102] PRVF-5415 : Check to see if NTP daemon or service is running failed
[main] [ 2019-09-08 21:47:03.313 EST ] [VerificationLogData.traceLogData:251] FINE: [Task.perform:594]
TaskCTSSIntegrity:Clock Synchronization[TASKCTSSINTEGRITY]:TASK_SUMMARY:FAILED:CRITICAL:VERIFICATION_FAILED
ERRORMSG(GLOBAL): PRVF-5415 : Check to see if NTP daemon or service is running failed
[main] [ 2019-09-08 21:47:03.313 EST ] [CluvfyDriver.main:360] ==== cluvfy exiting normally.
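Since exectask checks for the daemon by testing access() on the pid file, cluvfy concludes ntpd is not running simply because "/var/run/ntpd.pid" does not exist. This is easy to confirm manually (a sketch; the exact message depends on your platform):
# ls -l /var/run/ntpd.pid
ls: cannot access /var/run/ntpd.pid: No such file or directory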
SOLUTION
As per the Oracle Grid Infrastructure installation documentation, configure the ntpd service to start with a pid file. Edit "/etc/sysconfig/ntpd" and modify the OPTIONS line as shown below, then restart ntpd.
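A minimal sketch of the change, assuming the stock Oracle Linux 7 file (preserve any other options already present); Oracle's documentation recommends running ntpd with the -x slewing option and an explicit pid file:
# /etc/sysconfig/ntpd
OPTIONS="-x -u ntp:ntp -p /var/run/ntpd.pid"
# systemctl restart ntpd
After restarting ntpd on both nodes, re-run the check: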
$ cluvfy comp clocksync -n all -verbose
Verifying Clock Synchronization across the cluster nodes
Checking if Clusterware is installed on all nodes...
Oracle Clusterware is installed on all nodes.
Checking if CTSS Resource is running on all nodes...
Check: CTSS Resource running on all nodes
Node Name Status
--------- ------------------------
racnode1 passed
racnode2 passed
CTSS resource check passed
Querying CTSS for time offset on all nodes...
Query of CTSS for time offset passed
Check CTSS state started...
Check: CTSS state
Node Name State
------------ ------------------------
racnode2 Observer
racnode1 Observer
CTSS is in Observer state. Switching over to clock synchronization
checks using NTP
Starting Clock synchronization checks using Network Time Protocol(NTP)
Checking existence of NTP configuration file "/etc/ntp.conf" across
nodes
Node Name File exists?
--------- -----------------------
racnode2 yes
racnode1 yes
The NTP configuration file "/etc/ntp.conf" is available on all nodes
NTP configuration file "/etc/ntp.conf" existence check passed
Checking daemon liveness...
Check: Liveness for "ntpd"
Node Name Running?
---------- ------------------------
racnode2 yes
racnode1 yes
Result: Liveness check passed for "ntpd"
Check for NTP daemon or service alive passed on all nodes
Checking whether NTP daemon or service is using UDP port 123
on all nodes
Check for NTP daemon or service using UDP port 123
Node Name Port Open?
---------- -----------------------
racnode2 yes
racnode1 yes
Check for synchronization of NTP daemon with at least one external
time source passed on all nodes.
Result: Clock synchronization check using Network Time Protocol(NTP)
passed
Oracle Cluster Time Synchronization Services check passed
Verification of Clock Synchronization across the cluster nodes was
successful.