EM 13c: The agent has been blocked by the OMS

The OEM agent was blocked, with the following messages in the agent log:

...
..
.
2023-12-05 20:16:59,448 [158:905DEBE9] INFO - The agent has been blocked by the OMS.
2023-12-05 20:16:59,448 [158:905DEBE9] INFO - Reason the OMS blocked the agent: Resync
...
..
.

SOLUTION

  1. Perform agent resynchronization from the OEM console.

    AND / OR

  2. Modify the bounceCtr value in the agntstmp.txt file to one less than the repository value and save it (detailed steps and a command-line sketch below).

Get the BOUNCE_CTR value from the repository database as SYSMAN:


SQL>select BOUNCE_CTR from MGMT_EMD_PING where TARGET_GUID=(select target_guid from mgmt_targets where target_name='<agent_host_name>:<port>');

BOUNCE_CTR
----------
        29

For a healthy agent, the bounceCtr value in agntstmp.txt must be set to one less than the repository BOUNCE_CTR value (28 in this example).

1. Stop the agent:

$emctl stop agent

2. Take a backup of agntstmp.txt in <AGENT_HOME>/agent_inst/sysman/emd/, then edit the bounceCtr value:

From
bounceCtr=88
To
bounceCtr=28

3. Save the file and start the agent:

$emctl start agent
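
The edit and the post-restart check can also be done from the command line. This is a minimal sketch, assuming the agent layout shown above and a repository BOUNCE_CTR of 29 (so the file value becomes 28); the sed pattern is an assumption about the exact line format in agntstmp.txt:

$cd <AGENT_HOME>/agent_inst/sysman/emd
$cp -p agntstmp.txt agntstmp.txt.bak                      # keep a backup of the original file
$sed -i 's/^bounceCtr=.*/bounceCtr=28/' agntstmp.txt      # repository BOUNCE_CTR minus 1
$grep bounceCtr agntstmp.txt                              # confirm the new value
$emctl start agent
$emctl status agent                                       # verify the agent status and heartbeat
$emctl upload agent                                       # should succeed once the agent is unblocked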

PRVG-11138 PRVG-11352 When Running cluvfy or runcluvfy.sh

When running runcluvfy.sh or cluvfy, the following messages appear:

Multicast or broadcast check - This task checks that network interfaces in subnet are able to communicate over multicast group or broadcast IP address
Details:

PRVG-11138 : Interface "ens8" on node "racnode1" is not able to communicate with interface "ens8" on node "racnode1" over multicast group "224.0.0.251"
  - Cause:  The specified interfaces were not able to communicate using a multicast address.
  - Action:  Ensure that multicast is enabled on the specified interfaces and that a network path exists between the interfaces.

PRVG-11138 : Interface "ens8" on node "racnode1" is not able to communicate with interface "ens8" on node "racnode1" over multicast group "230.0.1.0"
  - Cause:  The specified interfaces were not able to communicate using a multicast address.
  - Action:  Ensure that multicast is enabled on the specified interfaces and that a network path exists between the interfaces.

PRVG-11352 : Interface "192.168.112.11" on node "racnode1" is not able to communicate with interface "192.168.112.11" on node "racnode1" with broadcast address "255.255.255.255"
  - Cause:  The specified interfaces were not able to communicate using the broadcast address.
  - Action:  Ensure that broadcast is enabled on the specified interfaces and that the network path allows broadcast.

SOLUTION

Stop and disable firewalld on all cluster nodes.

# systemctl status firewalld

# systemctl stop firewalld

# systemctl disable firewalld

# systemctl status firewalld
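
Once firewalld is stopped and disabled on every node, re-run the verification to confirm that the multicast and broadcast checks now pass. A minimal sketch, assuming a two-node cluster (racnode1, racnode2) and that runcluvfy.sh is run as the grid software owner from the staged software directory; the node names and path are placeholders:

$ cd <STAGE_DIR>/grid                                     # directory where the grid software was unzipped
$ ./runcluvfy.sh stage -pre crsinst -n racnode1,racnode2 -verbose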

gridSetup.sh ERROR: Unable to verify the graphical display setup. This application requires X display. Make sure that xdpyinfo exist under PATH variable.

When setting up 19c GI by running “gridSetup.sh”, the following errors occur:

 $ ./gridSetup.sh

ERROR: Unable to verify the graphical display setup. This application requires X display. Make sure that xdpyinfo exist under PATH variable.

No X11 DISPLAY variable was set, but this program performed an operation which requires it.

SOLUTION

Enable X11 forwarding in /etc/ssh/sshd_config.

# ls -ltr /etc/ssh/sshd_config
-rw-------. 1 root root 4131 Apr 4 2023 /etc/ssh/sshd_config

# vi /etc/ssh/sshd_config

X11Forwarding yes

# systemctl restart sshd
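
Before re-running gridSetup.sh, it is worth confirming that X forwarding works for the installing user. A minimal sketch, assuming the Grid Infrastructure owner is called grid, the client workstation runs an X server, and xdpyinfo comes from the xorg-x11-utils package (the user name, host name, and package name are assumptions):

# yum install -y xorg-x11-utils                           # as root, only if xdpyinfo is missing

$ ssh -Y grid@racnode1                                    # reconnect with X11 forwarding enabled
$ echo $DISPLAY                                           # should show a forwarded display such as localhost:10.0
$ xdpyinfo | head -3                                      # must print display information without errors
$ cd <GRID_HOME>                                          # directory where the grid software was unzipped
$ ./gridSetup.sh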

OPatch Error: This Java instance does not support a 64-bit JVM

When trying to run opatch, the following error occurs:

$ opatch

Error: This Java instance does not support a 64-bit JVM.
Please install the desired version.

OPatch failed with error code 1

SOLUTION

Download the correct OPatch for your platform.

For 19c opatch on Linux x86-64 download “p6880880_190000_Linux-x86-64.zip” instead of “p6880880_190000_Linux.zip”.
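
Once the correct zip is downloaded, replacing OPatch is just a matter of unzipping it into the Oracle home. A minimal sketch, assuming ORACLE_HOME is set for the home being patched and the zip was downloaded to /tmp (the download path is a placeholder):

$ cd $ORACLE_HOME
$ mv OPatch OPatch.old                                    # keep the existing OPatch as a fallback
$ unzip -q /tmp/p6880880_190000_Linux-x86-64.zip          # extracts a fresh OPatch directory
$ $ORACLE_HOME/OPatch/opatch version                      # should now report the installed OPatch version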

How to Add a Disk into ASM DiskGroup Safely

At a client site, there have been situations where adding a disk into an ASM diskgroup failed and, in the end, a server reboot was required to resolve the issue.

Following an Oracle Support suggestion, we can first create a test diskgroup using the new disk; if everything succeeds, we drop the test diskgroup and then add the new disk into the target diskgroup.

This is a good practice, especially for critical production environments.

Check That the New Disk Is a “CANDIDATE”

$sqlplus / as sysasm 


SQL>select INST_ID,GROUP_NUMBER,DISK_NUMBER,MOUNT_STATUS,HEADER_STATUS,
MODE_STATUS,STATE,REDUNDANCY,OS_MB,TOTAL_MB,NAME,
FAILGROUP,PATH
from gv$asm_disk
where PATH like '/dev/mapper/prod_data_303p1'
order by 1,2,3;

Create a Test Diskgroup

Create a test diskgroup “TEMP_TEST”, then check that the disk HEADER_STATUS is “MEMBER”:

SQL> CREATE DISKGROUP TEMP_TEST EXTERNAL REDUNDANCY disk '/dev/mapper/prod_data_303p1' NAME TEMP_TEST_0001;


Diskgroup created.
SQL>select INST_ID,GROUP_NUMBER,DISK_NUMBER,MOUNT_STATUS,HEADER_STATUS,
MODE_STATUS,STATE,REDUNDANCY,OS_MB,TOTAL_MB,NAME,
FAILGROUP,PATH
from gv$asm_disk
where PATH like '/dev/mapper/prod_data_303p1'
order by 1,2,3;

Mount the Test Diskgroup on Other Nodes

Mount the test diskgroup on all the remaining ASM nodes to make sure everything works fine.

$sqlplus / as sysasm

SQL>set pagesize 100
SQL>set linesize 300
SQL>select INST_ID,GROUP_NUMBER,NAME,STATE,TOTAL_MB,FREE_MB,
COMPATIBILITY,DATABASE_COMPATIBILITY
from gv$asm_diskgroup
where name='TEMP_TEST'
order by 1,2;
SQL> alter diskgroup TEMP_TEST mount;


Diskgroup altered.
SQL>select INST_ID,GROUP_NUMBER,NAME,STATE,TOTAL_MB,FREE_MB,
COMPATIBILITY,DATABASE_COMPATIBILITY
from gv$asm_diskgroup
where name='TEMP_TEST'
order by 1,2;

Dismount the Test Diskgroup on Other Nodes

Dismount the test diskgroup on all nodes except the first one, where it will be dropped in the next step.

SQL> alter diskgroup TEMP_TEST dismount;


Diskgroup altered.

SQL>select INST_ID,GROUP_NUMBER,NAME,STATE,TOTAL_MB,FREE_MB,
COMPATIBILITY,DATABASE_COMPATIBILITY
from gv$asm_diskgroup
where name='TEMP_TEST'
order by 1,2;

Drop the Test Diskgroup

Drop the test diskgroup on the first node, where it is still mounted.

SQL> DROP DISKGROUP TEMP_TEST;


Diskgroup dropped.

Then check that the new disk HEADER_STATUS is “FORMER”, which means it is now available to be added into the target diskgroup.

SQL>select INST_ID,GROUP_NUMBER,DISK_NUMBER,MOUNT_STATUS,HEADER_STATUS,
MODE_STATUS,STATE,REDUNDANCY,OS_MB,TOTAL_MB,NAME,
FAILGROUP,PATH
from gv$asm_disk
where PATH like '/dev/mapper/prod_data_303p1'
order by 1,2,3;

Add Disk into Target Diskgroup

Now it is time to add the new disk into the target diskgroup.

SQL> set time on

08:08:17 SQL> ALTER DISKGROUP PROD_DATA ADD DISK '/dev/mapper/prod_data_303p1' NAME PROD_DATA_0012;

Diskgroup altered.

Then check the disk and diskgroup status:

09:09:01 SQL>select INST_ID,GROUP_NUMBER,DISK_NUMBER,MOUNT_STATUS,
HEADER_STATUS,MODE_STATUS,STATE,REDUNDANCY,
OS_MB,TOTAL_MB,NAME,FAILGROUP,PATH
from gv$asm_disk
where PATH like '%prod_data%'
order by 1,2,3;
09:09:14 SQL>  select INST_ID,GROUP_NUMBER,NAME,STATE,TOTAL_MB,FREE_MB,
COMPATIBILITY,DATABASE_COMPATIBILITY
from gv$asm_diskgroup
where name='PROD_DATA' ;

Monitor and Check Rebalance

00:10:35 SQL> select GROUP_NUMBER,OPERATION,STATE,POWER,ACTUAL,
SOFAR,EST_WORK,EST_RATE,EST_MINUTES
from v$asm_operation;
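
The rebalance runs in the background, and the query above returns no rows once it has finished. Below is a minimal polling sketch (a hypothetical helper, not part of the original procedure), assuming it runs as the grid owner with the ASM environment (ORACLE_HOME, ORACLE_SID, PATH) already set; the 60-second interval is arbitrary:

#!/bin/bash
# check_rebalance.sh (hypothetical helper): poll v$asm_operation until the rebalance completes.
while :; do
  cnt=$(sqlplus -s / as sysasm <<'EOF'
set heading off feedback off pagesize 0
select count(*) from v$asm_operation;
EOF
  )
  if [ "$(echo "$cnt" | tr -d '[:space:]')" = "0" ]; then
    echo "Rebalance complete."
    break
  fi
  echo "Rebalance still running..."
  sleep 60
done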