Fix Corrupt Block Using RMAN

The following errors appear in the alert log:

Thu Sep 23 20:06:34 2021
Hex dump of (file 23, block 774506) in trace file /u01/app/oracle/diag/rdbms/testdb/TETSDB/trace/TETSDB_ora_7558.trc
Corrupt block relative dba: 0x000bd16a (file 23, block 774506)
Bad check value found during backing up datafile
Data in bad block:
 type: 6 format: 2 rdba: 0x000bd16a
 last change scn: 0x071e.81e8b798 seq: 0x2 flg: 0x04
 spare1: 0x0 spare2: 0x0 spare3: 0x0
 consistency value in tail: 0xb7980602
 check value in block header: 0xe202
 computed block checksum: 0x82fe
Reread of blocknum=774506, file=/u05/oradata/TETSDB/cdr_01.dbf. found same corrupt data
Reread of blocknum=774506, file=/u05/oradata/TETSDB/cdr_01.dbf. found same corrupt data
Reread of blocknum=774506, file=/u05/oradata/TETSDB/cdr_01.dbf. found same corrupt data
Reread of blocknum=774506, file=/u05/oradata/TETSDB/cdr_01.dbf. found same corrupt data
Reread of blocknum=774506, file=/u05/oradata/TETSDB/cdr_01.dbf. found same corrupt data

SOLUTION

The corrupt block can be fixed online; there is no need to take the datafile or tablespace offline first.

Double-confirm the data corruption

Run the DBVERIFY utility

$dbv blocksize=8192 file=/u05/oradata/TETSDB/cdr_01.dbf
...
..
.
Corrupt block relative dba: 0x000bd16a (file 23, block 774506)
...
..
.
Total Pages Marked Corrupt : 1

Run the RMAN utility

RMAN> connect target /
RMAN> backup validate check logical database;
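
On a large database, the same validation can be limited to the affected datafile reported in the alert log (a sketch; datafile 23 is taken from the messages above):

RMAN> backup validate check logical datafile 23;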

Check block corruption from the database view

SQL> select * from  v$database_block_corruption;

     FILE#     BLOCK#     BLOCKS CORRUPTION_CHANGE# CORRUPTION_TYPE
---------- ---------- ---------- ------------------ ---------------
        23     774506          1                  0 CHECKSUM
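
It can also be useful to identify which segment owns the corrupt block before repairing it. A minimal sketch, using the file and block numbers reported above:

SQL> SELECT owner, segment_name, segment_type
     FROM   dba_extents
     WHERE  file_id = 23
     AND    774506 BETWEEN block_id AND block_id + blocks - 1;

If no rows are returned, the block belongs to free space and will simply be reformatted the next time it is used.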

Fix the corruption

$ rman target / catalog rman/passwd@rman
connected to target database: TESTDB (DBID=1234567890)
connected to recovery catalog database

RMAN> run {
allocate channel ch1 TYPE 'SBT_TAPE';
SEND 'NSR_ENV=(NSR_SERVER=BKP_SERVER,NSR_CLIENT=DB_SERVER)';
BLOCKRECOVER DATAFILE 23 block 774506;
}
...
..
.
archived log for thread 1 with sequence 51992 is already on disk as file /fra/TESTDB/archivelog/2021_09_23/o1_mf_1_51992_jnrmdprf_.arc
archived log for thread 1 with sequence 51993 is already on disk as file /fra/TESTDB/archivelog/2021_09_23/o1_mf_1_51993_jnrpkppq_.arc
media recovery complete, elapsed time: 00:05:08
Finished recover at 23-SEP-21
released channel: ch1

RMAN>
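
As an alternative on 11g and later, where BLOCKRECOVER is deprecated in favor of RECOVER ... BLOCK, every block listed in V$DATABASE_BLOCK_CORRUPTION can be repaired in one pass. A sketch, assuming the same SBT channel configuration as above:

RMAN> run {
allocate channel ch1 TYPE 'SBT_TAPE';
SEND 'NSR_ENV=(NSR_SERVER=BKP_SERVER,NSR_CLIENT=DB_SERVER)';
RECOVER CORRUPTION LIST;
}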

Check the progress:

SQL> select 
           username,sid, opname, target, sofar, totalwork, units, 
           to_char(start_time,'YYYYMMDD-HH24:MI:SS') StartTime, 
           time_remaining, message  
      from 
           V$SESSION_LONGOPS 
      where 
           time_remaining>60;

Finally, check and confirm that the corrupted block(s) have been fixed.

SQL> select * from  v$database_block_corruption;

no rows selected                                            
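
Optionally, DBVERIFY can be re-run against just the reported block to confirm the repair at the file level (a sketch using DBVERIFY's START and END block parameters):

$ dbv file=/u05/oradata/TETSDB/cdr_01.dbf blocksize=8192 start=774506 end=774506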

ORA-02449: unique/primary keys in table referenced by foreign keys

SQL> DROP TABLE CDR;
DROP TABLE CDR;
*
ERROR at line 1:
ORA-02449: unique/primary keys in table referenced by foreign keys

SOLUTION

Check offending reference(s)

SQL> SELECT  TABLE_NAME,CONSTRAINT_NAME,CONSTRAINT_TYPE,R_CONSTRAINT_NAME,STATUS
     FROM    DBA_CONSTRAINTS
     WHERE   OWNER = 'TEST'
     AND     CONSTRAINT_TYPE='R'
     AND     R_CONSTRAINT_NAME IN (SELECT CONSTRAINT_NAME FROM DBA_CONSTRAINTS WHERE TABLE_NAME='CDR');

TABLE_NAME     CONSTRAINT_NAME  C R_CONSTRAINT_NAME   STATUS
-------------- ---------------- - ------------------- --------
TRANSACTION    FK_TRANSACTION   R SYS_C0096900        ENABLED

Drop the offending reference(s)

Merely disabling the reference(s) does not work, because a disabled foreign key still exists in the data dictionary:

SQL>  alter table TRANSACTION disable constraint FK_TRANSACTION;

Table altered.

SQL> DROP TABLE CDR;
DROP TABLE CDR;
*
ERROR at line 1:
ORA-02449: unique/primary keys in table referenced by foreign keys

After dropping the offending reference(s), the drop succeeds:

SQL>  alter table TRANSACTION drop constraint FK_TRANSACTION;

Table altered.

SQL> DROP TABLE CDR;

Table dropped.

OR

Drop the table with the “CASCADE CONSTRAINTS” option.

SQL> DROP TABLE CDR CASCADE CONSTRAINTS;

Table dropped.
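
Note that CASCADE CONSTRAINTS drops only the referencing foreign key constraints, not the child table itself. To confirm that FK_TRANSACTION is gone, the earlier dictionary query can be reused (a sketch):

SQL> SELECT table_name, constraint_name, status
     FROM   dba_constraints
     WHERE  owner = 'TEST'
     AND    table_name = 'TRANSACTION'
     AND    constraint_type = 'R';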

ERROR: com.oracle.cie.gdr.utils.GdrException: Unable to find distribution xml file in distributions directory /tmp/sfx_TbqwSX/Disk1/stage

While upgrading OMS by running the following command, the errors below occur:

$ ./em13400_linux64.bin
Launcher log file is /tmp/OraInstall2021-09-14_10-33-04AM/launcher2021-09-14_10-33-04AM.log.
Extracting the installer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Done
Checking monitor: must be configured to display at least 256 colors.   Actual 16777216    Passed
Checking swap space: must be greater than 512 MB.   Actual 4095 MB    Passed
Checking if this platform requires a 64-bit JVM.   Actual 64    Passed (64-bit not required)
Preparing to launch the Oracle Universal Installer from /tmp/OraInstall2021-09-14_10-33-04AM
====Prereq Config Location main===
...
..
.
ERROR: com.oracle.cie.gdr.utils.GdrException: Unable to find distribution xml file in distributions directory /tmp/sfx_TbqwSX/Disk1/stage



The log(s) can be found here: /tmp/OraInstall2021-09-14_10-33-04AM.
ERROR: Installer execution failed (102).

WORKAROUND

Quit the current installer and rerun it:

$ ./em13400_linux64.bin
Launcher log file is /tmp/OraInstall2021-09-14_11-24-24AM/launcher2021-09-14_11-24-24AM.log.
Extracting the installer . . . . . . . . . . . . . . . . . . . . . . . Done
Checking monitor: must be configured to display at least 256 colors.   Actual 16777216    Passed
Checking swap space: must be greater than 512 MB.   Actual 4095 MB    Passed
Checking if this platform requires a 64-bit JVM.   Actual 64    Passed (64-bit not required)
Preparing to launch the Oracle Universal Installer from /tmp/OraInstall2021-09-14_11-24-24AM
====Prereq Config Location main===
...
..
.
***
update handshake state: finished[20]
upcoming handshake states: server change_cipher_spec[-1]
upcoming handshake states: server finished[20]
Tunneling Response Thread, WRITE: TLSv1.2 Handshake, length = 96
Tunneling Response Thread, READ: TLSv1.2 Change Cipher Spec, length = 1
update handshake state: change_cipher_spec
upcoming handshake states: server finished[20]
Tunneling Response Thread, READ: TLSv1.2 Handshake, length = 96
check handshake state: finished[20]
update handshake state: finished[20]
*** Finished
verify_data:  { 76, 49, 93, 34, 108, 97, 161, 214, 175, 4, 221, 224 }
***
%% Cached client session: [Session-5, TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384]
Thread-84, WRITE: TLSv1.2 Application Data, length = 384
Tunneling Response Thread, READ: TLSv1.2 Application Data, length = 1904
Thread-84, WRITE: TLSv1.2 Application Data, length = 384
Tunneling Response Thread, READ: TLSv1.2 Application Data, length = 736
Thread-84, WRITE: TLSv1.2 Application Data, length = 352
Tunneling Response Thread, READ: TLSv1.2 Application Data, length = 96
Thread-84, WRITE: TLSv1.2 Application Data, length = 560
Tunneling Response Thread, READ: TLSv1.2 Application Data, length = 848
Thread-84, WRITE: TLSv1.2 Application Data, length = 832
Tunneling Response Thread, READ: TLSv1.2 Application Data, length = 800
Thread-84, WRITE: TLSv1.2 Application Data, length = 384
Tunneling Response Thread, READ: TLSv1.2 Application Data, length = 176
detachHome was successful.
Logs successfully copied to /u01/app/oraInventory/logs.
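
If the extraction error persists after a rerun, pointing the installer at a different temporary location with more free space may help; the -J prefix passes options through to the installer JVM (an assumption: /u01/tmp is a hypothetical scratch directory with enough space):

$ mkdir -p /u01/tmp
$ ./em13400_linux64.bin -J-Djava.io.tmpdir=/u01/tmp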

After OMS 13.4 Upgrade: Your browser sent a request that this server could not understand

After upgrading OEM from 13.2 to 13.4, the following errors appear on the console GUI:

Bad Request
Your browser sent a request that this server could not understand.
Additionally, a 400 Bad Request error was encountered while trying to use an ErrorDocument to handle the request.

SYMPTOMS

emcli errors:

$emcli login -username=sysman
Enter password

Error: Already logged in as user "SYSMAN". Use "emcli logout" to logout the current user.
$ emcli logout
Error: The connection to the OMS is broken or has been actively interrupted by the OMS (usually due to a communication timeout). Check the log files for further details

After upgrading the OMS to version 13.4, all Agents fail to upload with the following error:

$emctl upload agent
Oracle Enterprise Manager Cloud Control 13c Release 3
Copyright (c) 1996, 2018 Oracle Corporation.  All rights reserved.
---------------------------------------------------------------
EMD upload error:full upload has failed: uploadXMLFiles skipped :: OMS version not checked yet. If this issue persists check trace files for ping to OMS related errors. (OMS_DOWN)

Securing the Agent reports the following errors:

$ emctl secure agent
Oracle Enterprise Manager Cloud Control 13c Release 3
Copyright (c) 1996, 2018 Oracle Corporation. All rights reserved.
Agent successfully stopped... Done.
Securing agent... Started.
Enter Agent Registration Password :
Agent successfully restarted... Done.
Securing agent... Failed.

$AGENT_INST_HOME/sysman/log/secure.log reports the following errors:

[13-09-2021 22:11:49] USERINFO ::Securing agent...   Started.
2021-09-13 22:12:04,154 [main] INFO  agent.SecureAgentCmd main.309 - emdWalletSrcUrl: https://oem_slb:4900/empbs/wallets/emd
2021-09-13 22:12:04,160 [main] INFO  agent.SecureAgentCmd secureAgent.392 - Querying HTTPS Upload Port of the OMS
2021-09-13 22:12:04,160 [main] INFO  agent.SecureAgentCmd openPage.987 - Opening: https://oem_slb:4900/empbs/genwallet
2021-09-13 22:12:04,457 [main] INFO  agent.SecureAgentCmd openPage.1014 - Response Status Code: 400
2021-09-13 22:12:04,458 [main] ERROR agent.SecureAgentCmd main.368 - Failed to secure the Agent:
java.lang.Exception: Didn't get a OK response from OMS
        at oracle.sysman.emctl.secure.agent.SecureAgentCmd.openPage(SecureAgentCmd.java:1017)
        at oracle.sysman.emctl.secure.agent.SecureAgentCmd.getOMSSecurePort(SecureAgentCmd.java:933)
        at oracle.sysman.emctl.secure.agent.SecureAgentCmd.secureAgent(SecureAgentCmd.java:393)
        at oracle.sysman.emctl.secure.agent.SecureAgentCmd.main(SecureAgentCmd.java:346)

$<gc_inst>/user_projects/domains/GCDomain/servers/ohs1/logs/ohs1.log reports the following errors:

[2021-09-13T22:24:28.4358+10:00] [OHS] [ERROR:32] [OH99999] [ossl] [client_id: xx.xx.xx.xx] [host_id: oms_host1] [host_addr: xx.xx.xx.xx] [pid: 30107] [tid: 140213158373120] [user: oracle] [VirtualHost: oem_slb:0] OHS:2079 Client SSL handshake error, nzos_Handshake returned 28860(server oem_slb:443)
[2021-09-13T22:24:28.4358+10:00] [OHS] [ERROR:32] [OH99999] [ossl] [host_id: oms_host1] [host_addr: xx.xx.xx.xx] [pid: 30107] [tid: 140213158373120] [user: oracle] [VirtualHost: oem_slb:0] OHS:2171 NZ Library Error: SSL fatal alert

CAUSE and SOLUTION


Oracle BI Server ‘obis1’ Status DOWN After OMS Upgrade

We just upgraded OMS from 13.2 to 13.4 and found that an Oracle BI Server target ‘obis1’ is listed as part of the GCDomain and shown with status DOWN in the EM Console.

CAUSE

It is a bug. The target ‘obis1’, shown with DOWN status, is an unwanted target that was introduced as part of the upgrade to version 13.4.

SOLUTION

Ignore the target, or disable or delete it using EMCLI.

$ emcli login -username=sysman

$ emcli sync

$ emcli modify_target -name="/GCDomain_GCDomain/GCDomain/obis1" -type="oracle_bi_server" -properties="DisabledTargetTypes:oracle_bi_server"
Target "/GCDomain_GCDomain/GCDomain/obis1:oracle_bi_server" modified successfully

OR just delete this target.

$ emcli delete_target -name="/GCDomain_GCDomain/GCDomain/obis1" -type="oracle_bi_server"
Target "/GCDomain_GCDomain/GCDomain/obis1:oracle_bi_server" deleted successfully