OPatch or OPatchAuto Takes a Long Time Due to a High Number of Inactive Patches


“opatch lsinventory -inactive” shows many inactive patches.
A large number of inactive patches can slow down the opatch apply process.
The issue still reproduces after executing “opatch util deleteinactivepatches”; in this case, RETAIN_INACTIVE_PATCHES in opatch.properties was set to 2.
There is also a known issue with certain Oracle homes: when more than one inactive PSU or RU is present, deleteinactivepatches has to be run multiple times.

SOLUTION

Execute deleteinactivepatches without setting RETAIN_INACTIVE_PATCHES=2 in opatch.properties (the default is RETAIN_INACTIVE_PATCHES=1):

$ cat $ORACLE_HOME/OPatch/config/opatch.properties

OPATCH_HEAP_MEMORY=3072
PS_OBFUSCATION=true
RETAIN_INACTIVE_PATCHES=1
SKIP_FUSER_WARNINGS=true

$ opatch util deleteinactivepatches -silent

Display inactive patches:

$ opatch lsinventory -inactive

Repeat the previous steps until only one inactive patch (for each product) is left.
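Because more than one pass can be needed when several RUs or PSUs are inactive, the cleanup and the check can be wrapped in a small loop. This is only a sketch; the number of passes and the review of the lsinventory output are left to the operator:

#!/bin/bash
# Sketch: run deleteinactivepatches a few passes and show what is still
# inactive after each pass; stop once only one inactive patch per product
# remains in the lsinventory output.
OPATCH=$ORACLE_HOME/OPatch/opatch
for pass in 1 2 3; do
    echo "=== deleteinactivepatches pass $pass ==="
    "$OPATCH" util deleteinactivepatches -silent
    "$OPATCH" lsinventory -inactive
done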

“ORA-16642: DB_UNIQUE_NAME mismatch” while adding a Standby Database into DG Broker

The following error occurs while trying to add a standby database to the Data Guard broker configuration.

DGMGRL> ADD DATABASE TESTDB AS CONNECT IDENTIFIER IS TESTDB_DR MAINTAINED AS PHYSICAL;
Error: ORA-16642: DB_UNIQUE_NAME mismatch

REASON

Here the primary and standby databases both have DB_UNIQUE_NAME set to “TESTDB”. Make sure the DB_UNIQUE_NAME is different for the primary and every standby database.
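A minimal sketch of the fix, assuming the standby is to be renamed to TESTDB_DR (DB_UNIQUE_NAME is a static parameter, so the standby must be restarted, and related settings such as DG_CONFIG on the primary and any TNS entries must be updated to match).

On the standby:

SQL> show parameter db_unique_name
SQL> alter system set db_unique_name='TESTDB_DR' scope=spfile;
SQL> shutdown immediate
SQL> startup mount

Then re-add it to the broker configuration:

DGMGRL> ADD DATABASE TESTDB_DR AS CONNECT IDENTIFIER IS TESTDB_DR MAINTAINED AS PHYSICAL;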

How to Manually Apply OCW Release Update onto Database Home

After applying the 19.24 RU, it was found that the OCW patch had not been applied to the database home, so it has to be applied manually.

$ORACLE_HOME/OPatch/opatch lspatches
...
..
.
30159782;OCW Interim patch for 30159782

For 19.24, the lspatches output should include “36587798;OCW RELEASE UPDATE 19.24.0.0.0 (36587798)”. Let’s apply the patch manually.

$ cd 36582629/36587798
$ /u01/app/oracle/product/19.0.0/dbhome_1/OPatch/opatch apply
Oracle Interim Patch Installer version 12.2.0.1.44
Copyright (c) 2024, Oracle Corporation. All rights reserved.


Oracle Home       : /u01/app/oracle/product/19.0.0/dbhome_1
Central Inventory : /u01/app/oraInventory
   from           : /u01/app/oracle/product/19.0.0/dbhome_1/oraInst.loc
OPatch version    : 12.2.0.1.44
OUI version       : 12.2.0.7.0
Log file location : /u01/app/oracle/product/19.0.0/dbhome_1/cfgtoollogs/opatch/opatch2024-11-26_11-01-26AM_1.log

Verifying environment and performing prerequisite checks...

--------------------------------------------------------------------------------
Start OOP by Prereq process.
Launch OOP...

Oracle Interim Patch Installer version 12.2.0.1.44
Copyright (c) 2024, Oracle Corporation. All rights reserved.


Oracle Home       : /u01/app/oracle/product/19.0.0/dbhome_1
Central Inventory : /u01/app/oraInventory
   from           : /u01/app/oracle/product/19.0.0/dbhome_1/oraInst.loc
OPatch version    : 12.2.0.1.44
OUI version       : 12.2.0.7.0
Log file location : /u01/app/oracle/product/19.0.0/dbhome_1/cfgtoollogs/opatch/opatch2024-11-26_11-01-58AM_1.log

Verifying environment and performing prerequisite checks...
OPatch continues with these patches: 36587798

Do you want to proceed? [y|n]
y
User Responded with: Y
All checks passed.

Please shutdown Oracle instances running out of this ORACLE_HOME on the local system.
(Oracle Home = '/u01/app/oracle/product/19.0.0/dbhome_1')


Is the local system ready for patching? [y|n]
y
User Responded with: Y
Backing up files...
Applying interim patch '36587798' to OH '/u01/app/oracle/product/19.0.0/dbhome_1'
ApplySession: Optional component(s) [ oracle.has.crs, 19.0.0.0.0 ] , [ oracle.rhp.crs, 19.0.0.0.0 ] , [ oracle.xag, 19.0.0.0.0 ] , [ oracle.has.cvu, 19.0.0.0.0 ] , [ oracle.has.crs.cvu, 19.0.0.0.0 ] not present in the Oracle Home or a higher version is found.

Patching component oracle.rdbms, 19.0.0.0.0...

Patching component oracle.rhp.common, 19.0.0.0.0...

Patching component oracle.has.common, 19.0.0.0.0...

Patching component oracle.has.common.cvu, 19.0.0.0.0...

Patching component oracle.rhp.db, 19.0.0.0.0...

Patching component oracle.has.db, 19.0.0.0.0...

Patching component oracle.has.db.cvu, 19.0.0.0.0...

Patching component oracle.has.rsf, 19.0.0.0.0...
Patch 36587798 successfully applied.
Sub-set patch [30159782] has become inactive due to the application of a super-set patch [36587798].
Please refer to Doc ID 2161861.1 for any possible further required actions.
Log file location: /u01/app/oracle/product/19.0.0/dbhome_1/cfgtoollogs/opatch/opatch2024-11-26_11-01-58AM_1.log

OPatch succeeded.
$ /u01/app/oracle/product/19.0.0/dbhome_1/OPatch/opatch lspatches

...
..
.
36587798;OCW RELEASE UPDATE 19.24.0.0.0 (36587798)
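As the OPatch messages above note, subset patch 30159782 becomes inactive once 36587798 is applied. A quick check (the grep filter is only illustrative) confirms the RU-level OCW patch is now listed; the now-inactive 30159782 can later be cleaned up with “opatch util deleteinactivepatches” as described in the first section:

$ $ORACLE_HOME/OPatch/opatch lspatches | grep -i OCW
36587798;OCW RELEASE UPDATE 19.24.0.0.0 (36587798)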

How to Connect Oracle GoldenGate Veridata Agent to Autonomous Data Warehouse

This post demonstrates how to connect the Oracle GoldenGate Veridata Agent to Autonomous Data Warehouse (ADW) and Autonomous Transaction Processing (ATP) using an Oracle Wallet.

1. Download Wallet_ADWTESTDB.zip from the ADW/ATP console and unzip it into the wallet directory (here /home/oracle/cert/Wallet_ADWTESTDB):
$ unzip Wallet_ADWTESTDB.zip
Archive:  Wallet_ADWTESTDB.zip
  inflating: cwallet.sso
  inflating: tnsnames.ora
  inflating: truststore.jks
  inflating: ojdbc.properties
  inflating: sqlnet.ora
  inflating: ewallet.p12
  inflating: keystore.jks
2. Add the following to sqlnet.ora:
WALLET_LOCATION = (SOURCE = (METHOD = file) (METHOD_DATA = (DIRECTORY="/home/oracle/cert/Wallet_ADWTESTDB")))

SSL_SERVER_DN_MATCH=no
3. Uncomment the following line in ojdbc.properties:
oracle.net.wallet_location=(SOURCE=(METHOD=FILE)(METHOD_DATA=(DIRECTORY=${TNS_ADMIN})))
4. Copy the JDBC jars into the Oracle GoldenGate Veridata Agent installation location:
$ ls -ltr /u01/app/oracle/product/veridata23c_agent/agent/drivers/ojdbc8.jar
-rw-r----- 1 oracle oinstall 4535290 Nov 2 2:20 /u01/app/oracle/product/veridata23c_agent/agent/drivers/ojdbc8.jar
5. Point the Veridata Agent to the JDBC 18.3 jars with the following entries in agent.properties:
server.jdbcDriver=ojdbc8.jar 
server.driversLocation = /u01/app/oracle/product/veridata23c_agent/agent/drivers
6. Change the database URL in agent.properties:
database.url=jdbc:oracle:thin:@adwtestdb_low?TNS_ADMIN=/home/oracle/cert/Wallet_ADWTESTDB
7. Restart the agent:
$ ./agent.sh stop
$ ./agent.sh start agent.properties
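Before testing from Veridata, the wallet itself can be verified with any Oracle client installed on the agent host. A quick sketch (the ADMIN user is just an example; the adwtestdb_low service alias comes from this wallet's tnsnames.ora):

$ export TNS_ADMIN=/home/oracle/cert/Wallet_ADWTESTDB
$ sqlplus admin@adwtestdb_low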

GI Postpatch Command Fails Due To rpm Database Corruption At The OS Level

The following errors occur when running “$ORACLE_HOME/crs/install/rootcrs.sh -postpatch”:

2024-11-09 13:31:20: Executing cmd: /bin/rpm -q sles-release
2024-11-09 13:31:20: Command output:
>  error: rpmdb: BDB0113 Thread/process 128527/140400711669824 failed: BDB1507 Thread died in Berkeley DB library
>  error: db5 error(-30973) from dbenv->failchk: BDB0087 DB_RUNRECOVERY: Fatal error, run database recovery
>  error: cannot open Packages index using db5 -  (-30973)
>  error: cannot open Packages database in /var/lib/rpm
>  error: rpmdb: BDB0113 Thread/process 128527/140400711669824 failed: BDB1507 Thread died in Berkeley DB library
>  error: db5 error(-30973) from dbenv->failchk: BDB0087 DB_RUNRECOVERY: Fatal error, run database recovery
>  error: cannot open Packages database in /var/lib/rpm
>  package sles-release is not installed
>End Command output
2024-11-09 13:31:20: Check if the startup mechanism systemd is being used
2024-11-09 13:31:20: Executing cmd: /bin/rpm -qf /sbin/init
2024-11-09 13:31:20: Command output:
>  error: rpmdb: BDB0113 Thread/process 128527/140400711669824 failed: BDB1507 Thread died in Berkeley DB library
>  error: db5 error(-30973) from dbenv->failchk: BDB0087 DB_RUNRECOVERY: Fatal error, run database recovery
>  error: cannot open Packages index using db5 -  (-30973)
>  error: cannot open Packages database in /var/lib/rpm
>  error: rpmdb: BDB0113 Thread/process 128527/140400711669824 failed: BDB1507 Thread died in Berkeley DB library
>  error: db5 error(-30973) from dbenv->failchk: BDB0087 DB_RUNRECOVERY: Fatal error, run database recovery
>  error: cannot open Packages database in /var/lib/rpm
>  error: rpmdb: BDB0113 Thread/process 128527/140400711669824 failed: BDB1507 Thread died in Berkeley DB library
>  error: db5 error(-30973) from dbenv->failchk: BDB0087 DB_RUNRECOVERY: Fatal error, run database recovery
>  error: cannot open Packages database in /var/lib/rpm
>  file /sbin/init is not owned by any package
>End Command output
2024-11-09 13:31:20: Executing cmd: /u01/app/19.0.0/grid/bin/clsecho -p has -f clsrsc -m 180 '/bin/rpm -qf /sbin/init' '1'
2024-11-09 13:31:20: Executing cmd: /u01/app/19.0.0/grid/bin/clsecho -p has -f clsrsc -m 180 '/bin/rpm -qf /sbin/init' '1'
2024-11-09 13:31:20: Command output:
>  CLSRSC-180: An error occurred while executing the command '/bin/rpm -qf /sbin/init'
>End Command output
2024-11-09 13:31:20: CLSRSC-180: An error occurred while executing the command '/bin/rpm -qf /sbin/init'
2024-11-09 13:31:20: ###### Begin DIE Stack Trace ######
2024-11-09 13:31:20:     Package         File                 Line Calling
2024-11-09 13:31:20:     --------------- -------------------- ---- ----------
2024-11-09 13:31:20:  1: main            rootcrs.pl            358 crsutils::dietrap
2024-11-09 13:31:20:  2: s_crsutils      s_crsutils.pm        2729 main::__ANON__
2024-11-09 13:31:20:  3: s_crsutils      s_crsutils.pm        2701 s_crsutils::s_checkLinuxInitMethod
2024-11-09 13:31:20:  4: s_crsutils      s_crsutils.pm        3712 s_crsutils::s_is_Linux_Systemd
2024-11-09 13:31:20:  5: oraClusterwareComp::oraohasd oraohasd.pm          1537 s_crsutils::s_copy_afdinit_init
2024-11-09 13:31:20:  6: crspatch        crspatch.pm           730 oraClusterwareComp::oraohasd::update_OHASD
2024-11-09 13:31:20:  7: crspatch        crspatch.pm          1912 crspatch::updateSystemFiles
2024-11-09 13:31:20:  8: crspatch        crspatch.pm          2249 crspatch::performPostPatch
2024-11-09 13:31:20:  9: crspatch        crspatch.pm           526 crspatch::crsPostPatch
2024-11-09 13:31:20: 10: main            rootcrs.pl            371 crspatch::new
2024-11-09 13:31:20: ####### End DIE Stack Trace #######

2024-11-09 13:31:20: ROOTCRS_POSTPATCH_UPDATE_OHASD_SERVICE checkpoint has failed

SOLUTION

The OS-level RPM database is corrupted.

Recreate the RPM database at the OS level to resolve this issue:

1.  As root OS user run the following:

# rm -f /var/lib/rpm/__*
# /bin/rpm --rebuilddb
# echo $?

2.  Validate the rpm database, as the root user:

# /bin/rpm -qa | more


3.  If the last command lists the RPMs correctly, re-run the postpatch command or resume your opatchauto patching session:

As root OS user:

# cd $GI_HOME/crs/install

For a Cluster Grid Infrastructure installation:
# ./rootcrs.sh -postpatch

For a Restart Grid Infrastructure installation:
# ./roothas.sh -postpatch
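The rebuild and validation from steps 1 and 2 can also be run in a single root session; the package and file queried below are simply the ones that failed in the rootcrs trace above:

# rm -f /var/lib/rpm/__*
# /bin/rpm --rebuilddb && echo "rebuild OK"
# /bin/rpm -q sles-release
# /bin/rpm -qf /sbin/init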

Reference

Postpatch Command Fails on Grid Infrastructure Home Due To rpm Database Corruption At The OS Level (Doc ID 2365433.1)