ORA-16751: failed to switchover to physical standby database and Switchover Ends with Two Physical Standby Databases

While switching over from the primary database to the physical standby database, the operation ran into an issue where both databases ended up as physical standbys.

DGMGRL> switchover to "STDBYDB"
Performing switchover NOW, please wait...
Operation requires a connection to instance "STDBYDB" on database "STDBYDB"

Connecting to instance "STDBYDB"...
Connected.

Error: ORA-16751: failed to switchover to physical standby database

Failed.

Unable to switchover, primary database is still "PRIMDB"

DGMGRL> 
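
At this point it helps to ask the broker for its own view of the configuration; the standard DGMGRL command below reports each member's role and any error the broker has recorded:

DGMGRL> show configuration;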

--
-- Check database role
--

NAME       OPEN_MODE            DATABASE_ROLE    
--------- -------------------- ---------------- 
PRIMDB    CLOSED BY SWITCHOVER PHYSICAL STANDBY  
STDBYDB   MOUNTED              PHYSICAL STANDBY
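
For reference, the role output above comes from running a query along these lines on each instance in turn:

SQL> select name, open_mode, database_role from v$database;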

Data Guard Broker log:

07/19/2019 08:07:55
SQL Execution error=604, 
sql=[ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY WAIT WITH SESSION SHUTDOWN]. 
See error stack below.
ORA-00604: error occurred at recursive SQL level 1
ORA-00344: unable to re-create online log '/u05/oralog/PRIMDB/redo1a.log'
ORA-27040: file create error, unable to create file
Linux-x86_64 Error: 2: No such file or directory
Additional information:
switchover to primary command failed
Database Resource SetState Error (16751)
Command SWITCHOVER TO STDBYDB completed with error ORA-16751

Data Guard cannot re-create the online redo logs on the standby side during the role transition. The reason is that the following two initialization parameters are not set up correctly.
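
Before changing anything, it is worth confirming what the standby currently has; the checks below are generic examples run on the standby side:

SQL> show parameter file_name_convert

SQL> select group#, member from v$logfile;

The second query lists the online redo log members recorded in the standby controlfile; without the convert parameters they typically still point at the primary's paths, which is why the re-create fails.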

On the standby side, the parameters should look like this:

DB_FILE_NAME_CONVERT  = 'PRIMDB','STDBYDB'
LOG_FILE_NAME_CONVERT = 'PRIMDB','STDBYDB'
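
Both parameters are static, so as a rough sketch (assuming an spfile is in use), they can be set on the standby with SCOPE=SPFILE and take effect at the next restart:

SQL> alter system set db_file_name_convert='PRIMDB','STDBYDB' scope=spfile;

SQL> alter system set log_file_name_convert='PRIMDB','STDBYDB' scope=spfile;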

Solution

Log on to the standby database server (the database that is to become the new primary).

SQL>select database_role from v$database;

DATABASE_ROLE
----------------
PHYSICAL STANDBY

SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.

SQL> startup mount
ORACLE instance started.
...
..
.
Database mounted.


SQL> alter database recover managed standby database cancel;

Database altered.

SQL> alter database recover managed standby database finish;

Database altered.

SQL> ALTER database commit to switchover to primary with session shutdown;

Database altered.

SQL> alter database open;

Database altered.

SQL>  select database_role from v$database;

DATABASE_ROLE
----------------
PRIMARY

Finally, rebuild the Data Guard Broker configuration.
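
A minimal sketch of the rebuild, assuming STDBYDB ends up as the new primary, the old primary has already been brought back as a valid physical standby, and using a placeholder configuration name:

DGMGRL> remove configuration;
DGMGRL> create configuration 'dg_config' as primary database is 'STDBYDB' connect identifier is STDBYDB;
DGMGRL> add database 'PRIMDB' as connect identifier is PRIMDB;
DGMGRL> enable configuration;
DGMGRL> show configuration;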


“No space left on device” even with AWS RDS Storage Auto Scaling Enabled

A PostgreSQL RDS instance got “No space left on device” errors even though RDS Storage Auto Scaling was enabled. The storage could only be extended again after at least 6 hours.

For now, although Storage Auto Scaling helps you increase storage on your RDS instance dynamically, you should still configure the initial storage for your DB instance to a size appropriate for your typical workload.
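
As a sketch (the instance identifier and sizes are placeholder values), both the initial allocation and the autoscaling ceiling can be set with the AWS CLI; --max-allocated-storage is the setting that enables Storage Auto Scaling on an existing instance:

aws rds modify-db-instance \
    --db-instance-identifier mydb \
    --allocated-storage 200 \
    --max-allocated-storage 1000 \
    --apply-immediately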

Auto Scaling kicks in when all of the following apply:

  • Free available space is less than 10 percent of the allocated storage.
  • The low-storage condition lasts at least five minutes.
  • At least six hours have passed since the last storage modification.

We can also see that the increase in storage is either 5 GB or 10 percent of your currently allocated storage, whichever is greater. That means if you have an instance with 200 GB of storage right now, your ‘auto allocated’ increase would be 20 GB on top of 200 GB, giving 220 GB. The allocated storage then cannot change again for at least 6 hours.
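
To see how close an instance is to that 10 percent threshold, the FreeStorageSpace metric in CloudWatch can be checked; the command below is a generic example with a placeholder instance name and time range:

aws cloudwatch get-metric-statistics \
    --namespace AWS/RDS \
    --metric-name FreeStorageSpace \
    --dimensions Name=DBInstanceIdentifier,Value=mydb \
    --start-time 2019-07-19T00:00:00Z \
    --end-time 2019-07-19T06:00:00Z \
    --period 300 \
    --statistics Minimum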

Remember the 6-hour policy when relying on Storage Auto Scaling.