Physical Standby database creation using FlashRecover Snapshots

To create a physical standby database, DBAs generally take an RMAN backup of the primary database, transfer it to the standby system, and restore or duplicate it to create the standby database.  Depending on the size of the database and the distance between the sites, the backup and restore, as well as the transfer, can take a long time.

What if we could save significant time by using Pure FlashArray’s snapshot functionality to create the standby database quickly?  This blog post details the steps to create a physical standby database using FlashRecover snapshot functionality.

Note: I am not suggesting you move away from application-level replication like Oracle Data Guard or Active Data Guard; rather, use a storage-level feature like snapshots to quickly provision the standby database.

The following are the high-level steps to create an Oracle 12.2 physical standby database.

  1. Preparation of Primary Database for Standby Creation
  2. Step-by-step Standby Database creation
    1. Create a backup copy of the Primary database files
    2. Create a control file and a parameter file for the Standby database
    3. Copy database files from the Primary system to the Standby system
    4. Setup the Environment to support the Standby database
    5. Start the Physical Standby database and the apply process
    6. Verify the Physical Standby database is performing properly

The most time-consuming steps in the list above are #2.1, #2.3, and #2.5, which involve backing up the database, transferring the backup, and restoring it.

This is where Pure FlashArray’s snapshot functionality can help.  You can back up the database by snapshotting, at the storage level, all the volumes that make up the database; the snapshot is instantaneous irrespective of the size of the database.

As standby databases are ideally set up across two sites, you can use Pure FlashArray’s asynchronous replication functionality, which is again based on Pure’s FlashRecover snapshots, to transfer the snapshots from the source site to the remote site.  As the data in FlashArray is always deduped and compressed, only the highly reduced data set is replicated to the remote site the very first time, and subsequent transfers include just the delta from the prior transfer.
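
For illustration, here is a minimal sketch of what the cross-array setup could look like using a protection group.  The protection group name (oltp-pg) and the target array name are hypothetical, the two arrays are assumed to be already connected, and the exact flags may vary by Purity version, so verify them against the FlashArray CLI reference:

# Create a protection group that replicates to the remote array
# (pgroup and target names are hypothetical)
ssh pureuser@sn1-x70-f05-27 purepgroup create --targetlist remote-array oltp-pg
# Add the database volumes to the protection group
ssh pureuser@sn1-x70-f05-27 purepgroup setattr --addvollist dg_oraprod_data01,dg_oraprod_data02,dg_oraprod_fra oltp-pg
# Take a point-in-time snapshot of the whole group and replicate it immediately
ssh pureuser@sn1-x70-f05-27 purepgroup snap --suffix standbycopy --replicate-now oltp-pg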

For example, if the used space (not allocated space) of a database is, say, 15TB, creating the standby database the conventional way would mean transferring all 15TB over the WAN (unless it is compressed first, which takes up more CPU cycles on the database server) to get a copy of the database.  In the case of Pure FlashArray, assuming a 3:1 data reduction, the array sends only 5TB of data to the remote site, saving about 67% of the network traffic and reducing the time to provision the standby database, as there is no backup and restore/duplicate process involved.

You can find more details about FlashRecover snapshots in Pure Storage’s documentation.

Assumptions

  1. Two physical servers, oraprod and oradev, are used in this exercise.  Both run Oracle Linux 7.4.
  2. The primary server oraprod hosts the 12.2 database OLTP, which is up and running.
  3. The standby server oradev will host the standby database OLTP_SBY at the end of this exercise.
  4. The source database uses ASM storage:
    +DATA/OLTP (datafiles, tempfiles, online redo logs, and control file) and +FRA/OLTP (archived logs)
  5. The standby database will also be on ASM, with the files under
    +DATA/OLTP_SBY and +FRA/OLTP_SBY
  6. The Primary and Standby databases are hosted on the same FlashArray in this exercise.  In the real world, they might be (and should be) hosted on two different FlashArrays across different sites to fulfill the disaster recovery requirement.

The following picture illustrates the environment used in this exercise.

Here are the detailed steps to create the Physical Standby database using FlashRecover Snapshot.

1. Preparation of Primary Database for Standby creation

1.1 Primary role Initialization Parameters

Set up the LOG_ARCHIVE_CONFIG and LOG_ARCHIVE_DEST_n parameters to point to the standby database’s db_unique_name, which should be different from the primary database’s db_unique_name.
Set up FAL_SERVER, DB_FILE_NAME_CONVERT, and LOG_FILE_NAME_CONVERT in case the primary has to transition to the standby role.

In this example, both the db_name and the db_unique_name are OLTP at the Primary database, whereas at the Standby database the db_name is OLTP and the db_unique_name is OLTP_SBY.

DB_NAME=OLTP
DB_UNIQUE_NAME=OLTP
LOG_ARCHIVE_CONFIG='DG_CONFIG=(OLTP,OLTP_SBY)'
CONTROL_FILES='+DATA/OLTP/CONTROLFILE/control01.ctl'
LOG_ARCHIVE_DEST_1=
 'LOCATION=USE_DB_RECOVERY_FILE_DEST 
  VALID_FOR=(ALL_LOGFILES,ALL_ROLES)
  DB_UNIQUE_NAME=OLTP'
LOG_ARCHIVE_DEST_2=
 'SERVICE=OLTP_SBY ASYNC
  VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) 
  DB_UNIQUE_NAME=OLTP_SBY'
REMOTE_LOGIN_PASSWORDFILE=EXCLUSIVE
LOG_ARCHIVE_FORMAT=%t_%s_%r.arc

FAL_SERVER=OLTP_SBY
DB_FILE_NAME_CONVERT='/OLTP_SBY/','/OLTP/'
LOG_FILE_NAME_CONVERT='/OLTP_SBY/','/OLTP/' 
STANDBY_FILE_MANAGEMENT=AUTO

The following commands are executed on the Primary database:

SQL> alter system set LOG_ARCHIVE_CONFIG='DG_CONFIG=(OLTP,OLTP_SBY)';
SQL> alter system set log_archive_dest_1='LOCATION=USE_DB_RECOVERY_FILE_DEST VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=OLTP';
SQL> alter system set log_archive_dest_2='SERVICE=OLTP_SBY ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=OLTP_SBY';
SQL> alter system set log_archive_dest_state_2=ENABLE;
SQL> alter system set LOG_ARCHIVE_FORMAT='%t_%s_%r.arc' scope=SPFILE;
SQL> alter system set REMOTE_LOGIN_PASSWORDFILE=EXCLUSIVE scope=SPFILE;

SQL> alter system set FAL_SERVER=OLTP_SBY;
SQL> alter system set DB_FILE_NAME_CONVERT='/OLTP_SBY/','/OLTP/' SCOPE=SPFILE;
SQL> alter system set LOG_FILE_NAME_CONVERT='/OLTP_SBY/','/OLTP/' SCOPE=SPFILE;
SQL> alter system set STANDBY_FILE_MANAGEMENT=AUTO;

1.2 Services Setup

Make sure the TNS entries for both the Primary and the Standby databases are included in the $ORACLE_HOME/network/admin/tnsnames.ora file on both servers (Primary and Standby).

OLTP =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)
               (HOST = oraprod.puretec.purestorage.com)
               (PORT = 1521))
    (CONNECT_DATA =(SERVICE_NAME = OLTP)
    )
  )
OLTP_SBY =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)
               (HOST = oradev.puretec.purestorage.com)
               (PORT = 1521))
    (CONNECT_DATA =(SERVICE_NAME = OLTP_SBY)
    )
  )
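
Once the standby listener is up (see section 2.4.6), you can verify name resolution and listener reachability from both servers with tnsping:

[oracle@oraprod ~]$ tnsping OLTP_SBY
[oracle@oradev ~]$ tnsping OLTP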

1.3 Logging

The primary database should be in archivelog mode with force logging enabled.  Shut down and start up the primary database for the updated init.ora parameters to take effect.

SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup mount;
ORACLE instance started.

Total System Global Area 8589934592 bytes
Fixed Size 12180200 bytes
Variable Size 4496295192 bytes
Database Buffers 4060086272 bytes
Redo Buffers 21372928 bytes
Database mounted.
SQL> alter database archivelog;

Database altered.

SQL> alter database open;

Database altered.

SQL> alter database force logging;

Database altered.
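
To confirm both settings took effect, query v$database; it should report ARCHIVELOG and YES:

SQL> select log_mode, force_logging from v$database;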

1.4 Password security

Copy the password file from the Primary to the Standby server, renaming it to match the standby’s db_unique_name.

scp $ORACLE_HOME/dbs/orapwOLTP oracle@oradev:/u01/app/oracle/product/12.2.0/dbhome_1/dbs/orapwOLTP_SBY

2.1 Create a backup copy of the Primary database data files

Instead of performing a full-fledged RMAN backup along with archived logs, we will take a storage-level snapshot of the volumes that contain the datafiles and archived log files.  In our case, the following volumes are part of the OLTP database and should be included in the snapshot command.

select c.db_name, g.name, d.path
from v$asm_diskgroup g, v$asm_disk d, v$asm_client c
where d.group_number = g.group_number
and c.group_number = g.group_number
and c.db_name = 'OLTP'
order by 2, 3;

DB_NAME  NAME       PATH
-------- ---------- --------------------------------------------------
OLTP     DATA       /dev/mapper/dg_oraprod_data01
OLTP     DATA       /dev/mapper/dg_oraprod_data02
OLTP     FRA        /dev/mapper/dg_oraprod_fra

As the default behavior of Pure FlashArray’s snapshots is crash consistent rather than application consistent, the primary database should be placed in HOT BACKUP mode prior to taking the storage-level snapshot.

Note: Starting with Oracle 12c, you have the option of not placing the database in hot backup mode and instead using the SNAPSHOT TIME clause with the RECOVER DATABASE command, but that requires Oracle’s Advanced Compression license option.  We opted to place the database in hot backup mode, as it doesn’t make sense to buy an Advanced Compression license from Oracle just to use Pure Storage’s snapshots (robbing Peter to pay Paul?!).

As the storage-level snapshot is created instantly, there shouldn’t be any concern about placing the database in backup mode; it remains in backup mode only for a very short period (generally seconds).

[oracle@oraprod ~]$ cat snap.sh
# Place the database in hot backup mode
sqlplus -s / as sysdba << EOF1
alter database begin backup;
exit
EOF1

# Take a crash-consistent snapshot of all three volumes at the storage level
ssh pureuser@sn1-x70-f05-27 purevol snap --suffix standbycopy dg_oraprod_data01 dg_oraprod_data02 dg_oraprod_fra

# Take the database out of backup mode
sqlplus -s / as sysdba << EOF2
alter database end backup;
exit
EOF2

[oracle@oraprod ~]$ ./snap.sh
Database altered.

pureuser@sn1-x70-f05-27's password:******
Name                           Size  Source             Created                  Serial
dg_oraprod_data01.standbycopy  2T    dg_oraprod_data01  2018-08-13 17:59:28 PDT  6C1D7213605F4920000191E0
dg_oraprod_data02.standbycopy  2T    dg_oraprod_data02  2018-08-13 17:59:28 PDT  6C1D7213605F4920000191E1
dg_oraprod_fra.standbycopy     2T    dg_oraprod_fra     2018-08-13 17:59:28 PDT  6C1D7213605F4920000191E2

Database altered.

[oracle@oraprod ~]$
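
Before proceeding, it is worth confirming that no datafile was accidentally left in backup mode (for example, if the end backup step had failed); the following count should be zero:

SQL> select count(*) from v$backup where status = 'ACTIVE';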

2.2 Create a Standby controlfile and a PFILE for Standby Database

Issue the following commands in the Primary database to create a standby controlfile and a PFILE for the standby database.

SQL> alter database create standby controlfile as '/tmp/OLTP_SBY.ctl';
SQL> create pfile='/tmp/initOLTP_SBY.ora' from SPFILE;

File created.

Update the following entries in the PFILE to make it relevant for the Standby database

db_name=OLTP
db_unique_name=OLTP_SBY
control_files='+DATA/OLTP_SBY/CONTROLFILE/control01.ctl'
LOG_ARCHIVE_CONFIG='DG_CONFIG=(OLTP,OLTP_SBY)'
log_archive_dest_1='LOCATION=USE_DB_RECOVERY_FILE_DEST VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=OLTP_SBY'
log_archive_dest_2='SERVICE=OLTP ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=OLTP'
  
fal_server='OLTP'
# Commented out the following for the initial Standby creation.  
# Uncomment them once the Standby database is created
#db_file_name_convert='/OLTP/','/OLTP_SBY/'
#log_file_name_convert='/OLTP/','/OLTP_SBY/'
STANDBY_FILE_MANAGEMENT=MANUAL

Note: We have purposefully set the STANDBY_FILE_MANAGEMENT to MANUAL to avoid any automatic file management when we mount the standby database the very first time.  Also, note we have commented out the DB_FILE_NAME_CONVERT and LOG_FILE_NAME_CONVERT.  We will enable them once we have set up the Standby database successfully.

2.2.1 Copy the standby controlfile and pfile to the Standby server.

$ scp /tmp/initOLTP_SBY.ora oracle@oradev:/u01/app/oracle/product/12.2.0/dbhome_1/dbs
$ scp /tmp/OLTP_SBY.ctl oracle@oradev:/tmp

2.3 Copy datafiles from the Primary system to the Standby system

Instead of transferring the datafiles and the latest archived log files from the Primary to the Standby server, we will use Pure FlashRecover snapshots to create a copy of the snapshotted volumes, which is instantaneous and consumes very little space (generally KBs or MBs, depending on the amount of metadata required).

2.3.1 Instantiate the copy of the snapshot volumes into a new set of volumes by using the purevol copy command.

[oracle@oraprod ~]$ ssh pureuser@sn1-x70-f05-27 purevol copy dg_oraprod_data01.standbycopy dg_standby_data01
Name               Size  Source             Created                  Serial
dg_standby_data01  2T    dg_oraprod_data01  2018-08-13 17:59:28 PDT  6C1D7213605F4920000191E3

[oracle@oraprod ~]$ ssh pureuser@sn1-x70-f05-27 purevol copy dg_oraprod_data02.standbycopy dg_standby_data02
Name               Size  Source             Created                  Serial
dg_standby_data02  2T    dg_oraprod_data02  2018-08-13 17:59:28 PDT  6C1D7213605F4920000191E4

[oracle@oraprod ~]$ ssh pureuser@sn1-x70-f05-27 purevol copy dg_oraprod_fra.standbycopy dg_standby_fra
Name            Size  Source          Created                  Serial
dg_standby_fra  2T    dg_oraprod_fra  2018-08-13 17:59:28 PDT  6C1D7213605F4920000191E5

2.3.2 Connect the new volumes to the Standby database server using the following command.

[oracle@oraprod ~]$ ssh pureuser@sn1-x70-f05-27 purevol connect --host oradev-sn1-r720-f05-011 dg_standby_data01 dg_standby_data02 dg_standby_fra
Name               Host Group Host                     LUN
dg_standby_data01  -          oradev-sn1-r720-f05-011  6
dg_standby_data02  -          oradev-sn1-r720-f05-011  7
dg_standby_fra     -          oradev-sn1-r720-f05-011  8   

2.3.3 Update /etc/multipath.conf with the three new volumes, specifying the WWN and an alias for each volume.  (This part is optional; you can follow your standard operating procedures to add a volume to the host and apply the relevant udev rules.)

multipath {
                wwid 3624a93706c1d7213605f4920000191e3
                alias dg_standby_data01
        }
multipath {
                wwid 3624a93706c1d7213605f4920000191e4
                alias dg_standby_data02
        }
multipath {
                wwid 3624a93706c1d7213605f4920000191e5
                alias dg_standby_fra
        }
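
After editing /etc/multipath.conf, reload the multipath configuration so the aliases take effect (one common approach on Oracle Linux 7; follow your own standard procedure if it differs):

[root@oradev ~]# systemctl reload multipathd
[root@oradev ~]# multipath -ll | grep dg_standby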

2.3.4 As we named the devices in a format that matches the udev rules (anything that starts with dg_), the ASM user (grid) will have the relevant read/write privileges on the ASM devices.

[root@oradev ~]# more /etc/udev/rules.d/99-oracleasm.rules
#All volumes which starts with dg_* ##
ENV{DM_NAME}=="dg_*", OWNER:="grid", GROUP:="asmadmin", MODE:="660"
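
If the rules file is new or was modified, reload the udev rules and re-trigger device events so the ownership is applied without a reboot (standard udev commands, not specific to this setup):

[root@oradev ~]# udevadm control --reload-rules
[root@oradev ~]# udevadm trigger --type=devices --action=change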

2.3.5 As the superuser (root), rescan the SCSI bus on the Standby database server.  In this case, we issued the rescan-scsi-bus.sh command, which discovered the new LUNs connected to the host.

[root@oradev ~]# rescan-scsi-bus.sh -a

2.3.6 Make sure the devices have write privileges for the ASM user (grid).

[root@oradev ~]# ls -ltr /dev/mapper/dg_standby*
lrwxrwxrwx 1 root root 8 Aug 13 18:05 /dev/mapper/dg_standby_fra -> ../dm-15
lrwxrwxrwx 1 root root 8 Aug 13 18:05 /dev/mapper/dg_standby_data02 -> ../dm-16
lrwxrwxrwx 1 root root 8 Aug 13 18:05 /dev/mapper/dg_standby_data01 -> ../dm-14

[root@oradev ~]# ls -ltr /dev/dm-1[456]
brw-rw---- 1 grid asmadmin 249, 15 Aug 13 18:05 /dev/dm-15
brw-rw---- 1 grid asmadmin 249, 16 Aug 13 18:05 /dev/dm-16
brw-rw---- 1 grid asmadmin 249, 14 Aug 13 18:05 /dev/dm-14

2.4 Setup the Environment to support the Standby database

2.4.1 Mount the diskgroups as the grid user and rename the directory OLTP to OLTP_SBY.

[grid@oradev ~]$ asmcmd mount DATA
[grid@oradev ~]$ asmcmd mount FRA

If the directory under the diskgroup was created by the system (through DBCA or RMAN), it cannot be renamed.  In such a case, the data files would have to be copied to the new directory (which would defeat the purpose of using storage-level snapshots).  In this case, the +DATA/OLTP directory is not a system directory and hence can be renamed.

[grid@oradev ~]$ asmcmd
ASMCMD> ls
ASM/
DATA/
FRA/
ASMCMD> ls -l DATA
Type  Redund  Striped Time             Sys Name
                                        N    OLTP/
ASMCMD> ls -l FRA
Type  Redund  Striped Time             Sys Name
                                        Y    OLTP/
ASMCMD> exit
[grid@oradev ~]$ sqlplus / as sysasm
. . .
SQL> alter diskgroup DATA rename directory '+DATA/OLTP' to '+DATA/OLTP_SBY';

Diskgroup altered.

SQL> exit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

2.4.2 Create a directory named after the db_unique_name (OLTP_SBY) under FRA to host the archived logs and any backup sets.

[grid@oradev ~]$ asmcmd
ASMCMD> cd FRA
ASMCMD> mkdir OLTP_SBY

2.4.3 Copy the standby control file that was transferred from the Primary into the ASM location on the Standby system.

ASMCMD> cp /tmp/OLTP_SBY.ctl +DATA/OLTP_SBY/CONTROLFILE/control01.ctl
copying /tmp/OLTP_SBY.ctl -> +DATA/OLTP_SBY/CONTROLFILE/control01.ctl

2.4.4 Create the required audit directory as the oracle user on the Standby server.

[oracle@oradev ~]$ mkdir -p /u01/app/oracle/admin/OLTP/adump

2.4.5 Establish the dependency between the standby (OLTP_SBY) database and the diskgroup resources. (This step is not required when using cooked filesystems instead of ASM).

[oracle@oradev]$ srvctl add database -d OLTP_SBY -o /u01/app/oracle/product/12.2.0/dbhome_1
[oracle@oradev]$ srvctl modify database -d OLTP_SBY -a "DATA,FRA"
[oracle@oradev]$ srvctl config database -d OLTP_SBY
Database unique name: OLTP_SBY
Database name:
Oracle home: /u01/app/oracle/product/12.2.0/dbhome_1
Oracle user: oracle
Spfile:
Password file:
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Disk Groups: DATA,FRA
Services:
OSDBA group:
OSOPER group:
Database instance: OLTPSBY

2.4.6 Update listener.ora with an entry for the standby (OLTP_SBY) and start the listener.

LISTENER =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = oradev.puretec.purestorage.com)(PORT = 1521))
    (ADDRESS=(PROTOCOL=ipc)(KEY=extproc))
  )
SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = OLTP_SBY)
      (ORACLE_HOME = /u01/app/oracle/product/12.2.0/dbhome_1)
      (SID_NAME = OLTP_SBY)
    )
  )
[oracle@oradev]$ lsnrctl stop
[oracle@oradev]$ lsnrctl start
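
You can verify that the statically registered service is known to the listener; a static entry is reported with status UNKNOWN, which is expected:

[oracle@oradev]$ lsnrctl status | grep -i oltp_sby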

2.5 Start the Physical Standby database and the apply process

2.5.1 Start up the Standby database in mount mode using the updated pfile that was copied from the Primary.  Make sure the db_unique_name is different from that of the primary.

[oracle@oradev ~]$ sqlplus / as sysdba

SQL*Plus: Release 12.2.0.1.0 Production on Mon Aug 13 18:34:45 2018

Copyright (c) 1982, 2016, Oracle.  All rights reserved.
Connected to an idle instance.

SQL> startup mount pfile=/u01/app/oracle/product/12.2.0/dbhome_1/dbs/initOLTP_SBY.ora
ORACLE instance started.

Total System Global Area 8589934592 bytes
Fixed Size    12180200 bytes
Variable Size  4496295192 bytes
Database Buffers  4060086272 bytes
Redo Buffers    21372928 bytes
Database mounted.
SQL>

2.5.2 Rename the datafile, tempfile, and online logfile locations.

As we renamed +DATA/OLTP to +DATA/OLTP_SBY on the Standby server, we want the control file to reflect the new paths.  This would have been taken care of by Oracle when performing the RESTORE DATABASE command; as we are not performing a restore, we run the following SQL to generate a script that renames the files.  (Please update the script to match your diskgroup and db_unique_name references.)

[oracle@oradev]$ more rename.sql
set pages 0
set lines 132
set feed off
set echo off
spool rename_dbf.sql
select 'alter database rename file '|| '''' ||name||''''||' to '||''''|| '+DATA/OLTP_SBY/DATAFILE/'||substr(name,instr(name,'/DATAFILE/')+10)||''''||';' 
from v$datafile;
select 'alter database rename file '|| '''' ||name||''''||' to '||''''|| '+DATA/OLTP_SBY/TEMPFILE/'||substr(name,instr(name,'/TEMPFILE/')+10)||''''||';' 
from v$tempfile;
select 'alter database rename file '|| '''' ||member||''''||' to '||''''|| '+DATA/OLTP_SBY/ONLINELOG/'||substr(member,instr(member,'/ONLINELOG/')+11)||''''||';' 
from v$logfile;
spool off

[oracle@oradev ~]$ sqlplus / as sysdba

SQL> @rename.sql
SQL> @rename_dbf.sql

2.5.3 Add standby logfiles on both Primary and Standby databases

This is required for switchovers.  Size the standby redo log files the same as the online redo logs.  To match the four online log groups in our environment, we ran the following command four times.

SQL> alter database add standby logfile ('+DATA') size 500M;
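
To confirm the sizes match, you can compare v$log and v$standby_log (on both databases):

SQL> select group#, bytes/1024/1024 as mb from v$log;
SQL> select group#, bytes/1024/1024 as mb, status from v$standby_log;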

2.5.4 Catalog the archived logs in the +FRA/OLTP directory and the online redo logs on the Standby database.

As no backups were performed, the control file does not contain any reference to the redo required for recovery.  Since we have access to all the archived logs and online redo logs, we manually catalog them on the Standby database for MANAGED RECOVERY to work.

[oracle@oradev]$ rman target /

Recovery Manager: Release 12.2.0.1.0 - Production on Tue Aug 14 18:26:53 2018

Copyright (c) 1982, 2017, Oracle and/or its affiliates.  All rights reserved.

connected to target database: OLTP (DBID=1555394941, not open)

RMAN> catalog start with '+FRA/OLTP/ARCHIVELOG/2018_08_14';
. . .
RMAN> catalog archivelog '+DATA/OLTP_SBY/ONLINELOG/group_1.261.975515007';

cataloged archived log

archived log file name=+DATA/OLTP_SBY/ONLINELOG/group_1.261.975515007 RECID=29 STAMP=984138387

RMAN> catalog archivelog '+DATA/OLTP_SBY/ONLINELOG/group_2.262.975515007';

cataloged archived log

archived log file name=+DATA/OLTP_SBY/ONLINELOG/group_2.262.975515007 RECID=30 STAMP=984138403

RMAN> catalog archivelog '+DATA/OLTP_SBY/ONLINELOG/group_3.263.975515007';

cataloged archived log

archived log file name=+DATA/OLTP_SBY/ONLINELOG/group_3.263.975515007 RECID=31 STAMP=984138413

RMAN> catalog archivelog '+DATA/OLTP_SBY/ONLINELOG/group_4.264.975515007';

cataloged archived log

archived log file name=+DATA/OLTP_SBY/ONLINELOG/group_4.264.975515007 RECID=32 STAMP=984138422

RMAN> exit

2.5.5 Start the Redo Apply Process on the Standby.

To perform the redo apply in the foreground, run the following command.

ALTER DATABASE RECOVER MANAGED STANDBY DATABASE;

To perform the redo apply in the background and return control to the session, run:

ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;

To cancel the apply process, run the following command.

ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;

2.6 Verify the Physical Standby

On the primary server, check the status of the archived logs and force a log switch.

SQL> select * from
     (select sequence#, name, standby_dest, first_change#, next_change#, dest_id from v$archived_log order by sequence# desc)
     where rownum < 5
     order by sequence#;

 SEQUENCE# NAME                                                              STA FIRST_CHANGE# NEXT_CHANGE#    DEST_ID
---------- ----------------------------------------------------------------- --- ------------- ------------ ----------
       168 +FRA/OLTP/ARCHIVELOG/2018_08_15/thread_1_seq_168.330.984227985    NO       13627099     13627150          1
       168 oltp_sby                                                          YES      13627099     13627150          2
       169 +FRA/OLTP/ARCHIVELOG/2018_08_15/thread_1_seq_169.331.984231465    NO       13627150     13634727          1
       169 oltp_sby                                                          YES      13627150     13634727          2

SQL> alter system switch logfile;

On the Standby database, check the status of the archived logs.

SQL> select sequence#, name, first_change#, next_change#, dest_id from v$archived_log;

 SEQUENCE# NAME                                                                FIRST_CHANGE# NEXT_CHANGE#    DEST_ID
---------- ------------------------------------------------------------------- ------------- ------------ ----------
       168 +FRA/OLTP_SBY/ARCHIVELOG/2018_08_15/thread_1_seq_168.330.984227985       13627099     13627150          1
       169 +FRA/OLTP_SBY/ARCHIVELOG/2018_08_15/thread_1_seq_169.331.984231465       13627150     13634727          1
       170 +FRA/OLTP_SBY/ARCHIVELOG/2018_08_15/thread_1_seq_170.332.984231839       13634727     13635491          1

SQL> select client_process, process, thread#, sequence#, status
     from v$managed_standby
     where client_process ='LGWR' OR process ='MRP0';  

CLIENT_P PROCESS      THREAD#  SEQUENCE# STATUS
-------- --------- ---------- ---------- ------------
LGWR     RFS                1        171 IDLE
N/A      MRP0               1        171 APPLYING_LOG

2.7 Post Standby setup process

2.7.1 Create the spfile for the Standby database from the pfile.
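
For example, using the pfile path from step 2.2.1 (restart the instance afterward so it comes up on the spfile):

SQL> create spfile from pfile='/u01/app/oracle/product/12.2.0/dbhome_1/dbs/initOLTP_SBY.ora';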

2.7.2 Update the init.ora parameters on the Standby database.

db_file_name_convert='/OLTP/','/OLTP_SBY/' 
log_file_name_convert='/OLTP/','/OLTP_SBY/' 
STANDBY_FILE_MANAGEMENT=AUTO
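
These can be applied with alter system, mirroring the commands used on the primary; the spfile-scoped changes take effect at the next restart:

SQL> alter system set db_file_name_convert='/OLTP/','/OLTP_SBY/' scope=spfile;
SQL> alter system set log_file_name_convert='/OLTP/','/OLTP_SBY/' scope=spfile;
SQL> alter system set standby_file_management=AUTO;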

Even though the steps look numerous, the provisioning of the data itself was done in two steps, which illustrates the simplicity of the Pure FlashArray.  As mentioned earlier, a physical standby is generally set up across two sites for disaster recovery; the following picture illustrates the suggested approach.

Use storage-level replication for setting up the Standby database and application-level replication for keeping the standby database in sync with the production database.

For more information on physical standby databases, see Oracle’s Data Guard Concepts and Administration documentation.
