I have seen many DBAs use a shell script, scheduled on one particular node, to back up the database with either RMAN or Data Pump in an Oracle RAC deployment. The problem arises when the node on which the backup script is configured goes out of service: the backup cannot run at its scheduled time and the SLA is broken. It becomes a tragedy when this failure goes unnoticed.
Your backups must not fail, because they are a mandatory requirement in a business-critical environment.
To overcome this issue, below is one of the best high-availability solutions for a production environment.
In this article we demonstrate, hands-on, how RMAN and Data Pump backups can be configured to meet the backup SLA requirements of business operations.
Most organizations prefer the Oracle utilities below to back up the database.
Ideally the FRA should be used for backup storage, but DBAs still have the following possible options for backup storage.
Oracle ASM - Diskgroup
- Oracle ASM supports the storage of backup files
- You have to configure your FRA using the two important initialization parameters below
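As a minimal sketch (the +FRA diskgroup name and the 100G size are illustrative; adjust them to your environment), and noting that db_recovery_file_dest_size must be set before db_recovery_file_dest:

```sql
-- Size the FRA first, then point it at the diskgroup (values are examples)
alter system set db_recovery_file_dest_size = 100G scope=both sid='*';
alter system set db_recovery_file_dest = '+FRA' scope=both sid='*';
```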
For an in-depth hands-on walkthrough, you can refer to this Oracle article.
Oracle Automatic Storage Management Cluster File System (ACFS)
For an in-depth hands-on walkthrough, refer to this article.
It is assumed that Oracle RAC is deployed with 2 nodes.
The Fast Recovery Area is configured and stored in an ASM diskgroup.
For Oracle RMAN
RMAN has been the tool of choice for any DBA to take online physical backups of an Oracle database since version 8i.
In this demonstration, the RMAN backup will be stored in an Oracle ASM diskgroup (in this case the FRA is configured there) and the log will be stored on the local file system at a location defined in the script itself.
Below is an example of creating an RMAN backup job for a 2-node RAC.
Step 1: Create the OS-level credential as the SYS user.
SQL> exec dbms_scheduler.create_credential(credential_name => 'oracle',username => 'oracle', password => 'xyz');
Here username => 'oracle' and password => 'xyz' identify the OS-level user under which we want to run the RMAN backup job.
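You can confirm the credential was created; on 11.2/12.1 the dictionary view is DBA_SCHEDULER_CREDENTIALS (later releases expose credentials through DBA_CREDENTIALS instead):

```sql
-- List scheduler credentials (view name valid on 11.2/12.1)
select credential_name, username from dba_scheduler_credentials;
```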
Step 2: Create the JOB_CLASS that will use the service to run the RMAN job.
SQL> exec dbms_scheduler.create_job_class(job_class_name=> 'RUN_BACKUP_JOB', service=> 'backupser');
Here, backupser is the service that was created with the specifications below.
[oracle@rh-test-db-01 ~]$ srvctl add service -d orcl12c -s backupser -r orcl12c2 -a orcl12c1
[oracle@rh-test-db-01 ~]$ srvctl config service -d orcl12c -s backupser
Service name: backupser
Service is enabled
Server pool: orcl12c_backupser
Service role: PRIMARY
Management policy: AUTOMATIC
DTP transaction: false
AQ HA notifications: false
Failover type: NONE
Failover method: NONE
TAF failover retries: 0
TAF failover delay: 0
Connection Load Balancing Goal: LONG
Runtime Load Balancing Goal: NONE
TAF policy specification: NONE
Preferred instances: orcl12c2
Available instances: orcl12c1
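With preferred instance orcl12c2 and available instance orcl12c1, Clusterware fails backupser over to node 1 if node 2 goes down, so the scheduler job keeps running. Note that the service does not fail back automatically; after node 2 rejoins the cluster you can relocate it manually (the syntax shown is the 11.2/12.1 form; check srvctl relocate service -h on your release):

```shell
# Move the backup service back to its preferred instance after node 2 recovers
srvctl relocate service -d orcl12c -s backupser -oldinst orcl12c1 -newinst orcl12c2
```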
Step 3: Create the database job for the RMAN backup.
BEGIN
  dbms_scheduler.create_job(
    job_name            => 'RMAN_INCR0_BACKUP',
    job_type            => 'EXECUTABLE',
    job_action          => '/bin/ksh',
    number_of_arguments => 1,
    start_date          => SYSTIMESTAMP,
    credential_name     => 'oracle',
    job_class           => 'RUN_BACKUP_JOB',
    auto_drop           => FALSE,
    enabled             => FALSE);
END;
/
Note: Do not enable the job at creation time, and do not forget to specify NUMBER_OF_ARGUMENTS. The arguments are the parameters that will be passed when the job runs; here the only input is the .sh file name, so there is exactly 1 argument.
Step 4: Set the argument value of the job created above.
BEGIN
  dbms_scheduler.set_job_argument_value(
    job_name          => 'RMAN_INCR0_BACKUP',
    argument_position => 1,
    argument_value    => '/oracle/backup/rman_incr0_backup.sh');
END;
/
Note: Here the argument value is the script we want to run for the RMAN backup. Give the absolute path of the RMAN script.
Step 5: Schedule the job for weekly backup.
BEGIN
  dbms_scheduler.set_attribute(
    name      => 'RMAN_INCR0_BACKUP',
    attribute => 'repeat_interval',
    value     => 'freq=weekly; byday=wed,sun; byhour=5; byminute=0; bysecond=0;');
  dbms_scheduler.enable('RMAN_INCR0_BACKUP');
END;
/
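You can sanity-check a calendar string before attaching it to a job with DBMS_SCHEDULER.EVALUATE_CALENDAR_STRING, which computes the next run date it would produce:

```sql
-- Print the next run date for the weekly calendar string
SET SERVEROUTPUT ON
DECLARE
  next_run TIMESTAMP WITH TIME ZONE;
BEGIN
  DBMS_SCHEDULER.EVALUATE_CALENDAR_STRING(
    calendar_string   => 'freq=weekly; byday=wed,sun; byhour=5; byminute=0; bysecond=0;',
    start_date        => SYSTIMESTAMP,
    return_date_after => SYSTIMESTAMP,
    next_run_date     => next_run);
  DBMS_OUTPUT.PUT_LINE('Next run: ' || TO_CHAR(next_run, 'DD-MON-YYYY HH24:MI:SS'));
END;
/
```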
Note: The job has been scheduled to run every Wednesday and Sunday at 5 AM.
Step 6: The script below must be deployed on both nodes at the same location, i.e. /oracle/backup/rman_incr0_backup.sh.
[oracle@db-01 backup]$ cat rman_incr0_backup.sh
#!/bin/ksh
export ORACLE_SID=orcl12c1
export ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1
RMANLOG=/oracle/backup/RMAN
DTTS=`date +%d%m%Y%H%M%S`
echo start >>/oracle/backup/RMAN/RMAN_incr0_BACKUP_SCRIPT.out
$ORACLE_HOME/bin/rman target / log=$RMANLOG/RMAN_incr0_BKP_LOG_$DTTS.out <<EOF
backup incremental level 0 database;
crosscheck archivelog all;
backup archivelog all delete input;
backup current controlfile;
exit;
EOF
echo end >>/oracle/backup/RMAN/RMAN_incr0_BACKUP_SCRIPT.out
Note: In the copy of the script on RAC node 2, set ORACLE_SID=orcl12c2.
You can purge the job run details as below:
- Purge only this job's log:
SQL> exec dbms_scheduler.purge_log(job_name => 'RMAN_INCR0_BACKUP');
You can run the job manually as below:
- Run only the job:
SQL> exec dbms_scheduler.run_job('RMAN_INCR0_BACKUP');
=> You can query the job execution details as below:
set lines 180 pages 300
col RUN_DURATION for a20
col JOB_NAME for a20
col STATUS for a10
col ACTUAL_START_DATE for a35
col CPU_USED for a20
col LOG_DATE for a35
SELECT instance_id,job_name,log_date,actual_start_date, run_duration, cpu_used ,status FROM dba_scheduler_job_run_details where job_name like 'RMAN_%' order by 4 desc;
=> You can query the job details as below:
set lines 180 pages 300
col SCHEDULE_NAME for a20
col START_DATE for a40
col REPEAT_INTERVAL for a40
col OWNER for a5
col JOB_NAME for a10
col NEXT_RUN_DATE for a30
select OWNER , JOB_NAME , JOB_TYPE , START_DATE , REPEAT_INTERVAL, ENABLED , NEXT_RUN_DATE , INSTANCE_ID , job_class, instance_stickiness from DBA_SCHEDULER_JOBS where JOB_NAME like 'RMAN_%';
For Oracle DataPump
Data Pump has been the alternative tool and a favourite for any DBA to take logical backups of an Oracle database since version 10g.
In this demonstration, the DataPump backup will be stored in an Oracle ASM diskgroup (+CDATA) and the log will be stored on the local file system (/u01/app/oracle/admin/rcpdb/dpdump).
Step-1 Configure Logical directories for backup storage
create or replace directory DIR_ASM AS '+CDATA/datapump';
create or replace directory data_pump_dir as '/u01/app/oracle/admin/rcpdb/dpdump';
Step-2 Grant the privileges below on the directories to the user (in our case SCOTT) through which you want to run this backup job.
grant read,write on directory data_pump_dir to scott;
grant read,write on directory DIR_ASM to scott;
Step-4 Grant some additional privileges needed to use the DBMS_DATAPUMP API from the SCOTT schema.
grant MANAGE SCHEDULER to scott;
grant all on DBMS_SCHEDULER to scott;
grant create table to scott;
grant EXP_FULL_DATABASE to scott;
grant all on DBMS_DATAPUMP to scott;
Step-5 Create a stored procedure for a full-database logical backup using Data Pump.
CREATE OR REPLACE PROCEDURE expdp_full
AS
  h1 NUMBER := NULL;
BEGIN
  h1 := DBMS_DATAPUMP.open(
    operation => 'EXPORT',
    job_mode  => 'FULL',
    job_name  => 'DATAPUMP_ORCL_FULL',
    version   => 'COMPATIBLE');
  -- Write the export log to the local file system (filetype 3 = log file)
  DBMS_DATAPUMP.add_file(
    handle    => h1,
    filename  => 'DB_DP_FULLEXP_'||to_char(sysdate,'ddmmrrrrhh24miss')||'.log',
    directory => 'DATA_PUMP_DIR',
    filetype  => 3);
  -- Write the dump set to ASM in 1 GB pieces (filetype 1 = dump file)
  DBMS_DATAPUMP.add_file(
    handle    => h1,
    filename  => 'DB_DP_FULLEXP_'||to_char(sysdate,'ddmmrrrrhh24miss')||'_%U.dmp',
    filesize  => 1073741824,
    directory => 'DIR_ASM',
    filetype  => 1);
  DBMS_DATAPUMP.start_job(h1);
  DBMS_DATAPUMP.detach(handle => h1);
END;
/
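Before wiring the procedure to the scheduler, you can invoke it once directly from the SCOTT schema to confirm it compiles and launches an export:

```sql
-- One-off manual invocation of the export procedure
exec expdp_full
```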
Step-6 Configure the scheduler job for the DataPump backup, using the job class created in Step 2 of the RMAN demonstration above.
BEGIN
  dbms_scheduler.create_job(
    job_name        => 'JOB_DB_DATAPUMP',
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'BEGIN expdp_full; END;',
    start_date      => systimestamp,
    repeat_interval => 'FREQ=DAILY;BYHOUR=1;BYMINUTE=21;',
    job_class       => 'RUN_BACKUP_JOB',
    enabled         => TRUE,
    comments        => 'JOB_DB_DATAPUMP High Availability');
END;
/
Step-7 Run the job manually to test the behavior.
SQL> exec DBMS_SCHEDULER.RUN_JOB(job_name => 'JOB_DB_DATAPUMP');
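While the export runs, you can watch its progress through the standard Data Pump dictionary view:

```sql
-- EXECUTING means the export is in flight; a NOT RUNNING row that lingers
-- usually indicates an orphaned job (see the troubleshooting step below)
select owner_name, job_name, operation, job_mode, state
  from dba_datapump_jobs;
```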
Step-8 Troubleshooting: clean up master tables left behind by already-existing (orphaned) jobs.
SET SERVEROUTPUT ON
BEGIN
  FOR rec IN (SELECT o.object_name
                FROM user_objects o, user_datapump_jobs j
               WHERE o.object_name = j.job_name
                 AND j.job_name NOT LIKE 'BIN$%'
               ORDER BY 1)
  LOOP
    execute immediate 'DROP TABLE '|| rec.object_name ||' purge';
    dbms_output.put_line('DROP TABLE '|| rec.object_name ||' purge');
  END LOOP;
END;
/
Step-9 Monitor the DataPump backup
ASMCMD [+CDATA/datapump] > ls -la
WARNING:option 'a' is deprecated for 'ls'
please use 'absolutepath'
Type Redund Striped Time Sys Name
N DB_DP_FULLEXP_03022016181730_01.dmp => +CDATA/RCPDB/DUMPSET/SCOTTNETVERTEX_FULL_116217_1.1188.902859451
N exp_test2.dmp => +CDATA/RCPDB/DUMPSET/SCOTTSYS_EXPORT_FULL_02_116046_1.846.902857587
N exp_test5.dmp => +CDATA/RCPDB/DUMPSET/SCOTTSYS_EXPORT_FULL_03_116172_1.848.902859031
ASMCMD [+CDATA/datapump] >
--Check backup Logs
-bash-4.1$ ls -ltr
-rw-r--r-- 1 oracle oinstall 2254 Feb 3 18:12 log_test5.log
-rw-r--r-- 1 oracle oinstall 2992 Feb 3 18:22 DB_DP_FULLEXP_03022016181730.log
-bash-4.1$ tail -50 DB_DP_FULLEXP_03022016181730.log
Estimate in progress using BLOCKS method...
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 7.736 GB
Processing object type DATABASE_EXPORT/TABLESPACE
Processing object type DATABASE_EXPORT/PROFILE
Processing object type DATABASE_EXPORT/SYS_USER/USER
Processing object type DATABASE_EXPORT/SCHEMA/USER
Processing object type DATABASE_EXPORT/ROLE
Processing object type DATABASE_EXPORT/GRANT/SYSTEM_GRANT/PROC_SYSTEM_GRANT
Processing object type DATABASE_EXPORT/SCHEMA/GRANT/SYSTEM_GRANT
Processing object type DATABASE_EXPORT/SCHEMA/ROLE_GRANT
Processing object type DATABASE_EXPORT/SCHEMA/DEFAULT_ROLE
Processing object type DATABASE_EXPORT/SCHEMA/TABLESPACE_QUOTA
This article demonstrated the integration of Oracle Clusterware services with the Oracle Scheduler as one of the best techniques for RMAN and DataPump backup high availability on a production server. It also covered the use of a shared backup location for the Oracle FRA, preferably on a cluster-capable file system such as Oracle ASM, OCFS2, or Oracle ACFS.