
Clone A Multi-Terabyte Oracle Database in 5 Minutes

Unknown
edited Oct 5, 2015 12:34AM in Database Software

By Wissem EL KHLIFI, Oracle ACE

In an environment with multiple Oracle databases holding large amounts of data (multi-terabyte), the storage snapshot feature is very useful when you want to preserve the state of an Oracle database, make a clone, create a standby database, or simply back up the environment. A snapshot saves the current state of your Oracle storage while the database is online (in open mode) or offline (the Oracle instance is shut down during the snapshot phase). When the database runs in NOARCHIVELOG mode, only an offline snapshot can capture a consistent state of the Oracle storage. In ARCHIVELOG mode, both online and offline snapshots are possible, because the archive logs generated between the SCN of the snapshot and the target SCN can be applied afterwards (database point-in-time recovery, DBPITR).

The snapshot feature is also another way to back up an Oracle database. Compared with hard disk imaging, each snapshot requires less storage space than a full partition image, and restoration is significantly faster. Backing up Oracle databases with Oracle Recovery Manager combined with the snapshot feature gives a much safer, better protected Oracle database environment.

In this article, we will show how to use HP 3PAR Thin Provisioning software, Oracle ASM utilities, and the Oracle database point-in-time recovery procedure to clone a multi-terabyte Oracle production database (7 terabytes in this case) within 5 minutes. We will use the HP 3PAR InForm OS Command Line Interface, a command-line user interface for managing and configuring HP 3PAR storage systems, together with Oracle Automatic Storage Management (ASM) commands for managing the Oracle storage. (Note: thin provisioning is a virtualization mechanism that improves the utilization efficiency of deployed storage capacity.)
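For the ARCHIVELOG case, the point-in-time recovery step mentioned above can be sketched in RMAN roughly as follows. This is a hypothetical illustration with a made-up target SCN, not a step from this article's NOARCHIVELOG procedure:

```shell
rman target / <<'EOF'
RUN {
  SET UNTIL SCN 1234567;   # hypothetical target SCN, later than the snapshot SCN
  RESTORE DATABASE;        # restore from the snapshot-based backup
  RECOVER DATABASE;        # apply archived redo up to the target SCN
}
ALTER DATABASE OPEN RESETLOGS;
EOF
```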

Snapshot Creation

The following table describes the source and target Oracle databases. We call the source database (livedb in this case) the database to be cloned, and the target database (clonedb) the database that results from the clone procedure.

Source database name             livedb
Source database size             7 terabytes
Source database version          11.2.0.3
Source DB host name              jupiter
Source OS version                Oracle Linux 6
Source Oracle storage manager    ASM
Source ASM disk group            LIVEDB_DG
Source storage solution          HP 3PAR
Target database name             clonedb
Target database size             7 terabytes
Target database version          11.2.0.3
Target DB host name              jupiter
Target OS version                Oracle Linux 6
Target Oracle storage manager    ASM
Target ASM disk group            CLONEDB_DG
Target storage solution          HP 3PAR

Let’s check the size of the source livedb database. As you can see, livedb is 7 terabytes.

jupiter(livedb):/home/oracle> sqlplus / as sysdba

SQL*Plus: Release 11.2.0.3.0 Production on Sat Feb 28 05:50:44 2015

Copyright (c) 1982, 2011, Oracle.  All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Data Mining
and Real Application Testing options

SYS:livedb SQL> SELECT A.DATA_SIZE + B.TEMP_SIZE + C.REDO_SIZE + D.CONTROLFILE_SIZE "total_size in GB"
  FROM (SELECT SUM(BYTES) / 1024 / 1024 / 1024 DATA_SIZE FROM DBA_DATA_FILES) A,
       (SELECT NVL(SUM(BYTES), 0) / 1024 / 1024 / 1024 TEMP_SIZE
          FROM DBA_TEMP_FILES) B,
       (SELECT SUM(BYTES) / 1024 / 1024 / 1024 REDO_SIZE FROM SYS.V_$LOG) C,
       (SELECT SUM(BLOCK_SIZE * FILE_SIZE_BLKS) / 1024 / 1024 / 1024 CONTROLFILE_SIZE
          FROM V$CONTROLFILE) D;

total_size in GB
----------------
      7036.39496

SYS:livedb SQL>

The next step is to check the ASM disks used by the livedb database.

SYS:livedb SQL> show parameter db_create

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_create_file_dest                  string      +LIVEDB_DG
db_create_online_log_dest_1          string      +LIVEDB_DG
db_create_online_log_dest_2          string
db_create_online_log_dest_3          string
db_create_online_log_dest_4          string
db_create_online_log_dest_5          string

We can see that livedb uses the LIVEDB_DG disk group. We use the asmcmd command to list the ASM disks:

JUPITER(+ASM):/home/oracle>asmcmd lsdsk -G LIVEDB_DG
      Path
      /dev/oracleasm/disks/JUPITER_3PAR_LIVE_0
      /dev/oracleasm/disks/JUPITER_3PAR_LIVE_1
      /dev/oracleasm/disks/JUPITER_3PAR_LIVE_2
      /dev/oracleasm/disks/JUPITER_3PAR_LIVE_3
      JUPITER(+ASM):/home/oracle>

The overall size allocated to the four ASM disks is 8 terabytes (4 × 2 TB).

SQL> select name, total_mb from v$asm_disk where header_status='MEMBER' and group_number=35;

NAME                             TOTAL_MB
        ------------------------------   ----------
        LIVEDB_DG_0011                      2097152
        LIVEDB_DG_0010                      2097152
        LIVEDB_DG_0009                      2097152
        LIVEDB_DG_0008                      2097152

SQL>

Now we use the HP 3PAR InForm OS Command Line Interface to list the set of virtual volumes used by the Oracle database. We run the showvvset command to show all autonomic virtual volume groups, or sets.

3PAR700 cli% showvvset

14 JUPITER_3PAR_LIVE     JUPITER_3PAR_LIVE.0
                                                         JUPITER_3PAR_LIVE.1
                                                         JUPITER_3PAR_LIVE.2
                                                         JUPITER_3PAR_LIVE.3

From the output above, we see a virtual volume set named JUPITER_3PAR_LIVE containing four virtual volumes (JUPITER_3PAR_LIVE.0, JUPITER_3PAR_LIVE.1, JUPITER_3PAR_LIVE.2 and JUPITER_3PAR_LIVE.3), all used by the Oracle livedb database.

The Oracle database livedb is in NOARCHIVELOG mode, so we perform a clean shutdown of the database before taking the snapshot.

SYS:livedb SQL> select log_mode from v$database;

LOG_MODE
------------
NOARCHIVELOG

SYS:livedb SQL> shutdown immediate;

After clean shutdown, we create the snapshots using the following command:

3PAR700 cli% createsv -exp 1825d TPAR_CLONEDB.@count@ set:JUPITER_3PAR_LIVE

This command creates a virtual copy, or snapshot, of the JUPITER_3PAR_LIVE volume set. The -exp option sets the virtual volume expiration time; in our case, 1825 days. That's it: with a downtime of only 5 minutes, we now have a snapshot copy of livedb.

We can now start the livedb database back up using the startup command.

First, we create a virtual volume set for easy management of the new TPAR_CLONEDB* copies, using the 3PAR createvvset command.

3PAR700 cli% createvvset TPAR_CLONEDB TPAR_CLONEDB*

We can display the newly created set:

3PAR700 cli% showvvset | grep TPAR_CLONEDB

75 TPAR_CLONEDB            TPAR_CLONEDB.0
                                                      TPAR_CLONEDB.1
                                                      TPAR_CLONEDB.2
                                                      TPAR_CLONEDB.3

Manipulate the Snapshot for the cloned database

First, export the virtual volumes to the JUPITER host using the createvlun command.

createvlun TPAR_CLONEDB.0 auto JUPITER
createvlun TPAR_CLONEDB.1 auto JUPITER
createvlun TPAR_CLONEDB.2 auto JUPITER
createvlun TPAR_CLONEDB.3 auto JUPITER

Next, perform a LUN scan on all HBAs in the Jupiter host and reload the multipath maps.
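The article does not list the rescan commands themselves. On Linux they typically look like the minimal sketch below; the helper name is our own, and the sysfs root is parameterized only so the loop can be exercised outside a live host. On the real Jupiter host this must run as root.

```shell
#!/bin/sh
# Hypothetical sketch (not from the article): rescan every SCSI HBA so the
# newly exported TPAR_CLONEDB LUNs are discovered by the host.
rescan_hosts() {
    # Default to the real sysfs location; a different root can be passed in
    # for testing the loop logic outside a live host.
    root=${1:-/sys/class/scsi_host}
    for scan in "$root"/host*/scan; do
        [ -e "$scan" ] || continue     # glob did not match: no HBAs here
        echo "- - -" > "$scan"         # "- - -" = all channels, targets, LUNs
    done
}

# On the real Jupiter host, as root:
#   rescan_hosts
#   multipath -r     # reload device-mapper multipath maps for the new LUNs
```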

We can get the list of LUNs using the HP3PARInfo -i command; this shows the LUNs used by the CLONEDB snapshots:

350002ac000000000020004d40000632e   2T   mpathxe   dm-154   TPAR_CLONEDB.0   JUPITER_3PAR_LIVE_0
350002ac000000000020004d50000632e   2T   mpathxb   dm-142   TPAR_CLONEDB.1   JUPITER_3PAR_LIVE_1
350002ac000000000020004d60000632e   2T   mpathxc   dm-143   TPAR_CLONEDB.2   JUPITER_3PAR_LIVE_2
350002ac000000000020004d70000632e   2T   mpathxd   dm-153   TPAR_CLONEDB.3   JUPITER_3PAR_LIVE_3

At this stage, we need to rename the ASM disks from JUPITER_3PAR_LIVE* to TPAR_CLONEDB* using the oracleasm renamedisk command from the ASMLIB command line.

/usr/sbin/oracleasm renamedisk -f /dev/dm-154 TPAR_CLONEDB_0
/usr/sbin/oracleasm renamedisk -f /dev/dm-142 TPAR_CLONEDB_1
/usr/sbin/oracleasm renamedisk -f /dev/dm-143 TPAR_CLONEDB_2
/usr/sbin/oracleasm renamedisk -f /dev/dm-153 TPAR_CLONEDB_3

Another important step is to modify the ASM disk headers to reflect the new ASM disk group that the clone is going to use. From the output below, we see that the ASM disks still refer to the LIVEDB_DG disk group. We need to change the name from LIVEDB_DG to CLONEDB_DG.

[root@jupiter LIVEDB]# strings -a /dev/dm-154 | head -4
ORCLDISKTPAR_LIVEDB_0
LIVE_DG_0008
LIVEDB_DG
LIVE_DG_0008
[root@jupiter LIVEDB]# strings -a /dev/dm-142 | head -4
ORCLDISKTPAR_LIVEDB_1
LIVE_DG_0009
LIVEDB_DG
LIVE_DG_0009
[root@jupiter LIVEDB]# strings -a /dev/dm-143 | head -4
ORCLDISKTPAR_LIVEDB_2
LIVE_DG_0010
LIVEDB_DG
LIVE_DG_0010
[root@jupiter LIVEDB]# strings -a /dev/dm-153 | head -4
ORCLDISKTPAR_LIVEDB_3
LIVE_DG_0011
LIVEDB_DG
LIVE_DG_0011
[root@jupiter LIVEDB]#

To modify the ASM disk headers, we can use the kfed command, which can read and write ASM metadata in disk headers and ASM metadata files. Note that kfed in write mode is a powerful tool and can corrupt disk metadata if misused. Refer to My Oracle Support document “ASM tools used by Support: KFOD, KFED, AMDU (Doc ID 1485597.1)” for more details on kfed usage.

First, we change the owner and group of the TPAR_CLONEDB* disks from root to oracle:

chown -R oracle:dba /dev/oracleasm/disks/TPAR_CLONEDB*

Then we created a script to do the kfed work; basically, we loop over the TPAR_CLONEDB* ASM disks and replace the occurrences of LIVEDB_DG with CLONEDB_DG.

[root@jupiter LIVEDB]# cat /usr/local/oracle/bin/LIVEDB/LIVEDB_kfed.sh
for file in /dev/oracleasm/disks/TPAR_CLONEDB*
do
  echo "Processing disk $file ..."
  search=LIVEDB_DG
  replace=CLONEDB_DG
  newlength=`echo $replace | wc -m`
  let newlength=$newlength-1
  shortname=`basename $file`

  kfed op=read dev=$file | sed -e '24,24s/ '$search' / '$replace' /' -e '24,24s/length=.*/length='$newlength'/' > /tmp/$shortname.kfed
  kfed op=write dev=$file text=/tmp/$shortname.kfed CHKSUM=YES
done

After running the kfed script as the oracle user, we check the ASM disk headers again. You can see that CLONEDB_DG now shows up.

[root@jupiter LIVEDB]# strings -a /dev/dm-154 | head -4
ORCLDISKTPAR_LIVEDB_0
LIVE_DG_0008
CLONEDB_DG
LIVE_DG_0008
[root@jupiter LIVEDB]# strings -a /dev/dm-142 | head -4
ORCLDISKTPAR_LIVEDB_1
LIVE_DG_0009
CLONEDB_DG
LIVE_DG_0009
[root@jupiter LIVEDB]# strings -a /dev/dm-143 | head -4
ORCLDISKTPAR_LIVEDB_2
LIVE_DG_0010
CLONEDB_DG
LIVE_DG_0010
[root@jupiter LIVEDB]# strings -a /dev/dm-153 | head -4
ORCLDISKTPAR_LIVEDB_3
LIVE_DG_0011
CLONEDB_DG
LIVE_DG_0011
[root@jupiter LIVEDB]#

The next step is to mount the new disk group CLONEDB_DG:


ALTER DISKGROUP CLONEDB_DG mount;

Now that we have the new disk group mounted, we need to create a new pfile and controlfile for the cloned database.

We log into the livedb database and issue the following commands:

alter database backup controlfile to trace as '/tmp/cr_ctl_clonedb_wissem.sql';
create pfile='/tmp/pfile_clonedb_wissem.ora' from spfile;

The next step is to replace all occurrences of livedb and LIVEDB_DG in the /tmp/cr_ctl_clonedb_wissem.sql and /tmp/pfile_clonedb_wissem.ora files with clonedb and CLONEDB_DG.
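A simple way to do that substitution is with sed; the helper below is a hypothetical sketch (the function name is our own, and the file names come from the commands above). Replacing LIVEDB with CLONEDB also covers LIVEDB_DG, since the disk group name contains the database name; mixed-case occurrences, if any, would need extra rules.

```shell
#!/bin/sh
# Hypothetical helper: point the traced controlfile script and the pfile at
# the clone by rewriting both cases of the database name in place.
retarget_clone_files() {
    for f in "$@"; do
        sed -i -e 's/livedb/clonedb/g' \
               -e 's/LIVEDB/CLONEDB/g' "$f"   # also turns LIVEDB_DG into CLONEDB_DG
    done
}

# On the real host, using the files created above:
#   retarget_clone_files /tmp/cr_ctl_clonedb_wissem.sql /tmp/pfile_clonedb_wissem.ora
```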

The clonedb controlfile script will look like this; note that we replaced “REUSE DATABASE” from the original controlfile trace with “SET DATABASE”.

CREATE CONTROLFILE SET DATABASE   "CLONEDB" RESETLOGS NOARCHIVELOG
          MAXLOGFILES 80
          MAXLOGMEMBERS 3
          MAXDATAFILES 800
          MAXINSTANCES 8
          MAXLOGHISTORY 47280
      LOGFILE
        GROUP 13   '+CLONEDB_DG/livedb/onlinelog/group_13.860.849881219'  SIZE 1G BLOCKSIZE 512,
        GROUP 14   '+CLONEDB_DG/livedb/onlinelog/group_14.758.849881221'  SIZE 1G BLOCKSIZE 512,
        GROUP 15   '+CLONEDB_DG/livedb/onlinelog/group_15.753.849881223'  SIZE 1G BLOCKSIZE 512,
        GROUP 16   '+CLONEDB_DG/livedb/onlinelog/group_16.816.849881225'  SIZE 1G BLOCKSIZE 512
      DATAFILE
          '+CLONEDB_DG/livedb/datafile/system.833.865924661',
        ...
      (content truncated: too many datafiles to show)
        ...
          '+CLONEDB_DG/livedb/datafile/apps_ts_tx_data.831.872086137'
      CHARACTER SET UTF8
      ;

Finally, we start the database in nomount state, run the controlfile script, open the database with RESETLOGS, and add temporary files to the temporary tablespaces. (Note: you may also invoke the DBNEWID utility to change the DBID of the newly cloned database.)

startup nomount pfile='/tmp/pfile_clonedb_wissem.ora';
@/tmp/cr_ctl_clonedb_wissem.sql
alter database open resetlogs;
alter tablespace TEMP add tempfile size 10G;

Conclusion

We have seen how combining the storage snapshot feature with Oracle ASM utilities (ASMLIB, kfed, asmcmd) speeds up and simplifies the cloning process for multi-terabyte Oracle databases.


About the Author

Wissem is a senior DBA with over 12 years of experience, specializing in Oracle HA solutions and Big Data. He works for Schneider Electric / APC Global Operations, and has also worked for several international enterprise leaders in industries including banking, telecommunications, Internet, and energy. Wissem is the first Oracle ACE in Spain and holds many certifications, including OCP DBA.


Comments

  • user11991419
    user11991419 Member Posts: 4 Red Ribbon

    For the case you describe the following conditions apply (some of them you state clearly, the rest I'm assuming):

    • you shutdown the database
    • the ASM instance is not used by another instance/database
    • the ASM disk group is not shared with any other instance/database

    Thus, you can be sure that there's no activity on the volumes you snapshot.

    Is it possible to use the same method if any of the above conditions do not apply, when still using ASM?

  • orawiss
    orawiss Member Posts: 2,760 Silver Trophy

    For the case you describe the following conditions apply (some of them you state clearly, the rest I'm assuming):

    • you shutdown the database
    • the ASM instance is not used by another instance/database
    • the ASM disk group is not shared with any other instance/database

    Thus, you can be sure that there's no activity on the volumes you snapshot.

    Is it possible to use the same method if any of the above conditions do not apply, when still using ASM?

    Thanks for your comment.

    1- Yes, the database will be shut down during the cloning process.

    2- No. The ASM instance can be used by another database. Here we only clone the database.

    3- Correct. The ASM diskgroup must not be shared by another instance, as it will be recreated.

  • user11991419
    user11991419 Member Posts: 4 Red Ribbon

    Thanks for your comment.

    1- Yes, the database will be shut down during the cloning process.

    2- No. The ASM instance can be used by another database. Here we only clone the database.

    3- Correct. The ASM diskgroup must not be shared by another instance, as it will be recreated.

    The focus of my question was on a different point though.

    You make sure that the volumes are practically inactive.

    What I'm guessing is that this can not be done on an active volume, because the snapshot would/could be corrupt. Am I right on this?

    If I am right, this means that this method can not be applied for a "hot" clone of the database (without shutting down the instance).

  • Igoroshka
    Igoroshka Member Posts: 1,585 Gold Trophy

    Snapshotting in this way actually means a differential snapshot; i.e., instead of copying all blocks to a new location, references to the original data are used. Much the same technique is used for deduplication. Oracle warns against backing up with snapshots -- Snapshots Are NOT Backups.

    Other approaches are also possible:

    We used deduplication features of the PureStore SUN to clone complete multi-terabyte RAC for dev and qa. That needed RAC shutdown. Nevertheless down time was about 15 min for everything (total size of DBs was about 10TB).

  • Igoroshka
    Igoroshka Member Posts: 1,585 Gold Trophy

    The focus of my question was on a different point though.

    You make sure that the volumes are practically inactive.

    What I'm guessing is that this can not be done on an active volume, because the snapshot would/could be corrupt. Am I right on this?

    If I am right, this means that this method can not be applied for a "hot" clone of the database (without shutting down the instance).


    Yes, as long as the snapshots are made by means of OS/vendor-specific tools that are not aware of the Oracle DB.

    No, if the snapshots are made within the Oracle technological cycle.

  • user11991419
    user11991419 Member Posts: 4 Red Ribbon
    If I am right, this means that this method can not be applied for a "hot" clone of the database (without shutting down the instance).
    

    Yes, as long as the snapshots are made by means of OS/vendor-specific tools that are not aware of the Oracle DB.

    No, if the snapshots are made within the Oracle technological cycle.

    What would be the Oracle tools you're implying?

    The text talks about a database using ASM with datafiles being ASM files (and not ACFS or anything of the like).

  • Snapshotting in this way actually means a differential snapshot; i.e., instead of copying all blocks to a new location, references to the original data are used. Much the same technique is used for deduplication. Oracle warns against backing up with snapshots -- Snapshots Are NOT Backups.

    Other approaches are also possible:

    We used deduplication features of the PureStore SUN to clone complete multi-terabyte RAC for dev and qa. That needed RAC shutdown. Nevertheless down time was about 15 min for everything (total size of DBs was about 10TB).

    "We used deduplication features of the PureStore SUN to clone complete multi-terabyte RAC for dev and qa. That needed RAC shutdown. Nevertheless down time was about 15 min for everything (total size of DBs was about 10TB)."

    I am interested in whether you achieved any storage efficiency with this. The word around deduplication with Oracle Database is that it doesn't give you much efficiency; refer to: Storage Myths: Dedupe for Databases | flashdba

  • Pini Dibask
    Pini Dibask Member Posts: 521 Gold Badge

    Snapshotting in this way actually means a differential snapshot; i.e., instead of copying all blocks to a new location, references to the original data are used. Much the same technique is used for deduplication. Oracle warns against backing up with snapshots -- Snapshots Are NOT Backups.

    Other approaches are also possible:

    We used deduplication features of the PureStore SUN to clone complete multi-terabyte RAC for dev and qa. That needed RAC shutdown. Nevertheless down time was about 15 min for everything (total size of DBs was about 10TB).

    Hello Igoroshka,

    Oracle warns against backing up with snapshots -- Snapshots Are NOT Backups.
    

    I agree that, generally speaking, backing up with third-party snapshots is not recommended for production sites; however, in my previous job we used NetApp crash-consistent Snapshot copies for a couple of years and it worked perfectly. It's also supported by Oracle, as you can read here: http://www.netapp.com/us/media/tr-3858.pdf

  • Hany Ezzat -Oracle
    If I want to do the online method without bringing down the source database, what steps are needed and where should they be included?

  • Kaushik DChapada

  • Chapada
    Chapada Member Posts: 1 Blue Ribbon
    Very interesting article, especially the cloning part! Does this work without ASM too? Where can I find more documentation on this subject?