This discussion is archived.
30 Replies · Latest reply: Apr 2, 2013 7:27 PM by Hemant K Chitale
  • 15. Re: Cold Backup/Recovering A Table
    896971 Newbie
    EdStevens wrote:
DataGuard isn't performing a backup at all. In the classic physical standby, Data Guard is performing a continuous recovery, applying the redo as the archivelogs arrive. Much as if you were sitting at an RMAN prompt, responding 'yes' to every prompt for the next archivelog.
    Thanks for the info Ed. As I stated earlier, I'm a new DBA and new to this specific environment. The old DBA departed and a lot of institutional knowledge went with them.

    So, given my situation, is my earlier plan still viable?

1) Copy over relevant files from production to the test server
    2) Backup the original TableX on production
    2.5) MOUNT the copied data files onto the test server
    3) Restore the system, sysaux, undo and the tablespace which contains TableX on test server
    4) Drop all other tablespaces in mount state (or not, if this step isn't necessary?)
    5) Recover the database till the time when the table was dropped or updated
    6) Take an export of TableX from database and import it into production, overwriting the current TableX
    7) Update the imported production TableX with the past week of "good" data from the backup in step 2

    Again, it doesn't appear that we are utilizing RMAN in any way that I can tell.

    Thank you all for the insightful information.
  • 16. Re: Cold Backup/Recovering A Table
    mseberg Guru
    So, given my situation, is my earlier plan still viable?

    No.

    Copy over relevant files from production to the test server
This seems to be the loop we are stuck in.

    What relevant files??

    In order to recreate the table at some prior point you have to have a backup from that time.

    Please give us something to work with.

If I have the same request at my shop, I go to tape for a backup or the daily Data Pump file. I'm sorry, but so far I have been unable to get the relevant files information from you. I'm starting to believe there are none.

If this is true you should use this chance to set up RMAN and maybe a full Data Pump export.


    Best Regards

    mseberg

    Edited by: mseberg on Apr 2, 2013 12:11 PM

    Edited by: mseberg on Apr 2, 2013 12:15 PM
  • 17. Re: Cold Backup/Recovering A Table
    896971 Newbie
    mseberg wrote:
    So, given my situation, is my earlier plan still viable?

    No.

    Copy over relevant files from production to the test server
This seems to be the loop we are stuck in.

    What relevant files??

    In order to recreate the table at some prior point you have to have a backup from that time.

    Please give us something to work with.

If I have the same request at my shop, I go to tape for a backup or the daily Data Pump file. I'm sorry, but so far I have been unable to get the relevant files information from you. I'm starting to believe there are none.

If this is true you should use this chance to set up RMAN and maybe a full Data Pump export.


    Best Regards

    mseberg
    I have mentioned in a past post that there are no ".bkp" files in the fast_recovery_area. I have confirmed now that there are none on the primary server at all. I'm going to see if there are any on the standby servers. If not, then is it a fair statement to say that I am up poop creek without a paddle?

    EDIT: Though if this is true, then what is the point of the archivelogs going back to December 1st?
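If it helps, the controlfile itself records any RMAN backups ever taken against the database; a quick check from SQL*Plus as SYSDBA (a sketch using standard v$ views):

```sql
-- Any backup sets recorded in the controlfile?
SELECT COUNT(*) FROM v$backup_set;

-- History of RMAN backup jobs, if any
SELECT start_time, input_type, status
FROM   v$rman_backup_job_details
ORDER  BY start_time;
```

Note that controlfile records age out after CONTROL_FILE_RECORD_KEEP_TIME days (default 7), so an empty result here is strong but not absolute proof that no backup ever ran.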

    Edited by: 893968 on Apr 2, 2013 10:22 AM
  • 18. Re: Cold Backup/Recovering A Table
    mseberg Guru
    Perhaps.

As I tried to state before, ".bkp" is a format I use for RMAN. You could have something else.

    Can you tell me what OS?

    Can you tell me anything about the table? Why is it bad and how?

Update:
    EDIT: Though if this is true, then what is the point of the archivelogs going back to December 1st?
No idea, but having no backup makes no sense either. You apply archivelogs to a backup; it's a way of bringing a backup forward. You still need something to bring forward.
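In RMAN terms, the "bring forward" described here is an incomplete recovery; a sketch only (the timestamp is a placeholder, and it presupposes a restorable backup):

```
RMAN> run {
  set until time "to_date('01-04-2013 12:00:00','dd-mm-yyyy hh24:mi:ss')";
  restore database;
  recover database;
}
RMAN> alter database open resetlogs;
```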

    Best Regards

    mseberg

    Edited by: mseberg on Apr 2, 2013 12:25 PM
  • 19. Re: Cold Backup/Recovering A Table
    Levi-Pereira Guru
Example: how to do a restore using RMAN, given that you have a valid backup as mentioned by mseberg.


    vi initclone.ora
     
    DB_NAME=prod
    DB_UNIQUE_NAME=clone
    CONTROL_FILES=/fra/stage_area_tspitr/PROD/controlfile/control01.ctl
    REMOTE_LOGIN_PASSWORDFILE=exclusive
    COMPATIBLE =11.2.0
    DB_BLOCK_SIZE=8192
    memory_target=700M
    streams_pool_size=100M
     
    ### creating stage area to deploy database
     
    mkdir -p  /fra/stage_area_tspitr/PROD
    cd /fra/stage_area_tspitr/PROD
    mkdir controlfile datafile onlinelog
    
    $ rman target / 
     
    RMAN> startup nomount
     
    RMAN> restore controlfile from '/fra/PROD/autobackup/2012_12_19/o1_mf_s_802449729_8f3vm2cc_.bkp';
    
    RMAN> startup mount
    
    
    ### Get all datafiles number to be restored (e.g) restoring only tablespace SOE
    RMAN> report schema;
    List of Permanent Datafiles
    ===========================
    File Size(MB) Tablespace           RB segs Datafile Name
    ---- -------- -------------------- ------- ------------------------
    1    700      SYSTEM               ***     +DS8000_DG/prod/datafile/system.349.799325249
    2    600      SYSAUX               ***     +DS8000_DG/prod/datafile/sysaux.348.799325249
    3    200      UNDOTBS1             ***     +DS8000_DG/prod/datafile/undotbs1.347.799325251
    4    5        USERS                ***     +DS8000_DG/prod/datafile/users.335.799325251
    5    100      AUDIT_AUX            ***     +DS8000_DG/prod/datafile/audit_aux.287.799330083
6    100      SOE                  ***     +DS8000_DG/tspitr/datafile/soe.368.802445265
    7    100      XDB                  ***     +DS8000_DG/prod/datafile/xdb.311.801410833
    run {
    set newname for datafile 1 to '/fra/stage_area_tspitr/PROD/datafile/datafiles_%U';
    set newname for datafile 2 to '/fra/stage_area_tspitr/PROD/datafile/datafiles_%U';
    set newname for datafile 3 to '/fra/stage_area_tspitr/PROD/datafile/datafiles_%U';
    set newname for datafile 7 to '/fra/stage_area_tspitr/PROD/datafile/datafiles_%U';
    set newname for datafile 6 to '/fra/stage_area_tspitr/PROD/datafile/datafiles_%U';
     
    restore datafile 1,2,3,7,6 from tag 'TAG20121219T143840' ;
    switch datafile 1;
    switch datafile 2;
    switch datafile 3;
    switch datafile 7;
    switch datafile 6;
    switch tempfile all;
    set until time "to_date('19-12-2012 14:42:00','dd-mm-yyyy hh24:mi:ss')";
    recover database SKIP TABLESPACE  "USERS", "AUDIT_AUX" delete archivelog;
    ## Renaming ONLINELOG
    sql "alter database rename file ''+DS8000_DG/prod/onlinelog/group_1.330.802352807'' to ''/fra/stage_area_tspitr/PROD/onlinelog/group1.rdo''";
    sql "alter database rename file ''+DS8000_DG/prod/onlinelog/group_2.331.802352815'' to ''/fra/stage_area_tspitr/PROD/onlinelog/group2.rdo''";
    sql "alter database rename file ''+DS8000_DG/prod/onlinelog/group_3.321.802352821'' to ''/fra/stage_area_tspitr/PROD/onlinelog/group3.rdo''";
    alter database open resetlogs;
    sql 'alter tablespace  SOE read only';
    }
After restoring the database you can use the export utilities (expdp/impdp) or create a DB link to make the table available on the PROD database.
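That last step might look like this (all names below are placeholders; assumes a DATA_PUMP_DIR directory object exists on both databases):

```shell
# On the clone: export only the recovered table ("SCOTT.TABLEX" is a placeholder)
expdp system/password tables=SCOTT.TABLEX directory=DATA_PUMP_DIR \
  dumpfile=tablex_recovered.dmp logfile=tablex_exp.log

# On PROD: import it under a new name so the current table is not overwritten
impdp system/password tables=SCOTT.TABLEX directory=DATA_PUMP_DIR \
  dumpfile=tablex_recovered.dmp remap_table=SCOTT.TABLEX:TABLEX_RECOVERED
```

Importing under a remapped name lets you reconcile the recovered rows against the current table before replacing anything.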
  • 20. Re: Cold Backup/Recovering A Table
    mseberg Guru
    Levi;

Thank you. The problem is we have not been able to locate a backup, let alone one from the time period needed. All we have to work with is archivelogs.

    Best Regards

    Michael
  • 21. Re: Cold Backup/Recovering A Table
    896971 Newbie
    mseberg wrote:
    Can you tell me what OS?
    Red Hat Enterprise Linux 5.8
    Can you tell me anything about the table? Why is it bad and how?
The developer, a month ago, put in a code change which, when a user made a change such as editing a single text field, caused the entire table to be updated with the field values of that single record. This went on for 3 weeks because it really isn't noticeable until one tries to run reports against that table (which some folks like to do monthly).

The good news is that it appears as if there are backups occurring on one of the standby databases. They are weirdly sporadic (not daily or anything), and they violate best practices (SYS is the RECOVERY_CATALOG_OWNER and there is no RMAN user in the db), but...there are .bkp files. I am having issues trying to access the catalog at the RMAN command prompt because the owner is SYS; it tells me that the connection should be as SYSDBA or SYSOPER.
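For reference, RMAN takes the target and catalog connections separately, and the catalog connect string cannot include AS SYSDBA, which is likely why a SYS-owned catalog triggers that error. A sketch (the owner, password, and service name are placeholders):

```
# OS-authenticated connection to the target, plus a normal (non-SYSDBA)
# connection to the catalog schema; "catdb" is a placeholder service name.
rman target / catalog rman_owner/password@catdb
```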

    When it rains it pours...
  • 22. Re: Cold Backup/Recovering A Table
    EdStevens Guru
    893968 wrote:
    I have mentioned in a past post that there are no ".bkp" files in the fast_recovery_area. I have confirmed now that there are none on the primary server at all. I'm going to see if there are any on the standby servers. If not, then is it a fair statement to say that I am up poop creek without a paddle?

    EDIT: Though if this is true, then what is the point of the archivelogs going back to December 1st?

    Edited by: 893968 on Apr 2, 2013 10:22 AM
    This might help get some of the information mseberg has been trying to get:
    Open a command prompt session on your still unnamed OS ... on the database server
    Then issue one of these commands:
    for Windows:
    set ORACLE_SID=????
    For *nix
    export ORACLE_SID=????
In either case, replace the ???? with the name of your Oracle instance. Case sensitivity is important.

    Next, in the same session:
    rman target /
    list backup;
    exit
Copy the entire session -- full commands and full responses -- and paste it all into your next post. Please precede the pasted code with a line containing \. Likewise, follow the pasted code with a single line containing \. So as you write your post it looks like this:

    \
    code copied from command session goes here
    \
  • 23. Re: Cold Backup/Recovering A Table
    896971 Newbie
    EdStevens wrote:
    Next, in the same session:
    rman target /
    list backup;
    exit
Unfortunately the database is part of an internal system that is disconnected from the internet, so I can't just copy and paste.

    There are roughly 3 dozen backupset *.bkp files going back to 3/25.
There are 8 autobackup *.bkp files going back to 3/26.

    It still appears that this is a no-go situation as I would need backups from earlier.

    I do appreciate the input folks.
  • 24. Re: Cold Backup/Recovering A Table
    Levi-Pereira Guru
Does this output not help?

    On PROD database:
    RMAN>  restore database validate until time "TO_DATE(<date_before_table_get_corrupted>,<mask>)" preview;
    That command will list backup piece needed to perform restore.
  • 25. Re: Cold Backup/Recovering A Table
    896971 Newbie
    Levi Pereira wrote:
Does this output not help?

    On PROD database:
    RMAN>  restore database validate until time "TO_DATE(<date_before_table_get_corrupted>,<mask>)" preview;
    That command will list backup piece needed to perform restore.
Levi, there appear to be no RMAN backups occurring on production, so that gives me the RMAN-06023 error "no backup or copy of datafile 1 found to restore" several times.

    When I try it on the standby which does have some backup files, it gives me the same errors, presumably because the backup files only go back to 3/25 and my table corrupted date is prior to that.
  • 26. Re: Cold Backup/Recovering A Table
    Levi-Pereira Guru
Levi, there appear to be no RMAN backups occurring on production, so that gives me the RMAN-06023 error "no backup or copy of datafile 1 found to restore" several times.
    When I try it on the standby which does have some backup files, it gives me the same errors, presumably because the backup files only go back to 3/25 and my table corrupted date is prior to that.
OK... Using RMAN to find this info is ruled out, since the backup needed is not cataloged.

Do you have backup (backup piece) files from before 03/25/2013 somewhere?
  • 27. Re: Cold Backup/Recovering A Table
    896971 Newbie
    Levi Pereira wrote:
Levi, there appear to be no RMAN backups occurring on production, so that gives me the RMAN-06023 error "no backup or copy of datafile 1 found to restore" several times.
    When I try it on the standby which does have some backup files, it gives me the same errors, presumably because the backup files only go back to 3/25 and my table corrupted date is prior to that.
OK... Using RMAN to find this info is ruled out, since the backup needed is not cataloged.

Do you have backup (backup piece) files from before 03/25/2013 somewhere?
    No.

    Do you know what the purpose of the files in the archivelog directory is? Is there any method for extracting past data from them?

    Thanks
  • 28. Re: Cold Backup/Recovering A Table
    Levi-Pereira Guru
OK... Using RMAN to find this info is ruled out, since the backup needed is not cataloged.

Do you have backup (backup piece) files from before 03/25/2013 somewhere?
    No.

    Do you know what the purpose of the files in the archivelog directory is? Is there any method for extracting past data from them?
The archivelog directory's purpose (in your case) is to be a staging area for redo from PROD to be applied on the STANDBY and then backed up. (It's a guess, since I don't have a view of your env.)
Archivelogs are applied on top of a full backup; without a full backup, archivelogs are garbage.


This is not good :( .. No Backup = No Restore = No Data. RMAN is not an option anymore if there is no backup available.


If you have archived logs from the period when the problem occurred, you can try LogMiner, but it's not guaranteed that you'll get your data back.


    http://docs.oracle.com/cd/E11882_01/server.112/e22490/logminer.htm
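A minimal LogMiner session, for reference; the archivelog name and table name below are placeholders, and DICT_FROM_ONLINE_CATALOG assumes the database is open (it resolves object names from the current data dictionary):

```sql
-- Register an archived log; repeat ADD_LOGFILE for each log in the window
EXEC DBMS_LOGMNR.ADD_LOGFILE( -
  LOGFILENAME => '/arch/1_1234_791234567.dbf', OPTIONS => DBMS_LOGMNR.NEW);

EXEC DBMS_LOGMNR.START_LOGMNR( -
  OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);

-- The redo and its inverse, row by row, for the table in question
SELECT scn, timestamp, operation, sql_redo, sql_undo
FROM   v$logmnr_contents
WHERE  seg_name = 'TABLEX';

EXEC DBMS_LOGMNR.END_LOGMNR;
```

The SQL_UNDO column is what would reverse each mined change, which is why LogMiner is sometimes usable for point repairs even without a backup.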


    Later (mseberg light):

Do you have any DUMPFILE (exp/expdp) of that SCHEMA from before 03/25/2013?

    Edited by: Levi Pereira on Apr 2, 2013 5:19 PM
  • 29. Re: Cold Backup/Recovering A Table
    mseberg Guru
    Hello;

For future use, since we both use Linux:
    #!/bin/bash
    ####################################################################
    #
    # Data Pump Full script
    #
    ####################################################################
    
    if [ "$1" ]
    then DBNAME=$1
    else
echo "$(basename $0) : Syntax error : use . expdp_full.sh <DBNAME> "
    exit 1
    fi
    
    export DMPDIR=/tmp
    
    #
    # Set the Environmental variable for the instance
    #
    . /u01/app/oracle/dba_tool/env/${DBNAME}.env
    #
    #
    
    cd $DMPDIR
    
    # check for existing logfile and
    # remove if found.
    if [ -s ${DMPDIR}/${LOGFILE} ] ; then
    /bin/rm ${DMPDIR}/${LOGFILE}
    fi
    
    # check for existing dump file and
    # remove if found.
    if [ -s ${DMPDIR}/${DUMPFILE} ] ; then
    /bin/rm ${DMPDIR}/${DUMPFILE}
    fi
    
    # change to the working directory
    cd ${DMPDIR}
    
    # call expdp and use the full path to it.
    $ORACLE_HOME/bin/expdp parfile=expdp_full.par DUMPFILE=${DUMPFILE} LOGFILE=${LOGFILE} DIRECTORY=DMPDIR
    
    # once the expdp is complete gzip it to save space.
    /bin/gzip -S .`date +%m%d`.Z ${DUMPFILE}
    
# remove the original dump file if gzip left it behind.
    if [ -s ${DMPDIR}/${DUMPFILE} ] ; then
    /bin/rm ${DMPDIR}/${DUMPFILE}
    fi
    
    # rename the log file to preserve it.
    if [ -s ${DMPDIR}/${LOGFILE} ] ; then
    
    /bin/mv ${DMPDIR}/${LOGFILE} ${DMPDIR}/${LOGFILE}.`date +%m%d`
    fi
    
    # remove old logs and exports
    # 
    
    find ${DMPDIR}/ -name "*.Z" -mmin +1440 -exec \rm {} \;
    find ${DMPDIR}/ -name "*.log*" -mmin +1440 -exec \rm {} \;
    
    exit
    expdp_full.par
    USERID="/ as sysdba"
    FULL=Y
Environment file example (you will need more) (/u01/app/oracle/dba_tool/env/PRIMARY.env):
    export LOGFILE=PRIMARY_expdp_full.log
    
    export DUMPFILE=PRIMARY_expdp_full.dmp
    
    export DMPDIR=/u01/oradata/PRIMARY_export
I put the par file in the DMPDIR and run this daily. Then it goes to tape. It won't help today, but...


    mseberg
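The retention step at the end of the script can be exercised in isolation; a sketch in a scratch directory (all file names below are made up for the demonstration, and `touch -d` assumes GNU coreutils):

```shell
# Exercise the find-based retention in a throwaway directory.
DMPDIR=$(mktemp -d)

# Two "old" files (backdated two days) and one fresh compressed dump
touch -d '2 days ago' "${DMPDIR}/PRIMARY_expdp_full.dmp.0331.Z"
touch -d '2 days ago' "${DMPDIR}/PRIMARY_expdp_full.log.0331"
touch "${DMPDIR}/PRIMARY_expdp_full.dmp.0402.Z"

# The same cleanup the script runs: remove compressed dumps and logs
# older than 1440 minutes (one day)
find "${DMPDIR}/" -name "*.Z" -mmin +1440 -exec rm {} \;
find "${DMPDIR}/" -name "*.log*" -mmin +1440 -exec rm {} \;

ls "${DMPDIR}"
```

Only the fresh dump should survive the two find passes.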
