Hi, for both the primary and the standby database you'll have to check views such as V$ARCHIVED_LOG to see whether the archives have been applied on the standby. If any archive sequences are found missing, query the view V$ARCHIVE_GAP to find out which archives are missing, then manually copy the missing archives over to the standby and apply them there.
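For example, on the standby you could run something like this (a sketch; for RAC, filter by thread# as well):

```sql
-- On the standby: find any gap in received archived logs
SELECT thread#, low_sequence#, high_sequence#
FROM   v$archive_gap;

-- On the standby: check which sequences have been applied
SELECT sequence#, applied
FROM   v$archived_log
ORDER  BY sequence#;
```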
There are a few things that will make your life easier and let you sleep well at night.
As part of a Data Guard setup you can configure Data Guard Broker. It's a tool that ensures both the primary and standby databases are using the correct settings, and it gives you the ability to control Data Guard from one place by issuing simple commands instead of typing long SQL statements (assuming it's correctly configured). Reporting and monitoring tasks become easier as well.
The other thing, very popular and used in the vast majority of setups, concerns the ASYNC shipping method for physical standby databases (as you probably know, Data Guard supports two redo log shipping modes, asynchronous and synchronous; I assume you have a physical standby). In your case of asynchronous redo log shipping there is one step further: use Real-Time Apply instead of shipping archived logs. Instead of waiting for redo logs to become archived (either by a log switch or by setting ARCHIVE_LAG_TARGET) and then shipped, with Real-Time Apply the redo from the log buffer of the primary database is sent directly to the standby redo logs of the standby database. The benefits are obvious: the lag is only a few seconds, i.e. changes are applied on the standby a few seconds after the primary, and you don't have to worry about whether the archived logs have been sent or not.
Here is a link with more information about Real-Time Apply in the Oracle documentation: Apply Services
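Enabling Real-Time Apply on a physical standby is typically done like this (a sketch; it assumes standby redo logs have already been created on the standby):

```sql
-- Stop managed recovery first
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;

-- Restart it with Real-Time Apply (reads the standby redo logs
-- as they are being written, instead of waiting for archived logs)
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE
  USING CURRENT LOGFILE DISCONNECT FROM SESSION;
```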
I have to say that I can't imagine having DG without Data Guard Broker.
Thanks Sunny & Sve,
I am just new to this company, and they had Oracle Partner consultants configure the Data Guard setup. Maybe the ASYNC mode you mentioned was already configured by them?
And we just don't use it? How can I check whether it was already configured?
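One way to check (a sketch; run the first query on the standby and the second on the primary):

```sql
-- On the standby: RECOVERY_MODE shows 'MANAGED REAL TIME APPLY'
-- when Real-Time Apply is enabled
SELECT dest_id, recovery_mode
FROM   v$archive_dest_status
WHERE  recovery_mode <> 'IDLE';

-- On the primary: check how redo is shipped to each valid destination
SELECT dest_id, transmit_mode, archiver
FROM   v$archive_dest
WHERE  status = 'VALID';
```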
Does the output below showing "MRP0 WAIT_FOR_LOG" mean that the archived logs are currently up to date, and hence it is just waiting for the next log?
SELECT PROCESS, CLIENT_PROCESS, SEQUENCE#, STATUS FROM V$MANAGED_STANDBY;
PROCESS CLIENT_P SEQUENCE# STATUS
--------- -------- ---------- ------------
ARCH ARCH 530 CLOSING
ARCH ARCH 531 CLOSING
ARCH ARCH 0 CONNECTED
ARCH ARCH 529 CLOSING
MRP0 N/A 532 WAIT_FOR_LOG
RFS UNKNOWN 0 IDLE
RFS LGWR 532 IDLE
Thanks a lot,
I am just new to this company ...
The first time I saw you claim this was back in Feb '13: "I am just new to this company"
That's already more than 7 months ago... and according to your history over here, your company made a mistake.
Before that, tell me:
1. How much archive log is generated per hour?
2. Is it an OLTP or a DW environment?
3. What is the size of the archived logs?
If it is OLTP,
you can easily find out with the following query:
select applied, sequence# from v$archived_log where applied='NO' order by sequence#;
It will show how many logs were not applied.
1. First, SCP the archived log from the primary.
2. Change the file ownership to match what you have on the primary.
3. Then cancel managed recovery:
alter database recover managed standby database cancel;
Then register the logfile:
alter database register logfile 'path of the missed logfile';
4. In parallel, open the alert log file of the standby database and check it after issuing the above command.
5. Restart recovery:
alter database recover managed standby database disconnect from session;
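The manual gap-resolution steps above can be sketched as one sequence (the archived log path below is a placeholder; substitute the file you copied over):

```sql
-- Step 1-2 happen at the OS level: scp the missing archived log(s)
-- from the primary and fix file ownership for the oracle user.

-- Step 3: stop managed recovery
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;

-- Register the copied file (placeholder path)
ALTER DATABASE REGISTER LOGFILE '/u01/arch/missed_log.arc';

-- Step 5: restart managed recovery, watching the standby alert log
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;
```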
The part that you seem to be missing is that most DBAs make their job easier by correctly managing these situations. You seem to be looking for a job that you can put no work into and still get paid. Good luck with that.
Google has enough links to point you in the right direction. Whether you can figure that out or not is another question.
I thank you all,
I already have the commands you mentioned.
How do I meet the service level that, when disaster strikes, the standby DB is updated up to the latest 30 minutes of archived logs?
Do I need to issue alter system switch logfile (every 30 minutes) at the primary DB so that it forces an update of the standby DB?
I just thought this is what DG was made for: to automate everything and to ease the job of DBAs.
I just thought that once DG was set up, the DBA could sit back and relax with no worries about disaster?
As simple as that? Then I can sit back, relax and not worry about disaster?
Since there is a 30-minute service level: if I am sleeping at night and archive gaps suddenly occur for some unknown reason, like missing or corrupted archived logs,
do I need to be emailed or sent an SMS about the error?
ARCHIVE_LAG_TARGET is a directive only at the Primary, to force archived log generation.
It does not mean that errors / issues like
a. The standby being down
b. Network connectivity failing
c. Either side being out of disk space for archivelogs
d. The standby being slow to apply the archived logs, thus lagging behind the primary
would be avoided
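To cap the log-switch interval at 30 minutes on the primary (so the standby never waits longer than that for the next archived log), ARCHIVE_LAG_TARGET can be set in seconds; a sketch:

```sql
-- On the primary: force a log switch at least every 30 minutes (1800 s)
ALTER SYSTEM SET ARCHIVE_LAG_TARGET = 1800 SCOPE = BOTH;
```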
So : You DO have to setup monitoring and alerts for the standby lag.
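A simple lag check you could wire into a monitoring or alerting script (a sketch; run it on the standby):

```sql
-- How far behind the primary is the standby, in transport and apply terms?
SELECT name, value, time_computed
FROM   v$dataguard_stats
WHERE  name IN ('transport lag', 'apply lag');
```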
Hemant K Chitale