Where is the issue? Are the logs transported but not applied or not transported?
Does the delay ever change or is it always exactly 30 minutes?
Is there an apply delay configured?
What is the nature of the network?
How many km between sites?
How much redo is being generated?
What is the bandwidth?
You've given us nothing to work with to help you. Sort of like saying "my watch is 30 minutes behind ... tell me why?"
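For the redo-volume question, one common sketch is to sum archived log sizes per hour from v$archived_log on the primary (assuming the view still covers the period of interest):

```sql
-- Approximate redo generated per hour, from archived log sizes (run on the primary)
SELECT TRUNC(completion_time, 'HH') AS hour,
       ROUND(SUM(blocks * block_size) / 1024 / 1024) AS mb_redo
FROM   v$archived_log
WHERE  completion_time > SYSDATE - 1
GROUP  BY TRUNC(completion_time, 'HH')
ORDER  BY 1;
```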
You have given very little information about your problem.
If archives are not shipped because of a network glitch and the archive gap is large, then you should roll the standby forward using RMAN incremental backups.
If an apply delay is configured, then there is no issue at all.
Please provide complete information so that someone can help you.
Keep monitoring the delay. Is it fluctuating? Are your redo volumes fluctuating by the hour? Is the network bandwidth sufficient for the redo volume?
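On 11g you can watch the lag directly from the standby with v$dataguard_stats (a sketch; exact columns vary slightly by version):

```sql
-- Run on the standby: current apply lag and transport lag
SELECT name, value, time_computed
FROM   v$dataguard_stats
WHERE  name IN ('apply lag', 'transport lag');
```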
Hemant K Chitale
Before anyone can give a solution to your problem:
- You need to clearly state what the problem is.
- What error are you getting in the alert log on the standby?
- Also, please provide the inputs that damorgan/Hemant asked for.
There was no error in the standby DB alert log, but ORA-16810 was noticed in the standby section of Grid Control.
The standby DBs showed the '30 minutes behind the Primary' message on both the test and prod databases.
The problem was solved somehow, but honestly speaking I am still not sure how it got solved. I tried different things in the test environment but used only one command in prod, and that worked.
In Test environment:
- In Grid Control (11g), I reset DG and then ran Verify Configuration from the Data Guard option. The DG status was not marked with the green tick mark and 'Normal'.
- Verify Configuration reported ORA-16810 (and it is still there even though the problem is resolved).
- When resetting the DG didn't solve it, I changed the protection mode from Maximum Protection to Maximum Performance to try to recover things manually.
- The current log was 1870 and the last applied was 1860 (usually it is 1 behind or the same).
- I checked LIST ARCHIVELOG ALL with RMAN connected to the target and LIST BACKUP OF ARCHIVELOG ALL with the RMAN catalog; the archivelog files were there, if I remember correctly.
Note: the command that I think did the magic was RESTORE ARCHIVELOG SEQUENCE 1871, run from the Primary with the RMAN catalog. It seems the standby somehow knew that its missing archivelog had been restored on the Primary. Is that a great feature of the 11gR2 database?
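For reference, a hedged sketch of that kind of restore (the sequence number 1871 comes from the post above; a catalog connection is assumed):

```sql
-- Connected on the primary with the RMAN catalog
RMAN> RESTORE ARCHIVELOG SEQUENCE 1871;
-- Or a range, if several sequences are missing:
RMAN> RESTORE ARCHIVELOG FROM SEQUENCE 1871 UNTIL SEQUENCE 1875;
```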
I would be thankful if you could explain how, from the standby, I can find the current log and the last log applied (perhaps with RMAN), whether any archive logs are missing, and, if a couple of them are missing, how to resolve that.
Edited by: John-MK on Apr 25, 2013 11:13 AM
The alert log on the standby will tell you which logs have been applied.
Are you using real time apply or redo apply? With real time apply you don't have to wait for the log to be archived before it is applied.
You also have the v$archived_log view, which has an APPLIED column that tells you which logs have been applied.
When a redo log on the primary is archived it should automatically be shipped to your standby. How did the logs go missing? If the logs have been removed from the standby before they are applied then the standby will try and fetch the logs again from primary (FAL_SERVER and FAL_CLIENT parameters). If they have been "removed" from the primary then you will need to restore the archive logs from your backups. You can check to see if there is a gap by running the following from your standby.
select sequence#, applied, completion_time from v$archived_log;
select * from v$archive_gap;
Are you removing your logs manually or using RMAN to remove them? You should be using RMAN, with the following deletion policy configured:
configure archivelog deletion policy to applied on standby;
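To answer the earlier question about the current and last-applied log, a common sketch on the standby (assuming a single-instance primary, so one redo thread):

```sql
-- Highest sequence received vs. highest sequence applied on the standby
SELECT MAX(sequence#) AS last_received,
       MAX(CASE WHEN applied = 'YES' THEN sequence# END) AS last_applied
FROM   v$archived_log;
```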
Are you really using Maximum Protection? Do you have multiple standbys? In Maximum Protection, if a transaction's redo cannot be written to at least one standby, the primary will shut down.
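A quick way to confirm which protection mode is actually in effect, on either database:

```sql
SELECT protection_mode, protection_level FROM v$database;
```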