You can reduce the frequency of archive log generation by increasing the redo log size, but the total volume of archive logs generated would be the same.
A general rule of thumb is a log switch every 15-20 minutes.
So during the busiest period you should have a log switch every 15-20 minutes.
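To see how your database compares with that rule of thumb, you can count log switches per hour from V$LOG_HISTORY, which records one row per switch (a sketch; requires SELECT privilege on the V$ views):

```sql
-- Log switches per hour over the last 24 hours.
SELECT TO_CHAR(first_time, 'YYYY-MM-DD HH24') AS hour,
       COUNT(*)                               AS log_switches
FROM   v$log_history
WHERE  first_time > SYSDATE - 1
GROUP  BY TO_CHAR(first_time, 'YYYY-MM-DD HH24')
ORDER  BY 1;
```

Hours with many more switches than the rest of the day will show you your busy periods.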
Yes and no.
First, a general rule of thumb is just that. It needs to be evaluated in the context of a specific situation.
And given that, I don't find it at all unreasonable to accept a log switch rate faster than once every 15-20 minutes during heavy batch processing.
We are facing a huge number of archive log generation alerts on a daily basis. Could anyone please advise me on this?
Ok - but why do you consider that to be a problem?
Archive logs are not generated - they are copies of redo log files.
Redo log files are generated. The number generated depends on the number of log groups, the number of members in each group, the size of the log files, the frequency and timing of log switches, and basically the amount of redo being generated.
What PROBLEM are you trying to solve?
Post supporting evidence that you actually HAVE a problem.
To find out which session is generating the most archive log, please follow the link below:
Then, some steps to minimize archive log generation. A few more recommendations below:
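In case the link goes stale, one common approach (my own sketch, not necessarily what the link describes) is to rank current sessions by the 'redo size' statistic:

```sql
-- Sessions ranked by redo generated since logon.
-- FETCH FIRST requires 12c+; use ROWNUM on older versions.
SELECT s.sid, s.serial#, s.username, s.program,
       st.value AS redo_bytes
FROM   v$sesstat  st
JOIN   v$statname sn ON sn.statistic# = st.statistic#
JOIN   v$session  s  ON s.sid = st.sid
WHERE  sn.name = 'redo size'
ORDER  BY st.value DESC
FETCH FIRST 10 ROWS ONLY;
```

The sessions at the top of the list are your heavy redo producers; from there you can look at what SQL they are running.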
1) Regarding the tables involved in this data activity/loads:
Alter the tables to avoid redo generation using options like NOLOGGING, and use the APPEND hint together with NOLOGGING in the insert queries (NOLOGGING only reduces redo for direct-path operations). It should boost DB performance significantly.
However, with this kind of change, you should do RMAN database backups immediately after your db loads are finished (for disaster recovery in the future, if needed).
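A sketch of point 1) above (the table and source names are made up; note that NOLOGGING is ignored if the database is in FORCE LOGGING mode, e.g. for Data Guard):

```sql
-- Hypothetical staging table; NOLOGGING reduces redo only for
-- direct-path operations such as INSERT /*+ APPEND */.
ALTER TABLE stage_sales NOLOGGING;

INSERT /*+ APPEND */ INTO stage_sales   -- direct-path load, minimal redo
SELECT * FROM ext_sales_feed;
COMMIT;

-- Blocks loaded this way cannot be recovered from archive logs,
-- so back up right after the load, e.g.:
-- RMAN> BACKUP DATABASE;
```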
2) Use GLOBAL TEMPORARY tables to store temp activity instead of regular tables. It should cut down redo generation & improve DB performance.
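For example (hypothetical names; GTT data generates no redo for the rows themselves, though the associated undo still produces some redo before 12c temporary undo):

```sql
-- Session-private scratch table for intermediate load calculations.
CREATE GLOBAL TEMPORARY TABLE tmp_order_calc (
  order_id   NUMBER,
  line_total NUMBER
) ON COMMIT DELETE ROWS;   -- rows vanish at commit
```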
3) Spread your data load & update jobs across different timings, particularly when they work on the same tables. This should avoid queuing and wait events.
4) Again, ensure redo logs/temp files are stored under different mount points from the data files (basically a different controller on the storage array). Otherwise, there would be a lot of contention from the I/O activity, resulting in slowness.
Shivendra Narain Nirala
The volume of archives produced represents the "activity" of the database. Modifying the size of the redo logs (and hence of the archives) will not modify the volume produced (example: instead of producing N archives of size S per day, you might produce N/2 archives of size 2S per day). Try to adapt the size so that you have 3 to 6 log switches per hour.
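Resizing works by adding larger groups and dropping the old ones, since a log file cannot be resized in place (a sketch; group numbers, paths, and the 1G size are examples to adapt):

```sql
-- Add new, larger redo log groups.
ALTER DATABASE ADD LOGFILE GROUP 4 ('/u02/oradata/ORCL/redo04.log') SIZE 1G;
ALTER DATABASE ADD LOGFILE GROUP 5 ('/u02/oradata/ORCL/redo05.log') SIZE 1G;
ALTER DATABASE ADD LOGFILE GROUP 6 ('/u02/oradata/ORCL/redo06.log') SIZE 1G;

-- Switch out of the old groups, then drop each one
-- only once its STATUS in V$LOG is 'INACTIVE'.
ALTER SYSTEM SWITCH LOGFILE;
ALTER DATABASE DROP LOGFILE GROUP 1;
```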
But the question is "do you produce too much redo?" This can be...
A long, long time ago I had to manage a database hosting an application. A new version of the application suddenly started to produce huge amounts of archives... In fact the application was using "queues" to send work from one session to another, and the session having to do the work was looping as fast as possible, checking if "there was something to do". 99.99% of the time there was nothing on the queue, but the session was inserting a row in an audit table: 'on 21-JUL-2014 at 14:15:56: nothing to do'...
The solution was to identify the session producing a lot of redo, then identify the statement, then go to the developer, who said "oops, sorry, I didn't know it would be so fast" and reviewed his code ;-) So for your case: check the "heavy redo producers", who knows, or maybe your database/application just needs this amount of redo. (BTW, what do you call "huge"? What is the size of the redo logs, and what is the log switch frequency? Do you have peak times? ...)
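The sizes and switch frequency being asked about here can be pulled with a couple of queries (a sketch, run as a DBA user):

```sql
-- Current redo log group sizes and states.
SELECT group#, thread#, bytes/1024/1024 AS size_mb, status
FROM   v$log;

-- Log switches per day over the last week.
SELECT TRUNC(first_time) AS day, COUNT(*) AS switches
FROM   v$log_history
WHERE  first_time > SYSDATE - 7
GROUP  BY TRUNC(first_time)
ORDER  BY 1;
```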