13 Replies Latest reply: Feb 6, 2014 1:06 AM by CKPT

    Archive log size suggestion for DG Standby database

      Hi all,

       

      11.2.0.3

      Aix 6

       

      Our current prod archivelog/redolog size is 250 MB. Based on historical log generation and the current activity of our database, we have:

       

      DATABASE: PROD1 - Archive logs generated per hour

      Day      |  0  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 | Total
      ---------+-------------------------------------------------------------------------+------
      15-Jan   |  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  2  7  1 |    10
      16-Jan   | 29 37  4  1  0  1  3  3  6  6  6  6  5  6  5  6  6  7  6  4  3  2  4  1 |   157
      17-Jan   | 33 15  0  0  1  1  1  3  4  5  5  6  5  5  6  7  7  8  6  6  3  2  5  0 |   134
      18-Jan   | 34  2  0  0  1  0  2  3  4  5  6  5  5  5  5  4  6  5  4  4  2  1  4  0 |   107
      19-Jan   | 29 13  0  0  0  1  1  1  2  3  3  4  3  3  4  3  4  4  3  3  2  1  3  0 |    90
      20-Jan   | 23  4  0  0  0  0  1  2  3  4  5  5  5  6  6  7  8  8  7  6  4  2  4  0 |   110
      21-Jan   | 25 20  0  0  1  1  1  3  4  5  5  5  5  4  5  5  6  7  5  4  3  1  4  0 |   119
      22-Jan   | 29 13  0  0  0  1  1  3  3  5  5  5  5  5  5  6  7  8  6  4  3  2  3  1 |   120
      23-Jan   | 30  2  0  0  0  1  1  2  4  4  5  5  4  4  5  5  6  7  5  4  2  1  3  1 |   101
      24-Jan   | 27  2  0  0  0  1  2  2  4  5  6  5  6  6  7  8  9 11  9  8  5  2  5  1 |   131
      25-Jan   | 26 19  1  1  0  2  3  3  5 10 17  9  5  5  5  5  6  6  5  4  3  1  4  1 |   146
      26-Jan   | 28 17  0  0  0  1  1  1  3  3  4  4  3  4  3  4  4  4  4  3  2  1  3  0 |    97
      27-Jan   | 28  2  0  0  0  0  1  2  2  4  5  4  5  4  5  5  6  7  5  4  2  2  3  1 |    97
      28-Jan   | 26  2  0  0  0  1  1  2  3  5  4  5  4  4  5  5  5  7  5  3  3  1  4  0 |    95
      29-Jan   | 26  2  0  0  0  1  1  2  4  4  5  5  4  5  5  6  6  7  6  4  3  2  4  0 |   102
      30-Jan   | 33  8  0  0  1  1  2  4  5  8  7  8  8  8  9 10 11 13 12  9  6  4  7  1 |   175
      31-Jan   | 30 25 19  0  1  1  3  4  5  7  8  7  6  6  7  6  7  7  6  5  3  2  5  1 |   171
      1-Feb    | 26 26  0  0  0  1  2  2  3  5  6  5  5  4  5  5  5  6  5  3  2  2  3  1 |   122
      2-Feb    | 28 11  0  0  0  1  1  1  2  3  4  4  3  3  4  4  4  4  4  4  2  1  3  0 |    91
      3-Feb    | 21  2  0  0  0  1  0  2  4  5  5  5  5  5  6  6  6  6  5  4  3  2  3  1 |    97
      4-Feb    | 27 14  0  0  0  1  1  2  4  5  5  5  0  0  0  0  0  0  0  0  0  0  0  0 |    64
      Average: 26.6 11.2 1.1 0.1 0.2 0.9 1.4 2.2 3.5 4.8 5.5 5.1 4.3 4.4 4.9 5.1 5.7 6.3 5.1 4.1 2.7 1.6 3.9 0.5 | 111.2

       

       

      Do I need to increase our redo log size from 250 MB to 500 MB? Which is better for keeping a DG standby database up to date: smaller redo logs or larger ones?

      I understand the database performs better with larger redo logs?

       

      Your advice is highly appreciated.

       

      Thanks a lot,

      pK

        • 1. Re: Archive log size suggestion for DG Standby database
          CKPT

           Do I need to increase our redo log size from 250 MB to 500 MB? Which is better for keeping a DG standby database up to date: smaller redo logs or larger ones?

           I understand the database performs better with larger redo logs?

           

          What is the version and protection mode you are in?

           Oracle suggests having 4-5 log switches per hour, and per your log history that rate was exceeded only 2-3 times. You should check whether any jobs were running at those times that caused the extra log switches.

           BTW, if your logs are 250 MB and you see 10 log switches an hour, then increasing them to 500 MB should drop the switch rate to about 5 per hour (a rough calculation).
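           To see how your own history compares with that guideline, you can count switches per hour from v$log_history on the primary. A minimal sketch (adjust the 7-day window to your log history retention):

           ```sql
           -- Redo log switches per hour over the last 7 days,
           -- from v$log_history on the primary database
           SELECT TO_CHAR(first_time, 'YYYY-MM-DD HH24') AS hour,
                  COUNT(*)                               AS switches
           FROM   v$log_history
           WHERE  first_time > SYSDATE - 7
           GROUP  BY TO_CHAR(first_time, 'YYYY-MM-DD HH24')
           ORDER  BY 1;
           ```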

           

           Moreover, this sizing decision is not made on the standby: concentrate on performance on the primary database, and then create standby redo logs on the standby with the same size as the online redo logs you created on the primary.

           

           If you are using ARCH transport, shipping huge archive logs can also cause issues over the network, so you have to maintain a balance.

          HTH.

          • 2. Re: Archive log size suggestion for DG Standby database

             Thanks Rp... oops, sorry, I mean CKPT. You two look like twins to me.

             

             *What is the version and protection mode you are in?*  Which v$ view can I query for this info?


             *Oracle suggests having 4-5 log switches per hour.*  Our busiest time is 12am-1am, with an average of 27 logs per hour. Do I size the logs for this peak hour only, or for the 24-hour average, since the other hours are low? If I size for the peak hour, then to get down to 5 switches I would need to increase the redo size to 1.2 GB. Is that viable?


             *Moreover, archive log sizing is not decided on the standby.* Yes, I know. I just made a typo.


             *If you are using ARCH transport, shipping huge archive logs can also cause issues over the network, so you have to maintain a balance.* This balance thing is the most nebulous part, a grey area; you cannot really measure which is which. Can you suggest what a good balance would be for our database, based on the info above?



            I thank you all

            • 3. Re: Archive log size suggestion for DG Standby database
              CKPT

              Thanks Rp

               

               *What is the version and protection mode you are in?*  Which v$ view can I query for this info?


              Rp?

               You can check it with "select protection_mode from v$database"
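               For completeness, both pieces of information can be pulled in one session using the standard v$ views (a minimal sketch):

               ```sql
               -- Configured protection mode and the level actually in effect
               SELECT protection_mode, protection_level FROM v$database;

               -- Database version
               SELECT banner FROM v$version;
               ```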



               *Oracle suggests having 4-5 log switches per hour.*  Our busiest time is 12am-1am, with an average of 27 logs per hour. Do I size the logs for this peak hour only, or for the 24-hour average, since the other hours are low? If I size for the peak hour, then to get down to 5 switches I would need to increase the redo size to 1.2 GB. Is that viable?


               OK, you can certainly consider increasing it, but is it really causing a performance issue or hitting a bottleneck? Also make sure you are using real-time apply (standby redo logs + RTA) to avoid significant data loss, so that every committed transaction is sent to the standby.
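               If you do decide to go to 500 MB: online redo logs cannot be resized in place, so the usual approach is to add new, larger groups and drop the old ones once they are INACTIVE. A rough sketch; the group numbers and file paths here are examples only, adjust them to your layout (and resize the standby redo logs on the standby to match):

               ```sql
               -- Add new 500 MB online redo log groups
               ALTER DATABASE ADD LOGFILE GROUP 4 ('/u01/oradata/PROD1/redo04.log') SIZE 500M;
               ALTER DATABASE ADD LOGFILE GROUP 5 ('/u01/oradata/PROD1/redo05.log') SIZE 500M;
               ALTER DATABASE ADD LOGFILE GROUP 6 ('/u01/oradata/PROD1/redo06.log') SIZE 500M;

               ALTER SYSTEM SWITCH LOGFILE;          -- cycle until the old groups are INACTIVE
               ALTER SYSTEM CHECKPOINT;

               ALTER DATABASE DROP LOGFILE GROUP 1;  -- repeat for the remaining old groups
               ```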

               


               *If you are using ARCH transport, shipping huge archive logs can also cause issues over the network, so you have to maintain a balance.* This balance thing is the most nebulous part, a grey area; you cannot really measure which is which. Can you suggest what a good balance would be for our database, based on the info above?

               You can calculate the required bandwidth with the formula below; note that it covers only the speed needed to ship redo to the standby:

               (Required bandwidth (Mbps) = ((redo rate in bytes/sec / 0.7) * 8) / 1,000,000)

              • 4. Re: Archive log size suggestion for DG Standby database

                SQL> select protection_mode from v$database;

                 

                 

                PROTECTION_MODE

                --------------------

                MAXIMUM PERFORMANCE

                 

                 OK, you can certainly consider increasing it, but is it really causing a performance issue or hitting a bottleneck? Also make sure you are using real-time apply (standby redo logs + RTA) to avoid significant data loss, so that every committed transaction is sent to the standby.

                 

                 What if we have a bottleneck in bandwidth?

                 

                 We see a max of 34 logs per hour. I computed 34 * 250 MB * 8 / 3600 s = 18.9 Mbps (this is in bits, not bytes, hence the multiplication by 8).

                 Is this computation correct?

                 

                Thanks

                • 5. Re: Archive log size suggestion for DG Standby database
                  CKPT

                   You said it was maximum protection, but in fact it is in maximum performance mode.

                   You quote 34 logs per hour, but I asked you to use the calculation based on redo bytes per second: (Required bandwidth (Mbps) = ((redo rate in bytes/sec / 0.7) * 8) / 1,000,000)

                   

                   You can get the redo rate in bytes per second on the primary; use an AWR report to get it.
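                   If you want a quick number without pulling a full AWR report, the same rate is also exposed (in bytes per second, over the 60-second metric window) through v$sysmetric. A minimal sketch, assuming the standard metric name:

                   ```sql
                   -- Current redo generation rate, bytes per second
                   -- (group_id = 2 is the 60-second system metric group)
                   SELECT metric_name, value
                   FROM   v$sysmetric
                   WHERE  metric_name = 'Redo Generated Per Sec'
                   AND    group_id = 2;
                   ```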

                   If bandwidth is too low, then shipping many huge archive logs can cause issues with the network, and so on.

                   

                  Is it clear ?

                  • 6. Re: Archive log size suggestion for DG Standby database

                    Thanks Ck,

                     

                     How do I get the redo rate per second? Is it based on the max of 34 logs per hour?

                     

                     If bandwidth is too low, then shipping many huge archive logs can cause issues with the network, and so on.


                     That is why I am gathering this info, so that I can suggest to management what bandwidth we should upgrade to.


                     Wait, I will run an AWR report. What time range should I use? BRB.

                    • 7. Re: Archive log size suggestion for DG Standby database
                      CKPT

                      Thanks Ck,

                       

                       How do I get the redo rate per second? Is it based on the max of 34 logs per hour?

                       

                       From the AWR report: in the Load Profile section you can see the redo size per second; based on that, calculate with the formula (Required bandwidth (Mbps) = ((redo rate in bytes/sec / 0.7) * 8) / 1,000,000).

                       

                       Load Profile

                                          Per Second    Per Transaction    Per Exec    Per Call
                       DB Time(s):               5.9               15.2        0.01        0.01
                       DB CPU(s):                1.6                4.2        0.00        0.00
                       Redo size:        5,676,656.9       14,696,946.5
                      • 8. Re: Archive log size suggestion for DG Standby database

                        Thanks Ck,

                         

                         

                         This is my AWR load profile. So how do I compute my bandwidth? I already submitted a 26 Mbps requirement to my boss.

                        Load Profile

                         

                                             Per Second    Per Transaction    Per Exec    Per Call
                         DB Time(s):                1.7                0.3        0.08        0.01
                         DB CPU(s):                 0.1                0.0        0.00        0.00
                         Redo size:         2,108,239.3          360,434.5
                         Logical reads:        17,364.8            2,968.8
                         Block changes:         8,947.9            1,529.8
                         Physical reads:        2,000.2              342.0
                         Physical writes:         714.2              122.1
                         User calls:              304.6               52.1
                         Parses:                    1.2                0.2
                         Hard parses:               0.4                0.1
                         W/A MB processed:          0.7                0.1
                         Logons:                    0.1                0.0
                         Executes:                 20.9                3.6
                         Rollbacks:                 0.1                0.0
                         Transactions:              5.9
                        • 9. Re: Archive log size suggestion for DG Standby database
                          CKPT

                           The redo rate differs from second to second and from database to database; it depends entirely on your workload, and you must gather the data from peak hours.

                           Sometimes redo generation is high, sometimes much lower. If this AWR report is from a peak hour, then with a redo size of 2,108,239.3 bytes per second the required bandwidth works out to about 24 Mbps. This figure is only for Data Guard, and there is no rule that you must have exactly this much network; it is the speed required to avoid disconnects or latency between the two sites.
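                           Plugging your peak-hour redo rate into the formula, for example straight in SQL*Plus:

                           ```sql
                           -- Required bandwidth (Mbps) = ((redo bytes/sec / 0.7) * 8) / 1,000,000
                           SELECT ROUND(((2108239.3 / 0.7) * 8) / 1000000, 1) AS required_mbps
                           FROM   dual;
                           -- gives 24.1
                           ```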

                           Whatever the redo size, you can use that formula; it is of course correct and provided by Oracle, not by me.

                          • 10. Re: Archive log size suggestion for DG Standby database

                             Thanks CK,

                             

                             So is it OK that I submitted to my boss the computed requirement of 18 Mbps? I will not get imprisoned for my action?

                            • 11. Re: Archive log size suggestion for DG Standby database
                              CKPT

                               Before that, it is always good to check what bandwidth you actually have, not just what you need.

                              • 12. Re: Archive log size suggestion for DG Standby database

                                What do you mean by that?

                                • 13. Re: Archive log size suggestion for DG Standby database
                                  CKPT

                                   You can also contact your network admins to find out the present network bandwidth between the two sites.

                                   I feel the question is drifting: the actual question was about log switches per hour, and now we are talking about bandwidth and so on, which matters most in maximum protection/availability.

                                    

                                   What I want to say is:

                                   1) If you are using maximum performance and you have reasonably good average speed, there is no need to worry at all.

                                   2) In maximum protection or availability mode, you must concentrate on bandwidth to ensure zero data loss.

                                    

                                   But you are using maximum performance, and you have not reported that your standby is always lagging or anything similar. Moreover, you should first concentrate on primary performance: what is really causing the waits, is it waiting for archiving?

                                   Then you can look at the standby database.