Recommended Redo Log size
Answers
-
Maybe 1 PM to 2 PM.
Working backwards on the assumption that 23 means 11 PM to 12 AM.
T.
Srini
-
Wrong!
But thanks for cooperating.
It's a reasonable guess, of course - clearly the system must have seen a higher average rate of change over that hour (and the following one) - but that doesn't mean the system was under stress at any particular time. The only thing this report can tell you is that the pattern isn't the same as usual - and when the pattern hasn't changed you might have a problem, and when the pattern has changed you might not have a problem.
The point at which the most significant redo waits appeared was from 20:58 to 21:02 when there were 10 log file switches in just over 3 minutes. As Cary Millsap puts it: you can't derive detail from a summary.
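For anyone who wants to see that level of detail, here is a minimal sketch of a minute-level count against V$LOG_HISTORY; the 24-hour window and the threshold are just examples, not figures from the report:

-- Count log switches per minute to expose bursts that an hourly
-- summary hides. V$LOG_HISTORY records the FIRST_TIME of each
-- log sequence.
SELECT TO_CHAR(first_time, 'YYYY-MM-DD HH24:MI') AS minute,
       COUNT(*)                                  AS switches
FROM   v$log_history
WHERE  first_time >= SYSDATE - 1     -- example window: last 24 hours
GROUP  BY TO_CHAR(first_time, 'YYYY-MM-DD HH24:MI')
HAVING COUNT(*) > 1                  -- only minutes with multiple switches
ORDER  BY minute;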
Regards
Jonathan Lewis
-
I knew I would be wrong, Jonathan - I was wincing, awaiting the inevitable rebuke.
-
Hi,
Yes, I used a query against V$ARCHIVED_LOG. I find a minimum of 6 switches/hour throughout the day.
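For reference, a sketch of the kind of hourly count meant here; the DEST_ID = 1 filter is an assumption to avoid double counting when there is more than one archive destination:

-- Count archived-log records per hour as a proxy for log switches.
SELECT TO_CHAR(first_time, 'YYYY-MM-DD HH24') AS hour,
       COUNT(*)                               AS switches
FROM   v$archived_log
WHERE  dest_id = 1                  -- assumed single archive destination
GROUP  BY TO_CHAR(first_time, 'YYYY-MM-DD HH24')
ORDER  BY hour;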
-
Hi,
I said I must because I noticed locks in the DB from time to time, and when I ran a diagnostic I found that requests were waiting on the LGWR process.
My understanding is that they wait for the log switch to complete before writing can continue in the next redo log file.
Med.
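As an illustration only (this is not necessarily the diagnostic used above), a minimal sketch of one way to see whether sessions are currently waiting on log-switch-related events; the event names are standard Oracle wait events from V$EVENT_NAME:

-- List sessions whose current wait is in the log-switch family.
SELECT sid, event, state, seconds_in_wait
FROM   v$session
WHERE  event IN ('log file switch completion',
                 'log file switch (checkpoint incomplete)',
                 'log file switch (archiving needed)',
                 'log file sync');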
-
2995489 wrote: Hi, yes, I used a query against V$ARCHIVED_LOG. I find a minimum of 6 switches/hour throughout the day.
But you still haven't demonstrated that log switches -- regardless of how frequently they occur -- actually cause a problem for your business. The key is NOT the rate at which log switches occur; the key is measured wait events related to log switches. Take some meaningful statspack reports and see how often those wait events appear in the 'top 5' list. And even then you need to apply a bit of reason, as there will always be a 'top 5' list of events.
How much effort is justified to reduce the average response time of an OLTP transaction by 90%? (Really? What if the average response time is already 0.05 seconds?)
How much effort is justified to reduce the run time of a nightly batch job from 5 minutes to 2 minutes?
How much effort is justified to reduce the run time of a nightly batch job from 2 hours to 30 minutes, if no other process is found to ever be waiting on completion of the batch job?
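As a rough first check before pulling statspack snapshots, here is a sketch that sums the cumulative (since instance startup) waits for the log-switch family of events; a delta between two snapshots over a known interval is far more meaningful than these raw totals:

-- Cumulative system-wide waits for log-switch-related events.
-- TIME_WAITED is in centiseconds.
SELECT event,
       total_waits,
       ROUND(time_waited / 100) AS seconds_waited
FROM   v$system_event
WHERE  event LIKE 'log file switch%'
   OR  event = 'log file sync'
ORDER  BY time_waited DESC;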
-
Which types of locks did you notice, how much time did they spend waiting, and which diagnostic did you use to find them and then to decide that the processes were waiting for LGWR?
Regards
Jonathan Lewis