The buffer cache hit ratio is a more-or-less worthless metric to use to assess anything at all about your database. A drop in BCHR can mean that you are suffering from less block contention and/or less undo segment header contention... or maybe it could mean that your users have stopped doing flashback queries. Or maybe you are doing lots more full table scans today than you were doing yesterday. It could be bad news. It could be no news. It could actually be good news.
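For anyone following along, the metric under discussion is classically computed from a handful of v$sysstat counters, something like this (a sketch only; the thread's whole point is that the resulting number tells you very little):

```sql
-- Classic buffer cache hit ratio calculation (illustrative only).
-- 'physical reads', 'db block gets' and 'consistent gets' are the
-- standard v$sysstat statistic names.
SELECT ROUND(
         (1 - phy.value / (db.value + con.value)) * 100, 2
       ) AS bchr_pct
FROM   v$sysstat phy, v$sysstat db, v$sysstat con
WHERE  phy.name = 'physical reads'
AND    db.name  = 'db block gets'
AND    con.name = 'consistent gets';
```

Note that any of the scenarios described above (fewer flashback queries, more full table scans) will move this number without telling you whether anything is actually wrong.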
Asserting a relationship between 'performance decrease and thus buffer cache hit ratio decrease' is not factual; it's not realistic; it's not accurate; it's a nasty, long-standing myth by which you've been duped. There is no "and thus" about those two things, because they are completely separate issues.
Stop measuring the BCHR, therefore, and start measuring useful facts about your database (wait events, for example).
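As a starting point for the wait-event approach, something along these lines shows the top time-consuming non-idle events since instance startup (a sketch; the idle-event exclusion list here is abbreviated, and the exact list varies by version):

```sql
-- Top 10 non-idle wait events by time waited (a rough sketch).
SELECT *
FROM  (SELECT event, total_waits, time_waited
       FROM   v$system_event
       WHERE  event NOT IN ('SQL*Net message from client',
                            'rdbms ipc message',
                            'pmon timer',
                            'smon timer')
       ORDER  BY time_waited DESC)
WHERE ROWNUM <= 10;
```

On 10g and later, v$system_event also carries a wait_class column, which makes filtering out idle events much cleaner than a hand-maintained list.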
I invariably set all my production databases to use 4 to 6 100MB log files, multiplexed, with ARCHIVE_LAG_TARGET set to 1800. That way, I get regular log switches every half hour, unless the database is under load. For the databases I've worked with, 100MB is sufficient space to slow the rate of log switching due to load, at its very worst, to once every ten minutes or so.
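For reference, the setup described above boils down to a couple of statements like these (file paths and group numbers are hypothetical examples, not a recommendation for your system):

```sql
-- Force a log switch at least every 30 minutes (value is in seconds).
ALTER SYSTEM SET archive_lag_target = 1800 SCOPE = BOTH;

-- Adding one multiplexed 100MB group; repeat for 4 to 6 groups.
-- The two members sit on separate disks/paths (hypothetical paths).
ALTER DATABASE ADD LOGFILE GROUP 5
  ('/u01/oradata/ORCL/redo05a.log',
   '/u02/oradata/ORCL/redo05b.log') SIZE 100M;
```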
No, increasing the log size cannot make your database slow down.
Increasing your redo log file size would not decrease performance. It will take longer between log file switches, but 10MB is a small size and it should not have a noticeable impact. What is your OS?
Has any other change been implemented in your system (application change/database change)? If you could run Statspack (9i) or AWR (10g), then you could see what is taking your time.
Worst case, you could create redo log files of 5MB, drop the 10MB ones, and see if that helps. I would take this approach
1) only if there were no application/network/OS/database changes made after increasing the redo log file size, and
2) if I'm not able to find any clues from my Statspack/AWR reports.
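Since a redo log group cannot be resized in place, the "create 5MB files, drop the 10MB ones" step would look roughly like this (group numbers and paths are hypothetical):

```sql
-- Add the new, smaller groups first.
ALTER DATABASE ADD LOGFILE GROUP 4 ('/u01/oradata/ORCL/redo04.log') SIZE 5M;
ALTER DATABASE ADD LOGFILE GROUP 5 ('/u01/oradata/ORCL/redo05.log') SIZE 5M;

-- A group can only be dropped once it is INACTIVE, so switch and
-- checkpoint until the old groups are clear of needed redo.
ALTER SYSTEM SWITCH LOGFILE;
ALTER SYSTEM CHECKPOINT;

-- Check v$log.status, then drop the old 10MB groups.
ALTER DATABASE DROP LOGFILE GROUP 1;
```

Remember to remove the dropped groups' files at the OS level afterwards; Oracle does not delete them for you.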
Thanks for the information about performance. My OS is Windows Server 2003, and as it turns out, the performance issue now appears to have been network related, with a fairly substantial finding there. There hasn't been any other change, and I did run a statspack report, which didn't indicate any latch problems. At this stage, I can fairly safely estimate it was the network issue. Thanks for your help. DW
No, increasing the log size cannot make your database slow down. (Howard - many moons ago on Usenet, you made me start questioning any absolute statement I see around Oracle. Your legacy ...)
Reading between the lines of the OP's resolution makes me wonder ...
is it possible that increasing the log file size CAN cause the system to slow down noticeably, if the archive logs are stored on a network (NFS, CIFS) drive?
Thanks for this info; a lot clearer now. I had noticed BCHR drop at times when the server seemed to get a heavy load of some kind (usually for an ad hoc reason), and recover later. Point taken about BCHR observance. While my databases are relatively smaller transactional type (OLTP), I'll still be looking into the ARCHIVE_LAG_TARGET setting. Thanks for the definitive answer.
Well, logfile size can affect two things, really: one, the rate of checkpointing; and two, the rate at which ARCH has to do its thing.
The rate of checkpointing can certainly affect database performance, but if you're slowing down the rate of checkpointing, I can't see how that can make things worse for the database! In the bad old days of 7 and 8.0, without good incremental checkpointing, you could argue that you were just storing up trouble... the database did nothing for ages and then BANG! ...all hell breaks loose as the inevitably huge checkpoint finally happens. But it's not been like that for many years now, and all you're doing by increasing the redo size is to remove the size of the logs from being a checkpoint driver. That lets incremental checkpointing do its thing without being forever interrupted by the big stuff that happens at a log switch. I won't say 'never, ever', but I will say I can't see how it could possibly be a detriment to performance.
As for the ARCH thing... well, it's possible. If ARCH is slow at the best of times, and you suddenly ask it to process 10 times the amount of data at each log switch than before, you can certainly suffer from 'loopback' problems: LGWR switching round all available log groups faster than ARCH can shovel big chunks of redo away to disk. But you'd spot that a mile off, as nasty wait events and alert log error messages by the dozen complaining about not being able to archive a file.
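A couple of quick checks for that symptom (sketches only; both views are standard):

```sql
-- How often are we switching logs? One row per hour.
SELECT TO_CHAR(first_time, 'YYYY-MM-DD HH24') AS hour,
       COUNT(*)                               AS switches
FROM   v$log_history
GROUP  BY TO_CHAR(first_time, 'YYYY-MM-DD HH24')
ORDER  BY 1;

-- Is any group stuck waiting to be archived? In the loopback
-- scenario you'd see groups with ARCHIVED = 'NO' piling up.
SELECT group#, status, archived FROM v$log;
```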
I could get cutesy at this point and say that if the above was indeed an issue, it's an archive subsystem problem that was always there really, maybe just not very visible, and therefore that the slowdown still cannot be said to be a function of the increase in size of the logs per se... but fair enough. It's always difficult to explain the subtleties of these things in a forum post, and without subtleties and nuances, it is indeed vital one doesn't inadvertently rule out the subtle-possible when seeking to lay out the unsubtle-general.