This discussion is archived
21 Replies · Latest reply: Jan 22, 2013 2:20 PM by jgarry

Alert.log: Private strand flush not complete

490297 Newbie
HI,

I'm using Oracle 10g R2 on a server running Red Hat ES 4.0, and I received the following message in the alert log: "Private strand flush not complete". Does anybody know what this error means?

The part of log, where I found this error is:

Fri Feb 10 10:30:52 2006
Thread 1 advanced to log sequence 5415
Current log# 8 seq# 5415 mem# 0: /db/oradata/bioprd/redo081.log
Current log# 8 seq# 5415 mem# 1: /u02/oradata/bioprd/redo082.log
Fri Feb 10 10:31:21 2006
Thread 1 cannot allocate new log, sequence 5416
Private strand flush not complete
Current log# 8 seq# 5415 mem# 0: /db/oradata/bioprd/redo081.log
Current log# 8 seq# 5415 mem# 1: /u02/oradata/bioprd/redo082.log
Thread 1 advanced to log sequence 5416
Current log# 13 seq# 5416 mem# 0: /db/oradata/bioprd/redo131.log
Current log# 13 seq# 5416 mem# 1: /u02/oradata/bioprd/redo132.log

Thanks,
Everson

---------------------------
Everson Reude de Piza
DBA Oracle
MSN: eversonpiza@bol.com.br

  • 1. Re: Alert.log: Private strand flush not complete
    488481 Newbie
    How big are your redo logs? What kind of storage are you using for all your redo log groups/members (SAN, local RAID n, etc.)?
  • 2. Re: Alert.log: Private strand flush not complete
    490297 Newbie
    Hi John,

I have 15 redo log groups with two members each, and each member is 300 MB. I believe that is more than enough, or is it not?

My storage has 8 disks with RAID 0+1.

    Thanks,
    Everson
  • 3. Re: Alert.log: Private strand flush not complete
    488481 Newbie
    If you're writing both members of each group to the same disk array, then that may be part of your problem. You need to move one member from each group to a different array, and make sure the logs are not being archived to either of the redo log arrays. With log switches every 30 secs, you may want to consider having larger logs and fewer groups.
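    A sketch of what moving a member could look like; the /u03 mount point for the second array is hypothetical, and the drop should only be done once the group is INACTIVE:

    ```sql
    -- Sketch only: the /u03 path is hypothetical. Add a new member on the
    -- other array, then drop the old one after the group goes INACTIVE.
    ALTER DATABASE ADD LOGFILE MEMBER '/u03/oradata/bioprd/redo013.log' TO GROUP 1;
    ALTER DATABASE DROP LOGFILE MEMBER '/u02/oradata/bioprd/redo012.log';
    ```

    Note the dropped member's file is not deleted from disk by Oracle; it has to be removed at the OS level afterwards.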
  • 4. Re: Alert.log: Private strand flush not complete
    490297 Newbie
    John,

    Thanks for your help.

    Everson
  • 5. Re: Alert.log: Private strand flush not complete
    490297 Newbie
    Hi John,

    Sorry for the late reply, but your suggestion did not solve my problem.
    I have another database with much longer intervals between log switches, and it shows the same problem.


    Tue Feb 14 10:05:55 2006
    Thread 1 advanced to log sequence 1713
    Current log# 4 seq# 1713 mem# 0: /db/oradata/bioteste/redo41.log
    Current log# 4 seq# 1713 mem# 1: /u02/oradata/bioteste/redo42.log
    Tue Feb 14 12:00:19 2006
    Thread 1 cannot allocate new log, sequence 1714
    Private strand flush not complete
    Current log# 4 seq# 1713 mem# 0: /db/oradata/bioteste/redo41.log
    Current log# 4 seq# 1713 mem# 1: /u02/oradata/bioteste/redo42.log
    Thread 1 advanced to log sequence 1714
    Current log# 5 seq# 1714 mem# 0: /db/oradata/bioteste/redo51.log
    Current log# 5 seq# 1714 mem# 1: /u02/oradata/bioteste/redo52.log

    Any idea?

    Thanks,
    Everson
  • 6. Re: Alert.log: Private strand flush not complete
    488481 Newbie
    Everson,

    Personally, I think you may be suffering from slow disk I/O, but it could also be a 10.2.0.1 bug. Please execute the following in SQL*Plus on the original database in question:
    set linesize 100
    col member format a50

    select l.group#, lf.member, l.bytes/1024/1024 mb,  l.status, l.archived
    from v$logfile lf, v$log l
    where l.group# = lf.group#
    order by 1, 2;
    then report the results here. It would help to have your init.ora parameters (just the ones with non-default values) posted here as well.

    John
  • 7. Re: Alert.log: Private strand flush not complete
    490297 Newbie
    SQL> select l.group#, lf.member, l.bytes/1024/1024 mb, l.status, l.archived
    2 from v$logfile lf, v$log l
    3 where l.group# = lf.group#
    4 order by 1, 2;

    GROUP# MEMBER MB STATUS ARCHIVED
    ---------- -------------------------------------------------------------------------------- ---------- ---------------- --------
    1 /db/oradata/bioprd/redo011.log 300 ACTIVE YES
    1 /u02/oradata/bioprd/redo012.log 300 ACTIVE YES
    2 /db/oradata/bioprd/redo021.log 300 INACTIVE YES
    2 /u02/oradata/bioprd/redo022.log 300 INACTIVE YES
    3 /db/oradata/bioprd/redo031.log 300 INACTIVE YES
    3 /u02/oradata/bioprd/redo032.log 300 INACTIVE YES
    4 /db/oradata/bioprd/redo041.log 300 INACTIVE YES
    4 /u02/oradata/bioprd/redo042.log 300 INACTIVE YES
    5 /db/oradata/bioprd/redo051.log 300 INACTIVE YES
    5 /u02/oradata/bioprd/redo052.log 300 INACTIVE YES
    6 /db/oradata/bioprd/redo061.log 300 INACTIVE YES
    6 /u02/oradata/bioprd/redo062.log 300 INACTIVE YES
    7 /db/oradata/bioprd/redo071.log 300 INACTIVE YES
    7 /u02/oradata/bioprd/redo072.log 300 INACTIVE YES
    8 /db/oradata/bioprd/redo081.log 300 INACTIVE YES
    8 /u02/oradata/bioprd/redo082.log 300 INACTIVE YES
    9 /db/oradata/bioprd/redo091.log 300 INACTIVE YES
    9 /u02/oradata/bioprd/redo092.log 300 INACTIVE YES
    10 /db/oradata/bioprd/redo101.log 300 INACTIVE YES
    10 /u02/oradata/bioprd/redo102.log 300 INACTIVE YES

    GROUP# MEMBER MB STATUS ARCHIVED
    ---------- -------------------------------------------------------------------------------- ---------- ---------------- --------
    11 /db/oradata/bioprd/redo111.log 300 CURRENT NO
    11 /u02/oradata/bioprd/redo112.log 300 CURRENT NO
    12 /db/oradata/bioprd/redo121.log 300 INACTIVE YES
    12 /u02/oradata/bioprd/redo122.log 300 INACTIVE YES
    13 /db/oradata/bioprd/redo131.log 300 INACTIVE YES
    13 /u02/oradata/bioprd/redo132.log 300 INACTIVE YES
    14 /db/oradata/bioprd/redo141.log 300 INACTIVE YES
    14 /u02/oradata/bioprd/redo142.log 300 INACTIVE YES
    15 /db/oradata/bioprd/redo151.log 300 ACTIVE YES
    15 /u02/oradata/bioprd/redo152.log 300 ACTIVE YES
    16 /db/oradata/bioprd/redo161.log 300 INACTIVE YES
    16 /u02/oradata/bioprd/redo162.log 300 INACTIVE YES
    17 /db/oradata/bioprd/redo171.log 300 INACTIVE YES
    17 /u02/oradata/bioprd/redo172.log 300 INACTIVE YES
    18 /db/oradata/bioprd/redo181.log 300 INACTIVE YES
    18 /u02/oradata/bioprd/redo182.log 300 INACTIVE YES
    19 /db/oradata/bioprd/redo191.log 300 INACTIVE YES
    19 /u02/oradata/bioprd/redo192.log 300 INACTIVE YES
    20 /db/oradata/bioprd/redo201.log 300 INACTIVE YES
    20 /u02/oradata/bioprd/redo202.log 300 INACTIVE YES

    40 rows selected

    ===========================================

    I found this in the oracle documentation:

    - log file switch (private strand flush incomplete):
    User sessions trying to generate redo, wait on this event when LGWR waits for DBWR
    to complete flushing redo from IMU buffers into the log buffer; when DBWR is
    complete LGWR can then finish writing the current log, and then switch log files.

    I think I need to increase the number of DBWR processes. Do you agree?

    Everson
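    To see whether sessions have actually been waiting on the event the documentation describes, something like this could be run (the event name is as quoted above; v$system_event is a standard dynamic view):

    ```sql
    -- Sketch: system-wide waits on log file switch events, including
    -- 'log file switch (private strand flush incomplete)'
    SELECT event, total_waits, time_waited
    FROM   v$system_event
    WHERE  event LIKE 'log file switch%';
    ```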
  • 8. Re: Alert.log: Private strand flush not complete
    488481 Newbie
    Everson,

    Yes, I would agree that additional DBWRs should help, especially seeing that you have multiple log groups (1 and 15) simultaneously in the ACTIVE state. That's indicative of a slow DBWR. I don't recall what the recommended setting is for a given number of CPUs, so check the docs for that.

    John
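    For reference, a sketch of how the DBWR process count could be checked and raised; db_writer_processes is a static parameter, so the change only takes effect after an instance restart:

    ```sql
    -- Sketch: check the current number of DBWn processes, then raise it.
    -- Static parameter: requires an instance restart to take effect.
    SHOW PARAMETER db_writer_processes
    ALTER SYSTEM SET db_writer_processes = 4 SCOPE = SPFILE;
    ```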
  • 9. Re: Alert.log: Private strand flush not complete
    488481 Newbie
    A couple more things:

    Are /u02 and /db physically different arrays, and not just two mount points on different partitions in the same array?

    With two 300 MB members in each of twenty groups, you're using 12 GB of storage just for redo. Is that an attempt to solve this and/or other problems? Just curious, because I've never heard of anyone using that many groups before. Maybe I've led a sheltered life :).
  • 10. Re: Alert.log: Private strand flush not complete
    490297 Newbie
    I changed my DBWn count from 2 to 4, but it did not solve the problem.

    /db and /u02 are on physically different storage.

    I am starting to believe that this is a bug...
  • 11. Re: Alert.log: Private strand flush not complete
    445907 Newbie
    We are suffering from that problem also:
    GROUP# MEMBER MB STATUS ARC
    ---------- -------------------------------------------------- ---------- ---------------- ---
    1 /lun2/prod/log1a.dbf 100 INACTIVE YES
    1 /lun4/prod/log1b.dbf 100 INACTIVE YES
    2 /lun2/prod/log2a.dbf 100 INACTIVE YES
    2 /lun4/prod/log2b.dbf 100 INACTIVE YES
    3 /lun2/prod/log3a.dbf 100 INACTIVE YES
    3 /lun4/prod/log3b.dbf 100 INACTIVE YES
    4 /lun2/prod/log4a.dbf 100 CURRENT NO
    4 /lun4/prod/log4b.dbf 100 CURRENT NO
    5 /lun2/prod/log5a.dbf 100 INACTIVE YES
    5 /lun4/prod/log5b.dbf 100 INACTIVE YES

    10 rows selected.

    Elapsed: 00:00:00.00
    sys@PROD> show parameters dbw

    NAME TYPE VALUE
    ------------------------------------ ----------- ------------------------------
    dbwr_io_slaves integer 0
    sys@PROD>


    Which DBWn parameter should we try changing?
    Thanks!
  • 12. Re: Alert.log: Private strand flush not complete
    528670 Newbie
    Hi,

    Note 372557.1 has a brief explanation of this message.

    Best Regards,
    Alex
  • 13. Re: Alert.log: Private strand flush not complete
    user12195466 Newbie
    This is just an informational message and can be ignored.


    Best regards,

    madhav
  • 14. Re: Alert.log: Private strand flush not complete
    karan Pro
    I faced this problem in the past, with the same wait event. In my case there was insufficient space in a datafile and DBWR was not able to write to it; when I added space to the datafile, the problem was solved. In your case it may or may not be the same cause.

    Regards
    Karan
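    A sketch of how to rule out a full datafile, using the standard dba_free_space dictionary view:

    ```sql
    -- Sketch: free space per tablespace, to rule out a full datafile
    SELECT tablespace_name, ROUND(SUM(bytes)/1024/1024) free_mb
    FROM   dba_free_space
    GROUP  BY tablespace_name
    ORDER  BY tablespace_name;
    ```

    Note this does not account for datafiles with AUTOEXTEND enabled, which can grow beyond their current allocation.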