
Oracle Database Discussions


Archiver hung log error

user5856470 · Sep 12 2008 — edited Sep 15 2008
I am getting this error through Oracle Enterprise Manager for my production RAC database. It has been working fine for ages and has only just started doing this.

The archiving is hung. Can someone please explain what I need to do to resolve it, or how to tell if it has already resolved itself? I'll only know it's resolved when the next archive log is written, in a day's time; otherwise it will complain again when it tries to write the archive.

SEVERITY
critical

CATEGORY
Archiver Hung Alert Log Error

TIME
11-Sep-2008 23:02:43

ALERT LOG STACK
ORA-16038: log 3 sequence# 1351 cannot be archived
ORA-19504: failed to create file ""
ORA-00312: online log 3 thread 2: '/u02/logs/gbprod01/redo03a.log'
ORA-00312: online log 3 thread 2: '/u02/logs/gbprod01/redo03b.log'
Trace File: /opt/app/oracle/admin/gbprod01/udump/gbprod012_ora_21265.trc

ALERT TRIGGERED
11-Sep-2008 23:14:15


The disk where the archiving would be written has plenty of space and full write access.
$>df -kh /u03
/dev/sdd 300G 186G 115G 62% /u03

$>ls -ltr /u03/arch/gbprod01
drwxr-xr-x 1 oracle oinstall 131072 Jun 12 2007 gbprod011



Can someone please advise what I need to do, or how to confirm whether it has resolved itself?

FYI



SQL> archive log list
Database log mode Archive Mode
Automatic archival Enabled
Archive destination /u03/arch/ddprod01/ddprod011
Oldest online log sequence 1326
Next log sequence to archive 1327
Current log sequence 1327


SQL> select * from v$log;

    GROUP#    THREAD#  SEQUENCE#      BYTES    MEMBERS ARC STATUS           FIRST_CHANGE# FIRST_TIME
---------- ---------- ---------- ---------- ---------- --- ---------------- ------------- -------------------
         1          1       1327   52428800          2 NO  CURRENT             7883970584 11.09.2008 23:02:38
         2          1       1326   52428800          2 YES INACTIVE            7883951113 11.09.2008 22:00:40
         3          2       1351   52428800          2 YES INACTIVE            7883967749 11.09.2008 23:00:35
         4          2       1352   52428800          2 NO  CURRENT             7883970586 11.09.2008 23:02:38
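
For reference, a hedged sketch of how to check the archiver and destination state directly (these are standard dynamic performance views; exact columns can vary slightly by version):

-- Archive destinations, their status, and the last error (if any):
select dest_id, destination, status, error
from   v$archive_dest
where  destination is not null;

-- What each ARCn process is currently doing:
select process, status, log_sequence, state
from   v$archive_processes
where  status <> 'STOPPED';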



I'm also getting the following in the trace file. However, this has been appearing since the 7th of September, whereas the problem only started today, so it may be a separate issue.

LGWR: Archivelog for thread 1 sequence 1261 will NOT be compressed
*** 2008-07-31 17:14:16.863
...
...
...
*** 2008-09-11 23:02:38.163
LGWR: Archivelog for thread 1 sequence 1327 will NOT be compressed

Thank you in advance.

Talha

Comments

Paulzip

Add an index on target_table (id, create_date)
Then try this...

delete from target_table 
where create_date >= trunc(sysdate) and 
      create_date < trunc(sysdate) + 1 and 
      id in (select id from source_table) ;

This will facilitate index usage, as you won't be applying a function to create_date.
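
For reference, a minimal sketch of the DDL for the suggested index (table, column, and index names follow the examples in this thread, not necessarily the real schema):

create index i_target_table_id_create_date
   on target_table (id, create_date);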

Uday_N
Answer

Hi Paul,
Thanks for your suggestion. I will apply it now. Thanks once again.

Marked as Answer by Uday_N · Dec 27 2021
Frank Kulash

Hi, Uday_N
Always try to use the raw column from your table rather than some expression based on the column. As Paulzip coded it above,

where create_date >= trunc(sysdate) and 
      create_date < trunc(sysdate) + 1

is likely to be a lot more efficient than

where trunc(create_date ) = trunc(sysdate) 

because it allows the optimizer to use an index on create_date. Also, it requires fewer function calls. TRUNC (SYSDATE) only needs to be evaluated once, no matter how big the table is, but TRUNC (create_date) needs to be called for every row. If you have over 200,000 rows in the table, that could be 200,000 function calls.
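
If you want to see the difference for yourself, here is a hedged sketch (it assumes an index on create_date exists, per the suggestion above):

-- Range predicates on the raw column: expect an INDEX RANGE SCAN.
explain plan for
select * from target_table
where  create_date >= trunc(sysdate)
and    create_date <  trunc(sysdate) + 1;

select * from table(dbms_xplan.display);

-- Function applied to the column: expect a TABLE ACCESS FULL,
-- because TRUNC(create_date) hides the column from a plain index.
explain plan for
select * from target_table
where  trunc(create_date) = trunc(sysdate);

select * from table(dbms_xplan.display);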

User_H3J7U

Total records in the target table is 228997 and to be deleted is 108444
A table of (number, number, date) rows will consume 1-10 MB, with about 50% of the rows to be deleted. The index on target_table is not needed.
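
(A hedged back-of-envelope supporting that estimate: two NUMBERs and a DATE come to roughly 20-40 bytes per row including row overhead, so 228,997 rows × ~40 bytes ≈ 9 MB, at the top of the 1-10 MB range.)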

Uday_N

Hi Frank,
Thanks for the suggestions. Your guidance has always helped me; I will apply it henceforth.
Hi H3J7U,
Thanks for your suggestion. It was kind of you to reply to me.

Regards,
Uday

mathguy

If you want help, you must give accurate information. When you go to the doctor, you don't tell him your back is hurting when you have a leg injury, do you?
"I need to delete a million records" vs "volume of data to be deleted is 108444". Which one should we believe? We can't believe both, obviously.
Then - you say the tables are "examples". That's bad. What is the structure of your ACTUAL tables? "Something went over my leg, for example a bicycle". When in reality it was a truck. Sorry, can't give you good answers if we have incomplete (or wrong) information.
Before you jump to any solutions, try to diagnose the problem. Find out what is slowing down your DELETE. It may be that "selecting" the rows to delete is taking a long time. You didn't tell us anything about the SOURCE table; if it is very large, checking the id's may take a very long time. An index on the id column in THAT table may help.
Much more likely, when you delete rows in a simple scenario like you have, what is taking a long time is the actual "delete" part - writing the undo segments and the redo logs. Not so much if the table structure is as you showed us, but more likely if the tables have many more columns.
Another thing to keep an eye on is whatever may be blocking / delaying your operation. Rows locked by other operations, waiting for those other operations to complete. (So, your own DELETE is not slow; it just must wait in line.) If that's the issue, then you are barking up the wrong tree. Figure out what else is going on in the database at the same time. (But, first, find out if THAT is the issue and not something else.) If you really only have to delete about 100k rows from a table with about 200k rows, then this would be the most likely cause IMO.
You also didn't tell us about the location of different things. Is this all in one database, running on one server? Or are you working over a slow network, with the two tables in different databases on different servers, etc.?
Perhaps you can also explain the difference between Query 1 and Query 2; I didn't see any.

Uday_N

Hi mathguy,
Sure. Thanks for your detailed information. I will check what is causing the issue. I am also very sorry for not being able to give details of the actual tables; I say this because I have huge respect for the people in this group. I will look into what is causing the delay and will update the group. Thanks a lot.

Dave_VZ

@mathguy Hi, I'm an old dinosaur from Oracle 6 days, and I sympathize with both you and the asker. The message came across to me as derisive; probably my misunderstanding, and no offense intended. What I'd like to contribute is that there are numerous circumstances in which accuracy may be illegal: HIPAA and US DoD work specifically, and non-disclosure or confidentiality agreements in general. As Uday discovers redaction techniques, I trust his message will improve.
@Frank Kulash In my outmoded ways I still think of the compound condition (A AND B) as requiring two passes against the result set. If that's still true, I speculate that the condition drops to one I/O pass if the condition becomes "x BETWEEN trunc(sysdate) AND trunc(sysdate) + 1". If I recall correctly, the BETWEEN conditions are also inclusive, which eliminates the extra I/O required for the GE qualifier. Food for thought.

Solomon Yakobson

@Dave_VZ In my outmoded ways I still think of the compound condition (A AND B) as requiring two passes against the result set.
No, there will be a single pass in which each row is read once; condition A (or B; the optimizer decides the predicate order) is checked first, and if the result is TRUE, the second condition is checked. That's if we do a full table scan. In the case of an index, one or both conditions will be checked against the index.
Operator BETWEEN is nothing but a "shortcut": X BETWEEN Y AND Z results in X >= Y AND X <= Z. Just check the plan:

SQL> explain plan for select * from emp where sal between 1000 and 2000;

Explained.

SQL> select * from table(dbms_xplan.display);

PLAN_TABLE_OUTPUT
---------------------------------------------------------------------------
Plan hash value: 3956160932

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |     6 |   228 |     3   (0)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| EMP  |     6 |   228 |     3   (0)| 00:00:01 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------

   1 - filter("SAL"<=2000 AND "SAL">=1000)

13 rows selected.

SQL>

Now to the OP's "Volume of data in target table to be deleted is 108444. It takes a lot of time, more than 2-3 hours". I see two possible causes:
1. Some other session(s) lock(s) the row(s) to be deleted, causing the DELETE to wait.
2. Some other session(s) are modifying SOURCE_TABLE, causing the DELETE to spend time digging into UNDO.
The OP needs to run AWR (or trace the DELETE).
SY.
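
For reference, one hedged way to trace just the DELETE from the problem session (the classic 10046 event; level 8 includes wait events):

alter session set tracefile_identifier = 'delete_trace';
alter session set events '10046 trace name context forever, level 8';

-- run the DELETE here, then:

alter session set events '10046 trace name context off';

-- and format the resulting trace file with tkprof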

Solomon Yakobson

Actually, I missed another rather simple possibility: table SOURCE_TABLE is very large and there is no index on column ID. In that case the optimizer will most likely do a hash join of TARGET_TABLE and SOURCE_TABLE, which can take a while. The OP needs to provide an explain plan.
SY.
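
A hedged sketch of how to capture that plan for the statement in question:

explain plan for
delete from target_table
where  create_date >= trunc(sysdate)
and    create_date <  trunc(sysdate) + 1
and    id in (select id from source_table);

select * from table(dbms_xplan.display);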

Frank Kulash

Hi, Dave_VZ
In my outmoded ways I still think of the compound condition (A AND B) as requiring two passes against the result set.
No, only one pass is required. (By the way, it's not a pass through the result set; it's a pass through the data to produce the result set.)
If that's still true, I speculate that the condition drops to one I/O pass if the condition becomes "x BETWEEN trunc(sysdate) AND trunc(sysdate) + 1". If I recall correctly, the BETWEEN conditions are also inclusive, which eliminates the extra I/O required for the GE qualifier. 
I'm not sure what you mean by " the extra I/O required for the GE qualifier". You're absolutely correct that x BETWEEN y AND z includes both y and z, so

x >= TRUNC (SYSDATE)  AND  x < TRUNC (SYSDATE) + 1

is NOT equivalent to

x   BETWEEN  TRUNC (SYSDATE)  AND  TRUNC (SYSDATE) + 1.

If x is exactly 00:00:00 tomorrow, then the former is FALSE, but the latter is TRUE.
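
A tiny self-contained check of that boundary case (runs against DUAL, no tables needed; TRUNC(SYSDATE) + 1 is exactly midnight tomorrow):

select case when trunc(sysdate) + 1
                 between trunc(sysdate) and trunc(sysdate) + 1
            then 'TRUE' else 'FALSE'
       end as between_test,      -- TRUE: upper bound is included
       case when trunc(sysdate) + 1 >= trunc(sysdate)
             and trunc(sysdate) + 1 <  trunc(sysdate) + 1
            then 'TRUE' else 'FALSE'
       end as half_open_test     -- FALSE: upper bound is excluded
from   dual;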

Uday_N

Hi All,
Thanks for all your wonderful analysis; it was really enriching. Thanks once again. I used the query below, which was given by Paul, and it still takes time. There are no locks either.

delete from target_table 
where create_date >= trunc(sysdate) and 
      create_date < trunc(sysdate) + 1 and 
      id in (select id from source_table) ;

Plan:

------------------------------------------------------------------------------------------------------------------
| Id  | Operation                               | Name                       | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------------------------------------
|   0 | DELETE STATEMENT                        |                            | 12215 |   644K|   360   (1)| 00:00:01 |
|   1 |  DELETE                                 | TARGET_TABLE               |       |       |            |          |
|*  2 |   FILTER                                |                            |       |       |            |          |
|*  3 |    HASH JOIN SEMI                       |                            | 12215 |   644K|   360   (1)| 00:00:01 |
|   4 |     TABLE ACCESS BY INDEX ROWID BATCHED | TARGET_TABLE               | 12215 |   536K|     5   (0)| 00:00:01 |
|*  5 |      INDEX RANGE SCAN                   | I_TARGET_TABLE_CREATE_DATE |     1 |       |     4   (0)| 00:00:01 |
|   6 |     TABLE ACCESS FULL                   | SOURCE_TABLE               |  109K |   960K|   354   (1)| 00:00:01 |
------------------------------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
  2 - filter(TRUNC(SYSDATE@!)+1>TRUNC(SYSDATE@!))
  3 - access("C"."ID"="B"."ID")
  5 - access(TRUNC(INTERNAL_FUNCTION("CREATE_DT"))>=TRUNC(SYSDATE@!) AND 
       TRUNC(INTERNAL_FUNCTION("CREATE_DT"))<TRUNC(SYSDATE@!)+1)
Is there anything I should look into to find what is causing the performance issue in the delete? Please advise.

Paulzip

How long is "takes time"?

Uday_N

Hi Paul,
It takes more than 2 hours.

Paulzip

Are your table stats up to date?
Is there an index on source_table.id?
Are other processes updating / deleting on this table?
Is the row size large on target_table?
Try exclusively locking the target_table before the delete.
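
For reference, a hedged sketch of that last suggestion (the exclusive lock blocks other writers until you commit, which avoids the restart behaviour discussed below):

lock table target_table in exclusive mode;

delete from target_table
where  create_date >= trunc(sysdate)
and    create_date <  trunc(sysdate) + 1
and    id in (select id from source_table);

commit;  -- releases the lock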

Uday_N

Hi Paul,
Thanks for the answers. There is no index on the source table, and the stats are up to date. No other processes are on the table. I will exclusively lock the table as per your advice. Thanks for the advice you have given; it was really helpful. Thanks once again.

Solomon Yakobson

@Uday_N There is no lock as well
OK. Then statement-level write consistency (a.k.a. mini-rollback) could be the cause. If the DELETE runs into a row that changed since the DELETE started, the DELETE will roll back and start over. So lock the table exclusively, as you mentioned, and test again.
SY.

Uday_N

Hi Solomon,

Sure, I will try. Thanks for your valuable advice; it always helps me. Thanks once again.


Post Details

Locked on Oct 13 2008
Added on Sep 12 2008
12 comments
364 views