
Index is not used...

Hokins
Hokins Member Posts: 20
edited Jul 9, 2012 12:38AM in SQL & PL/SQL
Hi, I have a DB with 20 M records. I am using a query to search records, but it is not using the index. The query is as follows:


select *
from DOCUMENT_IN documentin0_
inner join ORGANISATION_SYN organisati1_ on documentin0_.ORIGINATING_SECTION_ID=organisati1_.ID
inner join FILE_RECORD filerecord2_ on documentin0_.FILE_RECORD_ID=filerecord2_.ID
inner join doc_security_lookup documentse3_ on documentin0_.ID=documentse3_.DOCUMENT_ID and ( documentse3_.document_type like 'In')
inner join DOCUMENT_IN_FILE documentin4_ on documentin0_.ID=documentin4_.DOCUMENT_IN_ID
where documentse3_.APPOINTMENT_ID=:1
and documentin0_.IS_DELETE='No'
and (:2 is null or lower(documentin0_.SUBJECT) like :2)
and (:3 is null or documentin0_.DOCUMENT_ID like :3 )
and (:4 is null or documentin0_.RECEIVE_DATE>=:4)
and (:5 is null or documentin0_.RECEIVE_DATE<:5)


I have defined an index on DOCUMENT_ID. When I use ":3 is null or documentin0_.DOCUMENT_ID like :3", the index is not used and the query does a full table scan. But when I change it to "documentin0_.DOCUMENT_ID like :3", it uses the index. I am using the above pattern to avoid hard parsing.
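For reference, the difference between the two predicate forms can be checked with EXPLAIN PLAN (a sketch only, using the table and column from the query above; run it against your own schema, and note that DBMS_XPLAN.DISPLAY shows the most recently explained statement):

```sql
-- Sketch: compare the plan chosen for each predicate form.
EXPLAIN PLAN FOR
  select * from DOCUMENT_IN where DOCUMENT_ID like :b1;
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);  -- typically an index range scan

EXPLAIN PLAN FOR
  select * from DOCUMENT_IN where (:b1 is null or DOCUMENT_ID like :b1);
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);  -- typically a full table scan,
-- because the optimizer must pick one plan that also works when :b1 is null
```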

Thanks for your reply.

Answers

  • Tubby
    Tubby Member Posts: 6,987 Bronze Crown
    937454 wrote:
    Hi, I have a DB with 20 M records. I am using a query to search records but it is not using index. [query snipped; as posted above]

    I have defined index on document_ID. When i use ":3 is null or documentin0_.DOCUMENT_ID like :3" then it is not using the Index and goes for full table scan. But when i change it to "documentin0_.DOCUMENT_ID like :3" then it uses the Index. I am using above pattern to avoid hard parsing.
    The first thing I would do is refactor what you have, replacing it with this type of mechanism:

    http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:1669972300346534908
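    For readers who can't follow the link, the approach it describes looks roughly like this (a hedged sketch: the procedure and parameter names are invented, and only two of the original binds are shown). The idea is to build the WHERE clause dynamically so that each combination of supplied inputs produces distinct SQL text, and hence its own appropriate plan, while still referencing every bind so the USING clause stays constant:

    ```sql
    create or replace procedure search_documents(
        p_doc_id  in varchar2,
        p_subject in varchar2,
        p_result  out sys_refcursor
    )
    is
        l_sql varchar2(4000);
    begin
        l_sql := 'select * from DOCUMENT_IN where IS_DELETE = ''No''';

        -- Append a real predicate only when the input is supplied; otherwise
        -- reference the bind in a no-op predicate so the USING clause below
        -- always matches the SQL text.
        if p_doc_id is not null then
            l_sql := l_sql || ' and DOCUMENT_ID like :b_doc_id';
        else
            l_sql := l_sql || ' and (1=1 or :b_doc_id is null)';
        end if;

        if p_subject is not null then
            l_sql := l_sql || ' and lower(SUBJECT) like :b_subject';
        else
            l_sql := l_sql || ' and (1=1 or :b_subject is null)';
        end if;

        -- Repeated calls with the same combination of inputs reuse the
        -- cached cursor and are only soft parsed.
        open p_result for l_sql using p_doc_id, p_subject;
    end search_documents;
    ```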

    Cheers,
  • Hokins
    Hokins Member Posts: 20
    Hi, the link you provided is fine, but the whole point of the query pattern I am using is to avoid hard parsing; the problem is that it then does a full table scan. If I rewrite it as a procedure with dynamic SQL, the query will be hard parsed, and I want to avoid that.
  • Tubby
    Tubby Member Posts: 6,987 Bronze Crown
    edited May 30, 2012 11:34PM
    937454 wrote:
    Hi, the link you provided is fine, but the whole point of the query pattern I am using is to avoid hard parsing; the problem is that it then does a full table scan. If I rewrite it as a procedure with dynamic SQL, the query will be hard parsed, and I want to avoid that.
    No, it won't be hard parsed for every execution. It will be soft parsed assuming the plan can be found in the shared pool.

    You have two options:

    Option 1) Live with what you have and deal with the excessive IO caused by the "bad plan" for certain inputs (I'd highly recommend against this).
    Option 2) Implement what is outlined in the link I sent you and accept the nominal overhead of soft parsing in exchange for an accurate, non-wasteful execution plan (the waste you're going to incur using a generic "bad" execution plan is going to FAR outweigh the cost of soft parsing).

    You can't have your cake and eat it too. Let's be clear here, though: say you have two possible inputs to your procedure, p_input1 and p_input2.

    That allows for three execution plans:

    1) p_input1 NOT specified and p_input2 specified

       select *
       from table_name
       where (1=1 or :p_input1 is null)
       and (column_2 = :p_input2)

    2) p_input1 specified and p_input2 NOT specified

       select *
       from table_name
       where (column_1 = :p_input1)
       and (1=1 or :p_input2 is null)

    3) p_input1 specified AND p_input2 specified

       select *
       from table_name
       where (column_1 = :p_input1)
       and (column_2 = :p_input2)
    This assumes you must pass in either p_input1 or p_input2 and that you could potentially pass in both.

    So for the first execution of each variant (1, 2, or 3) you will have to do a hard parse, so yes, in that respect you will be doing more hard parsing. But you will NOT be hard parsing for each execution of this stored procedure. Each subsequent execution will find an execution plan in the shared pool (assuming it hasn't aged out) after doing only a soft parse.

    If you really want, do some benchmarking. I'll bet you dollars to tasty doughnuts that the method I pointed you to comes out on top, and by a great deal.

    Cheers,
  • indra budiantho
    indra budiantho Member Posts: 1,387
    NULL values can prevent the optimizer from using an index, since a B-tree index has no entries for rows whose indexed columns are all NULL. So, put a NOT NULL constraint on your indexed column.
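    As a sketch of that point (whether it helps here depends on your data, since the original problem is the ":3 is null or ..." predicate rather than NULLs in the column): declaring the indexed column NOT NULL tells the optimizer the index covers every row of the table, which widens the set of plans where the index can replace a table scan.

    ```sql
    -- Hypothetical, using the table from this thread: only valid if the
    -- column currently contains no NULLs.
    alter table DOCUMENT_IN modify (DOCUMENT_ID not null);
    ```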
This discussion has been closed.