
slow select while a long write transaction is running

livioc
livioc Member Posts: 7
edited Feb 18, 2019 4:59AM in SQL & PL/SQL

Hi, I'm working on a 12c EE Extreme Performance Release 12.2.0.1.0 instance.

I have a table TABLE1 with about 1 million records, and a procedure, executed on request by a job, that deletes and inserts about 300k of that table's records over about 10 minutes.

Then the SP executes about 10 million inserts, over roughly another 90 minutes, into another table TABLE2 that has a FK to TABLE1.

I don't have any explicit transaction management within the SP.

I have an external app that must frequently execute this query: "select max(COLUMN1) from TABLE1".

The query is usually immediate but, while the SP is running, its duration always increases to 30-60 seconds, which isn't acceptable for my requirements.

Note: it isn't blocked by any lock; I can execute it multiple times while the SP is running, and with the default read-committed behavior it correctly returns the data without the SP's uncommitted changes.

I'm pretty new to Oracle, so I don't know whether I can do something to make it faster; I can't understand why it's impacted by a concurrent uncommitted transaction.

Thanks for your suggestions.
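The read-committed behaviour described above can be sketched with a small self-contained example. This uses SQLite in WAL mode purely as a portable stand-in (an assumption for illustration only — Oracle implements read consistency with undo, not a write-ahead log), and the names t1/column1 are hypothetical stand-ins for TABLE1/COLUMN1:

```python
import os
import sqlite3
import tempfile

# Hypothetical stand-in for TABLE1/COLUMN1; SQLite in WAL mode, not Oracle.
path = os.path.join(tempfile.mkdtemp(), "demo.db")

writer = sqlite3.connect(path, isolation_level=None)  # manage transactions manually
writer.execute("PRAGMA journal_mode=WAL")
writer.execute("CREATE TABLE t1 (column1 INTEGER)")
writer.execute("INSERT INTO t1 VALUES (1)")           # committed (autocommit mode)

reader = sqlite3.connect(path, isolation_level=None)

writer.execute("BEGIN")
writer.execute("UPDATE t1 SET column1 = 99")          # long write txn, not committed

# The reader is neither blocked nor shown the uncommitted change,
# mirroring the behaviour described in the question.
print(reader.execute("SELECT MAX(column1) FROM t1").fetchone()[0])  # 1

writer.execute("ROLLBACK")
```

The point of the sketch is only that the reader is neither blocked nor shown uncommitted data; the slowdown the question describes comes from the extra work of reconstructing the committed version, which this toy example cannot show.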


Best Answer

  • Jonathan Lewis
    Jonathan Lewis Member Posts: 9,953 Blue Diamond
    edited Feb 15, 2019 11:29AM Answer ✓

    @John Thorton

    SQL> create table t1 nologging as select * from all_Objects;

    Table created.

    SQL> create index t1_i1 on t1(data_object_id);

    Index created.

    SQL> set autotrace traceonly explain

    SQL> select max(data_object_id) from t1;

    Execution Plan
    ----------------------------------------------------------
    Plan hash value: 1743745495

    ------------------------------------------------------------------------------------
    | Id  | Operation                  | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
    ------------------------------------------------------------------------------------
    |   0 | SELECT STATEMENT           |       |     1 |    13 |     1   (0)| 00:00:01 |
    |   1 |  SORT AGGREGATE            |       |     1 |    13 |            |          |
    |   2 |   INDEX FULL SCAN (MIN/MAX)| T1_I1 |     1 |    13 |     1   (0)| 00:00:01 |
    ------------------------------------------------------------------------------------

    Of course the optimizer can use an ordinary index for "select MAX(column) from table".
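    The same MIN/MAX optimisation can be observed in other engines. As a portable sketch (SQLite here rather than Oracle — an assumption for illustration, with hypothetical names t1/column1), the plan reports an index access instead of a table scan:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t1 (column1 INTEGER)")
# A few dozen distinct values, as in the question.
con.executemany("INSERT INTO t1 VALUES (?)", [(i % 40,) for i in range(10_000)])
con.execute("CREATE INDEX t1_i1 ON t1(column1)")

# MAX(column1) is answered from the index, not by scanning the table.
plan = con.execute("EXPLAIN QUERY PLAN SELECT MAX(column1) FROM t1").fetchall()
print(plan)  # the plan detail mentions COVERING INDEX t1_i1
print(con.execute("SELECT MAX(column1) FROM t1").fetchone()[0])  # 39
```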


Answers

  • John Thorton
    John Thorton Member Posts: 14,493 Silver Crown
    edited Feb 15, 2019 10:29AM
    livioc wrote: hi, i'm working on a 12c EE Extreme Perf Release 12.2.0.1.0 instance. I've a table TABLE1 with about 1 millions of records and a procedure, executed on request by a job, that deletes and inserts about 300k of that table's records in about 10 minutes. Than the SP executes about 10 millions of insert, for about other 90 minutes, in another table TABLE2 with a FK to TABLE1. I don't have any explicit transaction management within the SP. I have an external app that must frequently execute this query "select max(COLUMN1) from TABLE1", the query usually is immediate but, when the SP is running, it has always his duration increased up to 30/60 seconds that isn't acceptable for my requirements. Note: it hasn't any lock, i can execute it multiple times while SP is running and it uses the default read-commit behavior so it is correctly returning the value of the data without the SP updates. I'm pretty new to oracle so i don't know if i can do something to make it faster, i cannot understand why it's impacted by a concurrent not-committed transaction. thanks for your suggestions

    Create a function-based INDEX on MAX(COLUMN1).

  • livioc
    livioc Member Posts: 7
    edited Feb 15, 2019 10:50AM

    I'm not sure I understand: why a function-based index? And why is an index (function-based or ordinary) not impacted by concurrent writes?

    As I said before, I don't know Oracle very well.

    Note: that column has only a few dozen distinct values.

    Thanks

  • John Thorton
    John Thorton Member Posts: 14,493 Silver Crown
    edited Feb 15, 2019 10:57AM
    livioc wrote: I'm not sure that i understand, why a function index? and why and index (function or simple) is not impacted by concurrent writes? as i told before i don't know oracle very well. note: the distinct values in that column are a few dozen. thanks

    When all else fails Read The Fine Manual

    https://docs.oracle.com/database/121/ADMIN/indexes.htm#ADMIN11730

    If you create a normal index on COLUMN1, it won't be used in SELECT MAX(COLUMN1) FROM TABLE1;

    With an FBI, the SELECT will obtain MAX(COLUMN1) directly from the FBI.

  • livioc
    livioc Member Posts: 7
    edited Feb 15, 2019 11:04AM

    Thanks, on Monday I'll try your suggestion.

    I have read the manual, but I can't imagine how Oracle would use a function-based index built on an aggregate function.

  • Jonathan Lewis
    Jonathan Lewis Member Posts: 9,953 Blue Diamond
    edited Feb 15, 2019 11:33AM

    You need only create an ordinary index on COLUMN1 to optimize the query, but performance will still be affected by the need to apply undo to produce a read-consistent version of the data. However, the number of undo records that have to be applied to make the relevant index blocks read-consistent is likely to be small compared with the number that would have to be applied to make every block in the table read-consistent.

    Regards

    Jonathan Lewis

  • Sven W.
    Sven W. Member Posts: 10,534 Gold Crown
    edited Feb 15, 2019 12:05PM
    John Thorton wrote: livioc wrote: I'm not sure that i understand, why a function index? and why and index (function or simple) is not impacted by concurrent writes? as i told before i don't know oracle very well. note: the distinct values in that column are a few dozen. thanks — When all else fails Read The Fine Manual: https://docs.oracle.com/database/121/ADMIN/indexes.htm#ADMIN11730 If you create normal index on COLUMN1, it won't be used in SELECT MAX(COLUMN1) FROM TABLE1; With a FBI, the SELECT will obtain the MAX(COLUMN1) directly from the FBI

    How is that supposed to work? You can't create a function-based index on an aggregate function. Or did I misunderstand something?

  • livioc
    livioc Member Posts: 7
    edited Feb 18, 2019 4:42AM

    A function-based index cannot be created on MAX:

    "ORA-00934: group function is not allowed here"

    Anyhow, using a normal index works perfectly; thanks all.

    As a side question: if my query can't work on indexed values and must do, for example, a full table scan, does another solution exist to optimize it (still in the scenario where a long concurrent write transaction is running)?

    Thanks
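    For what it's worth, this restriction is not Oracle-specific. A quick sketch in SQLite (a portable stand-in, with hypothetical names t1/column1) shows the analogous rejection of an aggregate inside an index expression:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t1 (column1 INTEGER)")

# Expression indexes are allowed, but an aggregate such as MAX() is
# rejected -- the analogue of Oracle's ORA-00934.
try:
    con.execute("CREATE INDEX bad_ix ON t1(MAX(column1))")
except sqlite3.OperationalError as exc:
    print("rejected:", exc)
```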

  • Jonathan Lewis
    Jonathan Lewis Member Posts: 9,953 Blue Diamond
    edited Feb 18, 2019 4:59AM
    livioc wrote: As a side question: if my query cannot works on indexed values and must do, for example, a full table scan does another solution exist to optimize it (still in the scenario that a long concurrent write transaction is running)? thanks

    The basic answer to your question is no. If you have to do a lot of work updating data while executing a long-running report on the same data, then your reporting session will do a lot of work recreating read-consistent versions of the data. If a tablescan is the driver of the report, then the larger the data set the longer it will take to run, and the more data will be changed (and then made read-consistent) during the run - leading to a geometric increase in the time to complete. The best you can do is find ways to make the report run as quickly as possible.

    If you're prepared to use an unsupported mechanism, and if the nature of the workload is such that it's only the more recently inserted data that is subject to modification then you could try running the tablescan backwards:  https://jonathanlewis.wordpress.com/2016/03/14/quiz-2/   If you'd rather not use a method that is unsupported and not guaranteed to produce the right answer then you could still raise the option in an SR with Oracle support - they may have some comment to make on it, and the more people who raise it the more likely that the feature will get incorporated into the optimizer.

    Regards

    Jonathan Lewis
