slow select while a long write transaction is running

Hi, I'm working on a 12c EE Extreme Perf Release 12.2.0.1.0 instance.
I have a table TABLE1 with about 1 million records and a procedure, executed on request by a job, that deletes and inserts about 300k of that table's records in about 10 minutes.
Then the SP executes about 10 million inserts, over roughly another 90 minutes, into another table TABLE2 with a FK to TABLE1.
I don't have any explicit transaction management within the SP.
I have an external app that must frequently execute the query "select max(COLUMN1) from TABLE1".
The query is usually immediate but, while the SP is running, its duration increases to 30-60 seconds, which isn't acceptable for my requirements.
Note: there is no locking involved; I can execute the query multiple times while the SP is running, and it uses the default read-committed behaviour, so it correctly returns the data without the SP's uncommitted changes.
I'm pretty new to Oracle, so I don't know if I can do something to make this faster; I can't understand why it's impacted by a concurrent uncommitted transaction.
Thanks for your suggestions.
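For reference, a minimal sketch of the situation as described above (TABLE1, TABLE2 and COLUMN1 are the placeholder names used in this post; the real DDL is not shown):
-- TABLE1: ~1M rows; the job deletes and re-inserts ~300k of them
-- TABLE2: ~10M rows inserted per run, with a FK back to TABLE1
-- the external app repeatedly runs, under the default READ COMMITTED isolation:
select max(COLUMN1) from TABLE1;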
Best Answer
-
@John Thorton
SQL> create table t1 nologging as select * from all_Objects;
Table created.
SQL> create index t1_i1 on t1(data_object_id);
Index created.
SQL> set autotrace traceonly explain
SQL> select max(data_object_id) from t1;
Execution Plan
----------------------------------------------------------
Plan hash value: 1743745495
------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 13 | 1 (0) | 00:00:01 |
| 1 | SORT AGGREGATE | | 1 | 13 | | |
| 2 | INDEX FULL SCAN (MIN/MAX)| T1_I1 | 1 | 13 | 1 (0) | 00:00:01 |
------------------------------------------------------------------------------------
Of course the optimizer can use an ordinary index for "select MAX(column) from table".
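As a hedged sketch only (TABLE1 and COLUMN1 are the placeholder names from the question, and the index name is made up, so adjust to the real names), the equivalent for the original poster's table would simply be:
SQL> create index table1_column1_idx on TABLE1(COLUMN1);
SQL> select max(COLUMN1) from TABLE1;
-- the plan should then show INDEX FULL SCAN (MIN/MAX) on TABLE1_COLUMN1_IDX
-- instead of a full scan of TABLE1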
Answers
-
livioc wrote:
Hi, I'm working on a 12c EE Extreme Perf Release 12.2.0.1.0 instance. I have a table TABLE1 with about 1 million records and a procedure, executed on request by a job, that deletes and inserts about 300k of that table's records in about 10 minutes. Then the SP executes about 10 million inserts, over roughly another 90 minutes, into another table TABLE2 with a FK to TABLE1. I don't have any explicit transaction management within the SP. I have an external app that must frequently execute the query "select max(COLUMN1) from TABLE1". The query is usually immediate but, while the SP is running, its duration increases to 30-60 seconds, which isn't acceptable for my requirements. Note: there is no locking involved; I can execute the query multiple times while the SP is running, and it uses the default read-committed behaviour, so it correctly returns the data without the SP's uncommitted changes. I'm pretty new to Oracle, so I don't know if I can do something to make this faster; I can't understand why it's impacted by a concurrent uncommitted transaction. Thanks for your suggestions.
Create a function-based INDEX on MAX(COLUMN1).
-
I'm not sure that I understand. Why a function-based index? And why is an index (function-based or ordinary) not impacted by concurrent writes?
As I said before, I don't know Oracle very well.
Note: there are only a few dozen distinct values in that column.
Thanks
-
livioc wrote:
I'm not sure that I understand. Why a function-based index? And why is an index (function-based or ordinary) not impacted by concurrent writes? As I said before, I don't know Oracle very well. Note: there are only a few dozen distinct values in that column. Thanks
When all else fails, Read The Fine Manual:
https://docs.oracle.com/database/121/ADMIN/indexes.htm#ADMIN11730
If you create a normal index on COLUMN1, it won't be used in SELECT MAX(COLUMN1) FROM TABLE1;
with an FBI, the SELECT will obtain MAX(COLUMN1) directly from the FBI.
-
Thanks, on Monday I'll try your suggestion.
I have read the manual, but I can't imagine how Oracle would use a function-based index built with an aggregate function.
-
You need only create an ordinary index on COLUMN1 to optimize the query, but the performance will still be affected by the need to apply undo to get a read-consistent version of the data. However, it's possible that the number of undo records that need to be applied to make the relevant blocks of the index read-consistent will be small compared to the number of undo records that would need to be applied to make every block in the table read-consistent.
Regards
Jonathan Lewis
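If you want to see how much of that read-consistency work the query is doing, a rough sketch (assuming your session has SELECT access to v$mystat and v$statname) is to snapshot these session statistics before and after the SELECT and compare the deltas:
-- deltas in these two statistics show, respectively, consistent-mode block gets
-- and how many undo records were applied to rebuild read-consistent block versions
select sn.name, ms.value
from   v$mystat ms
       join v$statname sn on sn.statistic# = ms.statistic#
where  sn.name in ('consistent gets',
                   'data blocks consistent reads - undo records applied');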
-
John Thorton wrote:
livioc wrote:
I'm not sure that I understand. Why a function-based index? And why is an index (function-based or ordinary) not impacted by concurrent writes? As I said before, I don't know Oracle very well. Note: there are only a few dozen distinct values in that column. Thanks
When all else fails, Read The Fine Manual: https://docs.oracle.com/database/121/ADMIN/indexes.htm#ADMIN11730 If you create a normal index on COLUMN1, it won't be used in SELECT MAX(COLUMN1) FROM TABLE1; with an FBI, the SELECT will obtain MAX(COLUMN1) directly from the FBI.
How is that supposed to work? You can't create a function-based index on an aggregate function. Or did I misunderstand something?
-
A function-based index cannot be used with MAX:
"ORA-00934: group function is not allowed here"
Anyhow, using a normal index works perfectly, thanks all.
As a side question: if my query cannot work on indexed values and must do, for example, a full table scan, does another solution exist to optimize it (still in the scenario where a long concurrent write transaction is running)?
Thanks
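Presumably the attempt that produced that error looked something like the sketch below (placeholder names again; aggregate functions are not allowed in index expressions, which is why only the ordinary index works):
SQL> create index table1_max_fbi on TABLE1(max(COLUMN1));
-- fails with ORA-00934: group function is not allowed here
SQL> create index table1_column1_idx on TABLE1(COLUMN1);
-- this ordinary index is enough for the MIN/MAX optimization shown earlier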
-
livioc wrote:
As a side question: if my query cannot work on indexed values and must do, for example, a full table scan, does another solution exist to optimize it (still in the scenario where a long concurrent write transaction is running)? Thanks
The basic answer to your question is no. If you have to do a lot of work updating data while executing a long-running report on the same data, then your reporting session will do a lot of work recreating read-consistent versions of the data. If you are doing a tablescan as the driver of the report, then the larger the data set the longer it will take to run and the more data will be changed (and then made read-consistent) during the run, leading to a geometric increase in the time to complete. The best you can do is find ways to make the report run as quickly as possible.
If you're prepared to use an unsupported mechanism, and if the nature of the workload is such that only the more recently inserted data is subject to modification, then you could try running the tablescan backwards: https://jonathanlewis.wordpress.com/2016/03/14/quiz-2/ If you'd rather not use a method that is unsupported and not guaranteed to produce the right answer, then you could still raise the option in an SR with Oracle Support; they may have some comment to make on it, and the more people who raise it, the more likely it is that the feature will be incorporated into the optimizer.
Regards
Jonathan Lewis