Tablescan on AUD$ from Oracle Enterprise Manager (DBSNMP)

SQL> select banner from v$version;
BANNER
--------------------------------------------------------------------------------
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
PL/SQL Release 12.1.0.2.0 - Production
CORE 12.1.0.2.0 Production
TNS for Linux: Version 12.1.0.2.0 - Production
NLSRTL Version 12.1.0.2.0 - Production
---------------------------------------------------------------------------------
When looking at SQL Monitoring in Oracle Enterprise Manager, I find that this query is causing a full tablescan on SYS.AUD$ in all our databases, i.e. test/dev and production databases:
sqlid: 4ztz048yfq32s
SELECT TO_CHAR(CURRENT_TIMESTAMP AT TIME ZONE 'GMT', 'YYYY-MM-DD HH24:MI:SS TZD') AS curr_timestamp,
COUNT(username) AS failed_count,
TO_CHAR(MIN(timestamp), 'yyyy-mm-dd hh24:mi:ss') AS first_occur_time,
TO_CHAR(MAX(timestamp), 'yyyy-mm-dd hh24:mi:ss') AS last_occur_time
FROM sys.dba_audit_session
WHERE returncode != 0
AND timestamp >= CURRENT_TIMESTAMP - TO_DSINTERVAL('0 0:30:00');
The query monitors failed login counts.
As AUD$ has a size of 2 GB in one of our production databases, this tablescan generates 2 GB of IO in that database every 30 minutes. All the other databases also experience tablescans on AUD$ from this query, and the shared disk rack in test/dev is loaded by these tablescans as well.
The problem is missing indexes on SYS.AUD$ combined with the criteria "...WHERE returncode != 0...".
If the returncode in AUD$ is always >=0, then the criteria can be changed to "...WHERE returncode > 0...". Combined with this index:
create index aud$_idx3 on sys.aud$(returncode,timestamp#);
...the tablescan is gone and execution time is reduced from up to 39 seconds for the 2 GB tablescan to a few milliseconds.
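For anyone reproducing this, a quick way to confirm whether a new index is actually picked up is to pull the cached plan for the EM statement. A minimal sketch, using the sql_id from SQL Monitoring above (the statement must still be in the shared pool for DBMS_XPLAN.DISPLAY_CURSOR to find it):

```sql
-- Sketch: show the plan the EM agent's statement is currently using.
select *
from   table(dbms_xplan.display_cursor('4ztz048yfq32s', null, 'TYPICAL'));

-- Before the fix the plan shows "TABLE ACCESS FULL | AUD$";
-- with a usable index you'd expect an "INDEX RANGE SCAN" line instead.
```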
My questions are:
1) Am I too picky when I raise this query as a concern to the DBA team? Our database is not IO-bound, but it does have a general challenge with bad performance.
2) This is a query that we cannot change, as it is generated by the Oracle Enterprise Manager Agent. Would it be feasible to point this out to Oracle so that the query and indexing can be improved?
Answers
-
That's an old problem that I thought would have been fixed by now. It's associated with the fact that the critical column is declared as type timestamp, and Oracle then uses sys_extract_utc to ensure that the timestamp you supply in your query is suitably adjusted before comparing with the timestamp stored in the table. I first wrote about this just over 6 years ago: https://jonathanlewis.wordpress.com/2010/04/05/failed-login/
Check the execution plan, predicate section - this is what I get:
6 - filter("AUD"."ACTION#">=100 AND "AUD"."ACTION#"<=102 AND
SYS_EXTRACT_UTC(CAST(FROM_TZ(INTERNAL_FUNCTION("NTIMESTAMP#"),'00:00') AT LOCAL AS
date))>=SYS_EXTRACT_UTC(CURRENT_TIMESTAMP(6)-INTERVAL'+000000000 00:30:00.000000000'
DAY(9) TO SECOND(9)) AND "RETURNCODE"<>0)
Regards
Jonathan Lewis
-
user8602269 wrote: (full question quoted above)
Why order the index (returncode,timestamp#) and not (timestamp#,returncode)? Think of the implications for how Oracle will need to search the index based on your predicates.
However, as per Doc ID 1329731.1, adding your own indexes to SYS.AUD$ is unsupported by Oracle.
If your table is so big that a full tablescan is slow for you then maybe consider purging the audit trail.
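If purging is the route taken, a minimal sketch using DBMS_AUDIT_MGMT might look like the following (assumes 11g or later, a 90-day retention chosen purely for illustration, and that the cleanup infrastructure has not been initialised yet; check MoS for the specifics of your version):

```sql
-- Sketch: purge standard audit trail (AUD$) rows older than 90 days.

-- One-time initialisation of the audit cleanup infrastructure:
begin
  dbms_audit_mgmt.init_cleanup(
    audit_trail_type         => dbms_audit_mgmt.audit_trail_aud_std,
    default_cleanup_interval => 24 /* hours */);
end;
/

-- Mark everything older than 90 days as archived (safe to delete):
begin
  dbms_audit_mgmt.set_last_archive_timestamp(
    audit_trail_type  => dbms_audit_mgmt.audit_trail_aud_std,
    last_archive_time => systimestamp - interval '90' day);
end;
/

-- Delete rows up to the archive timestamp:
begin
  dbms_audit_mgmt.clean_audit_trail(
    audit_trail_type        => dbms_audit_mgmt.audit_trail_aud_std,
    use_last_arch_timestamp => true);
end;
/
```

Note that deleting rows does not lower the table's high-water mark, so a full scan may stay slow until the segment is reorganised.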
-
So the answers to your actual questions are:
1) It's worth mentioning it to the DBA team - but I would hope that the DBA team would have noticed it already and (as Andrew says) perhaps you need to purge the aud$ table, and perhaps there is a purge operation but no-one checks the HWM of the table in case it has grown to be a very large table with very little data. (Maybe it needs a one-off, or occasional, MOVE.)
2) You could raise it, but Oracle Corp. hasn't done anything about it in the 6 years since Greg Rahn raised (unpublished) bug 9556885, so you might not get much response. A better index, given the current query, is (action#, ntimestamp#): it gives access by action 100 - 102 and filters on ntimestamp# within the index.
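As a sketch, the suggested index would be created like this (index name is hypothetical, and bear in mind the earlier point that indexing SYS.AUD$ yourself is unsupported per Doc ID 1329731.1, so test on a copy first):

```sql
-- Hypothetical DDL for the suggested index: range access on action#
-- (the view filters action between 100 and 102), then filtering on
-- ntimestamp# inside the index without visiting the table.
create index aud$_action_ts on sys.aud$(action#, ntimestamp#);
```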
Regards
Jonathan Lewis
UPDATE: There is a patch for 10.2.0.5, 11.1.0.1, and 11.2.0.3 for this bug - search MoS for 9556885 - I wouldn't be surprised if this simply built the index I've suggested, but get onto MoS and ask for a patch for your version.
UPDATE 2: Interestingly, document 9556885.8 says that the problem is fixed in EM 12.1.0.1, so presumably you're running an earlier version of EM despite running database 12.1.0.2.
-
Thanks for your advice so far!
I created multiple indexes on aud$, including (timestamp#,returncode), but Oracle chose to use the index (returncode,timestamp#).
If this has already been raised as a bug, I will forward that information to our DBA team and let them take it up with Oracle.
Adding index (action#, ntimestamp#) did not help, unfortunately.
-
As Jonathan mentioned, there is some data type conversion going on. Looking at dba_views, the dba_audit_session view selects the timestamp column from dba_audit_trail and filters where action between 100 and 102. dba_audit_trail uses
cast ( /* TIMESTAMP */
(from_tz(ntimestamp#,'00:00') at local) as date)
for the timestamp column, and action# for the action column, from sys.aud$.
I'd therefore suggest making an index on (cast((from_tz(ntimestamp#,'00:00') at local) as date), action#). This means a range scan can be used on the index to look at all index keys that fit the date range; it will then filter out the keys that don't match the action filter. Whether or not you'd include the returncode column in the index depends on the data distribution: do you audit all sessions, or only those that fail due to security?
I can't test this index right now as I don't have access to a sys user anywhere.
-
Just to reiterate: this is unsupported, even if it does work for you.
-
Thank you, Jonathan.
However, document 9556885.8 addresses the redundant use of TO_CHAR in the statement. A fix to that does not improve speed in any substantial way. From the document:
Description
'Failed login' metric is using an inefficient SQL statement. The 'TO_CHAR' conversion to do the date arithmetic is redundant and should not be used.
Workaround: None
Testcase: None
-
I tried implementing the index you suggested using this statement:
create index aud$13 on aud$(cast ((from_tz(ntimestamp#,'00:00') at local) as date), action#);
However, I get this error when running the statement: ORA-01743: only pure functions can be indexed
Something is incorrect in the statement, but I cannot see what it is. Can you?
-
I removed "at local" from the statement, then I was able to create the index:
create index aud$13 on aud$(cast ((from_tz(ntimestamp#,'00:00') ) as date), action#);
Unfortunately, it didn't help. The only index that has helped so far is:
create index pcp_aud$_idx3 on pcp_aud$(returncode,timestamp#);
pcp_aud$ is a copy of sys.aud$, and I have created pcp*-copies of sys.dba_audit_session and the other views in the stack down to aud$.
I have tried all the other relevant indexes that I could think of:
create index pcp_aud$_idx1 on pcp_aud$(timestamp#);
create index pcp_aud$_idx2 on pcp_aud$(returncode);
create index pcp_aud$_idx4 on pcp_aud$(timestamp#,returncode);
create index pcp_aud$_idx5 on pcp_aud$( decode(returncode,0,0,1),timestamp#);
create index pcp_aud$_idx6 on pcp_aud$(userid);
create index pcp_aud$_idx7 on pcp_aud$(returncode,timestamp#,userid);
create index pcp_aud$_idx8 on pcp_aud$(timestamp#,returncode,userid);
create index pcp_aud$_idx9 on pcp_aud$(userid,timestamp#,returncode);
create index pcp_aud$_idx10 on pcp_aud$(userid,returncode,timestamp#);
create index pcp_aud$_idx11 on pcp_aud$( decode(returncode,0,0,1),timestamp#);
create index pcp_aud$_idx12 on pcp_aud$(action#);
So, my conclusion is:
Only a rewrite of the WHERE clause from "...WHERE returncode != 0..." to "...WHERE returncode > 0...", combined with the index "create index pcp_aud$_idx3 on aud$(returncode,timestamp#);", will solve the problem.
However, as the statement is controlled by Oracle, there is actually nothing we can do ourselves to fix this, except for reporting it to Oracle.
Do you agree with that?
-
Ah, of course - the "at local" means the function is not a pure function.
Having the column purely in the index means it will have to be read completely in order for the conversion to take place; the CBO will choose to do this based on the size of the index and the reduction of the result set it thinks the filter will achieve. You can let Oracle know that the timestamp has an effective upper bound of systimestamp; this will reduce the expected result-set size further. Using the index Jonathan suggested before, (action#,ntimestamp#), should be the best bet here.
I fiddled and created my own version of aud$ with some hacky stats to show the difference.
explain plan for
select * from dba_audit_session
where  returncode <> 0
and    timestamp >= CURRENT_TIMESTAMP - TO_DSINTERVAL('0 0:30:00');

PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------
Plan hash value: 1696165548

----------------------------------------------------------------------------------------
| Id  | Operation                   | Name     | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |          |   625 | 62500 |  1154   (1)| 00:00:14 |
|*  1 |  TABLE ACCESS BY INDEX ROWID| AS_AUD$  |   625 | 62500 |  1154   (1)| 00:00:14 |
|*  2 |   INDEX RANGE SCAN          | AS_AUD$2 |  2250 |       |    29  (18)| 00:00:01 |
----------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter("RETURNCODE"<>0)
   2 - access("AUD"."ACTION#">=100 AND "AUD"."ACTION#"<=102)
       filter(SYS_EXTRACT_UTC(CAST(INTERNAL_FUNCTION(FROM_TZ(INTERNAL_FUNCTION("NTIMESTAMP#"),'00:00'))
       AT LOCAL AS date))>=SYS_EXTRACT_UTC(CURRENT_TIMESTAMP(6)-INTERVAL'+000000000 00:30:00.000000000'
       DAY(9) TO SECOND(9)))

explain plan for
select * from dba_audit_session
where  returncode <> 0
and    timestamp >= CURRENT_TIMESTAMP - TO_DSINTERVAL('0 0:30:00')
and    timestamp <  CURRENT_TIMESTAMP;

PLAN_TABLE_OUTPUT
-----------------------------------------------------------------------------------------
Plan hash value: 3411562882

-----------------------------------------------------------------------------------------
| Id  | Operation                    | Name     | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |          |    31 |  3100 |    85   (5)| 00:00:02 |
|*  1 |  FILTER                      |          |       |       |            |          |
|*  2 |   TABLE ACCESS BY INDEX ROWID| AS_AUD$  |    31 |  3100 |    85   (5)| 00:00:02 |
|*  3 |    INDEX RANGE SCAN          | AS_AUD$2 |   113 |       |    29  (18)| 00:00:01 |
-----------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter(SYS_EXTRACT_UTC(CURRENT_TIMESTAMP(6)-INTERVAL'+000000000 00:30:00.000000000'
       DAY(9) TO SECOND(9))<=SYS_EXTRACT_UTC(CURRENT_TIMESTAMP(6)))
   2 - filter("RETURNCODE"<>0)
   3 - access("AUD"."ACTION#">=100 AND "AUD"."ACTION#"<=102)
       filter(SYS_EXTRACT_UTC(CAST(INTERNAL_FUNCTION(FROM_TZ(INTERNAL_FUNCTION("NTIMESTAMP#"),'00:00'))
       AT LOCAL AS date))<=SYS_EXTRACT_UTC(CURRENT_TIMESTAMP(6)) AND
       SYS_EXTRACT_UTC(CAST(INTERNAL_FUNCTION(FROM_TZ(INTERNAL_FUNCTION("NTIMESTAMP#"),'00:00'))
       AT LOCAL AS date))>=SYS_EXTRACT_UTC(CURRENT_TIMESTAMP(6)-INTERVAL'+000000000 00:30:00.000000000'
       DAY(9) TO SECOND(9)))
Although this doesn't show the difference all that clearly, you can see that the expected number of rows returned from the index is reduced (to 5%), which in turn reduces the cost of accessing the table via the index. Hopefully you can appreciate the effect of the additional filter.
-
The columns you'd include in the index depend on how selective they are, how much they'd grow the index, and how much they'd reduce the scan of the index. It's all cost based; you've just got to remember that, as you go from left to right in an index (column position order), as soon as you stop using equality operations you can no longer be selective about what you scan in the index. This might be fine in a small index, but in a large index it will cost you.
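The left-to-right rule above can be sketched on a hypothetical table t(rc, ts), standing in for returncode and timestamp# (illustration only, not AUD$ itself):

```sql
-- Hypothetical table t(rc, ts): compare the two column orders.

-- (ts, rc): the range predicate "ts >= :lo" drives the index scan over
-- the whole recent slice of keys; "rc > 0" is only a filter inside it.
create index t_ts_rc on t(ts, rc);

-- (rc, ts): with the rewritten predicate "rc > 0 and ts >= :lo" the
-- leading column is already selective, so the scan touches only keys
-- with rc > 0, then ranges on ts within each rc value.
create index t_rc_ts on t(rc, ts);

-- With the original "rc != 0" the optimizer cannot turn the leading
-- column into a single contiguous range, which is why the rewrite to
-- "rc > 0" mattered in the first place.
```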