
Oracle Database Discussions


Oracle DB 19c: AVG function and wrong results. Already known BUG, or something else?

S-Max, Aug 5 2020 (edited Aug 6 2020)

Hi all,

After upgrading from 12c to 19c I have problems with SELECT statements that include the AVG function.

With no apparent pattern it returns either 0 or the correct value, depending on the position in the select list and/or which columns are selected!

If you would like to test this case, I have attached two files with the CREATE TABLE, INSERT, CREATE VIEW and SELECT statements I use.

On 11g and 12c all of the SELECT statements work fine!

Does anyone have any ideas?

Comments

JohnWatson2

The sys.aud$ table is not indexed. You need to move the audit records to a table of your own, and query them there.
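For illustration, a minimal sketch of that approach, assuming you copy new rows across periodically (the table, index and tablespace names are made up, and AUD$ column types vary by version, so treat this as a starting point only):

create table my_audit_copy tablespace audit_ts
  as select * from sys.aud$ where 1 = 0;   -- empty clone of AUD$'s structure

create index my_audit_copy_ts_ix on my_audit_copy (ntimestamp#);

-- run periodically: pull across anything newer than what has already been copied
insert into my_audit_copy
  select a.*
  from   sys.aud$ a
  where  a.ntimestamp# > (select nvl(max(c.ntimestamp#),
                                     to_timestamp('1900-01-01', 'YYYY-MM-DD'))
                          from   my_audit_copy c);
commit;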

MhAGOU

Hi,

The underlying table AUD$ is not indexed, and it should be archived and purged regularly to limit its growth.

If you can't archive it for company policy reasons, try gathering statistics to see if it helps:

exec  dbms_stats.GATHER_FIXED_OBJECTS_STATS();
exec  dbms_stats.GATHER_DICTIONARY_STATS();
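As a hedged illustration of the archive-and-purge route mentioned above, the supplied DBMS_AUDIT_MGMT package can schedule it (the retention window and job name below are made up; adjust to your own policy):

begin
  -- one-off initialisation; by default this also relocates AUD$ out of SYSTEM (to SYSAUX)
  dbms_audit_mgmt.init_cleanup(
    audit_trail_type         => dbms_audit_mgmt.audit_trail_aud_std,
    default_cleanup_interval => 24);

  -- record which audit entries have already been archived elsewhere
  dbms_audit_mgmt.set_last_archive_timestamp(
    audit_trail_type  => dbms_audit_mgmt.audit_trail_aud_std,
    last_archive_time => systimestamp - interval '90' day);

  -- purge everything older than the last-archive timestamp, every 24 hours
  dbms_audit_mgmt.create_purge_job(
    audit_trail_type           => dbms_audit_mgmt.audit_trail_aud_std,
    audit_trail_purge_interval => 24,
    audit_trail_purge_name     => 'PURGE_STD_AUDIT',
    use_last_arch_timestamp    => true);
end;
/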

Regards.

John Stegeman

How will gathering statistics on that table help the performance of the query?

AndrewSayer

Why do you care how long it takes? How often do you run it - every 10 minutes? Does it take longer than that to complete? Can you just run it less often with a wider date range? It will take pretty much the same amount of time regardless, as it will always full-scan the table.

If you only care about these users, then what about a logon trigger that only fires for those users? If you don't expect that to happen often, the overhead is pretty much nothing. You can then index your own table, should it need indexing.
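A rough sketch of that idea, under the assumption you only care about a handful of named accounts (the trigger, table and user names are made up); note, as pointed out later in the thread, an AFTER LOGON trigger only fires for successful logins:

create table my_logon_audit (
  username   varchar2(128),
  osuser     varchar2(128),
  machine    varchar2(256),
  logon_time timestamp
);

create or replace trigger trg_logon_audit
after logon on database
begin
  -- only log the accounts of interest; everyone else pays no insert cost
  if sys_context('userenv', 'session_user') in ('USER1', 'USER2', 'USER3') then
    insert into my_logon_audit (username, osuser, machine, logon_time)
    values (sys_context('userenv', 'session_user'),
            sys_context('userenv', 'os_user'),
            sys_context('userenv', 'host'),
            systimestamp);
  end if;
end;
/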

EdStevens

As an aside, I do hope you have moved sys.aud$ out of its default tablespace (SYSTEM) and into its own dedicated TS...

John Stegeman

Thanks for that article which demonstrates why gathering fixed object stats has nothing whatsoever to do with AUD$.

Mark D Powell

mc88, I agree with JohnWatson2.  Creating your own table with the appropriate indexes is the most efficient way to access the audit data where repeated access is needed.  But how often is the data queried?  And how?  Does it really matter if the query is slow?

- -

HTH -- Mark D Powell -- 

toonie

Hi Ed,

Just out of curiosity, would you mind expanding on the reasons for moving sys.aud$ like this?

What are the benefits to isolating it into its own tablespace?

Regards,

Ruan

toonie

Hi Mark & @JohnWatson2 (based on the very first thread reply),

I would appreciate some of your thoughts / input if you would be so kind... The AUD$ segment is not indexed by default; however, I am very curious to get some opinions on the potential impact of manually creating an index, say for example on the NTIMESTAMP# column, in order to improve query performance, and on whether this might have a detrimental effect on DML against the table. What I am wondering is what sort of impact could be expected, or what sort of wait event types one might look for to highlight potential contention on the index. I understand (per the MOS document below) that putting an index in place is unsupported but, depending on the customer use case, may be beneficial, and that it is a case of testing it out in your own environment.

Oracle Support Document 1329731.1 (The Effect Of Creating Index On Table Sys.Aud$) can be found at: https://support.oracle.com/epmos/faces/DocumentDisplay?id=1329731.1

To explain the background of why I ask: we are currently exploring a third-party business analytics tool for audit / log analysis which has a database plugin and can be used to query and present database audit info in real-time dashboards. Hence we would need the query (with index) on AUD$ to perform reasonably quickly if run periodically, against a relatively well trimmed-down version of AUD$ (retaining only, say, a week's data).

Regards,

Ruan

AndrewSayer

Updates to the index will use standard db file sequential reads to load the appropriate index blocks into the buffer cache. If accessing the index lets your queries filter down the number of rows they would visit in the table enough, then it would be useful there. You may find that you need to provide both lower and upper bounds in your filter so that the CBO can identify the date range and make an accurate estimate of the number of rows you would find in the table.

You can reduce the overhead for unimportant logins by using a case when statement in your index so that the index key is only not null for the users of interest. This will mean that only those user logins will have to do anything with the index.

The real benchmark (the only one that matters) is to time how long a login takes; it doesn't matter what wait events are seen, all that matters is that the login process stays within an acceptable time.

I will ask again though: surely this query only needs to complete within 10 minutes? If it takes longer, can't you just run it less often? How real-time do you need the data to be?

John Stegeman

Because aud$ can grow to be quite large - do you really want your SYSTEM tablespace to get huge just for this? Moving it into another tablespace makes it easier to manage (it's no longer in a tablespace with the data dictionary).

JohnWatson2
Jonathan Lewis

A logon trigger won't fire if the logon fails, will it? Note that the original query includes the predicate "return_code != 0" - i.e. report only failures.

For the grimmer side effects of this type of problem there's a blog I wrote a few years ago about an OEM "failed login" report that was doing a 1.1M block tablescan every few minutes and was responsible for more than 30% of the total I/O on the system (with a performance impact on all disk I/Os) - and sometimes it happened during critical periods when the I/O impact made an important difference. It's the side effect that can be more important than the run time: https://jonathanlewis.wordpress.com/2010/04/05/failed-login/

Regards

Jonathan Lewis

AndrewSayer

Jonathan Lewis wrote:

A logon trigger won't fire if the logon fails, will it? Note that the original query includes the predicate "return_code != 0" - i.e. report only failures.

For the grimmer side effects of this type of problem there's a blog I wrote a few years ago about an OEM "failed login" report that was doing a 1.1M block tablescan every few minutes and was responsible for more than 30% of the total I/O on the system (with a performance impact on all disk I/Os) - and sometimes it happened during critical periods when the I/O impact made an important difference. It's the side effect that can be more important than the run time: https://jonathanlewis.wordpress.com/2010/04/05/failed-login/

Regards

Jonathan Lewis

I completely missed the failed part, thanks for pointing it out. In that case though, you could use an AFTER SERVERERROR trigger to capture the necessary details from the server environment.
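Hypothetically, such a trigger could look something like the sketch below (all names are made up, and whether ORA-01017 is actually passed to a SERVERERROR trigger can depend on version and configuration, so test before relying on it):

create table my_failed_logon_audit (
  attempted_user varchar2(128),
  client_host    varchar2(256),
  error_time     timestamp
);

create or replace trigger trg_failed_logon
after servererror on database
begin
  if ora_is_servererror(1017) then   -- ORA-01017: invalid username/password
    insert into my_failed_logon_audit (attempted_user, client_host, error_time)
    values (sys_context('userenv', 'authenticated_identity'),
            sys_context('userenv', 'host'),
            systimestamp);
  end if;
end;
/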

For general manageability, and for future requirements that might need the audit data, archiving the audit data seems the most sensible option.

toonie

Thanks Andrew for your reply,

I will examine and compare db file sequential reads for a window with the index in place versus without it.

Currently, the query performance and row filtering on the aud$ segment is working quite effectively for us. The query criteria applied by the tool work on a rising-column method, whereby it queries along the lines of "show me everything from the table greater than the last, greatest column value I have already received" (using NTIMESTAMP#). I am not logged onto work at present, but from session tracing I recall it uses something akin to the below and does make use of the index in its execution plan...

select * from sys.aud$ where ntimestamp# >
  (select max(ntimestamp#) from <data the tool has already received>);

The SQL query applied by the tool (Splunk) is not something we can readily manipulate: we point the tool at the NTIMESTAMP# column and it constructs the SQL along the lines of the logic above. Initial testing has reduced a 6-8 minute query time for a full table scan of AUD$ down to a small number of seconds based on this rising-column logic. This allows the tool to poll AUD$ every 10-15 minutes without much concern about overhead. So in terms of the AUD$ query performance, all looks good thus far.

I'm afraid I don't fully understand what you are suggesting by this (below) statement though?

You can reduce the overhead for unimportant logins by using a case when statement in your index so that the index key is only not null for the users of interest.

Would you mind expanding or showing a simple example of what you have in mind?

I had never really given much consideration to the impact on login time (just because our supported databases are predominantly data warehouse types, usually with persistently logged-in sessions).

Thanks

Ruan

toonie

Okay, thanks John - that's a fair point.

But if it's just a case of administrative overhead for space management, that's not something which would keep me up at night. Disk space is cheap.

In the initial stages of researching my case for indexing AUD$, I attempted to identify some of our "busier" aud$ databases by checking for non-prod environments with a comparatively high number of block changes on that segment. The largest I came across was approximately 300M rows (trimmed to the most recent 4 months of data only) and a segment size of 70GB, so not massive in my estimation (but everything is relative).

Regards,

Ruan

toonie

John,

Thank you kindly for the very appropriate demo link. I'll likely borrow that logic for the purposes of my own testing.

Regards,

Ruan

AndrewSayer

toonie wrote:

Thanks Andrew for your reply,

I will examine and compare db file sequential reads for a window with the index in place versus without it.

Currently, the query performance and row filtering on the aud$ segment is working quite effectively for us. The query criteria applied by the tool work on a rising-column method, whereby it queries along the lines of "show me everything from the table greater than the last, greatest column value I have already received" (using NTIMESTAMP#). I am not logged onto work at present, but from session tracing I recall it uses something akin to the below and does make use of the index in its execution plan...

select * from sys.aud$ where ntimestamp# >
  (select max(ntimestamp#) from <data the tool has already received>);

The SQL query applied by the tool (Splunk) is not something we can readily manipulate: we point the tool at the NTIMESTAMP# column and it constructs the SQL along the lines of the logic above. Initial testing has reduced a 6-8 minute query time for a full table scan of AUD$ down to a small number of seconds based on this rising-column logic. This allows the tool to poll AUD$ every 10-15 minutes without much concern about overhead. So in terms of the AUD$ query performance, all looks good thus far.

I'm afraid I don't fully understand what you are suggesting by this (below) statement though?

You can reduce the overhead for unimportant logins by using a case when statement in your index so that the index key is only not null for the users of interest.

Would you mind expanding or showing a simple example of what you have in mind?

I had never really given much consideration to the impact on login time (just because our supported databases are predominantly data warehouse types, usually with persistently logged-in sessions).

Thanks

Ruan

I'm not sure that method of comparing a date/timestamp column to the last time you ran the query works as a general method. You can only read from the table what has been committed, and the timestamp column can't be updated upon commit - it is only set as the insert statement runs. There might be some internals working in your favour here, as it seems OEM used to rely on the exact same sort of query - hopefully Oracle wouldn't make errors with consistency!


The other problem is that you used > rather than >=: a row could be inserted and committed with the same timestamp as the one you are querying for, but a moment after you start the query. This might just be an error in your recollection.

As you seem to have pretty heavy audit activity, preventing those other actions from touching the index will save a lot of the index overhead - do this by making the index key NULL for the rows you don't care about. The views use a function on ntimestamp# to convert it to a date when exposing it, so any index access through them would have to read all of the ntimestamp# values in the index and apply the function to every key before it could identify the rowids of interest. You could limit the amount you read with the case-when-null trick, or you could just prefix the index with your other filtering columns, but that would not help your other audit activity.

As you are now querying sys.aud$ directly, you don't have to worry so much about the function conversion but you should still be wary about impacting all the other audited activity.

Something like:

create index index_name on sys.aud$
  (case when action# between 100 and 102
        and returncode != '0'
        and userid in ('USER1','USER2','USER3')
   then ntimestamp# else null end);

explain plan for
SELECT *
FROM   sys.aud$
WHERE  case when action# between 100 and 102
            and returncode != '0'
            and userid in ('USER1','USER2','USER3')
       then ntimestamp# else null end >= SYSDATE - 10/(24*60);

Plan hash value: 1935142001

---------------------------------------------------------------------------------------------------
| Id  | Operation                            | Name       | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                     |            |     2 |   236 |     0   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID BATCHED | AUD$       |     2 |   236 |     0   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN                   | INDEX_NAME |     1 |       |     0   (0)| 00:00:01 |
---------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

  2 - access(CASE  WHEN ("ACTION#">=100 AND "ACTION#"<=102 AND "RETURNCODE"<>0 AND
              ("USERID"='USER1' OR "USERID"='USER2' OR "USERID"='USER3')) THEN "NTIMESTAMP#" ELSE NULL
              END >=SYSDATE@!-.006944444444444444444444444444444444444444)

EdStevens

toonie wrote:

Okay, thanks John - that's a fair point.

But if it's just a case of administrative overhead for space management, that's not something which would keep me up at night. Disk space is cheap.

It's not a matter of disk space, per se. It's going to take as much space as it's going to take regardless of what TS you put it in. But the SYSTEM TS is so critical to the operation of the database that you don't want the high amount of insert/delete activity that the audit table generates to be happening in SYSTEM. And you sure don't want the audit table to end up maxing out the allocated space for the SYSTEM TS. Putting aud$ in a dedicated TS greatly reduces the risk to the database that could result from having it in the SYSTEM TS.

toonie

Thanks Andrew,

You've actually touched upon a very important point below, which we identified in the early stages.

I'm not sure that method of comparing a date/timestamp column to the last time you ran the query works as a general method. You can only read from the table what has been committed, and the timestamp column can't be updated upon commit - it is only set as the insert statement runs. There might be some internals working in your favour here, as it seems OEM used to rely on the exact same sort of query - hopefully Oracle wouldn't make errors with consistency!

The other problem is that you used > rather than >=: a row could be inserted and committed with the same timestamp as the one you are querying for, but a moment after you start the query. This might just be an error in your recollection.

The business analytics tool is designed to work with a rising PK column, i.e. an incrementing, unique, sequential value that can easily be identified as having been collected or not. By designating NTIMESTAMP# as our column of selection on AUD$ (which has no index and no PK by default), we are ignoring that rule and have to live with the risk of events occurring at the very edge of a polling interval's timestamp (i.e. records could potentially be missed if they land on the very same timestamp).

That's also a very neat trick in terms of the column indexing option - I'll have to explore further to see if it can be leveraged in our scenario.

Thanks,

Ruan

toonie

Thanks Ed,

And you sure don't want the audit table to end up maxing out the allocated space for the SYSTEM TS. Putting aud$ in a dedicated TS greatly reduces the risk to the database that could result from having it in the SYSTEM TS.

Yes, if the SYSTEM TS runs out, you are in trouble, no doubt; however, if a dedicated AUDIT TS runs out of space in the same manner, I assume you are in just as much difficulty, as your database will become inaccessible for login?

Regards,

Ruan

AndrewSayer

Using a primary key column would have exactly the same problem. You can insert one row in one session without committing, then another session can insert another row (incrementing the sequence) and commit. Your query will only see the second row. The first session then commits, and its row will never get seen.

Also bear in mind that sequences don't have to return numbers in order on RAC (unless you use the ORDER attribute, with additional overhead). I don't know if there's any reason to suggest the same can't occur on a single instance - best to refer to the docs about these sorts of things rather than make assumptions.

EdStevens

toonie wrote:

Thanks Ed,

And you sure don't want the audit table to end up maxing out the allocated space for the SYSTEM TS. Putting aud$ in a dedicated TS greatly reduces the risk to the database that could result from having it in the SYSTEM TS.

Yes, if the SYSTEM TS runs out, you are in trouble, no doubt; however, if a dedicated AUDIT TS runs out of space in the same manner, I assume you are in just as much difficulty, as your database will become inaccessible for login?

Regards,

Ruan

No, you aren't in just as much trouble.  Any necessary housekeeping is much easier on a non-SYSTEM ts. You can't take the SYSTEM ts offline. Tablespace recovery is a whole different ballgame when dealing with the SYSTEM ts.

It's really the same as any application schema.  The reason you don't want your aud$ table in SYSTEM is the same reason you don't want your HR tables, or your FINANCE tables, or your PURCHASING or INVENTORY or PRODUCTION_SCHEDULE tables in SYSTEM.  The audit table is really no different.  I can only assume the only reason oracle puts it in SYSTEM is because that is the one TS they could guarantee is there. 

Jonathan Lewis

  I can only assume the only reason oracle puts it in SYSTEM is because that is the one TS they could guarantee is there.

And if you use the dbms_audit_mgmt package - in particular the init_cleanup procedure - Oracle will move the AUD$ table into the SYSAUX tablespace (by default) anyway.

Regards

Jonathan Lewis
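For completeness, if the goal is a dedicated tablespace rather than SYSAUX, the same package can relocate the standard audit trail; a minimal sketch, assuming a pre-created tablespace named AUDIT_TS (the name is illustrative):

begin
  -- move the standard audit trail (AUD$) into the dedicated tablespace
  dbms_audit_mgmt.set_audit_trail_location(
    audit_trail_type           => dbms_audit_mgmt.audit_trail_aud_std,
    audit_trail_location_value => 'AUDIT_TS');
end;
/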

EdStevens

Jonathan Lewis wrote:

I can only assume the only reason oracle puts it in SYSTEM is because that is the one TS they could guarantee is there.

And if you use the dbms_audit_mgmt package - in particular the init_cleanup procedure - Oracle will move the AUD$ table into the SYSAUX tablespace (by default) anyway.

Regards

Jonathan Lewis

Yep. And if I recall, there is a fair amount of message traffic on the web regarding problems with putting it there, as well. If it were up to me, I'd code Oracle to simply not create the audit tables out of the box: defer it to later, and reference that as part of setting up auditing, documenting it as a prerequisite alongside setting the AUDIT_TRAIL init parameter.

AJ

mc88 wrote:

Unfortunately the performance of this SQL is quite poor since it does TABLE ACCESS FULL on sys.aud$. I tried to narrow it with additional conditions (e.g. action_name), and I also put in some hints (parallel, result_cache and some others), but nothing seems to work. My aud$ is pretty large due to company restrictions and regulations, so my question here is: is it possible at all to optimize that query by forcing Oracle to use indexes here? I would be grateful for any help & tips.

Have you considered Audit Vault? (Oracle Audit Vault and Database Firewall Documentation)

If the business requires you to retain the audit records for a long period of time, and especially if you must maintain audit records for several databases, I would probably look into that: http://docs.oracle.com/cd/E69292_01/index.html

AJ
