13 Replies Latest reply on Mar 25, 2013 5:01 PM by jgarry

    reading from AUD$ Table causes the performance issue ?

    574638
      I am not a DBA.

      One of my customers is running Oracle 11g R2.

      I am responsible for collecting logs from the Oracle database via a Log Manager. The Log Manager connects to the database via JDBC and reads the audit information (logs/events) from the AUD_TRAIL (AUD$) table.

      As per the Log Manager documentation:
      <snip>
      The Database Connector reads data from the database and keeps track of its offset, or position, in the data.
      The next time the Connector starts, it uses the saved offset to start where it stopped.
      This prevents the Connector from skipping data or sending duplicate data.
      </snip>

      Problem:
      The DBA wants me to configure the Log Manager to connect at an interval of 80 seconds, because every time the Log Manager connects, it
      reads/queries/scans the whole AUD$ table, which causes significant performance issues.

      My question here is: does it really affect performance if a Log Manager reads the AUD$ table at an interval of 10 seconds?

      Or would it help with this issue if we keep the AUD$ table small, e.g. if the DBA prevents the AUD$ table from growing larger than 200 MB?

      Edited by: user571635 on Mar 22, 2013 10:49 AM
        • 1. Re: reading from AUD$ Table causes the performance issue ?
          JustinCave
          It is entirely possible that there would be a substantial performance impact from trying to do a full scan on AUD$ on a regular basis. It's hard for us to know for sure without looking at what the DBA is looking at (presumably, AWR or Statspack reports). But it's certainly possible.

          You can't configure the AUD$ table not to grow-- it will grow to accommodate whatever actions are configured to be audited. It is certainly possible to move data from the AUD$ table to a different database (frequently, organizations will create one database that just stores audit data and regularly move data from AUD$ in each individual database to an audit table in this central database before deleting it on the source). This tends to be a relatively slow and relatively expensive process that is done in a window where the database is relatively inactive (e.g. overnight in an OLTP database).
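
          Purely as a sketch of that archive-and-purge approach (the audit_central database link and the audit_archive table are made-up names for illustration, and you'd run this as a suitably privileged user in a quiet window):

          -- Sketch only: assumes a database link audit_central to the central
          -- audit database and a pre-created audit_archive table shaped like
          -- sys.aud$. Note that ntimestamp# is stored in UTC, and the CLOB
          -- columns (sqltext/sqlbind) may need separate handling over a link.
          INSERT INTO audit_archive@audit_central
            SELECT * FROM sys.aud$
            WHERE  ntimestamp# < SYSTIMESTAMP - INTERVAL '7' DAY;

          DELETE FROM sys.aud$
          WHERE  ntimestamp# < SYSTIMESTAMP - INTERVAL '7' DAY;

          COMMIT;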

          Justin
          • 2. Re: reading from AUD$ Table causes the performance issue ?
            marksmithusa
            Ask your DBA how large the audit trail is - if it has four quadrillion rows in it, it's going to take a long time to FTS it.

            If it does have four quadrillion rows in it, tell him to use the very nice audit management capabilities in 11.2....
            • 3. Re: reading from AUD$ Table causes the performance issue ?
              jgarry
              Yes, there are two solutions according to MOS note Performance Issues Caused by DB Collector When AUD$ Is Very Large (Full Table Scan on AUD$) [ID 1356088.1] (You have Oracle support, right?)

              1. Keep the data small.
              2. Add an index. However, if that causes its own performance problems, it is not supported.
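
              For illustration only (and remember, unsupported if it backfires), option 2 amounts to something like:

              -- Illustrative only: the kind of index the MOS note describes, so
              -- the collector's time-range query can avoid a full scan of AUD$.
              -- Not supported by Oracle if it causes its own problems.
              -- Run as SYS (or a suitably privileged user).
              CREATE INDEX aud_ts_ix ON sys.aud$ (ntimestamp#);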
              • 4. Re: reading from AUD$ Table causes the performance issue ?
                574638
                Thanks for all the insightful comments/suggestions.

                I asked the DBA to provide me with information about the size of the table.

                A few things I want to correct here: the log manager does not read from SYS.AUD$ but from DBA_AUDIT_TRAIL, and as per the log manager docs, we did the following:
                grant CREATE SESSION to <user>;
                grant select on v_$session to <user>;
                grant select on v_$version to <user>;
                grant select on SYS.DBA_AUDIT_TRAIL to <user>;
                I don't know how our DBA will receive the comments provided here.

                As a workaround to minimize the performance issue, would you recommend configuring the Log Manager (JDBC connector) to capture the audit information from the database for 5 minutes every hour, rather than staying connected to read/scan/query DBA_AUDIT_TRAIL all the time?

                Regards,
                • 5. Re: reading from AUD$ Table causes the performance issue ?
                  Jonathan Lewis
                  >
                  As per the Log Manager documentation:
                  <snip>
                  The Database Connector reads data from the database and keeps track of its offset, or position, in the data.
                  The next time the Connector starts, it uses the saved offset to start where it stopped.
                  This prevents the Connector from skipping data or sending duplicate data.
                  </snip>

                  Problem:
                  The DBA wants me to configure the Log Manager to connect at an interval of 80 seconds, because every time the Log Manager connects, it
                  reads/queries/scans the whole AUD$ table, which causes significant performance issues.

                  My question here is: does it really affect performance if a Log Manager reads the AUD$ table at an interval of 10 seconds?
                  Here's an example of how bad things can be when you scan the audit table every 30 minutes: http://jonathanlewis.wordpress.com/2010/04/05/failed-login/
                  As from 11.1.0.7 aud$ lost all its indexes (except the LOBINDEXES of course), which changed the way it is handled by Oracle: Re: Updating AUD$ consumes most of the time in AWR report.
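                  Easy enough to check on your own system, e.g.:

                  -- List the indexes currently on SYS.AUD$; from 11.1.0.7
                  -- onwards you should see only the LOB indexes.
                  SELECT index_name, index_type
                  FROM   dba_indexes
                  WHERE  table_owner = 'SYS' AND table_name = 'AUD$';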

                  How do you plan to access only "the data from where you left off"? There is no alternative but to scan everything.

                  A couple of links to comments on how to delete from the aud$ table here: http://jonathanlewis.wordpress.com/2009/09/19/audit/


                  Regards
                  Jonathan Lewis

                  Edited by: Jonathan Lewis on Mar 23, 2013 10:07 AM
                  • 6. Re: reading from AUD$ Table causes the performance issue ?
                    574638
                    As from 11.1.0.7 aud$ lost all its indexes (except the LOBINDEXES of course), which changed the way it is handled by Oracle
                    I mistakenly wrote that the log manager reads from SYS.AUD$.

                    The log manager does not read from SYS.AUD$ but from DBA_AUDIT_TRAIL,

                    and as per the log manager docs, we did the following before starting to capture audit information from Oracle:
                    grant CREATE SESSION to <user>;
                    grant select on v_$session to <user>;
                    grant select on v_$version to <user>;
                    grant select on SYS.DBA_AUDIT_TRAIL to <user>;
                    So my question is: has DBA_AUDIT_TRAIL also lost its indexes? And does the information you provided (links/URLs etc.) also apply to DBA_AUDIT_TRAIL?
                    Also, since the DBA says the log manager's continuous scanning of DBA_AUDIT_TRAIL is causing performance issues, do you recommend configuring the log manager to read/scan DBA_AUDIT_TRAIL at an interval of, say, 90 seconds?

                    Also, as suggested by experts, I have asked the DBA to add/create an index on the sys.aud$.timestamp# column.

                    Regards

                    Edited by: user571635 on Mar 23, 2013 2:17 AM
                    • 7. Re: reading from AUD$ Table causes the performance issue ?
                      Justin_Mungal
                      Here's an article on deleting from AUD$ with DBMS_AUDIT_MGMT, if you decide to go that route.
                      http://jmoracle.com/DBMSAuditMgmt/index.html
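
                      The basic flow is roughly this (a minimal sketch - the cleanup interval and the 30-day retention are just illustrative values; run as a suitably privileged user):

                      BEGIN
                        -- One-time setup: enables audit-trail management
                        -- (and relocates AUD$ out of the SYSTEM tablespace).
                        DBMS_AUDIT_MGMT.INIT_CLEANUP(
                          audit_trail_type         => DBMS_AUDIT_MGMT.AUDIT_TRAIL_AUD_STD,
                          default_cleanup_interval => 24);

                        -- Mark how far the trail has been archived; the purge
                        -- below will not delete past this point.
                        DBMS_AUDIT_MGMT.SET_LAST_ARCHIVE_TIMESTAMP(
                          audit_trail_type  => DBMS_AUDIT_MGMT.AUDIT_TRAIL_AUD_STD,
                          last_archive_time => SYSTIMESTAMP - INTERVAL '30' DAY);

                        -- Purge audit records up to the archive timestamp.
                        DBMS_AUDIT_MGMT.CLEAN_AUDIT_TRAIL(
                          audit_trail_type        => DBMS_AUDIT_MGMT.AUDIT_TRAIL_AUD_STD,
                          use_last_arch_timestamp => TRUE);
                      END;
                      /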
                      • 8. Re: reading from AUD$ Table causes the performance issue ?
                        Justin_Mungal
                        user571635 wrote:
                        As from 11.1.0.7 aud$ lost all its indexes (except the LOBINDEXES of course), which changed the way it is handled by Oracle
                        I mistakenly wrote that the log manager reads from SYS.AUD$.

                        The log manager does not read from SYS.AUD$ but from DBA_AUDIT_TRAIL,

                        and as per the log manager docs, we did the following before starting to capture audit information from Oracle:
                        grant CREATE SESSION to <user>;
                        grant select on v_$session to <user>;
                        grant select on v_$version to <user>;
                        grant select on SYS.DBA_AUDIT_TRAIL to <user>;
                        So my question is: has DBA_AUDIT_TRAIL also lost its indexes? And does the information you provided (links/URLs etc.) also apply to DBA_AUDIT_TRAIL?
                        Also, since the DBA says the log manager's continuous scanning of DBA_AUDIT_TRAIL is causing performance issues, do you recommend configuring the log manager to read/scan DBA_AUDIT_TRAIL at an interval of, say, 90 seconds?

                        Also, as suggested by experts, I have asked the DBA to add/create an index on the sys.aud$.timestamp# column.

                        Regards

                        Edited by: user571635 on Mar 23, 2013 2:17 AM
                        DBA_AUDIT_TRAIL is just a view that uses aud$. It doesn't have indexes of its own.

                        But don't take my word for it...
                        SQL> set pages 999 long 50000
                        SQL> select view_name, text from dba_views where view_name='DBA_AUDIT_TRAIL';
                        
                        VIEW_NAME
                        ------------------------------
                        TEXT
                        --------------------------------------------------------------------------------
                        DBA_AUDIT_TRAIL
                        select spare1           /* OS_USERNAME */,
                               userid           /* USERNAME */,
                               userhost         /* USERHOST */,
                               terminal         /* TERMINAL */,
                               cast (           /* TIMESTAMP */
                                   (from_tz(ntimestamp#,'00:00') at local) as date),
                               obj$creator      /* OWNER */,
                               obj$name         /* OBJECT_NAME */,
                               aud.action#      /* ACTION */,
                               act.name         /* ACTION_NAME */,
                               new$owner        /* NEW_OWNER */,
                               new$name         /* NEW_NAME */,
                               decode(aud.action#,
                                      108 /* grant  sys_priv */, null,
                                      109 /* revoke sys_priv */, null,
                                      114 /* grant  role */, null,
                                      115 /* revoke role */, null,
                                      auth$privileges)
                                                /* OBJ_PRIVILEGE */,
                               decode(aud.action#,
                                      108 /* grant  sys_priv */, spm.name,
                                      109 /* revoke sys_priv */, spm.name,
                                      null)
                                                /* SYS_PRIVILEGE */,
                               decode(aud.action#,
                                      108 /* grant  sys_priv */, substr(auth$privileges,1,1),
                                      109 /* revoke sys_priv */, substr(auth$privileges,1,1),
                                      114 /* grant  role */, substr(auth$privileges,1,1),
                                      115 /* revoke role */, substr(auth$privileges,1,1),
                                      null)
                                                /* ADMIN_OPTION */,
                               auth$grantee     /* GRANTEE */,
                               decode(aud.action#,
                                      104 /* audit   */, aom.name,
                                      105 /* noaudit */, aom.name,
                                      null)
                                                /* AUDIT_OPTION  */,
                               ses$actions      /* SES_ACTIONS   */,
                               cast((from_tz(cast(logoff$time as timestamp),'00:00') at local) as date)
                                                /* LOGOFF_TIME   */,
                               logoff$lread     /* LOGOFF_LREAD  */,
                               logoff$pread     /* LOGOFF_PREAD  */,
                               logoff$lwrite    /* LOGOFF_LWRITE */,
                               decode(aud.action#,
                                      104 /* audit   */, null,
                                      105 /* noaudit */, null,
                                      108 /* grant  sys_priv */, null,
                                      109 /* revoke sys_priv */, null,
                                      114 /* grant  role */, null,
                                      115 /* revoke role */, null,
                                      aud.logoff$dead)
                                                 /* LOGOFF_DLOCK */,
                               comment$text      /* COMMENT_TEXT */,
                               sessionid         /* SESSIONID */,
                               entryid           /* ENTRYID */,
                               statement         /* STATEMENTID */,
                               returncode        /* RETURNCODE */,
                               spx.name          /* PRIVILEGE */,
                               clientid          /* CLIENT_ID */,
                               auditid           /* ECONTEXT_ID */,
                               sessioncpu        /* SESSION_CPU */,
                               from_tz(ntimestamp#,'00:00') at local,
                                                           /* EXTENDED_TIMESTAMP */
                               proxy$sid                      /* PROXY_SESSIONID */,
                               user$guid                           /* GLOBAL_UID */,
                               instance#                      /* INSTANCE_NUMBER */,
                               process#                            /* OS_PROCESS */,
                               xid                              /* TRANSACTIONID */,
                               scn                                        /* SCN */,
                               to_nchar(substr(sqlbind,1,2000))      /* SQL_BIND */,
                               to_nchar(substr(sqltext,1,2000))      /* SQL_TEXT */,
                               obj$edition                   /* OBJ_EDITION_NAME */,
                               dbid                                      /* DBID */
                        from sys.aud$ aud, system_privilege_map spm, system_privilege_map spx,
                             STMT_AUDIT_OPTION_MAP aom, audit_actions act
                        where   aud.action#     = act.action    (+)
                          and - aud.logoff$dead = spm.privilege (+)
                          and   aud.logoff$dead = aom.option#   (+)
                          and - aud.priv$used   = spx.privilege (+)
                        Just a small piece of advice: before you go off creating indexes, make sure your audit trail only contains what you need it to.
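
                        A quick way to see what's currently switched on (and therefore what's filling AUD$) is something like:

                        -- Statement and privilege audit options currently in force:
                        SELECT audit_option, success, failure FROM dba_stmt_audit_opts;
                        SELECT privilege, success, failure FROM dba_priv_audit_opts;

                        -- Then switch off anything you don't need, e.g. (illustrative):
                        NOAUDIT SELECT TABLE;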
                        • 9. Re: reading from AUD$ Table causes the performance issue ?
                          574638
                          Justin, I will pass your advice to our DBA.

                          Thanks, forum.
                          • 10. Re: reading from AUD$ Table causes the performance issue ?
                            Jonathan Lewis
                            user571635 wrote:
                            As from 11.1.0.7 aud$ lost all its indexes (except the LOBINDEXES of course), which changed the way it is handled by Oracle
                            I mistakenly wrote that the log manager reads from SYS.AUD$.

                            The log manager does not read from SYS.AUD$ but from DBA_AUDIT_TRAIL.

                            So my question is: has DBA_AUDIT_TRAIL also lost its indexes? And does the information you provided (links/URLs etc.) also apply to DBA_AUDIT_TRAIL?
                            Also, since the DBA says the log manager's continuous scanning of DBA_AUDIT_TRAIL is causing performance issues, do you recommend configuring the log manager to read/scan DBA_AUDIT_TRAIL at an interval of, say, 90 seconds?

                            Also, as suggested by experts, I have asked the DBA to add/create an index on the sys.aud$.timestamp# column.
                            If you had read the article I pointed you to, you would have discovered that dba_audit_trail is a view that ultimately sits on top of aud$.
                            And if you had read the article I pointed you to, you would discover that creating an index on timestamp# is a waste of time (I don't see anyone suggesting that, by the way).
                            Here's an example of how bad things can be when you scan the audit table every 30 minutes: http://jonathanlewis.wordpress.com/2010/04/05/failed-login/
                            You need to look at how the data in the table behaves, talk to the DBA, and see if reading every 90 seconds and then immediately deleting the rows you've read is what he had in mind - it may be the only sensible strategy.
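
                            Purely as a sketch of that cycle (:last_ts being the high-water mark the collector remembers, :new_last_ts the newest timestamp just shipped):

                            -- Read everything after the saved high-water mark ...
                            SELECT *
                            FROM   dba_audit_trail
                            WHERE  extended_timestamp > :last_ts
                            ORDER  BY extended_timestamp;

                            -- ... and once those rows are safely shipped, purge them.
                            -- (extended_timestamp is derived from aud$.ntimestamp#,
                            -- which is stored in UTC - mind the conversion.)
                            DELETE FROM sys.aud$
                            WHERE  ntimestamp# <= :new_last_ts;
                            COMMIT;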

                            Regards
                            Jonathan Lewis
                            • 11. Re: reading from AUD$ Table causes the performance issue ?
                              574638
                              jgarry wrote:
                              Yes, there are two solutions according to MOS note Performance Issues Caused by DB Collector When AUD$ Is Very Large (Full Table Scan on AUD$) [ID 1356088.1] (You have Oracle support, right?)

                              1. Keep the data small.
                              2. Add an index. However, if that causes its own performance problems, it is not supported.
                              By "add an index", you mean "add/create an index on the sys.aud$.timestamp# column", right?

                              Sorry, as I said, I am neither a DBA nor do I own the database; I am responsible for the log manager product.

                              But the customer where I installed the log manager is a financial institute, a big Oracle shop, and must have Oracle Support too.

                              I'll ask them to read the MOS note "Performance Issues Caused by DB Collector When AUD$ Is Very Large (Full Table Scan on AUD$) [ID 1356088.1]".

                              For your consideration, the log manager captures the audit information via the following SQL query:

                              SELECT OS_USERNAME, USERNAME, USERHOST, TERMINAL,
                              TO_CHAR(EXTENDED_TIMESTAMP ,'YYYY/MM/DD HH24:MI:SS') AS EXTENDEDTIME,
                              TO_CHAR(LOGOFF_TIME ,'YYYY/MM/DD HH24:MI:SS') AS LFTIME,     
                              TO_CHAR(TIMESTAMP ,'YYYY/MM/DD HH24:MI:SS') TIMESTAMP,
                              ACTION, ACTION_NAME, SESSIONID, ENTRYID, STATEMENTID, RETURNCODE,
                              OWNER, OBJ_NAME , GLOBAL_UID, SCN, GRANTEE, SQL_BIND, SQL_TEXT,
                              (select MACHINE from v$session where schemaname='SYS' and rownum=1) as DHN,
                              (select * from v$version where BANNER LIKE 'Oracle%') AS DB_VERSION
                              FROM SYS.DBA_AUDIT_TRAIL
                              WHERE TIMESTAMP >= TO_TIMESTAMP ('%s', 'YYYY/MM/DD HH24:MI:SS.FF6') and rownum <= %d
                              ORDER BY TIMESTAMP

                              Thanks,
                              • 12. Re: reading from AUD$ Table causes the performance issue ?
                                Justin_Mungal
                                Do not get ahead of yourself. The very first thing you must do is make sure AUD$ is as small as possible. Please work with the DBA on this first, and then we can continue to investigate this if you're still having performance issues. Keep in mind that adding indexes to AUD$ is not actually supported by Oracle.
                                • 13. Re: reading from AUD$ Table causes the performance issue ?
                                  jgarry
                                  user571635 wrote:
                                  jgarry wrote:
                                  Yes, there are two solutions according to MOS note Performance Issues Caused by DB Collector When AUD$ Is Very Large (Full Table Scan on AUD$) [ID 1356088.1] (You have Oracle support, right?)

                                  1. Keep the data small.
                                  2. Add an index. However, if that causes its own performance problems, it is not supported.
                                  By "add an index", you mean "add/create an index on the sys.aud$.timestamp# column", right?
                                  No. It's ntimestamp#. You really should get Oracle Support involved; it's never a good sign when they make recommendations they don't support. Follow Jonathan's examples carefully; his genius is that he shows how to figure out whether something can work for you.

                                  >
                                  Sorry, as I said, I am neither a DBA nor do I own the database; I am responsible for the log manager product.

                                  But the customer where I installed the log manager is a financial institute, a big Oracle shop, and must have Oracle Support too.

                                  I'll ask them to read the MOS note "Performance Issues Caused by DB Collector When AUD$ Is Very Large (Full Table Scan on AUD$) [ID 1356088.1]".

                                  For your consideration, the log manager captures the audit information via the following SQL query:

                                  SELECT OS_USERNAME, USERNAME, USERHOST, TERMINAL,
                                  TO_CHAR(EXTENDED_TIMESTAMP ,'YYYY/MM/DD HH24:MI:SS') AS EXTENDEDTIME,
                                  TO_CHAR(LOGOFF_TIME ,'YYYY/MM/DD HH24:MI:SS') AS LFTIME,     
                                  TO_CHAR(TIMESTAMP ,'YYYY/MM/DD HH24:MI:SS') TIMESTAMP,
                                  ACTION, ACTION_NAME, SESSIONID, ENTRYID, STATEMENTID, RETURNCODE,
                                  OWNER, OBJ_NAME , GLOBAL_UID, SCN, GRANTEE, SQL_BIND, SQL_TEXT,
                                  (select MACHINE from v$session where schemaname='SYS' and rownum=1) as DHN,
                                  (select * from v$version where BANNER LIKE 'Oracle%') AS DB_VERSION
                                  FROM SYS.DBA_AUDIT_TRAIL
                                  WHERE TIMESTAMP >= TO_TIMESTAMP ('%s', 'YYYY/MM/DD HH24:MI:SS.FF6') and rownum <= %d
                                  ORDER BY TIMESTAMP

                                  Thanks,
                                  OK, well, it looks like the collector in that note is Audit Vault, and whatever is issuing your query isn't doing the same thing, so perhaps you have some version issue? Looking at the comment Mark posted in the forum thread Jonathan linked, maybe an upgrade does things differently. Definitely time to get Support involved. Also see the bug mentioned in the other note in that thread, Bug 6389472 : INDEX ON SYS.AUD$ TO SPEED QUERIES AGAINST TIMESTAMP IN DBA_AUDIT_TRAIL (which also sounds a lot like Jonathan's blog entry), which shows a way to add an index to timestamp#.

                                  This is why DBAs have to get involved; in-depth analysis is necessary. People are paid to deal with this stuff; it's not a matter of just following some directions.