SQL & PL/SQL

Generic Trigger to log changes

mail8mz · Sep 17 2021 (edited Sep 17 2021)

Hello,
I am trying to use a trigger to log changes to tables. However, it seems I have to hard-code the column names and column values. Is there a generic solution that works regardless of the table structure?

Table structure:

CREATE table base_table ( id number(10), nm varchar2(10) );
CREATE table log_table ( id number(10), nm varchar2(10), logTs timestamp(6));

Table specific trigger works fine:

CREATE OR REPLACE TRIGGER trigger_spfc
  BEFORE UPDATE ON base_table
  FOR EACH ROW
BEGIN
    INSERT INTO log_table( id, nm, logTs)
      VALUES( :old.id, :old.nm, systimestamp);
 END;

I tried to make a more generic trigger that uses only the primary key column (e.g., id), so that I only need to hard-code the primary key column, not all of the columns in the table.


CREATE OR REPLACE TRIGGER trigger_generic
  BEFORE UPDATE ON base_table
  FOR EACH ROW
BEGIN
    INSERT INTO log_table( id, nm, logTs)
    SELECT id, nm, systimestamp from base_table 
    WHERE ID = :OLD.id;
 END;

Now I get an "ORA-04091: table is mutating" error.
Any suggestion is welcome, thanks!

This post has been answered by Jan Gorkow on Sep 17 2021

Comments

User_H3J7U

Use Flashback Data Archive, or write a code generator that produces non-generic triggers.

Frank Kulash

Hi, mail8mz
However, it seems I have to hard code the column name and column value. Is there any generic solution regardless of the table structure?
No; column names need to be hard-coded. You could find the primary key columns (or all of the column names) from the data dictionary and then use dynamic SQL, but it's much simpler just to hard-code them.
Now I have "ORA-04091, table is mutating" error.
Right; a FOR EACH ROW trigger on base_table can't query base_table itself, but you don't need to in this case. If id is unique, then you can say:

 INSERT INTO log_table( id,      nm,       logTs)
           VALUES (:OLD.id, :OLD.nm, systimestamp);
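For reference, the data-dictionary lookup Frank mentions could be sketched like this; the table name 'BASE_TABLE' and the current-schema assumption are illustrative:

```sql
-- Sketch: find the primary-key columns of a table from the data dictionary
-- (assumes the current schema owns BASE_TABLE).
SELECT cc.column_name
  FROM user_constraints c
       JOIN user_cons_columns cc
            ON cc.constraint_name = c.constraint_name
 WHERE c.table_name      = 'BASE_TABLE'
   AND c.constraint_type = 'P'   -- 'P' = primary key constraint
 ORDER BY cc.position;
```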
Jan Gorkow

Hi mail8mz ,
if you change your logging concept a little bit, you can realize this with a compound trigger. With this solution, however, you always log the new values, not the old ones. You don't have to specify any column names of the base table. To make future table modifications easier, you should place the additional metadata columns (logTs, username, and so on) at the beginning of the log table's column list. That way you can add new columns to both tables while keeping full flexibility, with no need to modify the trigger. Here's my example, check it out:

CREATE TABLE base_table
(
  id  NUMBER (10),
  nm  VARCHAR2 (10)
);

CREATE TABLE log_table
(
  logts  TIMESTAMP (6),
  id    NUMBER (10),
  nm    VARCHAR2 (10)
);

CREATE OR REPLACE TRIGGER trigger_spfc
  FOR INSERT OR UPDATE
  ON base_table
  COMPOUND TRIGGER
  TYPE coll_urowids IS TABLE OF UROWID
   INDEX BY SIMPLE_INTEGER;

  v_urowids  coll_urowids;

  AFTER EACH ROW
  IS
  BEGIN
   v_urowids (v_urowids.COUNT () + 1) := :new.ROWID;
  END AFTER EACH ROW;

  AFTER STATEMENT
  IS
  BEGIN
   FOR i IN 1 .. v_urowids.COUNT ()
   LOOP
     INSERT INTO log_table
      SELECT LOCALTIMESTAMP AS logts, base_table.*
       FROM base_table
       WHERE base_table.ROWID = v_urowids (i);
   END LOOP;

   v_urowids.delete ();
  END AFTER STATEMENT;
END trigger_spfc;
/

Best regards
Jan

Jan Gorkow

Hi mail8mz ,
even more flexibility can be achieved by using dynamic statements. In the following example, even the name of the base table is determined automatically, and the physical order of the columns in the two tables is completely irrelevant.

CREATE OR REPLACE TRIGGER trigger_spfc
  FOR INSERT OR UPDATE
  ON base_table
  COMPOUND TRIGGER
  TYPE coll_urowids IS TABLE OF UROWID
   INDEX BY SIMPLE_INTEGER;

  v_stmt   VARCHAR2 (32767 CHAR);
  v_urowids  coll_urowids;

  AFTER EACH ROW
  IS
  BEGIN
   v_urowids (v_urowids.COUNT () + 1) := :new.ROWID;
  END AFTER EACH ROW;

  AFTER STATEMENT
  IS
  BEGIN
   IF v_urowids.COUNT () > 0
   THEN
      SELECT  'INSERT INTO log_table('
         || LISTAGG ('"' || user_tab_columns.column_name || '"', ',')
            WITHIN GROUP (ORDER BY user_tab_columns.column_id ASC)
         || ',logts) SELECT '
         || LISTAGG ('"' || user_tab_columns.column_name || '"', ',')
            WITHIN GROUP (ORDER BY user_tab_columns.column_id ASC)
         || ', LOCALTIMESTAMP AS logts FROM '
         || user_triggers.table_name
         || ' WHERE rowid = :1'
       INTO v_stmt
       FROM user_triggers, user_tab_columns
      WHERE   user_triggers.trigger_name = $$plsql_unit
         AND user_tab_columns.table_name = user_triggers.table_name
     GROUP BY user_triggers.table_name;

     FOR i IN 1 .. v_urowids.COUNT ()
     LOOP
      EXECUTE IMMEDIATE v_stmt
        USING v_urowids (i);
     END LOOP;

     v_urowids.delete ();
   END IF;
  END AFTER STATEMENT;
END trigger_spfc;
/

Further, if you can derive the name of the log table from the name of the base table, you can use this trigger as a template for every table in your data model that needs logging. The trigger code remains constant and only has to be created on each table to be logged. In that case I would place the code of the AFTER STATEMENT section in a package procedure, so that the code exists only once.
Best regards
Jan

mail8mz

@jan-gorkow Thanks for your example, that is something I am looking for. I may move the additional timestamp column to the beginning of the column list. However, the logging logic has to remain the same: the log tables are used by some existing components. Can we use 'BEFORE STATEMENT' instead of 'AFTER STATEMENT' to put the old values into the log table?

Jan Gorkow

No, this is not possible. In the BEFORE STATEMENT section it is not yet known which rows will be updated. Furthermore, in the BEFORE STATEMENT section you have no access to the :old values.

Jan Gorkow

Which version of Oracle are you using? Maybe there's another solution based on SQL macros...

Jan Gorkow
Answer

Hi mail8mz ,
here's another solution which meets your needs even better. It uses a simple AFTER EACH ROW trigger to log the old values as needed. The old values are determined by a standalone function which runs in an autonomous transaction, so it "sees" the row version after the last commit but before the actual change. What do you think about this solution? The column order of the log and base tables is irrelevant, and the base table is automatically determined via the name of the trigger. A lot of flexibility, but at the price of some performance...

CREATE OR REPLACE FUNCTION get_old_values (
  trigger_name   user_triggers.trigger_name%TYPE,
  rid      IN UROWID)
  RETURN log_table%ROWTYPE
IS
  PRAGMA AUTONOMOUS_TRANSACTION;
  v_base_table_name  user_tables.table_name%TYPE;
  v_stmt       VARCHAR2 (32767 CHAR);
  v_old_values    log_table%ROWTYPE;
BEGIN
  SELECT user_triggers.table_name
   INTO v_base_table_name
   FROM user_triggers
  WHERE user_triggers.trigger_name = get_old_values.trigger_name;

  SELECT  'SELECT '
     || LISTAGG (
        CASE
          WHEN user_tab_columns.column_name = 'LOGTS'
          THEN
           'LOCALTIMESTAMP AS logts'
          WHEN EXISTS
              (SELECT 1
               FROM user_tab_columns base
               WHERE   base.table_name = v_base_table_name
                  AND base.column_name =
                    user_tab_columns.column_name)
          THEN
           '"' || user_tab_columns.column_name || '"'
          ELSE
           'NULL AS ' || user_tab_columns.column_name
        END,
        ',')
       WITHIN GROUP (ORDER BY user_tab_columns.column_id ASC)
     || ' FROM '
     || v_base_table_name
     || ' WHERE rowid = :1'
   INTO v_stmt
   FROM user_tab_columns
  WHERE user_tab_columns.table_name = 'LOG_TABLE';

  EXECUTE IMMEDIATE v_stmt
   INTO v_old_values
   USING get_old_values.rid;

  RETURN v_old_values;
END get_old_values;
/
CREATE OR REPLACE TRIGGER trigger_spfc
  AFTER UPDATE
  ON base_table
  FOR EACH ROW
DECLARE
  v_old_values  log_table%ROWTYPE;
BEGIN
  v_old_values :=
   get_old_values (trigger_name => $$plsql_unit, rid => :old.ROWID);

  INSERT INTO log_table
    VALUES v_old_values;
END trigger_spfc;
/

Best regards
Jan

Marked as Answer by mail8mz · Nov 24 2021
mail8mz

I am using Oracle 19c.

As to the new solution, we need a different function get_old_values for each table. Can we have a more generic function that can be shared by all tables?

Paulzip

Trigger-based auditing is an outdated and extremely poor approach. It has many negatives, doesn't scale well, and has a huge effect on DB performance.
Have a look at Flashback Data Archive. It's free for your version, built into Oracle at a low level, very quick to set up, and far superior on every metric (speed, flexibility, retention, scalability, and maintenance, to name a few). You won't regret it.
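For reference, a minimal Flashback Data Archive setup might look like the following sketch; the archive name, tablespace, and retention period are illustrative assumptions, not a definitive configuration:

```sql
-- Illustrative sketch: archive name, tablespace, and retention are assumptions.
CREATE FLASHBACK ARCHIVE audit_fda
  TABLESPACE users
  RETENTION 1 YEAR;

-- Enable change tracking for the base table; no trigger code required.
ALTER TABLE base_table FLASHBACK ARCHIVE audit_fda;

-- Later, query a historical state of the data.
SELECT id, nm
  FROM base_table
  AS OF TIMESTAMP SYSTIMESTAMP - INTERVAL '1' HOUR;
```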

Solomon Yakobson

@Jan Gorkow: here's another solution which even more meets your needs.
First of all, this will fail since you must commit/rollback autonomous transaction:

SQL> create or replace procedure p1 is pragma autonomous_transaction; begin insert into tbl1 values(99); end;
  2  /

Procedure created.

SQL> exec p1
BEGIN p1; END;

*
ERROR at line 1:
ORA-06519: active autonomous transaction detected and rolled back
ORA-06512: at "SCOTT.P1", line 1
ORA-06512: at line 1

SQL> create or replace procedure p1 is pragma autonomous_transaction; begin insert into tbl1 values(99); commit; end;
  2  /

Procedure created.

SQL> exec p1

PL/SQL procedure successfully completed.

SQL>

So logging via an autonomous transaction will still insert log rows even if the change was rolled back (keep in mind that rollbacks can happen implicitly when Oracle has to maintain statement-level write consistency). In addition, we might end up with incorrect log rows. Assume we update the same row more than once within the same transaction:

update emp set job = 'MANAGER' where id = 123; -- old job was SALESMAN
update emp set sal = 4000 where id = 123; -- old salary was 3000
update emp set comm = null where id = 123; -- old comm was 100

Since function get_old_values runs in an autonomous transaction, it doesn't see any changes made by the main transaction. Therefore the second update will still see the job as SALESMAN, and the third update will still see the salary as 3000. An even worse situation would arise if the same column were updated more than once within the same transaction.
SY.

Jan Gorkow

Hi @solomon-yakobson ,
I don't write the logging data in an autonomous transaction; I only read the old values in one. Only SELECTs are executed in the function, so no commit is needed. I tested this and it worked fine. The log writing takes place in the trigger, so it is done in the main transaction. If the main transaction is rolled back, no logging data from the rolled-back transaction will remain.
But you're right that get_old_values does not see any changes made by the main transaction. In my opinion that is OK, because nobody other than the user himself sees those uncommitted changes either. So the logging always represents the consistent/committed state of the rows, not every tiny step from one consistent state to the next. There might be a problem caused by duplicate logging, but that is easy to handle and resolve.
Best regards
Jan

Solomon Yakobson

There are only SELECTs executed in the function and so no commit is needed.
Ah, I missed that. One more thing I'd consider: the trigger owner isn't necessarily the same as the table owner, and get_old_values is a generic procedure anyway, so you will most likely create it once and use it in all logging triggers. Therefore I'd pass the table owner and table name rather than the trigger name, and I'd query ALL_TAB_COLUMNS rather than USER_TAB_COLUMNS, with SELECT ANY TABLE (or SELECT on the individual tables) granted directly to the owner of get_old_values.
SY.

mail8mz

get_old_values returns log_table%ROWTYPE, so it is specific to ONE log table and not generic, isn't it?

If I have base_table2 and log_table2, I need another get_old_values function.

Jan Gorkow

Hi altogether,
I brainstormed a little bit more to make my approach more generic. The following solution now determines the base table completely dynamically, based on the first rowid updated. Further, the log table is determined by a mapping function based on the base table. The old values are selected in an autonomous transaction, so they always represent the last committed row version. This may differ from the latest version of the row in the local uncommitted transaction if multiple updates occur; it is important to understand that. In my opinion this is not a problem, because uncommitted versions of the rows updated by the local transaction are not "seen" by anybody else either.
I've developed this to show that it is possible to realize a highly generic solution for the use case mail8mz is faced with, based on triggers and some dynamic PL/SQL. What do you think about this approach? Generally I also think that Flashback Data Archive is a better way to do data change logging out of the box. But I'm also a great fan of PL/SQL and I like such challenges, irrespective of whether such an approach is reasonable or not...

CREATE OR REPLACE PACKAGE pkg_log
IS
  --Collection type for the updated row ids
  TYPE coll_row_ids IS TABLE OF VARCHAR2 (18 CHAR);

  --Generic logging method called out of the after statement section of the
  --compound triggers
  PROCEDURE log_old_values (row_ids IN coll_row_ids);
END pkg_log;
/

CREATE OR REPLACE PACKAGE BODY pkg_log
IS
  --Record type for table metadata (owner and name)
  TYPE type_table IS RECORD
  (
   table_owner  all_tables.owner%TYPE,
   table_name   all_tables.table_name%TYPE
  );

  --Determine and return metadata of the base table based on the given rowid
  FUNCTION get_base_table (row_id IN ROWID)
   RETURN type_table
  IS
   v_base_table  type_table;
  BEGIN
   SELECT all_objects.owner, all_objects.object_name
    INTO v_base_table.table_owner, v_base_table.table_name
    FROM all_objects
    WHERE all_objects.object_id =
       dbms_rowid.rowid_object (row_id => get_base_table.row_id);

   RETURN v_base_table;
  END get_base_table;

  --Mapping function to determine the logging table to use based on the given
  --base table
  FUNCTION get_log_table (base_table IN type_table)
   RETURN type_table
  IS
   v_log_table  type_table;
  BEGIN
   v_log_table.table_owner := get_log_table.base_table.table_owner;
   v_log_table.table_name := 'LOG_TABLE';

   RETURN v_log_table;
  END get_log_table;

  --This function returns the old values of the updated rows identified by the
  --given rowids as an opened sys_refcursor. It does this in an autonomous
  --transaction so the old values represent the last commited version of the
  --rows. The structure of the cursor returned is identical to the structure
  --of the log table the data will be written into. Column orders of base and
  --log tables are completely irrelevant. Only columns that exist in the log
  --table are returned. Log metadata columns like LOGTS are filled with the
  --required values, columns which don't exist in the base table are returned
  --with NULL.
  FUNCTION get_old_values (base_table  IN type_table,
              log_table  IN type_table,
              row_ids   IN coll_row_ids)
   RETURN SYS_REFCURSOR
  IS
   --Autonomous transaction to get the last commited version of the rows
   PRAGMA AUTONOMOUS_TRANSACTION;

   v_old_values  SYS_REFCURSOR;
   v_stmt     VARCHAR2 (32767 CHAR);
  BEGIN
   --Create SELECT statement based on the structure of the log table
   SELECT  'SELECT '
       || LISTAGG (
          CASE
           WHEN log_table.column_name = 'LOGTS'
           THEN
             'LOCALTIMESTAMP AS LOGTS'
           ELSE
             NVL2 (base_table.column_name,
                '"' || log_table.column_name || '"',
                'NULL AS "' || log_table.column_name || '"')
          END,
          ',')
        WITHIN GROUP (ORDER BY log_table.column_id ASC)
       || ' FROM "'
       || get_old_values.base_table.table_owner
       || '"."'
       || get_old_values.base_table.table_name
       || '" WHERE rowid IN (SELECT VALUE (rid) FROM TABLE (:1) rid)'
    INTO v_stmt
    FROM all_tab_columns log_table
       LEFT OUTER JOIN all_tab_columns base_table
        ON   base_table.owner =
            get_old_values.base_table.table_owner
          AND base_table.table_name =
            get_old_values.base_table.table_name
          AND base_table.column_name = log_table.column_name
    WHERE   log_table.owner = get_old_values.log_table.table_owner
       AND log_table.table_name = get_old_values.log_table.table_name;


   OPEN v_old_values FOR v_stmt USING get_old_values.row_ids;

   RETURN v_old_values;
  END get_old_values;

  --Generic logging method: The base table is determined by the first rowid of
  --the given collection. The log table is determined by the maping function
  --get_log_table. The old values are returned by the called function
  --get_old_values as a sys_refcursor. The cursor is fetched and the fetched
  --records are written into to log table. This is done in the context of the
  --main transaction.
  PROCEDURE log_old_values (row_ids IN coll_row_ids)
  IS
   v_base_table  type_table;
   v_log_table  type_table;
   v_old_values  SYS_REFCURSOR;
  BEGIN
   IF log_old_values.row_ids.FIRST () IS NOT NULL
   THEN
     v_base_table :=
      get_base_table (
        row_id  =>
         log_old_values.row_ids (log_old_values.row_ids.FIRST ()));
     v_log_table := get_log_table (base_table => v_base_table);
     v_old_values :=
      get_old_values (base_table  => v_base_table,
              log_table  => v_log_table,
              row_ids   => log_old_values.row_ids);

     EXECUTE IMMEDIATE  'DECLARE'
             || '  v_old_values  SYS_REFCURSOR := :1;'
             || '  v_log_record  "'
             || v_log_table.table_owner
             || '"."'
             || v_log_table.table_name
             || '"%ROWTYPE;'
             || 'BEGIN'
             || '  LOOP'
             || '   FETCH v_old_values INTO v_log_record;'
             || '   EXIT WHEN v_old_values%NOTFOUND;'
             || '   INSERT INTO "'
             || v_log_table.table_owner
             || '"."'
             || v_log_table.table_name
             || '"'
             || '      VALUES v_log_record;'
             || '  END LOOP;'
             || 'END;'
      USING IN OUT v_old_values;

     CLOSE v_old_values;
   END IF;
  END log_old_values;
END pkg_log;
/

CREATE OR REPLACE TRIGGER trigger_spfc
  FOR UPDATE
  ON base_table
  COMPOUND TRIGGER
  v_row_ids  pkg_log.coll_row_ids := NEW pkg_log.coll_row_ids ();

  --Collect the updated rowids
  AFTER EACH ROW
  IS
  BEGIN
   v_row_ids.EXTEND (1);
   v_row_ids (v_row_ids.LAST ()) := :old.ROWID;
  END AFTER EACH ROW;

  --Generic logging of the updated rows
  AFTER STATEMENT
  IS
  BEGIN
   pkg_log.log_old_values (row_ids => v_row_ids);
   v_row_ids.delete ();
   v_row_ids := NULL;
  END AFTER STATEMENT;
END trigger_spfc;
/

Best regards
Jan

mail8mz

@jan-gorkow So the concept is to use the row trigger to record :old.ROWID, and to use the statement trigger to fetch the old values via those rowids and insert them into the log table. That solves the "table mutating" error that occurs when using a row trigger alone.

By the way, you hard-coded the log table name as log_table. We may want to derive the name from the base table; different base tables should have different log tables.

Jan Gorkow

Hi @mail8mz ,
yes, the concept is to collect the affected rowids in the AFTER EACH ROW section of the compound trigger, and to get and write the data to log in the AFTER STATEMENT section. You could also do this without the AFTER STATEMENT section and call a method that gets and writes the log data in the AFTER EACH ROW section, but performance is much better doing it the way I explained.
The log_table is only "hard coded" in my example. The function get_log_table exists exactly for the mapping you need: there you place your code to get from the base table to the log table. Modify it as needed for your use case.
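For example, the body of get_log_table could derive the log table name from the base table name; the '_LOG' suffix below is just an assumed naming convention:

```sql
-- Hypothetical variant of get_log_table: derive the log table name from the
-- base table name (the '_LOG' suffix is an assumed convention, adjust as needed).
FUNCTION get_log_table (base_table IN type_table)
  RETURN type_table
IS
  v_log_table  type_table;
BEGIN
  v_log_table.table_owner := base_table.table_owner;
  v_log_table.table_name  := base_table.table_name || '_LOG';
  RETURN v_log_table;
END get_log_table;
```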
Best regards
Jan

mail8mz

@jan-gorkow Last question: what is the maximum number of rows in v_row_ids? In other words, how many rows can I change in one update statement such that all of the rowids can still be stored in v_row_ids?
Thanks for your help!

Sven W.

All audit solutions I have seen so far suffer from the same problem: the audit data they generate is extremely difficult for a human to read afterwards.
A typical question a user might ask is: Last Friday we changed the customer's order. We updated the product and the quantity. But at that time, the product description was different than it is today. We would like to see how the original order looked, including everything that was changed in the scope of the order, the customer, and the product.
The audit logs will hold the data to answer such questions, but it is extremely decomposed, especially since several of the changes need foreign-key lookups to find the original values, and those FKs have to be followed as of the correct change times.
Flashback query is a solution that in typical environments can answer such questions, but only for a very limited time span.
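As a concrete illustration of flashback query on the thread's example table (the one-hour window is an assumption; how far back it works is limited by the configured undo retention):

```sql
-- Sketch: row versions of base_table over the last hour.
-- Availability depends on undo retention; columns are from the thread's example.
SELECT versions_starttime,
       versions_operation,   -- 'I'nsert, 'U'pdate, or 'D'elete
       id,
       nm
  FROM base_table
       VERSIONS BETWEEN TIMESTAMP SYSTIMESTAMP - INTERVAL '1' HOUR
                    AND SYSTIMESTAMP
 WHERE id = 1;
```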

Jan Gorkow

Hi @mail8mz,
because the updated rowids are stored in a nested-table collection variable, there is no fixed upper limit on the number of elements; the practical limit is the session memory (PGA) available to hold the collection.
Best regards,
Jan

Post Details

Added on Sep 17 2021
20 comments
5,056 views