Generic Trigger to log changes

Hello,
I am trying to use a trigger to log changes to tables. However, it seems I have to hard-code the column names and column values. Is there any generic solution that works regardless of the table structure?
Table structure:
CREATE TABLE base_table (
  id NUMBER (10),
  nm VARCHAR2 (10)
);

CREATE TABLE log_table (
  id    NUMBER (10),
  nm    VARCHAR2 (10),
  logTs TIMESTAMP (6)
);
Table specific trigger works fine:
CREATE OR REPLACE TRIGGER trigger_spfc
  BEFORE UPDATE ON base_table
  FOR EACH ROW
BEGIN
  INSERT INTO log_table (id, nm, logTs)
  VALUES (:old.id, :old.nm, systimestamp);
END;
I tried to make a more generic trigger using only the primary key column (e.g. id), so that I only need to hard-code the primary key column, not every column in the table.
CREATE OR REPLACE TRIGGER trigger_generic
  BEFORE UPDATE ON base_table
  FOR EACH ROW
BEGIN
  INSERT INTO log_table (id, nm, histTS)
  SELECT id, nm, systimestamp
  FROM base_table
  WHERE id = :old.id;
END;
Now I get an "ORA-04091: table is mutating" error.
Any suggestion is welcome, thanks!
Best Answers
-
Hi @mail8mz ,
here's another solution which meets your needs even better. It uses a simple AFTER EACH ROW trigger to log the old values as needed. The old values are determined by a stand-alone function which runs in an autonomous transaction, so it "sees" the row version after the last commit but before the actual change. What do you think about this solution? The column order of the log and base tables is irrelevant, and the base table is determined automatically via the name of the trigger. A lot of flexibility, but at the price of some performance...
CREATE OR REPLACE FUNCTION get_old_values (
  trigger_name user_triggers.trigger_name%TYPE,
  rid          IN UROWID)
  RETURN log_table%ROWTYPE
IS
  PRAGMA AUTONOMOUS_TRANSACTION;
  v_base_table_name user_tables.table_name%TYPE;
  v_stmt            VARCHAR2 (32767 CHAR);
  v_old_values      log_table%ROWTYPE;
BEGIN
  SELECT user_triggers.table_name
    INTO v_base_table_name
    FROM user_triggers
   WHERE user_triggers.trigger_name = get_old_values.trigger_name;

  SELECT    'SELECT '
         || LISTAGG (
              CASE
                WHEN user_tab_columns.column_name = 'LOGTS'
                THEN 'LOCALTIMESTAMP AS logts'
                WHEN EXISTS
                       (SELECT 1
                          FROM user_tab_columns base
                         WHERE base.table_name = v_base_table_name
                           AND base.column_name = user_tab_columns.column_name)
                THEN '"' || user_tab_columns.column_name || '"'
                ELSE 'NULL AS ' || user_tab_columns.column_name
              END,
              ',')
            WITHIN GROUP (ORDER BY user_tab_columns.column_id ASC)
         || ' FROM '
         || v_base_table_name
         || ' WHERE rowid = :1'
    INTO v_stmt
    FROM user_tab_columns
   WHERE user_tab_columns.table_name = 'LOG_TABLE';

  EXECUTE IMMEDIATE v_stmt INTO v_old_values USING get_old_values.rid;

  RETURN v_old_values;
END get_old_values;
/

CREATE OR REPLACE TRIGGER trigger_spfc
  AFTER UPDATE ON base_table
  FOR EACH ROW
DECLARE
  v_old_values log_table%ROWTYPE;
BEGIN
  v_old_values :=
    get_old_values (trigger_name => $$plsql_unit, rid => :old.ROWID);

  INSERT INTO log_table VALUES v_old_values;
END trigger_spfc;
/
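A minimal way to exercise the function and trigger above (assuming the base_table / log_table definitions from the question):

```sql
INSERT INTO base_table (id, nm) VALUES (1, 'first');
COMMIT;

-- The AFTER EACH ROW trigger fires and logs the last committed row version
UPDATE base_table SET nm = 'second' WHERE id = 1;

-- log_table should now contain id = 1, nm = 'first' plus a timestamp
SELECT * FROM log_table;
```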
Best regards
Jan
-
Hi altogether,
I brainstormed a little bit more to make my approach more generic. The following solution now determines the base table completely dynamically, based on the first rowid updated. Further, the log table is determined by a mapping function based on the base table. The old values are selected in an autonomous transaction, so they always represent the last committed version of the row. This may differ from the latest version of the row in the local uncommitted transaction if multiple updates occur; it is important to understand that. In my opinion this is not a problem, because uncommitted versions of the rows updated by the local transaction are also not "seen" by anybody else.
I've developed this to show that it is possible to build a highly generic solution for the use case @mail8mz is faced with, based on triggers and some dynamic PL/SQL. What do you think about this approach? Generally I also think that Flashback Data Archive is a better way to do data change logging out of the box. But I'm also a great fan of PL/SQL and I like such challenges, irrespective of whether such an approach is reasonable or not...
CREATE OR REPLACE PACKAGE pkg_log
IS
  --Collection type for the updated row ids
  TYPE coll_row_ids IS TABLE OF VARCHAR2 (18 CHAR);

  --Generic logging method called out of the after statement section of the
  --compound triggers
  PROCEDURE log_old_values (row_ids IN coll_row_ids);
END pkg_log;
/

CREATE OR REPLACE PACKAGE BODY pkg_log
IS
  --Record type for table metadata (owner and name)
  TYPE type_table IS RECORD
  (
    table_owner all_tables.owner%TYPE,
    table_name  all_tables.table_name%TYPE
  );

  --Determine and return metadata of the base table based on the given rowid
  FUNCTION get_base_table (row_id IN ROWID)
    RETURN type_table
  IS
    v_base_table type_table;
  BEGIN
    SELECT all_objects.owner, all_objects.object_name
      INTO v_base_table.table_owner, v_base_table.table_name
      FROM all_objects
     WHERE all_objects.object_id =
           dbms_rowid.rowid_object (row_id => get_base_table.row_id);

    RETURN v_base_table;
  END get_base_table;

  --Mapping function to determine the logging table to use based on the given
  --base table
  FUNCTION get_log_table (base_table IN type_table)
    RETURN type_table
  IS
    v_log_table type_table;
  BEGIN
    v_log_table.table_owner := get_log_table.base_table.table_owner;
    v_log_table.table_name := 'LOG_TABLE';
    RETURN v_log_table;
  END get_log_table;

  --This function returns the old values of the updated rows identified by the
  --given rowids as an opened sys_refcursor. It does this in an autonomous
  --transaction, so the old values represent the last committed version of the
  --rows. The structure of the cursor returned is identical to the structure
  --of the log table the data will be written into. Column orders of base and
  --log tables are completely irrelevant. Only columns that exist in the log
  --table are returned. Log metadata columns like LOGTS are filled with the
  --required values; columns which don't exist in the base table are returned
  --with NULL.
  FUNCTION get_old_values (base_table IN type_table,
                           log_table  IN type_table,
                           row_ids    IN coll_row_ids)
    RETURN SYS_REFCURSOR
  IS
    --Autonomous transaction to get the last committed version of the rows
    PRAGMA AUTONOMOUS_TRANSACTION;
    v_old_values SYS_REFCURSOR;
    v_stmt       VARCHAR2 (32767 CHAR);
  BEGIN
    --Create SELECT statement based on the structure of the log table
    SELECT    'SELECT '
           || LISTAGG (
                CASE
                  WHEN log_table.column_name = 'LOGTS'
                  THEN 'LOCALTIMESTAMP AS LOGTS'
                  ELSE NVL2 (base_table.column_name,
                             '"' || log_table.column_name || '"',
                             'NULL AS "' || log_table.column_name || '"')
                END,
                ',')
              WITHIN GROUP (ORDER BY log_table.column_id ASC)
           || ' FROM "'
           || get_old_values.base_table.table_owner
           || '"."'
           || get_old_values.base_table.table_name
           || '" WHERE rowid IN (SELECT VALUE (rid) FROM TABLE (:1) rid)'
      INTO v_stmt
      FROM all_tab_columns log_table
           LEFT OUTER JOIN all_tab_columns base_table
             ON     base_table.owner = get_old_values.base_table.table_owner
                AND base_table.table_name = get_old_values.base_table.table_name
                AND base_table.column_name = log_table.column_name
     WHERE     log_table.owner = get_old_values.log_table.table_owner
           AND log_table.table_name = get_old_values.log_table.table_name;

    OPEN v_old_values FOR v_stmt USING get_old_values.row_ids;

    RETURN v_old_values;
  END get_old_values;

  --Generic logging method: The base table is determined by the first rowid of
  --the given collection. The log table is determined by the mapping function
  --get_log_table. The old values are returned by the called function
  --get_old_values as a sys_refcursor. The cursor is fetched and the fetched
  --records are written into the log table. This is done in the context of the
  --main transaction.
  PROCEDURE log_old_values (row_ids IN coll_row_ids)
  IS
    v_base_table type_table;
    v_log_table  type_table;
    v_old_values SYS_REFCURSOR;
  BEGIN
    IF log_old_values.row_ids.FIRST () IS NOT NULL
    THEN
      v_base_table :=
        get_base_table (
          row_id => log_old_values.row_ids (log_old_values.row_ids.FIRST ()));
      v_log_table := get_log_table (base_table => v_base_table);
      v_old_values :=
        get_old_values (base_table => v_base_table,
                        log_table  => v_log_table,
                        row_ids    => log_old_values.row_ids);

      EXECUTE IMMEDIATE
           'DECLARE'
        || ' v_old_values SYS_REFCURSOR := :1;'
        || ' v_log_record "'
        || v_log_table.table_owner
        || '"."'
        || v_log_table.table_name
        || '"%ROWTYPE;'
        || 'BEGIN'
        || ' LOOP'
        || ' FETCH v_old_values INTO v_log_record;'
        || ' EXIT WHEN v_old_values%NOTFOUND;'
        || ' INSERT INTO "'
        || v_log_table.table_owner
        || '"."'
        || v_log_table.table_name
        || '"'
        || ' VALUES v_log_record;'
        || ' END LOOP;'
        || 'END;'
        USING IN OUT v_old_values;

      CLOSE v_old_values;
    END IF;
  END log_old_values;
END pkg_log;
/

CREATE OR REPLACE TRIGGER trigger_spfc
  FOR UPDATE ON base_table
  COMPOUND TRIGGER
  v_row_ids pkg_log.coll_row_ids := NEW pkg_log.coll_row_ids ();

  --Collect the updated rowids
  AFTER EACH ROW
  IS
  BEGIN
    v_row_ids.EXTEND (1);
    v_row_ids (v_row_ids.LAST ()) := :old.ROWID;
  END AFTER EACH ROW;

  --Generic logging of the updated rows
  AFTER STATEMENT
  IS
  BEGIN
    pkg_log.log_old_values (row_ids => v_row_ids);
    v_row_ids.delete ();
    v_row_ids := NULL;
  END AFTER STATEMENT;
END trigger_spfc;
/
Best regards
Jan
-
Hi @mail8mz,
because the updated rowids are stored in a nested table collection variable, there is no upper limit on the number of elements.
Best regards,
Jan
Answers
-
Use a Flashback Archive, or write a code generator that produces the non-generic triggers.
-
Hi, @mail8mz
However, it seems I have to hard code the column name and column value. Is there any generic solution regardless of the table structure?
No; column names need to be hard-coded. You could find the primary key (or all the column names) from the data dictionary and then use dynamic SQL, but it's much simpler just to hard-code them.
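For reference, the primary-key columns mentioned above can be read from the data dictionary with the standard USER_CONSTRAINTS / USER_CONS_COLUMNS views; a sketch, using the table name from the question:

```sql
SELECT cols.column_name
  FROM user_constraints cons
       JOIN user_cons_columns cols
         ON cols.constraint_name = cons.constraint_name
 WHERE cons.table_name = 'BASE_TABLE'
   AND cons.constraint_type = 'P'   -- 'P' = primary key constraint
 ORDER BY cols.position;
```

The result could then be used to generate the column lists for a dynamic SQL statement.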
Now I have "ORA-04091, table is mutating" error.
Right; a FOR EACH ROW trigger on base_table can't query base_table itself, but you don't need to in this case. If id is unique, then you can say:
INSERT INTO log_table (id, nm, logTs)
VALUES (:old.id, :old.nm, systimestamp);
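Applied to the trigger from the question (assuming the log column is logTs, as in the original DDL), the corrected version would look like:

```sql
CREATE OR REPLACE TRIGGER trigger_generic
  BEFORE UPDATE ON base_table
  FOR EACH ROW
BEGIN
  -- Uses only :OLD values; no query against base_table, so no ORA-04091
  INSERT INTO log_table (id, nm, logTs)
  VALUES (:old.id, :old.nm, systimestamp);
END;
/
```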
-
Hi @mail8mz ,
if you change your logging concept a little bit, you can realize that with compound triggers. With this solution, however, you always log the new values and not the old ones. You don't have to specify any column names of the base table. To make future table modifications easier, you should place the additional metadata columns (logTs, username and so on) at the beginning of the column list of the log table. That way you can add new columns identically to both tables while keeping full flexibility, with no need to modify the trigger. Here's my example, check it out:
CREATE TABLE base_table (
  id NUMBER (10),
  nm VARCHAR2 (10)
);

CREATE TABLE log_table (
  logts TIMESTAMP (6),
  id    NUMBER (10),
  nm    VARCHAR2 (10)
);

CREATE OR REPLACE TRIGGER trigger_spfc
  FOR INSERT OR UPDATE ON base_table
  COMPOUND TRIGGER
  TYPE coll_urowids IS TABLE OF UROWID INDEX BY SIMPLE_INTEGER;

  v_urowids coll_urowids;

  AFTER EACH ROW
  IS
  BEGIN
    v_urowids (v_urowids.COUNT () + 1) := :new.ROWID;
  END AFTER EACH ROW;

  AFTER STATEMENT
  IS
  BEGIN
    FOR i IN 1 .. v_urowids.COUNT ()
    LOOP
      INSERT INTO log_table
        SELECT LOCALTIMESTAMP AS logts, base_table.*
          FROM base_table
         WHERE base_table.ROWID = v_urowids (i);
    END LOOP;

    v_urowids.delete ();
  END AFTER STATEMENT;
END trigger_spfc;
/
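A quick check of the compound trigger above (assuming the two tables just created):

```sql
INSERT INTO base_table (id, nm) VALUES (1, 'abc');
UPDATE base_table SET nm = 'xyz' WHERE id = 1;

-- Both statements fire the trigger, so two log rows with the NEW values
-- (as noted above, this variant logs new values, not old ones)
SELECT * FROM log_table ORDER BY logts;
```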
Best regards
Jan
-
Hi @mail8mz ,
even more flexibility can be achieved using dynamic statements. In the following example, even the name of the base table is determined automatically, and the physical order of the columns in the two tables is completely irrelevant.
CREATE OR REPLACE TRIGGER trigger_spfc
  FOR INSERT OR UPDATE ON base_table
  COMPOUND TRIGGER
  TYPE coll_urowids IS TABLE OF UROWID INDEX BY SIMPLE_INTEGER;

  v_stmt    VARCHAR2 (32767 CHAR);
  v_urowids coll_urowids;

  AFTER EACH ROW
  IS
  BEGIN
    v_urowids (v_urowids.COUNT () + 1) := :new.ROWID;
  END AFTER EACH ROW;

  AFTER STATEMENT
  IS
  BEGIN
    IF v_urowids.COUNT () > 0
    THEN
      SELECT    'INSERT INTO log_table('
             || LISTAGG ('"' || user_tab_columns.column_name || '"', ',')
                  WITHIN GROUP (ORDER BY user_tab_columns.column_id ASC)
             || ',logts) SELECT '
             || LISTAGG ('"' || user_tab_columns.column_name || '"', ',')
                  WITHIN GROUP (ORDER BY user_tab_columns.column_id ASC)
             || ', LOCALTIMESTAMP AS logts FROM '
             || user_triggers.table_name
             || ' WHERE rowid = :1'
        INTO v_stmt
        FROM user_triggers, user_tab_columns
       WHERE     user_triggers.trigger_name = $$plsql_unit
             AND user_tab_columns.table_name = user_triggers.table_name
       GROUP BY user_triggers.table_name;

      FOR i IN 1 .. v_urowids.COUNT ()
      LOOP
        EXECUTE IMMEDIATE v_stmt USING v_urowids (i);
      END LOOP;

      v_urowids.delete ();
    END IF;
  END AFTER STATEMENT;
END trigger_spfc;
/
Further, if you can determine the name of the log table from the name of the base table, you can use this as a template for all the tables in your data model that need logging. The trigger code remains constant and only has to be created on the different tables to log. In that case I would place the code of the AFTER STATEMENT section in a package procedure, so that the code exists only once.
Best regards
Jan
-
@Jan Gorkow Thanks for your example, that is something like what I am looking for. I may move the additional timestamp column to the beginning of the column list. However, the logging logic has to remain the same: the log tables are used by some existing components. Can we use 'BEFORE STATEMENT' instead of 'AFTER STATEMENT' to put the old values into the log table?
-
No, this is not possible. In the before statement section it is not yet known which rows will be updated. Further, in the before statement section you have no access to the :old values.
-
Which version of Oracle are you using? Maybe there's another solution based on SQL macros...
-
I am using Oracle 19c.
As to the new solution, we need a different get_old_values function for each table. Can we have a more generic function that can be shared by all tables?
-
Trigger-based auditing is an outdated and extremely poor approach. It has many negatives, doesn't scale well, and has a huge effect on DB performance.
Have a look at Flashback Data Archive. It's free in your version, built into Oracle at a low level, very quick to set up, and far superior on every metric - speed, flexibility, retention, scalability, maintenance - to name a few benefits. You won't regret it.
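For comparison, a minimal Flashback Data Archive setup might look like this (a sketch; the archive name fda_one_year and the use of the USERS tablespace are assumptions, and the FLASHBACK ARCHIVE ADMINISTER privilege is required):

```sql
-- Create an archive with a one-year retention policy
CREATE FLASHBACK ARCHIVE fda_one_year
  TABLESPACE users
  RETENTION 1 YEAR;

-- Enable change tracking on the table; no triggers needed
ALTER TABLE base_table FLASHBACK ARCHIVE fda_one_year;

-- Historical row versions can then be queried directly:
SELECT id, nm, versions_starttime
  FROM base_table
       VERSIONS BETWEEN TIMESTAMP systimestamp - INTERVAL '1' DAY
                    AND systimestamp;
```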