Generic Trigger to log changes
Answers
-
@Jan Gorkow: a few more thoughts on your solution.
First of all, this will fail, since you must commit or roll back an autonomous transaction before it returns:
```sql
SQL> create or replace procedure p1 is pragma autonomous_transaction; begin insert into tbl1 values(99); end;
  2  /

Procedure created.

SQL> exec p1
BEGIN p1; END;

*
ERROR at line 1:
ORA-06519: active autonomous transaction detected and rolled back
ORA-06512: at "SCOTT.P1", line 1
ORA-06512: at line 1


SQL> create or replace procedure p1 is pragma autonomous_transaction; begin insert into tbl1 values(99); commit; end;
  2  /

Procedure created.

SQL> exec p1

PL/SQL procedure successfully completed.

SQL>
```
So logging via an autonomous transaction will still insert log rows even if the change was rolled back (keep in mind that rollbacks can happen implicitly when Oracle has to maintain statement-level write consistency). In addition, we might end up with incorrect log rows. Assume we update the same row more than once within the same transaction:
```sql
update emp set job = 'MANAGER' where id = 123;  -- old job was SALESMAN
update emp set sal = 4000 where id = 123;       -- old salary was 3000
update emp set comm = null where id = 123;      -- old comm was 100
```
Since function get_old_values is an autonomous transaction, it doesn't see any changes made by the main transaction. Therefore the second update will still see job SALESMAN and the third update will still see salary 3000. An even worse situation would arise if the same column were updated more than once within the same transaction.
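To make that concrete, here is what an autonomous-transaction lookup would record for the three updates above (a sketch based on the example values; the lookup can only ever see the last committed version of the row):

```sql
-- Old values as an autonomous transaction would read them, per statement:
-- after update 1: job = 'SALESMAN', sal = 3000, comm = 100   -- correct
-- after update 2: job = 'SALESMAN', sal = 3000, comm = 100   -- update 1's job change is invisible
-- after update 3: job = 'SALESMAN', sal = 3000, comm = 100   -- update 2's salary change is invisible
SELECT job, sal, comm FROM emp WHERE id = 123;
```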
SY.
-
Hi @Solomon Yakobson ,
I don't write the logging data in an autonomous transaction; I only read the old values in one. Only SELECTs are executed in the function, so no commit is needed. I tested this and it worked fine. The log writing takes place in the trigger, so it is done in the main transaction. If the main transaction is rolled back, no logging data survives outside the rolled-back transaction.
But you're right that get_old_values does not see any changes made by the main transaction. In my opinion that is OK, because no one other than the user himself sees these uncommitted changes either. So the log always represents the consistent/committed state of the rows, not the tiny steps taken to get from one consistent state to the next. There might be a problem caused by duplicate logging, but that is easy to handle and resolve.
Best regards
Jan
-
There are only SELECTs executed in the function and so no commit is needed.
Ah, I missed that. One more thing I'd consider: the trigger owner isn't necessarily the same as the table owner, and get_old_values is a generic procedure anyway, so you will most likely create it once and use it in all logging triggers. Therefore I'd pass the table owner and table name rather than the trigger name, and I'd query ALL_TAB_COLUMNS rather than USER_TAB_COLUMNS, with SELECT ANY TABLE (or SELECT on the individual tables) granted directly to the owner of get_old_values.
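A minimal sketch of that suggestion might look like the following. All names here are illustrative, not from the original code, and it assumes coll_row_ids is created as a SQL-level nested table type so it can be bound into the dynamic TABLE() expression:

```sql
CREATE OR REPLACE TYPE coll_row_ids IS TABLE OF VARCHAR2 (18 CHAR);
/

-- Generic: keyed on table owner and name, reading ALL_TAB_COLUMNS.
-- The function owner needs SELECT ANY TABLE or direct grants on the tables.
CREATE OR REPLACE FUNCTION get_old_values (
  p_table_owner IN all_tab_columns.owner%TYPE,
  p_table_name  IN all_tab_columns.table_name%TYPE,
  p_row_ids     IN coll_row_ids)
  RETURN SYS_REFCURSOR
IS
  PRAGMA AUTONOMOUS_TRANSACTION;
  v_cols VARCHAR2 (32767);
  v_cur  SYS_REFCURSOR;
BEGIN
  -- Build the column list from ALL_TAB_COLUMNS instead of USER_TAB_COLUMNS
  SELECT LISTAGG ('"' || column_name || '"', ',')
           WITHIN GROUP (ORDER BY column_id)
    INTO v_cols
    FROM all_tab_columns
   WHERE owner = p_table_owner
     AND table_name = p_table_name;

  OPEN v_cur FOR
       'SELECT ' || v_cols
    || ' FROM "' || p_table_owner || '"."' || p_table_name
    || '" WHERE rowid IN (SELECT VALUE (rid) FROM TABLE (:1) rid)'
    USING p_row_ids;

  RETURN v_cur;
END get_old_values;
/
```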
SY.
-
get_old_values returns log_table.*, which is specific to ONE log table, so it is not generic, is it?
If I have base_table2 and log_table2, I need another get_old_values function.
-
Hi altogether,
I brainstormed a little bit more to make my approach more generic. The following solution determines the base table completely dynamically, based on the first rowid updated. The log table is then determined by a mapping function based on the base table. The old values are selected in an autonomous transaction, so they always represent the last committed row version. This may differ from the latest version of the row in the local uncommitted transaction if multiple updates occur; it is important to understand that. In my opinion this is not a problem, because uncommitted versions of the rows updated by the local transaction are also not "seen" by anybody else.
I developed this to show that it is possible to build a highly generic solution for the use case @mail8mz is faced with, based on triggers and some dynamic PL/SQL. What do you think of this approach? Generally I also think that flashback data archive is a better way to get data change logging out of the box. But I'm also a great fan of PL/SQL and I like such challenges, irrespective of whether such an approach is reasonable or not...
```sql
CREATE OR REPLACE PACKAGE pkg_log IS
  --Collection type for the updated row ids
  TYPE coll_row_ids IS TABLE OF VARCHAR2 (18 CHAR);

  --Generic logging method called out of the after statement section of the
  --compound triggers
  PROCEDURE log_old_values (row_ids IN coll_row_ids);
END pkg_log;
/

CREATE OR REPLACE PACKAGE BODY pkg_log IS
  --Record type for table metadata (owner and name)
  TYPE type_table IS RECORD (
    table_owner all_tables.owner%TYPE,
    table_name  all_tables.table_name%TYPE
  );

  --Determine and return metadata of the base table based on the given rowid
  FUNCTION get_base_table (row_id IN ROWID)
    RETURN type_table
  IS
    v_base_table type_table;
  BEGIN
    SELECT all_objects.owner, all_objects.object_name
      INTO v_base_table.table_owner, v_base_table.table_name
      FROM all_objects
     WHERE all_objects.object_id =
             dbms_rowid.rowid_object (row_id => get_base_table.row_id);

    RETURN v_base_table;
  END get_base_table;

  --Mapping function to determine the logging table to use based on the given
  --base table
  FUNCTION get_log_table (base_table IN type_table)
    RETURN type_table
  IS
    v_log_table type_table;
  BEGIN
    v_log_table.table_owner := get_log_table.base_table.table_owner;
    v_log_table.table_name := 'LOG_TABLE';
    RETURN v_log_table;
  END get_log_table;

  --This function returns the old values of the updated rows identified by the
  --given rowids as an opened sys_refcursor. It does this in an autonomous
  --transaction, so the old values represent the last committed version of the
  --rows. The structure of the cursor returned is identical to the structure
  --of the log table the data will be written into. Column orders of base and
  --log tables are completely irrelevant. Only columns that exist in the log
  --table are returned. Log metadata columns like LOGTS are filled with the
  --required values; columns which don't exist in the base table are returned
  --as NULL.
  FUNCTION get_old_values (base_table IN type_table,
                           log_table  IN type_table,
                           row_ids    IN coll_row_ids)
    RETURN SYS_REFCURSOR
  IS
    --Autonomous transaction to get the last committed version of the rows
    PRAGMA AUTONOMOUS_TRANSACTION;
    v_old_values SYS_REFCURSOR;
    v_stmt       VARCHAR2 (32767 CHAR);
  BEGIN
    --Create SELECT statement based on the structure of the log table
    SELECT    'SELECT '
           || LISTAGG (
                CASE
                  WHEN log_table.column_name = 'LOGTS'
                  THEN 'LOCALTIMESTAMP AS LOGTS'
                  ELSE NVL2 (base_table.column_name,
                             '"' || log_table.column_name || '"',
                             'NULL AS "' || log_table.column_name || '"')
                END,
                ',')
              WITHIN GROUP (ORDER BY log_table.column_id ASC)
           || ' FROM "'
           || get_old_values.base_table.table_owner
           || '"."'
           || get_old_values.base_table.table_name
           || '" WHERE rowid IN (SELECT VALUE (rid) FROM TABLE (:1) rid)'
      INTO v_stmt
      FROM all_tab_columns log_table
           LEFT OUTER JOIN all_tab_columns base_table
             ON     base_table.owner = get_old_values.base_table.table_owner
                AND base_table.table_name = get_old_values.base_table.table_name
                AND base_table.column_name = log_table.column_name
     WHERE     log_table.owner = get_old_values.log_table.table_owner
           AND log_table.table_name = get_old_values.log_table.table_name;

    OPEN v_old_values FOR v_stmt USING get_old_values.row_ids;

    RETURN v_old_values;
  END get_old_values;

  --Generic logging method: The base table is determined by the first rowid of
  --the given collection. The log table is determined by the mapping function
  --get_log_table. The old values are returned by the called function
  --get_old_values as a sys_refcursor. The cursor is fetched and the fetched
  --records are written into the log table. This is done in the context of the
  --main transaction.
  PROCEDURE log_old_values (row_ids IN coll_row_ids)
  IS
    v_base_table type_table;
    v_log_table  type_table;
    v_old_values SYS_REFCURSOR;
  BEGIN
    IF log_old_values.row_ids.FIRST () IS NOT NULL
    THEN
      v_base_table :=
        get_base_table (
          row_id => log_old_values.row_ids (log_old_values.row_ids.FIRST ()));
      v_log_table := get_log_table (base_table => v_base_table);
      v_old_values :=
        get_old_values (base_table => v_base_table,
                        log_table  => v_log_table,
                        row_ids    => log_old_values.row_ids);

      EXECUTE IMMEDIATE
           'DECLARE'
        || ' v_old_values SYS_REFCURSOR := :1;'
        || ' v_log_record "' || v_log_table.table_owner || '"."'
        || v_log_table.table_name || '"%ROWTYPE;'
        || 'BEGIN'
        || ' LOOP'
        || ' FETCH v_old_values INTO v_log_record;'
        || ' EXIT WHEN v_old_values%NOTFOUND;'
        || ' INSERT INTO "' || v_log_table.table_owner || '"."'
        || v_log_table.table_name || '"'
        || ' VALUES v_log_record;'
        || ' END LOOP;'
        || 'END;'
        USING IN OUT v_old_values;

      CLOSE v_old_values;
    END IF;
  END log_old_values;
END pkg_log;
/

CREATE OR REPLACE TRIGGER trigger_spfc
  FOR UPDATE ON base_table
  COMPOUND TRIGGER

  v_row_ids pkg_log.coll_row_ids := NEW pkg_log.coll_row_ids ();

  --Collect the updated rowids
  AFTER EACH ROW IS
  BEGIN
    v_row_ids.EXTEND (1);
    v_row_ids (v_row_ids.LAST ()) := :old.ROWID;
  END AFTER EACH ROW;

  --Generic logging of the updated rows
  AFTER STATEMENT IS
  BEGIN
    pkg_log.log_old_values (row_ids => v_row_ids);
    v_row_ids.delete ();
    v_row_ids := NULL;
  END AFTER STATEMENT;
END trigger_spfc;
/
```
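A quick smoke test of the package above might look like this sketch. The column names some_col and id are hypothetical, and it assumes BASE_TABLE exists along with a LOG_TABLE that contains the BASE_TABLE columns plus a LOGTS timestamp column, as the code requires:

```sql
-- Hypothetical smoke test: update a row, then inspect the log.
UPDATE base_table SET some_col = 'NEW VALUE' WHERE id = 1;

-- The compound trigger has fired and copied the last committed row version:
SELECT * FROM log_table ORDER BY logts DESC;

COMMIT;
```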
Best regards
Jan
-
@Jan Gorkow So the concept is to use a row trigger to record :old.rowid and a statement trigger to read the old values via those rowids and insert them into the log table. That avoids the mutating-table error you get when using a row trigger alone.
By the way, you hard-coded the log table name as LOG_TABLE. We could derive the name from the base table; different base tables should have different log tables.
-
Hi @mail8mz ,
yes, the concept is to collect the affected rowids in the after each row section of the compound trigger, and to fetch and write the log data in the after statement section. You could also do this without the after statement section and call a method to fetch and write the log data in the after each row section, but performance is much better doing it the way I explained.
The log table is only "hard coded" in my example. The function get_log_table provides exactly the mapping you need: there you place your code that maps a base table to its log table. Modify it as needed for your use case.
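For example, a variant of get_log_table that derives the log table name by a naming convention (a LOG_ prefix, which is an assumption of this sketch, not part of the original code) could look like:

```sql
  --Hypothetical mapping: each base table logs into LOG_<base table name>
  --in the same schema, e.g. EMP -> LOG_EMP
  FUNCTION get_log_table (base_table IN type_table)
    RETURN type_table
  IS
    v_log_table type_table;
  BEGIN
    v_log_table.table_owner := base_table.table_owner;
    v_log_table.table_name  := 'LOG_' || base_table.table_name;
    RETURN v_log_table;
  END get_log_table;
```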
Best regards
Jan
-
@Jan Gorkow Last question: what is the maximum number of elements in v_row_ids? In other words, how many rows can I change in one update statement so that all rowids can still be stored in v_row_ids?
Thanks for your help!
-
All audit solutions I have seen so far suffer from the same problem: the audit data they generate is extremely difficult for a human to read afterwards.
A typical question a user might ask is: Last Friday we changed the customer's order. We updated the product and the quantity. But at that time, the product description was different than it is today. We would like to see how the original order looked, including everything that was changed in the scope of the order, the customer and the product.
The audit logs will hold the data to answer such questions, but it is extremely decomposed. Especially since several changes were made that need foreign-key lookups to find the original values, and those lookups must be resolved as of the correct change times.
Flashback query is a solution that in typical environments can answer such questions, but only for a very limited time span.
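For reference, a flashback query against the emp example from earlier in this thread would look like the sketch below. It only works while the required undo data is still available (governed by UNDO_RETENTION), which is why the time span is limited:

```sql
-- Read the row as it was one hour ago, if undo for that period still exists
SELECT job, sal, comm
  FROM emp AS OF TIMESTAMP (SYSTIMESTAMP - INTERVAL '1' HOUR)
 WHERE id = 123;
```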
-
Hi @mail8mz,
because the updated rowids are stored in a nested table collection variable, there is no fixed upper limit on the number of elements; in practice you are limited only by the available PGA memory of the session.
Best regards,
Jan