Scenario: I have GoldenGate set up from source to target for a certain list of tables. On the target side, I invoke a custom Java handler to process the incoming data per transaction.
One transaction can contain multiple tables, and the set of tables can vary a lot:
tables Test1,Test2,Test3 in TX1.
tables Test2,Test3 in TX2.
Question: is there a way to invoke different custom Java handlers on the target based on the set of tables in the incoming transaction?
In TX1, for table 'Test1' invoke Test1_Handler; for Test2 and Test3 invoke Test2_Handler.
In TX2, only Test2_Handler would be invoked.
I am asking because I have to merge multiple tables into one, and redirecting them into separate handlers would help process them correctly.
Unfortunately this is not as simple as it could be, due to a couple of bugs preventing the obvious solutions from working as expected. The most robust and fastest approach (fastest to implement, and fastest to execute) would be to have a pump that splits tables for handler "one" and tables for handler "two" into two separate trails. (You can use just "table" statements, and/or "tableExclude", and/or "filter" statements.) You can't necessarily do this in the pump running the Java user-exit, though, *unless* you are certain that the two groups of tables will never be part of the same transaction (that's a big "if").
So, if you have one source trail "dirdat/aa", create a single pump "split" that uses "dirdat/aa" as input; inside "split.prm" you have:
extract split
-- to configure:
-- ggsci> add extract split, extTrailSource dirdat/aa
-- ggsci> add extTrail dirdat/bb, extract split, megabytes 500
-- ggsci> add extTrail dirdat/cc, extract split, megabytes 500
sourcedefs dirdef/aa.def

extTrail dirdat/bb
-- don't forget getUpdateBefores, in case you do have them in the trail...
-- but you can include it anyway even if they aren't there.
getUpdateBefores
table schema.test1;
table schema.test2;

extTrail dirdat/cc
getUpdateBefores
table schema.test3;

-- optionally:
-- extTrail dirdat/dd
-- getUpdateBefores
-- tableExclude schema.test1;
-- ...etc
Then use a Java user-exit pump to read from dirdat/bb (instead of dirdat/aa), another to read from dirdat/cc, etc., with the corresponding handlers you want to run against the tables in those trails. You can use the same source-defs file in all pumps, since it contains a superset of all the tables used (e.g., dirdef/aa.def).
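As an aside: if you did eventually want a single user-exit pump to route per-table anyway (once the groups are safely in separate trails, or once the patch lands), the routing itself is trivial to write. This is a minimal, self-contained sketch; the `Op`, `TableHandler`, and `DispatchingHandler` types here are hypothetical stand-ins I made up for illustration, not the actual GoldenGate adapter classes.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical stand-in for a captured operation (table name + payload).
record Op(String table, String payload) {}

// Hypothetical per-table handler interface.
interface TableHandler {
    void process(Op op);
}

// Routes each operation in a transaction to the handler registered
// for its table; unmatched tables fall through to a default handler.
class DispatchingHandler {
    private final Map<String, TableHandler> routes = new HashMap<>();
    private final TableHandler fallback;

    DispatchingHandler(TableHandler fallback) { this.fallback = fallback; }

    void register(String table, TableHandler h) { routes.put(table, h); }

    void onTransaction(List<Op> tx) {
        for (Op op : tx) {
            routes.getOrDefault(op.table(), fallback).process(op);
        }
    }
}

public class Demo {
    public static void main(String[] args) {
        List<String> log = new ArrayList<>();
        DispatchingHandler d =
            new DispatchingHandler(op -> log.add("default:" + op.table()));
        d.register("TEST1", op -> log.add("Test1_Handler:" + op.table()));
        TableHandler h2 = op -> log.add("Test2_Handler:" + op.table());
        d.register("TEST2", h2);
        d.register("TEST3", h2);

        // TX1 touches Test1, Test2, Test3 (as in the question).
        d.onTransaction(List.of(new Op("TEST1", "r1"),
                                new Op("TEST2", "r2"),
                                new Op("TEST3", "r3")));
        System.out.println(log);
    }
}
```

The point is that the hard part is not the dispatch logic but getting transaction-consistent input, which is why the trail split above comes first.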
The reason you can't do this type of filtering in the prm file running the Java user-exit is that the transaction indicators ("begin tx", "end tx") are on the data records themselves; if a record carrying a "begin" or "end" indicator is excluded, the Java application can't tell where transactions begin and end. (This limitation will be remedied in a future patch.)
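To make the failure mode concrete, here is a plain-Java sketch (no GoldenGate types; the `Rec` record and its begin/end flags are my own simplification of how tx indicators ride on data records) that reassembles transactions from a record stream. Filtering out a table whose record happened to carry the end-of-transaction indicator silently merges adjacent transactions:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical trail record: table name plus begin/end-of-tx flags.
record Rec(String table, boolean begin, boolean end) {}

public class TxSplit {
    // Group a flat record stream into transactions using the end markers.
    static List<List<String>> group(List<Rec> stream) {
        List<List<String>> txs = new ArrayList<>();
        List<String> current = new ArrayList<>();
        for (Rec r : stream) {
            current.add(r.table());
            if (r.end()) {              // commit boundary seen: close the tx
                txs.add(current);
                current = new ArrayList<>();
            }
        }
        if (!current.isEmpty()) txs.add(current); // dangling, unterminated tx
        return txs;
    }

    public static void main(String[] args) {
        // TX1 = Test1, Test2, Test3; TX2 = Test2, Test3.
        List<Rec> full = List.of(
            new Rec("TEST1", true, false),
            new Rec("TEST2", false, false),
            new Rec("TEST3", false, true),   // carries TX1's end marker
            new Rec("TEST2", true, false),
            new Rec("TEST3", false, true));  // carries TX2's end marker
        System.out.println("intact:   " + group(full));

        // Filter out TEST3 before the reader sees it: both end markers
        // are gone, so TX1 and TX2 collapse into one apparent transaction.
        List<Rec> filtered = full.stream()
                                 .filter(r -> !r.table().equals("TEST3"))
                                 .toList();
        System.out.println("filtered: " + group(filtered));
    }
}
```

With the intact stream you get two transactions; after filtering, every remaining record lands in one unterminated transaction, which is exactly the ambiguity the Java application cannot resolve.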
You are absolutely right, but what do you say about having an after-insert trigger on the target tables call the required Java handlers? I think that would help him merge the required table data into one set, perhaps into a GTT, and then process the data from there as needed. What do you say?
No, I'm afraid that doesn't sound like anything that could be implemented. First, there is no target database here AFAIK, hence no trigger and no target table. And the GoldenGate Java adapter event handlers run outside of the database, like all GoldenGate processes. Triggers just don't enter into the equation here at all (if you are literally talking about database triggers).