Specify the columns you need in the MAP (this is much easier than doing it in the Capture, i.e. Extract). So if you need col1 and col2 from tablea, just do:
MAP SCHEMA.TABLEA, TARGET SCHEMA.TABLEA, COLMAP (COL1=COL1, COL2=COL2);
In fact, the target table need not even be the same as the source. For example, if tablea has 20 columns and tableb (your target) has 2 columns with completely different names, you can do:
MAP SCHEMA.TABLEA, TARGET SCHEMA.TABLEB, COLMAP (COLA=COL1, COLB=COL2);
The target columns also need not be the same size or type, but then you will need the OGG conversion functions. The example above assumes the same data structure.
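As a rough sketch of what such a conversion could look like (all table and column names here are made up, not from the thread), a COLMAP can call OGG's built-in functions to convert a source string into a target number and date:

```
-- Hypothetical example: SRC.ORDERS has AMOUNT_STR and ORDER_DT_STR as strings;
-- TGT.ORDERS_SLIM stores AMT as a number and ORDER_DT as a date.
MAP SRC.ORDERS, TARGET TGT.ORDERS_SLIM,
COLMAP (
  AMT      = @NUMSTR (AMOUNT_STR),
  ORDER_DT = @DATE ('YYYY-MM-DD HH:MI:SS', 'YYYY-MM-DD HH:MI:SS', ORDER_DT_STR)
);
```

The exact function and format descriptors depend on the real source and target datatypes, so treat this as a pattern to adapt rather than a drop-in parameter.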
OGG is generating the target as smaller files. Whenever an update happens in the source, OGG generates the target files (the file name has a timestamp appended). How can we define these target files as the target table? Please advise.
I don't understand "OGG is generating the target as smaller files." OGG does not do that unless you replicate DDL or have the Extract execute a script.
Maybe you can show me what you actually do with your parameter file and script.
Regarding the approach you suggested, consider the scenario below:
a table with 5 columns:
col1 col2 col3 col4 col5
The user wants only col3 and col4 to be considered for CDC.
With the approach you suggested (map the columns on the target side), it will ALWAYS bring over the data for col3 and col4 whenever that record changes, irrespective of which column was updated on the source side.
Sometimes that is acceptable, if the transaction volume is low.
If we specifically mention the column names on the Extract side, will that stop these extra transactions from flowing to the Replicat?
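For reference, restricting what the Extract captures is done with the COLS clause of the TABLE parameter. A minimal sketch, assuming a hypothetical key column KEYCOL (key columns generally must be included so the Replicat can locate the row):

```
-- Extract parameter sketch: write only the key plus COL3 and COL4
-- to the trail for this table.
TABLE SCHEMA.TABLEA, COLS (KEYCOL, COL3, COL4);
```

Note this limits which columns are written to the trail; whether an update touching none of the listed columns still produces a trail record depends on the logging and compression settings, so it is worth testing before relying on it to suppress the extra transactions.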
laknar -- I am not saying that Kee suggested the wrong thing, but this is something additional you need to think about.
Of course, Kee can correct me if my understanding is wrong.
Mark this post as helpful, correct, or like it as appropriate. This will help others.