Set FLAG=Y in GoldenGate-to-Kafka replication, after image message section, for any op_type=U

User_MK6H6 Member Posts: 5 Green Ribbon

Hi,

I am looking for a solution that sets FLAG=Y for any column that gets updated, in the after image section of the messages replicated from GoldenGate to Kafka.

As part of the GG replication we receive every field in both the before and after sections of the Kafka message, because the source table is logged with all columns. This makes it hard to identify which field actually got updated in the after image.

So, to identify that, I am looking to implement logic that sets FLAG=Y for each field that was updated in the record, while the other fields get FLAG=N (or no flag) in the Kafka after image section.

Answers

  • K.Gan (OGG SME, Melbourne, Australia) Member Posts: 2,850 Bronze Crown

    This is unclear. Is your source an Oracle database and are you replicating to Kafka messages as the target? If not, specify your source and your target. Since you have before and after images for all the columns, do you want to compare each column and set a flag when the before value differs from the after value? That is possible, just a lot of work. See the column conversion functions in the reference manual. If the datatype is a varchar you can use, for example, flag_columna = @if(@strcmp(@before(columna), @after(columna)), "Y","N").

    You need to compare with the correct datatype, so for a number field you need to use something else, and you will need to convert a date to a string, etc. Some columns, such as LOBs, cannot be compared at all.
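
    A minimal sketch of what that could look like, assuming it is placed in a Replicat MAP statement (the table and column names are placeholders, before images are assumed to be present in the trail, and whether the extra flag columns actually appear in the Kafka message depends on the handler configuration):

        -- Hypothetical Replicat MAP: add a Y/N flag per column by comparing images.
        -- @STRCMP returns 0 when the two values are equal, so @IF yields "N" for an
        -- unchanged column and "Y" for a changed one. Number and date columns would
        -- first need converting to strings, and LOBs cannot be compared this way.
        MAP src.mytable, TARGET src.mytable,
        COLMAP (USEDEFAULTS,
            flag_columna = @IF (@STRCMP (@BEFORE (columna), @AFTER (columna)), "Y", "N"),
            flag_columnb = @IF (@STRCMP (@BEFORE (columnb), @AFTER (columnb)), "Y", "N"));

    The same column-conversion expressions can also be written in an Extract TABLE statement; the thread does not settle which side is preferable.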

  • User_MK6H6 Member Posts: 5 Green Ribbon

    Thanks K.Gan for the reply on this.


    Our source is an Oracle database and we are replicating to Kafka. As the source table does not have any PK defined and DB supplemental logging is enabled for all columns, the source Extract is capturing all field values and the same is being replicated to Kafka, where before/after images are published for every field.

    Though certain tables do have a UK, somehow the source Extract is still capturing all the fields for any update operation.

    As per the given example:

    flag_columna = @if(@strcmp(@before(columna), @after(columna)), "Y","N")

    Is this supposed to be part of the Replicat parameters for Kafka, or should it be included in the Extract from the source Oracle DB?


    Also, the flag needs to be applied only to op_type=U and not to any other operation (insert/delete).

  • K.Gan (OGG SME, Melbourne, Australia) Member Posts: 2,850 Bronze Crown

    Not sure if there is a question here or whether you are OK with the suggestion.

    You said "Though certain tables are having UK but somehow source extract is capturing all the fields for any update operation". That is because you ask it to by logging all columns.

  • User_MK6H6 Member Posts: 5 Green Ribbon

    The table at the source has the following logging enabled.

    select * from ALL_LOG_GROUPS;

    LOG_GROUP_TYPE
    -------------------------
    UNIQUE KEY LOGGING
    FOREIGN KEY LOGGING
    USER LOG GROUP
    PRIMARY KEY LOGGING

    and

    select supplemental_log_data_min, supplemental_log_data_pk, supplemental_log_data_ui, force_logging from v$database;

    SUPPLEME SUP SUP FORCE_LOGGING
    -------- --- --- -------------
    YES      NO  NO  YES


    Will this cause replication of all fields in the after image section?

  • K.Gan (OGG SME, Melbourne, Australia) Member Posts: 2,850 Bronze Crown

    This is global supplemental logging, which tells the DB to log, at minimum, the keys as you have displayed. But there are two other levels: schema level and table level. The easiest way to check is to run GGSCI, log in to the database and run INFO SCHEMATRANDATA and INFO TRANDATA; they both accept wildcards. So if you have done ADD SCHEMATRANDATA scott ALLCOLS, then all tables belonging to scott will log all columns for updates and deletes. Look up the GGSCI INFO commands in the command line reference.
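
    For reference, those GGSCI checks might look like the following (the credential alias and schema name are placeholders):

        -- Log in to the source database, then inspect supplemental logging at
        -- schema level and at table level; both INFO commands accept wildcards.
        DBLOGIN USERIDALIAS ggadmin
        INFO SCHEMATRANDATA scott
        INFO TRANDATA scott.*

    If schema-level trandata was added with ALLCOLS, or the table-level output shows every column marked, that would explain why all the columns show up in the trail and in the Kafka before/after images.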

  • User_MK6H6 Member Posts: 5 Green Ribbon

    Yes, I verified trandata for the tables that are part of the replication and found that all the columns are marked by Oracle GoldenGate as key columns on the table. This must be the cause of all the columns being logged in the after image.

    We tried logging only the defined unique key column on the table, but the INFO TRANDATA output reported that the key "cannot be used due to the inclusion of virtual columns, or user-defined datatypes, or extended long varchar columns, or function-based index columns".

    Please do let me know what could be the reason and how to correct this.

  • K.Gan
    K.Gan OGG SME Melbourne AustraliaMember Posts: 2,850 Bronze Crown

    I am a little confused by your objective. Your initial message says you want to determine which columns were updated. Therefore, by not logging all columns, you could determine that the columns that are present are the ones that changed.

    You said that all columns are marked as key columns; that means there is no PK defined.

    If that is the case, i.e. all columns are logged, then why are you attempting to log only some columns? You also said "the defined unique key column". Either you have a PK or you don't. Do you mean that this column is a logical unique key? And no, there are some datatypes you cannot specifically log. If all you want on the Replicat side is some unique logical key, then, as long as this column is logged (which it is, by what you said initially), use the KEYCOLS option in the MAP clause.
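
    A minimal sketch of that KEYCOLS suggestion, with purely hypothetical object names (scott.orders and order_id are not from this thread):

        -- Replicat MAP declaring a logical unique key for a table that has no PK/UK,
        -- so downstream processing treats order_id as the key column.
        MAP scott.orders, TARGET scott.orders, KEYCOLS (order_id);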

    Currently, without doing anything else, do all the columns show up in Logdump?

    Also, let me know: what is your objective?

  • User_MK6H6 Member Posts: 5 Green Ribbon

    The present scenario in our environment is given below:

    We have 5 tables that are part of the replication; 3 of them have a UI (unique index) and 2 have no PK/UK at all.

    So the column logging for those 2 tables is "ALL", which is accepted, but for these two tables we need flag=Y/N to identify which field got the new updated value in Kafka.

    And for the 3 tables which have a UI, we are receiving all the fields, even though column logging at the source is defined as:

    UNIQUE KEY LOGGING
    FOREIGN KEY LOGGING
    USER LOG GROUP
    PRIMARY KEY LOGGING

    So the objective is
    ==============

    1. Get flag=Y/N set for the 2 tables which have no PK/UK.
    2. Get only the updated fields in the replication for the tables which have a UK.
  • K.Gan (OGG SME, Melbourne, Australia) Member Posts: 2,850 Bronze Crown

    1. Get flag=Y/N set for the 2 tables which have no PK/UK.

    In my previous replies I suggested comparing the before and after values of each column and setting the flag accordingly. As said, it is a lot of work if there are many columns, and you may not be able to compare them all; I think LOBs may be an issue.

    2. Get only the updated fields in the replication for the tables which have a UK.

    Is the UK defined as a physical key in the database rather than just a logical key? If not, declare it. For these tables, make sure you are not supplementally logging all columns, and use the COMPRESSUPDATES parameter for the Extract. Then you will get the key plus only the updated columns. You said "we are receiving all the fields, even though column logging at source is defined". You can turn that off: alter the table, or in GGSCI run DELETE TRANDATA or DELETE SCHEMATRANDATA.
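
    A rough sketch of that combination, using placeholder names (the ggadmin alias, scott.orders, the extkfk group and the trail path are examples only) and assuming the all-column logging can indeed be dropped:

        -- GGSCI: remove the all-column supplemental logging, then re-add key-only logging
        DBLOGIN USERIDALIAS ggadmin
        DELETE SCHEMATRANDATA scott ALLCOLS
        DELETE TRANDATA scott.orders
        ADD TRANDATA scott.orders

        -- Extract parameter file: with COMPRESSUPDATES (the default for Extract),
        -- update records carry only the key columns plus the columns that changed.
        EXTRACT extkfk
        USERIDALIAS ggadmin
        EXTTRAIL ./dirdat/kf
        COMPRESSUPDATES
        TABLE scott.orders;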