
    Time taken to replicate.


      I was trying to find out the time taken to replicate changes between 2 nodes, i.e. the time taken to replicate data from the Master or Propagator to the Subscriber. I'm using a simple topology with a unidirectional flow between a master and one subscriber.

      I tried using ttLogHolds, ttXactLog and ttRepAdmin, but nothing was conclusive.

      Is there any mechanism that exposes this metric, or does TimesTen log information such as the time it received changes from either the Master or Propagator?

        • 1. Re: Time taken to replicate.

          Have you tried the ttRepAdmin -dsn master_DSN -receiver -list command? It has a latency column which indicates an average over time. This value is only non-zero when there is a continuous stream of replication transactions occurring.

          The most reliable way to measure replication latency is to time a transaction (including the commit) using 2-Safe replication.

          Command> timing 1;
          Command> insert statement;
          Command> commit;

          • 2. Re: Time taken to replicate.
            As Rajesh notes, there are only two reliable ways to measure this:

            1. Configure replication to use RETURN TWOSAFE BY REQUEST, then periodically execute an UPDATE against a row in a dummy table used just for this purpose, commit it requesting TWOSAFE (synchronous) replication, and measure (e.g. in your monitor program) the time taken for the commit to complete. This method is only usable in topologies where RETURN TWOSAFE is allowed. See the sketch after this list.

            2. Include a TT_TIMESTAMP column in a replicated table (best to use a dedicated table specially for this purpose) and ensure that when a row is updated the column is set to SYSDATE. Write an XLA program that monitors this table and, when it sees an update, compares the timestamp value in the updated record with the current system time. Run this program at the replication receiver datastore. This method is usable in all topologies, but it is not foolproof: there is some latency between the update being applied at the receiver and the record being delivered through XLA, so the measured latency will be greater than the actual replication latency, possibly by a significant amount.
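
            Here is a minimal sketch of option 1 (not from the thread): a JDBC probe that updates a row in a hypothetical dummy table REPMON and times the commit. It assumes a DSN named master_DSN and that the replication scheme uses RETURN TWOSAFE BY REQUEST, with the return service requested for this connection (for example via the ttRepSyncSet built-in; check the documentation for its exact parameters).

            import java.sql.*;

            public class TwoSafeLatencyProbe {
                public static void main(String[] args) throws Exception {
                    Class.forName("com.timesten.jdbc.TimesTenDriver");
                    try (Connection con = DriverManager.getConnection("jdbc:timesten:direct:dsn=master_DSN")) {
                        con.setAutoCommit(false);
                        try (PreparedStatement ps = con.prepareStatement(
                                "UPDATE repmon SET ts = SYSDATE WHERE id = 1")) {   // hypothetical dummy table
                            ps.executeUpdate();
                            long start = System.nanoTime();
                            con.commit();   // with TWOSAFE this blocks until the subscriber acknowledges
                            long micros = (System.nanoTime() - start) / 1000;
                            System.out.println("TWOSAFE commit latency: " + micros + " us");
                        }
                    }
                }
            }

            The measured time is a round trip (commit at the master plus the subscriber's acknowledgement), which is the figure described in option 1 above.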

            • 3. Re: Time taken to replicate.
              Thanks Rajesh and Chris for your answers and guidance.

              I tried both approaches:
              1) Timing ODBC function calls when the replication scheme used Return Receipt by Request
              2) The XLA event listener mechanism when the replication scheme used Return Receipt by Request, and also asynchronous replication

              I did get a fair amount of data. The replication latency observed with both approaches was sub-millisecond. Though not exact, it does give ballpark readings.

              I've one question for Chris:

              you said " Include a TT_TIMESTAMP column in a replicated table (best to use a dedicated table specially for this purpose) and ensure that when a row is updated the column is set to SYSDATE"

              I tried to do this but was not getting the expected outcome.

              This is what my table looks like:

              USER_ID VARCHAR2(60) NOT NULL,
              CREATED_DATE      TT_TIMESTAMP default sysdate,
              PRIMARY KEY     (PARTNER_ID,USER_ID)

              insert into rep_users(partner_id,user_id) values (12345,'U1');

              When I did this, the CREATED_DATE column was the same across all subscribers and the master. Isn't the CREATED_DATE column also replicated with the value that was set on the master?

              Have I understood correctly what you meant? Is there anything different that I need to do?

              • 4. Re: Time taken to replicate.

                The value of 'CREATED_DATE' for any given row will of course be the same at the master and the subscriber - that's the idea of replication :-)

                The purpose of this is to have a value recording the time when the row was updated at the master. In an XLA (or JMS/XLA) program at the subscriber, as it processes each replicated update to this monitoring table, the program would apply the following logic (sketched after the list):

                1.    Extract the timestamp value from the replicated update applied to the monitoring table (the update time from the master)
                2.    Obtain the current system time (e.g. SELECT SYSDATE FROM DUAL)
                3.    Compare the two; the difference is the total latency for commit at master + replication + XLA capture and delivery.
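
                Here is a minimal sketch of that logic (not from the thread). It assumes the XLA/JMS listener code has already pulled the TT_TIMESTAMP column out of the replicated row and hands it to this hypothetical callback as a java.sql.Timestamp; the listener wiring itself is omitted, and it assumes the master and subscriber clocks are synchronised.

                import java.sql.Timestamp;

                public class LatencyCalculator {
                    // Hypothetical hook: called once per replicated update to the monitoring table.
                    public void onMonitoredUpdate(Timestamp masterUpdateTime) {
                        long nowMillis = System.currentTimeMillis();                    // step 2: subscriber-side time
                        long latencyMillis = nowMillis - masterUpdateTime.getTime();    // step 3: the difference
                        // Includes commit at master + replication + XLA capture/delivery,
                        // so it is an upper bound on the pure replication latency.
                        System.out.println("Observed latency: " + latencyMillis + " ms");
                    }
                }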

                You may find it easier not to use a TT_TIMESTAMP column but instead have the monitoring application simply get the system time using e.g. gettimeofday() (C/C++) or System.currentTimeMillis(), or even System.nanoTime() (Java), and store this in a binary column. This avoids the aggravation of having to decode the TT_TIMESTAMP type within the XLA reader code.
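
                A minimal sketch of the writer side of that variant (not from the thread): the master-side monitor stores the current wall-clock time, as raw millisecond bytes, in a hypothetical VARBINARY(8) column SENT_AT of a dedicated monitoring table REPMON, so the XLA reader at the subscriber only has to turn 8 bytes back into a long (ByteBuffer.wrap(bytes).getLong()) and subtract it from its own System.currentTimeMillis().

                import java.nio.ByteBuffer;
                import java.sql.*;

                public class LatencyProbeWriter {
                    public static void main(String[] args) throws Exception {
                        Class.forName("com.timesten.jdbc.TimesTenDriver");
                        try (Connection con = DriverManager.getConnection("jdbc:timesten:direct:dsn=master_DSN");
                             PreparedStatement ps = con.prepareStatement(
                                     "UPDATE repmon SET sent_at = ? WHERE id = 1")) {    // hypothetical table/column
                            byte[] now = ByteBuffer.allocate(8)
                                                   .putLong(System.currentTimeMillis())
                                                   .array();                             // 8 big-endian bytes
                            ps.setBytes(1, now);
                            ps.executeUpdate();                                          // autocommit (the JDBC default) commits here
                        }
                    }
                }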