Extract process performance

784624 Member Posts: 28
edited Oct 6, 2010 1:46PM in GoldenGate
Hi

We are planning to capture data from an Oracle OLTP table that has 40 million DMLs per day. The OLTP system is already heavily loaded, so we want a lightweight process to capture the changes.
Could you please share your experience with GoldenGate in a similar environment?
Is a single Extract process efficient enough to capture 40 million changes per day?
If we have to run multiple Extract processes, what would be the impact (CPU and memory per process)?


Thanks

Answers

  • User152973
    User152973 Member Posts: 148
    Hi,

    If you use a single Extract process for 40 million rows per day, there is a chance of excessive lag. You would be better off with multiple Extract and Replicat processes to speed up replication and minimize lag; splitting the work also spreads the CPU and memory load across processes.
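
    On the Replicat side, one common way to parallelize is to split rows across Replicat groups with the @RANGE filter. A minimal sketch; the group names, schema, trail, and login below are placeholders, not from this thread:

        -- rep1.prm: first of two parallel Replicats (all names are placeholders)
        REPLICAT rep1
        USERID ogguser, PASSWORD *****
        ASSUMETARGETDEFS
        MAP hr.orders, TARGET hr.orders, FILTER (@RANGE (1, 2));

        -- rep2.prm: second of two, takes the other half of the rows
        REPLICAT rep2
        USERID ogguser, PASSWORD *****
        ASSUMETARGETDEFS
        MAP hr.orders, TARGET hr.orders, FILTER (@RANGE (2, 2));

    @RANGE hashes each row's key columns to assign it to exactly one range, so the two groups never process the same row.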

    Thanks.
  • -joe
    -joe Member Posts: 226
    Hi.

    40M DMLs by itself doesn't tell you much, because one DML could be 100 bytes or 1 GB. A better measure is redo/transaction log volume.
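
    If the database runs in ARCHIVELOG mode (an assumption here), a quick way to get that number is to sum the archived log volume per day, for example:

        -- Daily redo volume in GB over the last week
        -- (standard DBA query; assumes ARCHIVELOG mode)
        SELECT TRUNC(completion_time) AS log_date,
               ROUND(SUM(blocks * block_size) / 1024 / 1024 / 1024, 1) AS redo_gb
        FROM   v$archived_log
        WHERE  completion_time > SYSDATE - 7
        GROUP  BY TRUNC(completion_time)
        ORDER  BY log_date;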

    However, 40,000,000 / 86,400 ≈ 463 DML operations per second. We usually see throughput of several thousand operations per second with a single Extract.

    Never start testing with more than one Extract. Use one first to get a baseline. Then, if this is not RAC, move to one Extract with two threads ("THREADS 1", with zero as the default first thread).

    If you're still experiencing lag after that, try two Extracts, but distribute the load evenly across them. To do that, run logdump's "count detail ./dirdat/tr*" against the trail from your single-Extract test: it shows the number of inserts, updates, and deletes, and the bytes, per table. Split the tables across the Extracts based on bytes per table, as sketched below.
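
    For illustration only, a split driven by those byte counts might look like this; the group names, trail names, login, and schema/table names are placeholders, not from this thread:

        -- ext1.prm: the heaviest tables by bytes, per the logdump counts
        -- (all names below are placeholders)
        EXTRACT ext1
        USERID ogguser, PASSWORD *****
        EXTTRAIL ./dirdat/aa
        TABLE app.big_orders;
        TABLE app.big_order_items;

        -- ext2.prm: everything else in the schema
        EXTRACT ext2
        USERID ogguser, PASSWORD *****
        EXTTRAIL ./dirdat/bb
        TABLEEXCLUDE app.big_orders
        TABLEEXCLUDE app.big_order_items
        TABLE app.*;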

    Good luck,
    -joe