3 Replies Latest reply: Sep 5, 2012 11:08 AM by Alex Fatkulin

    Oracle Streams Performance overhead

    862209
      Hi

      Oracle 11gR2

      I just wanted to know what the performance overhead (at the database/transaction level) is when we configure Streams to replicate a few tables using local capture. Also, what is the overhead when we enable supplemental logging on tables?

      Thanks
        • 1. Re: Oracle Streams Performance overhead
          415289
          I just wanted to know what the performance overhead (at the database/transaction level) is when we configure Streams to replicate a few tables using local capture.
          Streams does add some overhead to the database; how much also depends on the archived log generation rate.
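          A quick way to gauge that archived log generation rate is a query along these lines; this is only a sketch, assuming access to V$ARCHIVED_LOG and that DEST_ID 1 is the local archive destination.

          -- Rough daily redo volume written to archived logs (sketch, not definitive)
          SELECT TRUNC(completion_time) AS day,
                 COUNT(*) AS archived_logs,
                 ROUND(SUM(blocks * block_size) / 1024 / 1024) AS redo_mb
          FROM   v$archived_log
          WHERE  dest_id = 1
          GROUP  BY TRUNC(completion_time)
          ORDER  BY 1;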
          Also, what is the overhead when we enable supplemental logging on tables?
          Not much.
          • 2. Re: Oracle Streams Performance overhead
            Mark Malakanov (user11181920)
            I just wanted to know what the performance overhead (at the database/transaction level) is when we configure Streams to replicate a few tables using local capture.
            1. Streams will start a CDC capture process (ultimately, under the hood, it is LogMiner) that constantly reads the redo/archived logs. This capture process will consume some CPU, depending on the overall volume of data changes in the database, because it has to read all of the redo in order to find the changes that belong to your "few tables" (a minimal configuration sketch follows after this list).

            2. Streams will place the captured changes into an Advanced Queue. AQ is a set of tables with BLOB columns; placing data into these queue tables and later removing it generates some additional redo.
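            For context, a minimal sketch of such a local-capture setup follows; the queue, capture and table names (strmadmin.streams_queue, local_capture, scott.emp) are made up for illustration, not taken from this thread.

            BEGIN
              -- Create the Streams queue (an AQ queue table with a BLOB payload)
              DBMS_STREAMS_ADM.SET_UP_QUEUE(
                queue_table => 'strmadmin.streams_queue_table',
                queue_name  => 'strmadmin.streams_queue');

              -- Add a capture rule for one of the "few tables"; repeat per table.
              -- The capture process mines the local redo/archived logs via LogMiner.
              DBMS_STREAMS_ADM.ADD_TABLE_RULES(
                table_name   => 'scott.emp',
                streams_type => 'capture',
                streams_name => 'local_capture',
                queue_name   => 'strmadmin.streams_queue',
                include_dml  => TRUE,
                include_ddl  => FALSE);
            END;
            /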
            Also, what is the overhead when we enable supplemental logging on tables?
            It depends on the level of supplemental logging and on the number of key and other columns in the affected tables. This overhead is minor compared to the one mentioned above.
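            For reference, table-level supplemental logging is enabled with DDL such as the following (the table name is hypothetical); the wider the log group, the more extra redo is written per DML.

            -- Smallest overhead: log only primary-key columns with each change
            ALTER TABLE scott.emp ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;

            -- Largest overhead: log all columns with each change
            ALTER TABLE scott.emp ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;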
            • 3. Re: Oracle Streams Performance overhead
              Alex Fatkulin
              user11181920 wrote:
              2. Streams will place the captured changes into an Advanced Queue. AQ is a set of tables with BLOB columns; placing data into these queue tables and later removing it generates some additional redo.
              This depends. With combined capture and apply (11g) there is no queue involved at all. Buffered queues have also been available/used for a while and bypass most of the "in-table" AQ overhead as well.
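              If it helps, whether combined capture and apply is in effect can be checked with a query like the one below; this is a sketch assuming the 11.2 V$STREAMS_CAPTURE view exposes the OPTIMIZATION column as documented (a value greater than zero means the optimization is in use).

              -- OPTIMIZATION > 0 indicates combined capture and apply for that capture process
              SELECT capture_name, state, optimization
              FROM   v$streams_capture;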