Streams Capture process took an extremely long time to read through archive log files it had already processed
edited Jun 13, 2010 11:57PM in GoldenGate, Streams and Distributed Database (MOSC)
Last weekend, we had a situation where a very large delete transaction was executed on the database. The table that the delete was performed on is not replicated. The capture process for this database took about 2.5 hours to get through all of the archive log files (about 10 of them) that contained the delete transaction. That was OK, but later that day the capture process was stopped and restarted. On restart, LogMiner began with the archive log file that contained the data dictionary build, which happened to also contain the start of this very large delete transaction, so the capture process spent hours re-reading archive logs it had already processed once.
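For anyone hitting the same thing: on restart, a capture process resumes mining from its required checkpoint SCN rather than from the last SCN it applied, so if that checkpoint sits back in the log containing the dictionary build, it will legitimately re-mine logs it has already read. A minimal sketch of how to check this, assuming a capture process named STREAMS_CAPTURE (a placeholder; substitute your own capture name):

    -- Where will capture resume mining after a restart?
    -- REQUIRED_CHECKPOINT_SCN is the restart point; APPLIED_SCN is how far
    -- changes have actually been applied downstream.
    SELECT capture_name,
           first_scn,
           start_scn,
           required_checkpoint_scn,
           applied_scn
      FROM dba_capture
     WHERE capture_name = 'STREAMS_CAPTURE';

    -- Which archived logs cover the range from that restart point onward,
    -- i.e., the logs that will be (re-)mined:
    SELECT sequence#, name, first_change#, next_change#
      FROM v$archived_log
     WHERE next_change# > (SELECT required_checkpoint_scn
                             FROM dba_capture
                            WHERE capture_name = 'STREAMS_CAPTURE')
     ORDER BY sequence#;

The further required_checkpoint_scn lags behind applied_scn, the longer the rewind on restart, which would explain the capture chewing back through all ten logs from the dictionary-build log.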