The agent catches up when the agent/collector restarts.
Remember the underlying technology here is Streams.
Thanks, Dan and User602992. I've updated the Auditor's Guide with an explanation about "lost" (or rather, queued) audit data. It will appear the next time the book is refreshed on OTN.
A lurking tech writer. <g> I have sent my positive comments to Francisco.
Will be at HQ last week of January. Contact me off-line if you will be in the neighborhood.
Crumbs, sorry I missed ya! Will resume trolling the discussion forum now, and picking at my docs like the poor little worrisome scabs they be.
Does the same thing go for REDO collectors?
What happens if the archived logs have already been deleted from the server by the time the REDO collectors are started up?
Yes, the REDO collector also picks up the audit trail where it left off when it was shut down. However, like all other collectors, it cannot collect audit records that were deleted at the source before they were collected. So if it needs to go to the archive logs to collect records, and those logs have been deleted, the records are lost, as expected. Keep in mind, though, that for the most part the collector should not have to go to the archive logs at all, since it tries to stay as current as possible. Unless the transaction volume on the source is extremely high, the collector should be collecting from the online logs and should never need the archive logs.
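To make the catch-up behavior concrete, here is a minimal Python sketch of checkpoint-based collection. Everything here (the `Collector` class, the `checkpoint` field, modeling the audit trail as a dict of sequence numbers) is hypothetical and purely illustrative, not an Audit Vault API; it only shows why a record survives a collector outage as long as its source copy still exists, and is lost if purged first.

```python
# Illustrative sketch only: a collector that remembers the last record it
# collected (its "checkpoint") and, on restart, collects everything after
# that checkpoint which still exists at the source.

class Collector:
    def __init__(self):
        self.checkpoint = 0   # sequence number of the last record collected
        self.collected = []

    def run(self, source_logs):
        """Collect every record past the checkpoint that still exists."""
        for seq in sorted(source_logs):
            if seq > self.checkpoint:
                self.collected.append(source_logs[seq])
                self.checkpoint = seq

# Records 1-5 are written at the source.
logs = {i: f"audit record {i}" for i in range(1, 6)}

c = Collector()
c.run(logs)                  # collects 1-5; checkpoint is now 5

# While the collector is down, records 6 and 7 arrive,
# but record 6 is purged at the source before collection.
logs.update({6: "audit record 6", 7: "audit record 7"})
del logs[6]

c.run(logs)                  # catches up: collects 7 only; 6 is lost
print(c.collected[-1])       # → audit record 7
```

The same logic explains the archive-log case above: a deleted archive log is simply a range of sequence numbers that no longer exists at the source when the collector catches up.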
How is Streams used by the REDO collector, and where do capture rules come into play?
Won't the source DB be queueing the messages in the Streams tables in the event the REDO collector is down?
The REDO collector does use Streams to capture audit records. There's one capture process running on the source. Capture rules are provisioned to the source using Audit Vault's Audit Policy manager, which lets you easily specify capture rules globally, per-schema, or per-table. Starting the REDO collector starts the capture and propagate processes on the source, and the corresponding apply process on the AV server. Streams uses LogMiner to mine the redo logs, whether they are online or archived. As LogMiner passes LCRs from the logs to the capture process, the LCRs are evaluated to see whether they match any of the provisioned capture rules. If they match, they are sent over to AV, where they are converted into audit records and inserted into the audit repository.
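A small Python sketch of the rule-evaluation step described above, under stated assumptions: the `Rule` and `LCR` dataclasses are hypothetical stand-ins (real Streams rules are PL/SQL rule-set conditions, and real LCRs are far richer); the sketch only shows how a global, per-schema, or per-table rule decides which LCRs get forwarded.

```python
# Illustrative only: evaluate each LCR against provisioned capture rules.
from dataclasses import dataclass

@dataclass
class Rule:
    scope: str              # "global", "schema", or "table"
    target: str = ""        # schema name, or "SCHEMA.TABLE" for table scope

@dataclass
class LCR:                  # logical change record mined from the redo logs
    schema: str
    table: str

def matches(lcr, rule):
    if rule.scope == "global":
        return True
    if rule.scope == "schema":
        return lcr.schema == rule.target
    return f"{lcr.schema}.{lcr.table}" == rule.target

rules = [Rule("schema", "HR"), Rule("table", "FIN.PAYROLL")]
lcrs = [LCR("HR", "EMPLOYEES"), LCR("FIN", "PAYROLL"), LCR("SCOTT", "EMP")]

# Only matching LCRs are "sent over" to the AV server.
sent = [l for l in lcrs if any(matches(l, r) for r in rules)]
print([(l.schema, l.table) for l in sent])
# → [('HR', 'EMPLOYEES'), ('FIN', 'PAYROLL')]
```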
Please note, however, that the LCRs are "pulled" from the redo logs by LogMiner; they are not queued (or "pushed") into Streams. In essence, only one copy of each LCR is stored, and it lives in the redo logs (whether online or archived); Streams does not store a separate copy for capture. The whole system acts as a pipeline, with the capture process as the driver. There's no queuing involved at all.
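The pull-driven pipeline can be illustrated with Python generators, which have exactly this laziness property: nothing moves until the consumer asks. The `log_miner` and `capture` names are illustrative, not Oracle APIs; the point is only that there is no intermediate queue holding a second copy of each record.

```python
# Toy pull pipeline: the consumer drives, nothing is queued in between.

def log_miner(redo_logs):
    """Lazily yield LCRs straight from the (single) copy in the logs."""
    for entry in redo_logs:
        yield entry

def capture(lcrs, rule):
    """Pull LCRs one at a time; forward only those matching the rule."""
    for lcr in lcrs:
        if rule(lcr):
            yield lcr

redo_logs = ["HR.EMPLOYEES", "SCOTT.EMP", "HR.JOBS"]
pipeline = capture(log_miner(redo_logs),
                   rule=lambda l: l.startswith("HR."))

# Until the driver pulls, no record moves at all.
print(next(pipeline))        # → HR.EMPLOYEES
print(list(pipeline))        # → ['HR.JOBS']
```

This is why a stopped REDO collector does not cause rows to pile up in Streams tables on the source: the only stored copy of the pending changes stays in the redo logs until the pipeline is pulled again.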
Hope this helps.
Looking at the audit repository, specifically avsys.av$rads_view and avsys.audit_event_fact, how can you determine whether a row was collected by the REDO_COLL, DBAUD, or OSAUD collector?