Accumulation of Logs and a few transaction-related questions

527197 Member Posts: 27
edited Sep 1, 2006 4:47AM in Berkeley DB
Hello,
it has taken some time, but I think I am slowly getting into the transaction concept of Berkeley DB :-). I am currently experimenting with the different steps between non-transactional and full ACID behaviour. So far I am very pleased that the insertion time for large (several GB) datasets grows quasi-linearly. However, there are a few questions I haven't found answers to yet. Could anyone give me a hint on the following, please?

Accumulation of log files
During an insertion, which is a sequence of relatively small transactions, hundreds of log files build up in the environment directory. I understand that some persistent backup has to exist for recovery, but couldn't Berkeley DB delete logs once all transactions related to them have been committed (or aborted)? There is a flag (something like AUTO_REMOVE_LOGS), but the docs say that using this feature will likely make it impossible to recover from a catastrophic failure.
=> How can I safely (and preferably automatically) keep the number of log files small?

Uncommitted reads during a transaction
My tests showed that within one transaction I can write and then read the updated record without committing in between. I need this behaviour to be able to perform several writes, reads, and updates of a record within one transaction. I am a bit confused by the several flags regarding uncommitted reads, etc.; I didn't use any of them.
=> Is this correct, or do I have to specify particular flags to do that?
=> Could I restrict transaction protection to record updates only and expect non-transactional reads of updated records to give me correct results (or do I get the old values out of the database until the update transaction is committed)? For example (see the sketch after this list):
1 put new record "CusterInformation" (using TransactionA)
2 read record "CusterInformation" (the updated one) (NOT using a transaction)
3 add new information to the updated record which was read in step 2
4 put the updated information from step 3 into the database (using TransactionA)
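
Roughly, in terms of the C API, I imagine the four steps above looking something like the following sketch (db is a DB * handle opened in a transactional environment, txnA is the open DB_TXN * for TransactionA, and the record contents are just placeholders):

#include <db.h>
#include <string.h>

/* Sketch of steps 1-4 above; db is opened in a transactional
 * environment, txnA is the handle for TransactionA. */
int example_steps(DB *db, DB_TXN *txnA)
{
    DBT key, data;
    char first[]  = "initial record contents";
    char second[] = "initial record contents plus new information";
    int ret;

    memset(&key, 0, sizeof(key));
    memset(&data, 0, sizeof(data));
    key.data = "CusterInformation";
    key.size = (u_int32_t)strlen("CusterInformation") + 1;

    /* 1: put the new record, using TransactionA */
    data.data = first;
    data.size = (u_int32_t)sizeof(first);
    if ((ret = db->put(db, txnA, &key, &data, 0)) != 0)
        return ret;

    /* 2: read the record back, NOT using a transaction handle --
     * this is the read I am unsure about */
    memset(&data, 0, sizeof(data));
    if ((ret = db->get(db, NULL, &key, &data, 0)) != 0)
        return ret;

    /* 3 and 4: add new information to what was read in step 2 and
     * put it back, again using TransactionA */
    data.data = second;
    data.size = (u_int32_t)sizeof(second);
    return db->put(db, txnA, &key, &data, 0);
}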

Secondary databases associated with transactional databases
=> Given a transactional primary database (and really using transactions for updates), do I have to apply special settings to the secondary databases, or is this handled transparently by the primary database?

Thank you very much for any advice you can give.

Best regards,

Peter

Comments

  • Andrei Costache-Oracle Member Posts: 625 Employee
    Hello Peter,
    Accumulation of log files
    You can reduce the number of logs in your environment by using checkpoints.
    http://www.sleepycat.com/docs/ref/transapp/checkpoint.html
    Also, by archiving the database and log files you are protected even in the case where you have to perform catastrophic recovery.
    Log files from the environment can be removed after a checkpoint. When performing a checkpoint, all the changes to the databases found in the log files are written into the backing database files.
    After the database pages are written, you can archive and remove log files from the database environment; the log files will then only be needed for recovery from a catastrophic failure.
    You shouldn't remove any log files involved with active transactions; there must always be at least one log file in your database environment.
    When the DB_ARCH_REMOVE flag is specified for the DB_ENV->log_archive method, log files that are no longer needed are removed from the environment. Catastrophic recovery is then no longer possible, because you will no longer have all the log files in your environment.
    For checkpointing you can use the db_checkpoint utility, or implement your own checkpoint utility using the DB_ENV->txn_checkpoint function.
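    As a rough sketch only (dbenv stands for an already opened, transactional DB_ENV * handle; this is not a complete program), a periodic checkpoint followed by removal of the log files that are no longer needed could look like this:

    #include <db.h>

    /* Sketch: force a checkpoint, then remove log files that are no
     * longer needed for normal recovery. Archive the log files first
     * (see the archival link below) if you want to keep the ability
     * to perform catastrophic recovery. */
    int checkpoint_and_trim_logs(DB_ENV *dbenv)
    {
        int ret;

        /* Flush dirty pages to the backing database files so that
         * older log files are no longer needed for normal recovery. */
        if ((ret = dbenv->txn_checkpoint(dbenv, 0, 0, DB_FORCE)) != 0)
            return ret;

        /* Remove the log files that are no longer needed; after this,
         * catastrophic recovery is only possible from archived copies. */
        return dbenv->log_archive(dbenv, NULL, DB_ARCH_REMOVE);
    }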
    For more information regarding the archiving of database and log files with catastrophic recovery in mind, the removal of log files, and recovery procedures, see the following links:
    http://www.sleepycat.com/docs/ref/transapp/archival.html
    http://www.sleepycat.com/docs/ref/transapp/logfile.html
    http://www.sleepycat.com/docs/ref/transapp/recovery.html

    Uncommitted reads during a transaction
    => Is this correct, or do I have to specify particular flags to do that?
    Degree 1 isolation, if that is what you want, can be achieved by specifying the DB_READ_UNCOMMITTED flag when calling the DB->open method. The reads (DB->get, DB->pget, DBCursor->c_get, DBCursor->c_pget) should also use the DB_READ_UNCOMMITTED flag.
    If you use the above flags you will be performing dirty reads, i.e. degree 1 isolation, meaning that you do not have to commit transactions in order to see updated records when reading data.
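    A minimal sketch of what that looks like with the C API (file and key names are placeholders; dbenv is an already opened, transactional DB_ENV *):

    #include <db.h>
    #include <string.h>

    /* Sketch: open a database that supports uncommitted (dirty) reads
     * and perform one such read. */
    int dirty_read_example(DB_ENV *dbenv, const char *keystr)
    {
        DB *db;
        DBT key, data;
        int ret;

        if ((ret = db_create(&db, dbenv, 0)) != 0)
            return ret;

        /* DB_READ_UNCOMMITTED at open time allows readers to request
         * degree 1 isolation on this database. */
        ret = db->open(db, NULL, "customers.db", NULL, DB_BTREE,
                       DB_CREATE | DB_AUTO_COMMIT | DB_READ_UNCOMMITTED, 0);
        if (ret != 0) {
            (void)db->close(db, 0);
            return ret;
        }

        memset(&key, 0, sizeof(key));
        memset(&data, 0, sizeof(data));
        key.data = (void *)keystr;
        key.size = (u_int32_t)strlen(keystr) + 1;

        /* The read itself also asks for DB_READ_UNCOMMITTED, so it may
         * see records written by transactions that have not committed. */
        ret = db->get(db, NULL, &key, &data, DB_READ_UNCOMMITTED);

        (void)db->close(db, 0);
        return ret;
    }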


    Secondary databases associated with transactional databases
    => Given a transactional primary database (and really using transactions for updates), do I have to apply special settings to the secondary databases, or is this handled transparently by the primary database?
    Yes, the secondary databases are modified according to the updates in the primary database. When you commit the transactional changes to the primary database, the corresponding records in the secondary databases associated with it are updated as well.
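    For illustration only, here is a rough sketch of how the association is usually set up with the C API (file names and the key-extraction callback are placeholders; in this sketch both databases are opened within the same transaction):

    #include <db.h>
    #include <string.h>

    /* Placeholder secondary-key callback: how the secondary key is
     * derived from a primary record depends on your record layout;
     * here the whole data item is simply reused as the key. */
    static int get_sec_key(DB *secondary, const DBT *pkey,
                           const DBT *pdata, DBT *skey)
    {
        memset(skey, 0, sizeof(*skey));
        skey->data = pdata->data;
        skey->size = pdata->size;
        return 0;
    }

    /* Sketch: open the primary and the secondary inside one transaction
     * and associate them; after that, transactional updates to the
     * primary also maintain the secondary index. Error-path cleanup is
     * omitted for brevity. dbenv is an open, transactional DB_ENV *. */
    int open_primary_and_secondary(DB_ENV *dbenv, DB **pp, DB **sp)
    {
        DB *primary, *secondary;
        DB_TXN *txn;
        int ret;

        if ((ret = dbenv->txn_begin(dbenv, NULL, &txn, 0)) != 0)
            return ret;

        if ((ret = db_create(&primary, dbenv, 0)) != 0 ||
            (ret = primary->open(primary, txn, "primary.db", NULL,
                                 DB_BTREE, DB_CREATE, 0)) != 0 ||
            (ret = db_create(&secondary, dbenv, 0)) != 0 ||
            (ret = secondary->open(secondary, txn, "secondary.db", NULL,
                                   DB_BTREE, DB_CREATE, 0)) != 0 ||
            (ret = primary->associate(primary, txn, secondary,
                                      get_sec_key, 0)) != 0) {
            (void)txn->abort(txn);
            return ret;
        }

        if ((ret = txn->commit(txn, 0)) != 0)
            return ret;

        *pp = primary;
        *sp = secondary;
        return 0;
    }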

    Best regards,

    Andrei Costache
    Berkeley DB
    Oracle Support Services