6 Replies Latest reply on Jan 29, 2014 9:46 PM by 1519039

    Unable to access large Berkeley DB file.

    1519039

      I am working with a large Berkeley DB database file (around 300 MB). I am copying this file to a scanner after an LSN reset using the db_load utility. When I try to access this database file using a .NET application, it takes a very long time to fetch data (around 10 min). However, when I restart my application, access to the database is fast (30 sec). Any guesses why the database access is so slow initially?

        • 1. Re: Unable to access large Berkeley DB file.
          Bogdan Coman-Oracle

          It may be that the cache is empty. Once you load the data into the cache it will be faster. If this is an issue, you may search the forum for options on pre-warming the cache.

          Using the 'db_stat -m' utility will give you details on how the cache behaves.

           

          Thanks,

          Bogdan

          • 2. Re: Unable to access large Berkeley DB file.
            1519039

            Is this issue related to the LSN reset? If I copy the file along with the journal folder, without doing an LSN reset, it works fine.

            • 3. Re: Unable to access large Berkeley DB file.
              Bogdan Coman-Oracle

              Why do you reset the LSNs?

               

              If you move the database to a new environment you have to remove the LSNs, and this also means a new cache. If you keep your current environment, you use the old cache, which is already populated with data.

               

              As I said, you can use db_stat -m before and after you reset the LSNs. This will tell you how much data was accessed from the cache and how many times you had to go to disk.
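              As a concrete illustration, the hit ratio falls out of two lines of that output, "Requested pages found in the cache" and "Requested pages not found in the cache" (the figures below are taken from the output posted later in this thread):

```python
# Sketch: derive the cache hit ratio from two db_stat -m counters.
pages_found = 19_000_000   # "Requested pages found in the cache", shown as 19M
pages_missed = 170_328     # "Requested pages not found in the cache"

hit_ratio = pages_found / (pages_found + pages_missed)
print(f"hit ratio: {hit_ratio:.1%}")  # about 99%, matching db_stat's own (99%)
```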

               

              When you start with a fresh environment, this is the method you can use to warm up the cache: Warming the memory pool

               

              Thanks,

              Bogdan

              • 4. Re: Unable to access large Berkeley DB file.
                1519039

                Thanks for your reply.

                 

                Actually, we have a desktop .NET app that fetches data from a database into CSV files, which are then loaded into a Berkeley DB file using the BDB SQL interface (pragma txn_bulk = 1; import cmd....). As we want to avoid copying log files to the scanner, we then perform an LSN reset (using db_load -r lsn) and copy the database file to the scanner.

                Is there a difference between an LSN reset and an LSN removal? How can I remove LSNs?

                 

                Since we are not performing any database transaction or query before transferring the file to the scanner, I am not sure how the cache is being used in this scenario.

                On the scanner this file is accessed by another VB.Net app. The initial query that is causing the issue, and is very slow, simply copies the table schema into a dataset without fetching any data.

                 

                Thanks,

                AB

                • 5. Re: Unable to access large Berkeley DB file.
                  Bogdan Coman-Oracle

                  > Is there a difference between an LSN reset and an LSN removal? How can I remove LSNs?

                   

                  They are one and the same thing.

                   

                  Can you locate the db_stat.exe file in the installation directory? Would you mind running "db_stat -m" in the environment directory where you copy the database, once after the initial query and again later, after the application has run for a while, and posting the results here? This would tell us whether the issue is a cold cache. If it is, then the option we have is to populate the cache, and the initial query is probably doing just that. You may want to run a script that not only copies the database but also runs a query.

                   

                  Thanks,

                  Bogdan

                  • 6. Re: Unable to access large Berkeley DB file.
                    1519039

                    Thanks for the reply.

                     

                    1) I ran db_stat -m before copying the database file to the scanner (Windows Mobile 6 / Pocket PC) and got the following output.

                    ===============================

                    9MB 792KB    Total cache size

                    1    Number of caches

                    1    Maximum number of caches

                    9MB 792KB    Pool individual cache size

                    9MB 792KB    Pool individual cache max

                    0    Maximum memory-mapped file size

                    0    Maximum open file descriptors

                    0    Maximum sequential buffer writes

                    0    Sleep after writing maximum sequential buffers

                    3    Requested pages mapped into the process' address space

                    19M    Requested pages found in the cache (99%)

                    170328    Requested pages not found in the cache

                    72866    Pages created in the cache

                    170328    Pages read into the cache

                    83351    Pages written from the cache to the backing file

                    196429    Clean pages forced from the cache

                    44327    Dirty pages forced from the cache

                    0    Dirty pages written by trickle-sync thread

                    2438    Current total page count

                    2438    Current clean page count

                    0    Current dirty page count

                    1031    Number of hash buckets used for page location

                    1031    Number of mutexes for the hash buckets

                    4096    Assumed page size used

                    49M    Total number of times hash chains searched for a page (49574387)

                    8    The longest hash chain searched for a page

                    0    Total number of hash chain entries checked for page

                    0    The number of hash bucket locks that required waiting (0%)

                    0    The maximum number of times any hash bucket lock was waited for (0%)

                    0    The number of region locks that required waiting (0%)

                    0    The number of buffers frozen

                    0    The number of buffers thawed

                    0    The number of frozen buffers freed

                    0    The number of outdated intermediate versions reused

                    243202    The number of page allocations

                    494413    The number of hash buckets examined during allocations

                    11    The maximum number of hash buckets examined for an allocation

                    1645915    The number of pages examined during allocations

                    25    The max number of pages examined for an allocation

                    0    Threads waited on page I/O

                    0    The number of times a sync is interrupted

                    Pool File: MOBILECONFIG_SCANREF.db

                    4096    Page size

                    0    Requested pages mapped into the process' address space

                    19M    Requested pages found in the cache (99%)

                    170328    Requested pages not found in the cache

                    72866    Pages created in the cache

                    170328    Pages read into the cache

                    83351    Pages written from the cache to the backing file

                    =========================================

                     

                    2) Then I reset the LSNs and ran db_stat -m again:

                    a) db_load -r lsn database.db

                    b) db_stat -m

                    output

                    =========

                    9MB 792KB    Total cache size

                    1    Number of caches

                    1    Maximum number of caches

                    9MB 792KB    Pool individual cache size

                    9MB 792KB    Pool individual cache max

                    0    Maximum memory-mapped file size

                    0    Maximum open file descriptors

                    0    Maximum sequential buffer writes

                    0    Sleep after writing maximum sequential buffers

                    3    Requested pages mapped into the process' address space

                    19M    Requested pages found in the cache (99%)

                    170328    Requested pages not found in the cache

                    72866    Pages created in the cache

                    170328    Pages read into the cache

                    83351    Pages written from the cache to the backing file

                    196429    Clean pages forced from the cache

                    44327    Dirty pages forced from the cache

                    0    Dirty pages written by trickle-sync thread

                    2438    Current total page count

                    2438    Current clean page count

                    0    Current dirty page count

                    1031    Number of hash buckets used for page location

                    1031    Number of mutexes for the hash buckets

                    4096    Assumed page size used

                    49M    Total number of times hash chains searched for a page (49574387)

                    8    The longest hash chain searched for a page

                    0    Total number of hash chain entries checked for page

                    0    The number of hash bucket locks that required waiting (0%)

                    0    The maximum number of times any hash bucket lock was waited for (0%)

                    0    The number of region locks that required waiting (0%)

                    0    The number of buffers frozen

                    0    The number of buffers thawed

                    0    The number of frozen buffers freed

                    0    The number of outdated intermediate versions reused

                    243202    The number of page allocations

                    494413    The number of hash buckets examined during allocations

                    11    The maximum number of hash buckets examined for an allocation

                    1645915    The number of pages examined during allocations

                    25    The max number of pages examined for an allocation

                    0    Threads waited on page I/O

                    0    The number of times a sync is interrupted

                    Pool File: MOBILECONFIG_SCANREF.db

                    4096    Page size

                    0    Requested pages mapped into the process' address space

                    19M    Requested pages found in the cache (99%)

                    170328    Requested pages not found in the cache

                    72866    Pages created in the cache

                    170328    Pages read into the cache

                    83351    Pages written from the cache to the backing file

                     

                    ======================

                     

                    3) Then I copied the database file to the scanner, leaving behind the journal folder.

                    As there is no journal folder on the scanner, db_stat -m did not work.

                     

                    4) I started the VB.Net application on the scanner to fetch data from the database file. It took around 10 minutes to start.

                    I ran db_stat -m again, but it failed with "db_stat: DB_ENV->open: .: No such file or directory", as there was only one log file in the journal folder (no environment files).

                     

                    Note: The VB.Net app connects to the database using System.Data.SQLite.dll and SQLite.Interop.085.dll.