A short answer is that the maximum capacity / size of an in-memory Berkeley DB database is limited by the configured cache size. If, for example, you have configured a cache size of 512MB (524288KB) for your in-memory Berkeley DB application and your database has a page size of 4KB, the cache will be able to hold 131072 database pages; in other words, your database will be able to grow up to 512MB in size. See the Keeping the Database in Memory section in the Writing In-Memory Berkeley DB Applications guide.
Of course, if you are interested in how many records (key / data pairs) can be stored in the database, you'll also need to take into account the average size of the key and data items and the page fill factor. Review the Disk space requirements, Selecting a cache size and Database limits sections in the Berkeley DB Programmer's Reference Guide for more details.
Thanks for your answer.
But I ran into a new issue while configuring BDB as an in-memory database. Can you help me with that?
My issue is as follows:
My application needs an in-memory RDBMS with high access efficiency and SQL support. So I could not use the method proposed in the Writing In-Memory Berkeley DB Applications guide (am I right?) and had to use the sqlite3 SQL API instead. Using the sqlite3 library, I passed ':memory:' as the filename argument when calling sqlite3_open(). But that only created a private in-memory database that could not be accessed by multiple threads concurrently.
So I tried in the following way:
1. First, I set up a tmpfs mount on my OS:
chmod 1777 /dev/shm/tmp
mount --bind /dev/shm/tmp /tmp/bdb_test
2. Then I called sqlite3_open() with the argument /tmp/bdb_test/test.db, so Berkeley DB created the database test.db in memory, although the tmpfs is transparent to Berkeley DB itself.
But when I ran a multithreaded test, an error was logged: DB_LOCK->lock_put: Lock is no longer valid.
Can you give me some advice on how to create an in-memory database that can be accessed by multiple threads concurrently?
I'm encountering a similar issue: I can't programmatically increase the cache size once I'm using SQLite as a front-end, i.e., when I create an in-memory database using sqlite3 methods. Specifically, I'm working with the example file ex_sql_load.c and modified the underlying ex_sql_utils.c to load a larger table (over 100K rows) into memory.
I've even issued the following PRAGMAs and manipulated the heap size directly:
exec_sql_internal(db, "PRAGMA cache_size=-90000000;", silent);
exec_sql_internal(db, "PRAGMA automatic_index=0;", silent);
exec_sql_internal(db, "PRAGMA temp_store=MEMORY;", silent);
exec_sql_internal(db, "PRAGMA journal_mode=MEMORY;", silent);
sqlite3_db_config(db, SQLITE_CONFIG_HEAP, 10000000, 100);
I'm able to increase the page size when using the Berkeley DB API directly, but this bypasses SQLite's SQL engine, which I also need.
Has anyone encountered this and found a solution?
Settings: BerkeleyDB 5.13.15, gcc 4.1.2