
Berkeley DB Family


cache fragmentation?

1cd5681a-9bba-4218-bea8-aeeb21102e86 · Dec 14 2016 (edited Dec 14 2016)

We have been using BDB Java Edition for five years now, and it's a great product; it serves our purpose very well.

My question is about cache fragmentation. We have a BDB database with 19 million records and configure 10 GB of heap space, of which 9 GB is consumed. If we make a copy of the database using two separate environments (reading from one and writing to the other), replace the old files with the new ones, and restart our application, the amount of memory consumed by the cache drops to 4 GB. Is there an explanation for this? Is there a way to compact the data without making a copy (we have dozens of servers, each with a BDB)? What's also interesting: if I load the cache, dump stats every 100k entries, and chart the result, not all entries are fragmented (see the attached image). A simplified sketch of our load-and-dump loop is below.
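For reference, a simplified sketch of the load-and-dump loop, not the exact production code; the environment path and database name are placeholders, and it assumes the standard com.sleepycat.je API:

```java
import java.io.File;

import com.sleepycat.je.Cursor;
import com.sleepycat.je.Database;
import com.sleepycat.je.DatabaseConfig;
import com.sleepycat.je.DatabaseEntry;
import com.sleepycat.je.Environment;
import com.sleepycat.je.EnvironmentConfig;
import com.sleepycat.je.OperationStatus;
import com.sleepycat.je.StatsConfig;

public class CacheLoadStats {
    public static void main(String[] args) throws Exception {
        EnvironmentConfig envConfig = new EnvironmentConfig();
        envConfig.setCacheSize(10L * 1024 * 1024 * 1024); // 10 GB cache
        // Hypothetical path and database name; substitute your own.
        Environment env = new Environment(new File("/path/to/env"), envConfig);
        Database db = env.openDatabase(null, "myDb", new DatabaseConfig());

        DatabaseEntry key = new DatabaseEntry();
        DatabaseEntry data = new DatabaseEntry();
        long count = 0;

        // Scan every record to pull it into the cache, dumping the
        // cache stats every 100k entries.
        Cursor cursor = db.openCursor(null, null);
        try {
            while (cursor.getNext(key, data, null) == OperationStatus.SUCCESS) {
                if (++count % 100_000 == 0) {
                    System.out.println("entries=" + count);
                    System.out.println(env.getStats(new StatsConfig()));
                }
            }
        } finally {
            cursor.close();
        }

        db.close();
        env.close();
    }
}
```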

Thanks in advance

Attachment: Screen Shot 2016-12-14 at 11.16.24 AM.png (chart of cache stats sampled every 100k entries during the load)

Original stats:

Cache: Current size, allocations, and eviction activity.

adminBytes=143,773

avgBatchCACHEMODE=0

avgBatchCRITICAL=0

avgBatchDAEMON=0

avgBatchEVICTORTHREAD=0

avgBatchMANUAL=0

cacheTotalBytes=9,438,968,712

dataBytes=9,435,678,791

lockBytes=420

nBINsEvictedCACHEMODE=0

nBINsEvictedCRITICAL=0

nBINsEvictedDAEMON=0

nBINsEvictedEVICTORTHREAD=0

nBINsEvictedMANUAL=0

nBINsFetch=4,017,356

nBINsFetchMiss=2,008,972

nBINsStripped=0

nBatchesCACHEMODE=0

nBatchesCRITICAL=0

nBatchesDAEMON=0

nBatchesEVICTORTHREAD=0

nBatchesMANUAL=0

nCachedBINs=2,008,972

nCachedUpperINs=81,409

nEvictPasses=0

nINCompactKey=884,235

nINNoTarget=1,164

nINSparseTarget=1,210,092

nLNsFetch=19,103,013

nLNsFetchMiss=19,102,106

nNodesEvicted=0

nNodesScanned=0

nNodesSelected=0

nRootNodesEvicted=0

nThreadUnavailable=0

nUpperINsEvictedCACHEMODE=0

nUpperINsEvictedCRITICAL=0

nUpperINsEvictedDAEMON=0

nUpperINsEvictedEVICTORTHREAD=0

nUpperINsEvictedMANUAL=0

nUpperINsFetch=6,350,281

nUpperINsFetchMiss=81,404

requiredEvictBytes=0

sharedCacheTotalBytes=0

Stats from the copied database:

Cache: Current size, allocations, and eviction activity.

adminBytes=61,341

avgBatchCACHEMODE=0

avgBatchCRITICAL=0

avgBatchDAEMON=0

avgBatchEVICTORTHREAD=0

avgBatchMANUAL=0

cacheTotalBytes=4,313,036,592

dataBytes=4,309,829,103

lockBytes=420

nBINsEvictedCACHEMODE=0

nBINsEvictedCRITICAL=0

nBINsEvictedDAEMON=0

nBINsEvictedEVICTORTHREAD=0

nBINsEvictedMANUAL=0

nBINsFetch=301,191

nBINsFetchMiss=150,409

nBINsStripped=0

nBatchesCACHEMODE=0

nBatchesCRITICAL=0

nBatchesDAEMON=0

nBatchesEVICTORTHREAD=0

nBatchesMANUAL=0

nCachedBINs=150,409

nCachedUpperINs=1,200

nEvictPasses=0

nINCompactKey=151,607

nINNoTarget=11

nINSparseTarget=6

nLNsFetch=19,100,774

nLNsFetchMiss=19,100,389

nNodesEvicted=0

nNodesScanned=0

nNodesSelected=0

nRootNodesEvicted=0

nThreadUnavailable=0

nUpperINsEvictedCACHEMODE=0

nUpperINsEvictedCRITICAL=0

nUpperINsEvictedDAEMON=0

nUpperINsEvictedEVICTORTHREAD=0

nUpperINsEvictedMANUAL=0

nUpperINsFetch=304,360

nUpperINsFetchMiss=1,195

requiredEvictBytes=0

sharedCacheTotalBytes=0

This post has been answered by Greybird-Oracle on Dec 14 2016

Comments

Greybird-Oracle
Answer

Hi,

I'm glad to hear that JE has been working well for you in general.

At first I wondered whether all records were loaded into cache in the two cases you mention, but based on the nLNsFetchMiss stats (they are roughly the same), it looks like all records are loaded into cache. Please confirm that you've read all records into cache in both cases, but I'll assume that you have.

I see that before the copy, nCachedBINs=2,008,972, and after the copy, nCachedBINs=150,409. BINs are the bottom-level Btree nodes, each holding on the order of 100 records. Let's say you have 19,100,774 records (based on nLNsFetch=19,100,774 after the copy). That means before the copy there are about 9.5 records per BIN, and after the copy there are about 127 records per BIN. We expect roughly 127 per BIN after the copy, because the copy inserts in key order and packs each BIN full.
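Spelled out, a back-of-the-envelope check using the numbers quoted above:

```java
public class RecordsPerBin {
    public static void main(String[] args) {
        long records = 19_100_774L;   // nLNsFetch after the copy
        long binsBefore = 2_008_972L; // nCachedBINs, original environment
        long binsAfter = 150_409L;    // nCachedBINs, copied environment

        // ~9.5 records/BIN before the copy, ~127 after.
        System.out.printf("before: %.1f records/BIN%n", (double) records / binsBefore);
        System.out.printf("after:  %.1f records/BIN%n", (double) records / binsAfter);
    }
}
```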

This is indeed fragmentation of the Btree, as you guessed. It can be caused by doing many record deletions while leaving some records in each range of 100 or so, which prevents the entire BIN from being deleted. The result is BINs that contain only a small number of records. If you have an access pattern like this, that explains it.

Unfortunately, JE does not do automatic compaction of the Btree (sometimes called reverse splits). Currently the only way to compact is to copy the records from one Database to another, or (equivalently) to dump and reload.
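For later readers, a minimal sketch of that copy approach: scan the fragmented Database with a cursor and insert each record into a fresh Database in a second Environment. The paths and database name are placeholders.

```java
import java.io.File;

import com.sleepycat.je.Cursor;
import com.sleepycat.je.Database;
import com.sleepycat.je.DatabaseConfig;
import com.sleepycat.je.DatabaseEntry;
import com.sleepycat.je.Environment;
import com.sleepycat.je.EnvironmentConfig;
import com.sleepycat.je.OperationStatus;

public class CompactByCopy {
    public static void main(String[] args) throws Exception {
        // Source: open read-only so the copy cannot disturb it.
        EnvironmentConfig srcEnvCfg = new EnvironmentConfig();
        srcEnvCfg.setReadOnly(true);
        Environment srcEnv = new Environment(new File("/path/to/src-env"), srcEnvCfg);
        DatabaseConfig srcDbCfg = new DatabaseConfig();
        srcDbCfg.setReadOnly(true);
        Database srcDb = srcEnv.openDatabase(null, "myDb", srcDbCfg);

        // Target: a fresh environment for the compacted copy.
        EnvironmentConfig dstEnvCfg = new EnvironmentConfig();
        dstEnvCfg.setAllowCreate(true);
        Environment dstEnv = new Environment(new File("/path/to/dst-env"), dstEnvCfg);
        DatabaseConfig dstDbCfg = new DatabaseConfig();
        dstDbCfg.setAllowCreate(true);
        Database dstDb = dstEnv.openDatabase(null, "myDb", dstDbCfg);

        DatabaseEntry key = new DatabaseEntry();
        DatabaseEntry data = new DatabaseEntry();
        Cursor cursor = srcDb.openCursor(null, null);
        try {
            while (cursor.getNext(key, data, null) == OperationStatus.SUCCESS) {
                dstDb.put(null, key, data);
            }
        } finally {
            cursor.close();
        }

        dstDb.close();
        dstEnv.close();
        srcDb.close();
        srcEnv.close();
        // After verifying the copy, replace the old environment
        // directory with the new one and restart the application.
    }
}
```

Because the cursor scan visits records in key order, the target Btree is built with full BINs, which is why the copied environment's cache is so much smaller.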

--mark

Marked as Answer by 1cd5681a-9bba-4218-bea8-aeeb21102e86 · Sep 27 2020

Thanks, Mark, for the quick reply. Yes, all 19 million records were loaded into the cache in both cases.

The access pattern you describe is exactly what we have: lots of new records arrive daily and expire (are deleted) over time. Many expire within 30 days, but some stick around for long periods.

Your explanation of nCachedBINs is helpful; I did not understand what those numbers meant. We can now monitor those values and compact as necessary.

We will set up a periodic manual compaction process.
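A rough sketch of the kind of periodic check described here, hedged: the getter names mirror the stat names above (treat them as assumptions if your JE version differs), and the threshold is an arbitrary placeholder.

```java
import com.sleepycat.je.Database;
import com.sleepycat.je.Environment;
import com.sleepycat.je.EnvironmentStats;
import com.sleepycat.je.StatsConfig;

public class FragmentationCheck {
    // Hypothetical threshold: a well-packed BIN holds on the
    // order of 100+ records, so flag anything well below that.
    private static final double MIN_RECORDS_PER_BIN = 50.0;

    static boolean needsCompaction(Environment env, Database db) {
        EnvironmentStats stats = env.getStats(new StatsConfig());
        long cachedBins = stats.getNCachedBINs();
        if (cachedBins == 0) {
            return false; // Btree not loaded into cache; nothing to measure.
        }
        double recordsPerBin = (double) db.count() / cachedBins;
        return recordsPerBin < MIN_RECORDS_PER_BIN;
    }
}
```

This assumes, as in this thread, that all records have been read into the cache before the check runs; otherwise nCachedBINs undercounts the Btree.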

Thanks again!
