We are trying to create a database with 2.5M entries, an average key size of 34 bytes, and an average value size of 55 bytes, which works out to roughly 225 MB of raw data.
We are using JE 5.0.55 and 4 cleaner threads.
Initially we had a max log file size of 128 MB, max node entries = 512, and a minimum utilization of 75%. After the initial load we had around 1 GB of data on disk and noticed that the cleaner was running constantly (new files were being created all the time).
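For reference, the settings described above map onto JE properties along these lines (a sketch in je.properties form; the values are the ones from this test, and the parameter names are the standard JE configuration names):

```properties
# Max log (.jdb) file size: 128 MB
je.log.fileMax=134217728
# Max entries per Btree node
je.nodeMaxEntries=512
# Cleaner target utilization: 75%
je.cleaner.minUtilization=75
# Number of cleaner threads
je.cleaner.threads=4
```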
We have two questions:
a) Why is the utilization factor so low (~20%)?
b) Why is the cleaner not able to detect that it will not be able to shrink the database any further? We tried the same test with 4.0.92 and there the cleaner stopped after a few runs (although the utilization was not much better).
We also tried a max log file size of 16 MB and max node entries = 128 (in separate tests); the first brought the data down to ~650 MB, but the cleaner still runs all the time.
Thanks for the quick reply. The cache size was huge for all these tests: 25 GB, so it should be able to hold all the data, although we are using EVICT_LN.
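For context, a 25 GB cache would be configured along these lines (a sketch; `je.maxMemory` is the standard JE cache-size parameter, while EVICT_LN is a CacheMode set through the API, e.g. DatabaseConfig.setCacheMode(CacheMode.EVICT_LN), not in je.properties):

```properties
# 25 GB JE cache, in bytes (25 * 1024^3)
je.maxMemory=26843545600
```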
We are using a cache shared with other tables, which is why it is so big, but all these tests were run against this table only, to isolate the issue.
Thanks a lot for rapidly looking into this Mark, much appreciated.
The problem was related to the checkpointer interval: it was set to the default of 20 MB, which generated so much checkpoint data that the target utilization was never met.
I increased it to 128 MB and that solved the issue. With 512 MB the database was even smaller (from 710 MB down to 620 MB).
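For anyone hitting the same problem, the fix described above corresponds to raising `je.checkpointer.bytesInterval` (20 MB by default, as noted above), e.g.:

```properties
# Checkpoint every 128 MB of log written instead of the 20 MB default
je.checkpointer.bytesInterval=134217728
```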