Short version: cursor.getNext() is taking 1-3 ms per call, which is too long.
I have a database that I am using as a persistent queue. The keys are sorted in priority order, so I always want to pick the lowest key (in other words, not FIFO). This means that each time I dequeue, I need to open a new cursor and call getNext() exactly once. Furthermore, since other threads are constantly adding to the queue, I open the cursor in a new transaction with CursorConfig.READ_COMMITTED.
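To make the access pattern concrete: the JE Btree keeps keys sorted, so "dequeue" means taking the record with the lowest key. Since the verifier environment won't have the JE jar, here is a runnable in-memory analogue of that pattern using the JDK's ConcurrentSkipListMap (the class names and keys are illustrative, not from my actual code):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentSkipListMap;

public class PriorityDequeueDemo {
    public static void main(String[] args) {
        // Sorted map standing in for the JE database: lower key = higher priority.
        ConcurrentSkipListMap<Integer, String> queue = new ConcurrentSkipListMap<>();
        queue.put(30, "low priority");
        queue.put(10, "high priority");
        queue.put(20, "medium priority");

        // Analogue of: open cursor, call getNext() once, delete record, close cursor.
        Map.Entry<Integer, String> head = queue.pollFirstEntry();
        System.out.println(head.getKey() + " -> " + head.getValue());  // 10 -> high priority
    }
}
```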
I created a test where I filled the queue with 10,000 items and then dequeued them one at a time in a single thread. What I found was a little surprising. The screenshot linked below shows the results of profiling the dequeue operation.
At this point in the execution, getNext() had been called 2,438 times and had consumed 94.8% of execution time. What's interesting to me is that CursorImpl.getCurrentAlreadyLatched() had been called 425,935 times, and a little deeper, IN.isDirty() had been called 52,263,728 times! It sure looks like getNext() is actually spinning for the full 1-3 ms rather than blocking on anything.
Could this be a bug?
Is there tuning I can do to eliminate or reduce the cost?
Should I be doing this differently? In other words, should I do something other than open a cursor and call getNext() to pull out the entry with the lowest key?
Please let me know what additional information I can provide which would be helpful.
The JE methods in your profile -- in particular queueSlotDeletion -- tell me that when you call getNext(), JE is skipping over many deleted records before it finds one that is not deleted. And since you're using the database as a priority queue, it makes sense that many records in the key range you're reading would be deleted.
When you delete a record, its "slot" in the Btree isn't removed immediately; it's removed asynchronously via a couple of different mechanisms. It can't be removed immediately, as part of the record deletion operation, because that would complicate the implementation of transaction abort, which would then have to re-insert the slot.
So either 1) JE's asynchronous mechanisms for slot deletion are not keeping up with your deletion of records, or 2) the slots cannot be deleted by these mechanisms because a cursor is positioned on those slots, or on slots in the nearby key range (within 100 keys or so).
To check whether the problem is (1), I suggest calling Environment.compress() -- which synchronously tries to delete the slots of all deleted records -- just before opening the cursor and calling getNext(). A more subtle approach may be needed to avoid performance problems, but this is a good experiment for gathering more information about the problem.
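The mechanism described above can be illustrated with a runnable simplification (this is a toy model of leftover deleted slots, not JE's actual internals): deleted records leave dead slots behind in the sorted structure, a getNext()-style scan must skip all of them before reaching a live record, and a compress-like pass removes them so the next scan is cheap again.

```java
import java.util.Map;
import java.util.TreeMap;

public class SlotCompressionDemo {
    public static void main(String[] args) {
        // A null value marks a deleted record whose slot has not yet been
        // compressed away (a stand-in for JE's deferred slot deletion).
        TreeMap<Integer, String> tree = new TreeMap<>();
        for (int k = 0; k < 10; k++) tree.put(k, "v" + k);
        for (int k = 0; k < 8; k++) tree.put(k, null);  // logically delete keys 0..7

        // A getNext()-style scan must skip every dead slot first.
        int skipped = 0;
        Integer firstLive = null;
        for (Map.Entry<Integer, String> e : tree.entrySet()) {
            if (e.getValue() == null) { skipped++; continue; }
            firstLive = e.getKey();
            break;
        }
        System.out.println("skipped=" + skipped + " firstLive=" + firstLive);  // skipped=8 firstLive=8

        // The analogue of compress(): physically remove the dead slots.
        tree.values().removeIf(v -> v == null);
        System.out.println("slots after compress=" + tree.size());  // 2
    }
}
```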
Spot on, as usual, and thank you for the quick reply. I certainly didn't expect to get a reply on a Saturday!
I did some more data collection and charted my getNext() execution time. It was a classic sawtooth pattern, with execution time increasing linearly with deletes until the cleaner thread (?) ran. After adding env.compress() before opening the cursor, the chart was fairly flat (but noisy) and execution times were about 100 µs. That's what I need.
Our target environment runs on an SSD backed SAN with a decent sized battery backed cache. We intend to run with fsync on in production. We should be ok with being hard on storage if we need to in order to get the transaction rate up. What sort of performance problems might we see when running env.compress() so frequently?
The drawback of calling compress() in your app thread is that when it does a bunch of work, that work takes time and may add latency to the app operation. And if you do this in multiple threads, the other threads calling compress() will block while an expensive compress() call is already in progress.
I would first try setting EnvironmentConfig.COMPRESSOR_WAKEUP_INTERVAL to a smaller amount. The default wakeup interval -- how often the compressor checks for deleted slots that need compression -- is 5 s. The minimum is 1 s, so you could try that.
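A configuration fragment for this would look roughly as follows (a sketch, assuming the usual setConfigParam style for JE duration parameters; check the EnvironmentConfig javadoc for the exact duration syntax your JE version accepts):

```java
import com.sleepycat.je.EnvironmentConfig;

// Requires je.jar on the classpath; shown as a fragment, not a full program.
EnvironmentConfig envConfig = new EnvironmentConfig();
envConfig.setTransactional(true);
// Default compressor wakeup interval is 5 s; 1 s is the minimum.
envConfig.setConfigParam(EnvironmentConfig.COMPRESSOR_WAKEUP_INTERVAL, "1 s");
```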
If that's not sufficient then you may want to call compress() in your own background thread.
Or perhaps call compress() from your worker threads (the threads calling getNext()), but I suggest doing it in a way that prevents multiple threads from calling it concurrently, so they don't all block. If only one thread calls getNext(), then of course this isn't a problem.