1 Reply · Latest reply: Feb 7, 2014 7:56 AM by Winter Zhang

Bulk write performance

Duncan Grisby Newbie

Hello,

I have some code using the C API that accesses a btree DUPSORT database. One part of it writes millions of small entries, each about 20 or 30 bytes for both the keys and values. Profiling shows that quite a lot of time is going into the many cursor put calls for all the insertions, so I tried using the bulk insert operations. I used a single DBT buffer and DB_MULTIPLE_KEY in the call to db->put. (As an aside, it was totally unclear from the documentation whether it was necessary/best to use that, or whether DB_MULTIPLE with separate key and data DBTs would work.)
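
In case it helps, this is roughly the shape of the bulk-buffer code (a simplified sketch rather than my exact code; dbp, txn and the record array are placeholders, and error handling is trimmed):

    /*
     * Sketch: pack key/data pairs into one bulk buffer with the
     * DB_MULTIPLE_* macros and hand them to DB->put() in a single call.
     */
    #include <string.h>
    #include <db.h>

    struct rec {
        void   *key;
        size_t  key_len;
        void   *data;
        size_t  data_len;
    };

    static int
    bulk_put(DB *dbp, DB_TXN *txn, const struct rec *recs, size_t nrecs)
    {
        /* Static to keep it off the stack; must be big enough for all
         * pairs plus the offset bookkeeping the macros append. */
        static char buf[1024 * 1024];
        DBT bulk;
        void *p;
        size_t i;

        memset(&bulk, 0, sizeof(bulk));
        bulk.data = buf;
        bulk.ulen = sizeof(buf);
        bulk.flags = DB_DBT_USERMEM;

        DB_MULTIPLE_WRITE_INIT(p, &bulk);
        for (i = 0; i < nrecs; i++) {
            DB_MULTIPLE_KEY_WRITE_NEXT(p, &bulk,
                recs[i].key, recs[i].key_len,
                recs[i].data, recs[i].data_len);
            if (p == NULL)      /* buffer full: would need to flush and restart */
                return (DB_BUFFER_SMALL);
        }

        /* With DB_MULTIPLE_KEY both the keys and the data live in the
         * key DBT; the data parameter is unused. */
        return (dbp->put(dbp, txn, &bulk, NULL, DB_MULTIPLE_KEY));
    }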

Much to my surprise, performance did not improve at all. In fact, it appeared to get slightly slower. Profiling with valgrind/callgrind shows that the previous version of the code calls from my code into __dbc_put_pp 852,000 times. The new version using DB_MULTIPLE_KEY calls __db_put_pp 6,442 times (which is how many input items there are in the test), but that then calls __db_put, which in turn calls __dbc_put 852,000 times. The bulk operation doesn't seem to have saved me any time at all, and the initial creation of the bulk buffer has cost me a bit due to the extra memcpy calls.

Should I expect the bulk operations to be faster?  Have I done something wrong?

Thanks,

Duncan.

  • 1. Re: Bulk write performance
    Winter Zhang Explorer

    Hi Duncan,

    Using the bulk operation will not save the calls to __dbc_put, since one is required for every key/data pair that is put; as you have seen, it only saves the calls to __db_put_pp and __db_put. It can improve performance considerably when your working dataset fits almost entirely in the cache.

    As for the 'slightly slower' result you see, I would like to know how you generate your data. If you are using different data (for example, data generated randomly), then it is common to see different performance. Even with the same data, a very slight degradation is common simply because of the system's state.

    So, a bulk operation may be faster, but not by much when there are still many I/O operations. For your case, I suggest you sort your data before putting it, and there is no need to sort all of your records at once. Suppose you have 1,000,000 records to put: you can sort the first 10,000 items and put them, then the next 10,000, and so on. You can balance the number of groups against the number of records in each group. The one thing to note is that the sort must be stable: if two key/data pairs compare equal, the sort should not change their relative order. Also, the bt_compare (see DB->set_bt_compare) and dup_compare (see DB->set_dup_compare) functions should be used for comparing keys and for comparing data items under the same key.
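
    To make the idea concrete, here is a rough sketch of such a pre-sort (illustrative names only; it assumes the default lexicographic comparison, so substitute your own bt_compare/dup_compare logic if you have set custom comparators):

        /*
         * Illustrative sketch only: sort the records one group at a time
         * before inserting them, keeping the sort stable by breaking ties
         * on the original position.  The comparison below mimics Berkeley
         * DB's default byte-by-byte ordering.
         */
        #include <stdlib.h>
        #include <string.h>

        #define GROUP_SIZE 10000

        struct rec {
            void   *key;
            size_t  key_len;
            void   *data;
            size_t  data_len;
            size_t  seq;        /* original position, keeps the sort stable */
        };

        /* Default-style ordering: byte by byte, shorter prefix sorts first. */
        static int
        bytes_cmp(const void *a, size_t alen, const void *b, size_t blen)
        {
            size_t min = alen < blen ? alen : blen;
            int r = memcmp(a, b, min);

            if (r != 0)
                return (r);
            return ((alen > blen) - (alen < blen));
        }

        static int
        rec_cmp(const void *pa, const void *pb)
        {
            const struct rec *a = pa, *b = pb;
            int r;

            r = bytes_cmp(a->key, a->key_len, b->key, b->key_len);         /* bt_compare */
            if (r == 0)
                r = bytes_cmp(a->data, a->data_len, b->data, b->data_len); /* dup_compare */
            if (r == 0)         /* equal pairs: keep their original order */
                r = (a->seq > b->seq) - (a->seq < b->seq);
            return (r);
        }

        /* Sort one group of at most GROUP_SIZE records; then hand the group
         * to your bulk put (for example, a bulk_put() like the one sketched
         * in the question). */
        static void
        sort_group(struct rec *recs, size_t n)
        {
            size_t i;

            for (i = 0; i < n; i++)
                recs[i].seq = i;    /* qsort() alone is not stable */
            qsort(recs, n, sizeof(recs[0]), rec_cmp);
        }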

    Regards,

    Winter, Oracle Berkeley DB
