Using the bulk operation will not save the calls to __dbc_put, since one is required for every key/data pair put; but as you have seen, it does save calls to __db_put_pp and __db_put. It can improve performance significantly if your working dataset fits almost entirely in memory.
As for the 'slightly slower' result you see, I would like to know how you generate your data. If you are using different data (for example, randomly generated), it is common to see different performance. Even on the same data, a very slight degradation is common because of variations in the system's state.
So, bulk operations may be faster, but not by much when there are still many I/O operations. For your case, I suggest sorting your data before putting it, and there is no need to sort all of your records at once. Suppose you have 1,000,000 records to put: you can sort the first 10,000 items and put them, then the next 10,000, and so on. You can balance the number of groups against the number of records in each group. Just note that the sort algorithm should be stable, meaning that if two key/data pairs compare equal, the sort must not change their relative order. Also, the bt_compare (see DB->set_bt_compare) and dup_compare (see DB->set_dup_compare) functions should be used for comparing keys and for comparing data items under the same key, respectively.
Winter, Oracle Berkeley DB