
Berkeley DB Family


Working with large data

529154, Aug 21 2006 (edited Aug 21 2006)
I have to create a database with a specific distribution of key and data sizes. Keys are a few bytes each, while data items range from a few bytes to several megabytes, with an average size near 64 KB. The overall size of the database, once filled with key/data pairs, is several gigabytes. You can think of our key/data pairs as a context index (inverted file) over a large set of Russian/English texts.

Could you recommend the best way to configure such a data store? I mean the best random read speed for key/data pairs in the database, together with good enough write speed (for updating the "context index").
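For a workload like this, the usual Berkeley DB tuning knobs are the environment cache size and the database page size. Below is a minimal configuration sketch using the Berkeley DB C API; the path, database name, and sizes (512 MB cache, 64 KB pages) are illustrative assumptions, not recommendations from this thread.

```c
#include <db.h>
#include <stdio.h>

int main(void) {
    DB_ENV *env;
    DB *db;
    int ret;

    /* Create an environment with a large cache (here 512 MB) so that
       hot index pages stay in memory for random reads. */
    if ((ret = db_env_create(&env, 0)) != 0)
        return 1;
    env->set_cachesize(env, 0, 512 * 1024 * 1024, 1);
    if ((ret = env->open(env, "/tmp/bdb-env",
                         DB_CREATE | DB_INIT_MPOOL | DB_INIT_LOCK |
                         DB_INIT_LOG | DB_INIT_TXN, 0)) != 0)
        return 1;

    /* Use the maximum page size (64 KB), matching the average value
       size, so typical records fit on one page instead of spilling
       into overflow pages. */
    if ((ret = db_create(&db, env, 0)) != 0)
        return 1;
    db->set_pagesize(db, 64 * 1024);
    if ((ret = db->open(db, NULL, "index.db", NULL,
                        DB_BTREE, DB_CREATE | DB_AUTO_COMMIT, 0)) != 0)
        return 1;

    db->close(db, 0);
    env->close(env, 0);
    return 0;
}
```

Values much larger than the page size are stored on overflow pages regardless, so measuring with a representative value-size distribution is the only reliable way to settle on these numbers.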

Comments

Alanc-Oracle

The <threads.h> header from C11 is only available on Solaris 11.4. Older versions of Solaris do not have C11 support and require using the POSIX threads API instead.


Post Details

Locked on Sep 18 2006
Added on Aug 21 2006
1 comment
951 views