I have a question about caching.
The environment is transactional and uses the default configuration options.
I have a primary index declared like this:
private PrimaryIndex<Long, EnSomeEntity> pkSomeEntityById;
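For context, the index comes from the usual DPL EntityStore setup, roughly like this (the directory path and store name below are just placeholders):

import java.io.File;
import com.sleepycat.je.Environment;
import com.sleepycat.je.EnvironmentConfig;
import com.sleepycat.persist.EntityStore;
import com.sleepycat.persist.StoreConfig;

// Transactional environment, otherwise default options.
EnvironmentConfig envConfig = new EnvironmentConfig();
envConfig.setTransactional(true);
envConfig.setAllowCreate(true);
Environment env = new Environment(new File("/data/je-env"), envConfig); // placeholder path

// Transactional entity store holding the EnSomeEntity records.
StoreConfig storeConfig = new StoreConfig();
storeConfig.setTransactional(true);
storeConfig.setAllowCreate(true);
EntityStore store = new EntityStore(env, "someStore", storeConfig); // placeholder name

// Primary index keyed by the entity's Long primary key.
pkSomeEntityById = store.getPrimaryIndex(Long.class, EnSomeEntity.class);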
I recorded about 100,000,000 (one hundred million) entities in it, a huge amount of data: roughly 80 GB.
Here is what I did and what I observed (the count call itself is sketched below the list):
1. I ran a count on the index. It returned the full one-hundred-million count, but it took about 15 minutes.
2. I ran the count again and it returned in 8 seconds.
3. I restarted the machine and ran the count again; it took about 15 minutes once more.
4. I ran the count one more time (still after the restart) and it returned in 8 seconds.
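The count call itself is just the DPL count() on the primary index; a sketch of how I measured it (the timing code is only for illustration):

long start = System.currentTimeMillis();
// EntityIndex.count() traverses the index, so with a cold cache
// it has to read most of the Btree from disk.
long n = pkSomeEntityById.count(); // ~15 minutes cold, ~8 seconds warm
long elapsedMs = System.currentTimeMillis() - start;
System.out.println(n + " records counted in " + elapsedMs + " ms");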
Is this caching handled by Berkeley DB, or is it the operating system's disk cache? (The disk is not an SSD; it's a 'normal' SATA drive.)
What's the best way to handle caching with this huge amount of data?
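For example, should I give the JE cache an explicit budget instead of the default (a percentage of the JVM heap)? A sketch of what I mean (the 4 GB figure is just an illustration, not a recommendation):

import com.sleepycat.je.EnvironmentConfig;

EnvironmentConfig envConfig = new EnvironmentConfig();
envConfig.setTransactional(true);
envConfig.setAllowCreate(true);
// Explicit cache budget in bytes (the default is a percentage of -Xmx).
envConfig.setCacheSize(4L * 1024 * 1024 * 1024); // 4 GB, illustrative
// Or as a percentage of the max heap:
// envConfig.setCachePercent(75);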