locktimeout on secondary database inserts

Hello all,
My current setup uses BDB JE 4.1.10. I have 10 threads concurrently writing Keyword objects to the database.
The nature of the data is that the secondary keys, stores and items, have many duplicates: many Keyword objects map to the same store and item entries.
je.lock.nLockTables (EnvironmentConfig.LOCK_N_LOCK_TABLES) is set to 7.
@Entity
public class Keyword implements Serializable {
    @PrimaryKey
    private String key;
    @SecondaryKey(relate = Relationship.MANY_TO_MANY)
    private List<Long> stores = new ArrayList<Long>();
    @SecondaryKey(relate = Relationship.MANY_TO_MANY)
    private List<Long> items = new ArrayList<Long>();
}
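For reference, this is how the environment is tuned; a minimal sketch of how the two knobs mentioned above (lock timeout and number of lock tables) can be adjusted, assuming the JE 4.1.x duration-string syntax. The values here are illustrative, not a recommendation.

```java
import java.io.File;
import com.sleepycat.je.Environment;
import com.sleepycat.je.EnvironmentConfig;

public class EnvSetup {
    public static Environment open(File envHome) {
        EnvironmentConfig envConfig = new EnvironmentConfig();
        envConfig.setAllowCreate(true);
        // je.lock.timeout: the stack trace shows 2000 ms in effect here.
        envConfig.setConfigParam(EnvironmentConfig.LOCK_TIMEOUT, "2000 ms");
        // je.lock.nLockTables: currently 7 in this setup.
        envConfig.setConfigParam(EnvironmentConfig.LOCK_N_LOCK_TABLES, "7");
        return new Environment(envHome, envConfig);
    }
}
```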
About 10% of the secondary database inserts (out of 100,000) fail with a LockTimeoutException.
Looking at the stack trace, I understand that thread 6 waited 2000 ms to obtain the lock before giving up, and three more threads are waiting behind it.
Essentially, the operation that the owner thread (thread 10) is performing takes more than 2000 ms. Thread 6 was trying to insert a record into the items secondary database.
Stacktrace:
com.sleepycat.je.LockTimeoutException: (JE 4.1.10) Lock expired. Locker 786527582 -1_fileChannelTaskExecutor-6_ThreadLocker: waited for lock on database=persist#faceted_store#com.xyzq.seo.bdb.entity.Keyword#items LockAddr:43409842 node=281964 type=WRITE grant=WAIT_NEW timeoutMillis=2000 startTime=1538759059393 endTime=1538759061393
Owners: [<LockInfo locker="786774600 -1_fileChannelTaskExecutor-10_ThreadLocker" type="WRITE"/>]
Waiters: [<LockInfo locker="1270863098 -1_fileChannelTaskExecutor-9_ThreadLocker" type="WRITE"/>, <LockInfo locker="1244207474 -1_fileChannelTaskExecutor-4_ThreadLocker" type="WRITE"/>, <LockInfo locker="1391625295 -1_fileChannelTaskExecutor-1_ThreadLocker" type="WRITE"/>]
    at com.sleepycat.je.txn.LockManager.newLockTimeoutException(LockManager.java:608)
    at com.sleepycat.je.txn.LockManager.makeTimeoutMsgInternal(LockManager.java:567)
    at com.sleepycat.je.txn.SyncedLockManager.makeTimeoutMsg(SyncedLockManager.java:75)
    at com.sleepycat.je.txn.LockManager.lockInternal(LockManager.java:385)
    at com.sleepycat.je.txn.LockManager.lock(LockManager.java:272)
    at com.sleepycat.je.txn.BasicLocker.lockInternal(BasicLocker.java:134)
    at com.sleepycat.je.txn.Locker.lock(Locker.java:453)
    at com.sleepycat.je.dbi.CursorImpl.lockDupCountLN(CursorImpl.java:2768)
    at com.sleepycat.je.tree.Tree.insertDuplicate(Tree.java:2847)
    at com.sleepycat.je.tree.Tree.insert(Tree.java:2488)
    at com.sleepycat.je.dbi.CursorImpl.put(CursorImpl.java:1209)
    at com.sleepycat.je.Cursor.putAllowPhantoms(Cursor.java:1799)
    at com.sleepycat.je.Cursor.putNoNotify(Cursor.java:1756)
    at com.sleepycat.je.Cursor.putNotify(Cursor.java:1689)
    at com.sleepycat.je.Cursor.putInternal(Cursor.java:1626)
    at com.sleepycat.je.SecondaryDatabase.insertKey(SecondaryDatabase.java:984)
    at com.sleepycat.je.SecondaryDatabase.updateSecondary(SecondaryDatabase.java:909)
    at com.sleepycat.je.SecondaryTrigger.databaseUpdated(SecondaryTrigger.java:41)
    at com.sleepycat.je.Database.notifyTriggers(Database.java:2016)
    at com.sleepycat.je.Cursor.putNotify(Cursor.java:1702)
    at com.sleepycat.je.Cursor.putInternal(Cursor.java:1626)
    at com.sleepycat.je.Database.putInternal(Database.java:1186)
    at com.sleepycat.je.Database.put(Database.java:1058)
    at com.sleepycat.persist.PrimaryIndex.putNoReturn(PrimaryIndex.java:479)
    at com.sleepycat.persist.PrimaryIndex.putNoReturn(PrimaryIndex.java:442)
    at com.xyzq.bdb.cache.da.impl.BDBDataAccessor.create(BDBDataAccessor.java:77)
I tried the following:
- Reduced the lock timeout, hoping that the winner would release the lock as soon as it was done inserting its records.
- Caught the LockTimeoutException and retried inserting the keyword later. This works, but it is manual and takes time: about 10% of keywords fail with a LockTimeoutException, roughly 10k out of 100k.
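For the second workaround, a bounded retry with exponential backoff is the usual shape. A minimal sketch: in the real code the caught type would be com.sleepycat.je.LockConflictException (the parent of LockTimeoutException); a plain RuntimeException stands in here so the sketch is self-contained.

```java
import java.util.concurrent.Callable;

public class RetryHelper {
    // Runs op, retrying up to maxRetries times on a conflict, with
    // exponential backoff between attempts. Rethrows on final failure.
    public static <T> T withRetries(Callable<T> op, int maxRetries, long backoffMillis)
            throws Exception {
        for (int attempt = 0; ; attempt++) {
            try {
                return op.call();
            } catch (RuntimeException e) { // stand-in for LockConflictException
                if (attempt >= maxRetries) {
                    throw e;
                }
                // Back off 1x, 2x, 4x, ... before retrying.
                Thread.sleep(backoffMillis * (1L << attempt));
            }
        }
    }
}
```

Wrapping each primaryIndex.putNoReturn(keyword) call in withRetries(...) automates the manual retry pass, at the cost of slowing the contending writers down.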
I have a few questions:
Is there a limit on the number of threads that can write to the DB, or does it depend on the data?
Would it help if I modeled the entities as pure key-value pairs instead of using secondary databases?
Keyword
- key

Item
- id
- key [reference to Keyword obj.]

Store
- id
- key [reference to Keyword obj.]
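The inverted model above could look like this with DPL annotations; a sketch under the assumption that each Item row carries a MANY_TO_ONE reference back to its Keyword, so inserts touch one secondary entry per row instead of a shared duplicate set (the Store entity would be analogous).

```java
import com.sleepycat.persist.model.Entity;
import com.sleepycat.persist.model.PrimaryKey;
import com.sleepycat.persist.model.Relationship;
import com.sleepycat.persist.model.SecondaryKey;

@Entity
public class Item {
    @PrimaryKey
    private long id;

    // Many Item rows point at one Keyword; relatedEntity enforces
    // that the referenced Keyword exists (foreign-key constraint).
    @SecondaryKey(relate = Relationship.MANY_TO_ONE, relatedEntity = Keyword.class)
    private String key;
}
```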
Thanks!
- K