Coherence ConcurrentMap methods thread safety

3613332
3613332 Member Posts: 11
edited Jan 8, 2018 11:53AM in Coherence Support

Hi!

My questions are:

1. Do put() and other mutating methods of ConcurrentMap implementations have to be atomic? Do we always have to call put() under a lock, or only when it is part of a compound transaction? The documentation is not clear about this.

2. If I have two identical instances of the same key class, are they considered the same key by lock(), or are they locked independently?

I would appreciate it if links are provided.

Thanks,

Boris

Best Answer

  • Mfalco-Oracle
    Mfalco-Oracle Member Posts: 503
    edited Jan 5, 2018 2:37PM Answer ✓

    Hi Boris,

    Regarding question #1:

    The single-key operations (including put) of the Coherence com.tangosol.util.ConcurrentMap implementations are atomic, and you do not need to explicitly lock an item when performing them. Furthermore, you really only need to use the locking facilities if you want to perform multiple ConcurrentMap operations within a single "transaction".
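
    For example, here is a minimal sketch of the "compound transaction" case, assuming a hypothetical cache named "accounts" holding Integer balances; the lock is only needed because the read and the write must happen as one unit, since a standalone put() is already atomic:

        import com.tangosol.net.CacheFactory;
        import com.tangosol.net.NamedCache;

        public class LockedCompoundUpdate {
            public static void main(String[] args) {
                // "accounts" and the key are illustrative, not from this thread
                NamedCache cache = CacheFactory.getCache("accounts");
                String key = "ACCT-1";

                cache.lock(key, -1);                   // -1 = wait indefinitely for the lock
                try {
                    Integer balance = (Integer) cache.get(key);
                    int current = (balance == null) ? 0 : balance.intValue();
                    cache.put(key, Integer.valueOf(current + 100));
                } finally {
                    cache.unlock(key);                 // always release, even if the update fails
                }
            }
        }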

    Regarding question #2:

    The equality test for the key used in a lock operation is the same as the equality test used in any other key-based method. Thus the keys only need to be .equals() equivalent; reference equality is not a requirement. Note that if you are using a remote (i.e. NamedCache) implementation, the equality test is based on the key's serialized form, i.e. two keys which are .equals() must serialize to identical byte sequences.
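
    As a quick illustration (a sketch, with a made-up cache name), two distinct key instances that are .equals() address the same entry:

        import com.tangosol.net.CacheFactory;
        import com.tangosol.net.NamedCache;

        public class KeyEqualityDemo {
            public static void main(String[] args) {
                NamedCache cache = CacheFactory.getCache("demo");   // hypothetical cache name

                // Two distinct instances that are .equals() but not the same reference
                String key1 = new String("order-42");
                String key2 = new String("order-42");

                cache.put(key1, "pending");
                System.out.println(cache.get(key2));   // prints "pending" -- same logical key

                // lock()/unlock() follow the same rule: locking key1 also guards
                // access to the entry via key2, since both identify the same entry.
            }
        }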

    Speaking of remote implementations of ConcurrentMap, i.e. NamedCache: when working with these you should really avoid locking and instead look at using the EntryProcessor features of the InvocableMap interface, which NamedCache also implements. EntryProcessor-based transactions are both faster and safer than their lock-based counterparts. Locks against a remote implementation are inherently unsafe because we do not roll back changes if the lock-holding client crashes mid-transaction, though we do release the lock. With EntryProcessors we will either atomically commit the transaction or roll it back; there is no risk of committing a partial transaction.
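
    As a rough sketch of what that looks like (the cache name, key, and processor below are made up for illustration; a real processor would also need to be serializable in whatever format the cluster uses, e.g. Java serialization or POF):

        import com.tangosol.net.CacheFactory;
        import com.tangosol.net.NamedCache;
        import com.tangosol.util.InvocableMap;
        import com.tangosol.util.processor.AbstractProcessor;

        // Hypothetical processor: atomically adds an amount to an Integer value.
        public class AddToBalanceProcessor extends AbstractProcessor
                implements java.io.Serializable {
            private final int m_amount;

            public AddToBalanceProcessor(int amount) {
                m_amount = amount;
            }

            @Override
            public Object process(InvocableMap.Entry entry) {
                // Runs on the storage member that owns the key; no client-side lock needed
                Integer balance = (Integer) entry.getValue();
                int current = (balance == null) ? 0 : balance.intValue();
                entry.setValue(Integer.valueOf(current + m_amount));
                return Integer.valueOf(current + m_amount);
            }

            public static void main(String[] args) {
                NamedCache cache = CacheFactory.getCache("accounts");   // hypothetical cache name
                // One network round trip; the read-modify-write commits or rolls back atomically
                Object newBalance = cache.invoke("ACCT-1", new AddToBalanceProcessor(100));
                System.out.println("New balance: " + newBalance);
            }
        }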

    thanks,

    Mark

    Oracle Coherence

Answers

  • 3613332
    3613332 Member Posts: 11
    edited Jan 8, 2018 9:58AM

    Mark,

    Thanks a lot for a comprehensive answer.

    (I am tied to using locks, as I am working within an existing design.)

    Another thing you could possibly clarify: the Coherence API documentation for ConcurrentMap says about put and remove that some implementations will attempt to obtain a lock for the key before proceeding with the operation. But if a mutating operation needs a lock to execute atomically, then get may also need to lock internally (for example, in rare cases when the map structure is being changed by a remove). The documentation for get, however, says nothing about any possible internal locking.

    Thanks,

    Boris

  • Mfalco-Oracle
    Mfalco-Oracle Member Posts: 503
    edited Jan 8, 2018 10:40AM

    Hi Boris,

    To answer your internal locking question I'll need to know which implementation of ConcurrentMap you are using, though I think the general answer is that as long as you order your lock acquisition you should be fine, i.e. ensure that any two transactions which touch multiple keys touch those keys in the same order.
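
    Here is a minimal sketch of what I mean by ordered acquisition (the cache and key names are made up):

        import java.util.Arrays;
        import com.tangosol.net.CacheFactory;
        import com.tangosol.net.NamedCache;

        public class OrderedLocking {
            public static void transfer() {
                NamedCache cache = CacheFactory.getCache("accounts");
                String[] keys = { "ACCT-7", "ACCT-3" };
                Arrays.sort(keys);   // every transaction locks its keys in the same order

                for (String key : keys) {
                    cache.lock(key, -1);
                }
                try {
                    // compound update touching both keys goes here
                } finally {
                    for (int i = keys.length - 1; i >= 0; --i) {
                        cache.unlock(keys[i]);
                    }
                }
            }
        }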

    Also, I'll again strongly recommend that you re-evaluate your choice of a lock-based rather than an EntryProcessor-based approach if you are using remote maps, i.e. what Coherence is all about. The performance difference between the two is massive. As a simple example, a lock-based transaction requires a minimum of four network round trips (lock, get, put, unlock), whereas an EntryProcessor needs just a single network round trip. So not only are you looking at a minimum 4x latency benefit, you will also reduce the contention window on a much more dramatic scale. When an EntryProcessor executes, it blocks other EntryProcessors against that key only for the time it takes the first EntryProcessor to run on the cache server CPU, i.e. just a few microseconds, and then the next one is invoked immediately; thus your per-key throughput could be in the tens to hundreds of thousands of operations per second. With a lock-based approach, your per-key throughput is directly related to the network latency of those four round trips, which should total roughly 2ms, so about 500 operations per second. So while the latency difference is about 4x, the throughput difference is more in the ballpark of 1000x.

    thanks,

    Mark

    Oracle Coherence

  • 3613332
    3613332 Member Posts: 11
    edited Jan 8, 2018 11:44AM

    Mark,

    We are using NearCache with Local front and Distributed back.

    The reason for my internal locking question was to get confirmation that such locking is possible in any ConcurrentMap implementation and is simply not mentioned in the documentation.

    Thanks,

    Boris

  • Mfalco-Oracle
    Mfalco-Oracle Member Posts: 503
    edited Jan 8, 2018 11:53AM

    Hi Boris,

    Ok, so for you it comes down to how DistributedCache locks work. Basically, the user-accessible locks, i.e. cache.lock(), are not used internally, so you don't have to worry about any lock we take out internally interacting with the locks you've taken out externally.

    thanks,

    Mark

    Oracle Coherence

This discussion has been closed.