I have some processing that needs to lock two entries in the same cache. Is explicit locking the only way to do this? If so, Coherence Extend cannot be used, as it must use node-based locking instead of thread-based locking, right?
I am working on an equity trading OMS (order management system). An order belongs to a transaction account, which has an approved credit risk limit and position data.
If an order has been entered against the wrong transaction account, we need to rectify it. In that case, we need to lock both accounts (the wrong one and the correct one) so that the order and the associated data, such as remaining credit and position, can be updated within an atomic transaction. We are using Coherence 3.7.
Any ideas apart from explicit locking? We really need your help. Many thanks.
Perhaps you can use key affinity (or a custom key partitioning algorithm) to ensure that both objects end up in the same partition; then you may be able to use an entry processor (avoiding the costly distributed locking)...
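To illustrate key affinity: Coherence's real contract is the `com.tangosol.net.cache.KeyAssociation` interface (its `getAssociatedKey()` method tells the partitioning service to co-locate this key with its associated key). The sketch below mimics that idea with an invented mini-interface and a simplified partition-routing function, purely so the co-location effect can be shown standalone; it is not the actual Coherence routing code.

```java
import java.util.Objects;

public class AffinityDemo {
    // Stand-in for com.tangosol.net.cache.KeyAssociation (illustrative only).
    interface Associated { Object getAssociatedKey(); }

    // Hypothetical order key that declares affinity to its account id.
    static final class OrderKey implements Associated {
        final String orderId;
        final String accountId;
        OrderKey(String orderId, String accountId) {
            this.orderId = orderId;
            this.accountId = accountId;
        }
        public Object getAssociatedKey() { return accountId; }
    }

    // Simplified stand-in for the partitioning step: the partition is
    // derived from the associated key, not from the key itself.
    static int partitionOf(Object key, int partitionCount) {
        Object routed = (key instanceof Associated)
                ? ((Associated) key).getAssociatedKey() : key;
        return Math.floorMod(Objects.hashCode(routed), partitionCount);
    }

    public static void main(String[] args) {
        int parts = 257; // Coherence's default partition count
        OrderKey k1 = new OrderKey("ORD-1", "ACCT-42");
        OrderKey k2 = new OrderKey("ORD-2", "ACCT-42");
        // Both keys for the same account land in the same partition, so a
        // single entry processor invocation can reach them together.
        System.out.println(partitionOf(k1, parts) == partitionOf(k2, parts));
    }
}
```

Note the catch discussed later in this thread: affinity only helps if you know ahead of time which keys must be associated, which is not the case for an arbitrary pair of accounts.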
Even with explicit locking, you won't be able to achieve the atomicity you're looking for. I think the best you'll be able to do is achieve eventual consistency. One approach would be to use an EntryProcessor to remove an order from one account and atomically place the order (along with the destination account id) into another cache, using key association with the source account id to keep the order in the same partition as the source account. You could then use write-behind to apply the order to the destination account. This approach ensures the order won't get lost, as it essentially moves from one backed-up cache to another backed-up cache atomically, and then sits in the fault-tolerant write-behind queue until it gets applied to the destination account. The drawback is that for a short period of time the order will not be applied to any account, but it eventually will be.
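A minimal, Coherence-free sketch of the shape of this two-phase move (all names are invented; in real Coherence, phase 1 would be an EntryProcessor invocation on the source account's partition and the queue would be the fault-tolerant write-behind queue):

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.Map;

public class OrderMoveSketch {
    // order id -> owning account id (stands in for the order cache)
    final Map<String, String> orderOwner = new HashMap<>();
    // pending moves (stands in for the fault-tolerant write-behind queue)
    final Deque<String[]> transferQueue = new ArrayDeque<>();

    // Phase 1: atomically detach the order from the wrong account and
    // enqueue the pending move. In Coherence this is one EntryProcessor
    // invocation on a backed-up partition, so it survives node failure.
    synchronized void beginMove(String orderId, String srcAccount, String destAccount) {
        if (!srcAccount.equals(orderOwner.get(orderId)))
            throw new IllegalStateException("order not on source account");
        orderOwner.remove(orderId); // no longer on the wrong account
        transferQueue.add(new String[] { orderId, destAccount });
    }

    // Phase 2: normally drained by the write-behind mechanism; applies the
    // order (and its credit/position effects) to the destination account.
    synchronized void drainOne() {
        String[] t = transferQueue.poll();
        if (t != null) orderOwner.put(t[0], t[1]);
    }

    public static void main(String[] args) {
        OrderMoveSketch s = new OrderMoveSketch();
        s.orderOwner.put("ORD-1", "WRONG-ACCT");
        s.beginMove("ORD-1", "WRONG-ACCT", "RIGHT-ACCT");
        // Between the phases the order belongs to no account...
        System.out.println(s.orderOwner.containsKey("ORD-1")); // false
        s.drainOne();
        // ...but it is eventually applied to the destination.
        System.out.println(s.orderOwner.get("ORD-1"));
    }
}
```

The window where `orderOwner` contains no entry for the order is exactly the "short period of time" described above: the order is never lost, but the system is only eventually consistent.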
Now I can see that, reading your question more carefully - sadly, I think you have little choice but to use locking (what you describe is the classic textbook example of when to use locking). Coherence does of course have transaction support, but as far as I know it doesn't use any "undocumented APIs" that would be more efficient than performing the locking yourself (it is probably less efficient, since the transaction support also includes XA).
As for your code snippet, I suppose you are aware that Coherence uses what I tend to call "cooperative locking", i.e. all participants must follow the lock protocol (code to order the locking in key order, or whatever - left out of your example for simplicity, I assume) on both READ AND WRITE; i.e., just because one process locks an object, that will not prevent another from writing or reading...
I'm not sure if you're aware of this, but Coherence locking doesn't prevent other cache operations, such as get, put, invoke, etc... A lock only prevents other threads from obtaining the same lock, i.e., it only blocks cache.lock() operations on the same key. This means that, depending on the isolation you want to achieve, you will need to ensure that other cache operations in your application are also guarded by a try/finally lock/unlock block. This can have a significant impact on performance, especially on get operations, since each lock and unlock is a remote operation.
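The cooperative discipline described above (lock both keys in a deterministic order, always release in a finally block) can be sketched as below. This is a standalone sketch using local `ReentrantLock`s as stand-ins; with Coherence, each `lockFor(key).lock()`/`unlock()` pair would instead be `NamedCache.lock(key, timeout)` and `NamedCache.unlock(key)`, and, as noted, the same discipline must wrap every read and write, since the lock does not block get/put/invoke by itself.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantLock;

public class OrderedLockSketch {
    // One local lock per key (stand-in for Coherence's per-key lock).
    static final Map<String, ReentrantLock> locks = new ConcurrentHashMap<>();

    static ReentrantLock lockFor(String key) {
        return locks.computeIfAbsent(key, k -> new ReentrantLock());
    }

    static void withBothAccounts(String a, String b, Runnable work) {
        // Always acquire in sorted key order, so concurrent corrections
        // A->B and B->A cannot deadlock against each other.
        String first  = a.compareTo(b) <= 0 ? a : b;
        String second = a.compareTo(b) <= 0 ? b : a;
        lockFor(first).lock();
        try {
            lockFor(second).lock();
            try {
                work.run(); // update both accounts here
            } finally {
                lockFor(second).unlock();
            }
        } finally {
            lockFor(first).unlock();
        }
    }

    public static void main(String[] args) {
        StringBuilder log = new StringBuilder();
        withBothAccounts("ACCT-9", "ACCT-1", () -> log.append("moved"));
        System.out.println(log); // moved
    }
}
```

Even with this discipline, the failure scenario below still applies: the try/finally only protects against exceptions in the client, not against the client node itself dying between the two account updates.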
Even if you were OK with the performance impact and guarded your other cache operations with lock/unlock, you still have a failure scenario to deal with. In your example, the processing logic will likely include put operations against the source and destination accounts. If your client node (the node that owns the locks) happens to fail between the two put calls, Coherence will automatically release both locks while only one of your accounts has been updated. So now it's up to you to recover from this failure. This is why I believe the best you'll be able to achieve is eventual consistency; the approach I outlined before would not be vulnerable to the client failure I just described.
Regarding eventual consistency and the approach you outlined above, do you mean the entry processor should be invoked against the source account? If so, this only implicitly locks the source account. During the execution of the entry processor, other threads can still, for example, create an order under the destination account, and this may exceed the destination account's credit risk limit. Does this describe the scenario in your approach? Thanks.
But even with some sort of explicit locking approach, nothing will stop an order getting in for the destination account before you lock it.
Obviously I don't know your OMS system in detail, but presumably your scenario is that an order has been placed against the wrong account and you need to move it to the correct account. But before you make this correction, orders can still be placed against the destination account, which means that when the correction is made the destination account is over its limit. How is this scenario any different from an order being placed against the destination account halfway through the correction process?
Yes, that's exactly our use case. In simpler terms, we need to update the credit risk used, position, etc. on both accounts in the correction process.
However, an entry processor can only be invoked against one account, and from within it we can't lock the other account using the backing map approach, as the destination account may reside on a different node from the source account. Any ideas or workarounds?
Alternatively, you could use explicit partition locking: lock the whole partition, invoke the entry processor, then unlock the whole partition. (Tip: you can lock/unlock a key even if there is no associated entry for that key.)
In my days before Coherence, I was using the following pattern.
Each entity has additional attributes for managing cross-entry transactions.
1. Update one entry, but store a transaction object (the update details for both accounts) as an additional attribute.
2. Update the second entry, but keep the transaction ID in a list of committed transactions associated with the entry.
3. Remove the transaction object from the first entry, storing the transaction ID in its list of completed transactions.
Have a recovery process which monitors entries with associated uncommitted transactions and rolls those transactions forward (you can use an index to track transactions in progress).
When you apply a transaction to an entry, first check whether that transaction ID is already in the entry's committed list.
Simply put, you are keeping a write-ahead transaction log associated with each entry.
A general implementation sounds complex, but if you tailor it to your use case it may become much simpler.
* A few caveats: the lists of committed transactions may grow, so you may need to clean them up (removing transactions confirmed on all participating entries).
* Your transaction object should be deterministic.
* You still have to deal with logical concurrency issues; this approach does not support rolling back (at least not in a simple way).
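The pattern above (steps 1-3 plus the roll-forward recovery) can be sketched in plain Java as follows. All names here are invented for illustration; in Coherence each `Account` would be a cache value and each step a separate, individually atomic entry-processor update. The `committed` set is the per-entry list of committed transaction IDs that makes re-application idempotent, which is what lets recovery safely roll an interrupted transaction forward.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class EntryTxnLogSketch {
    // A deterministic transaction object: the update details for both entries.
    static final class Txn {
        final String id, src, dest;
        final int amount;
        Txn(String id, String src, String dest, int amount) {
            this.id = id; this.src = src; this.dest = dest; this.amount = amount;
        }
    }

    static final class Account {
        int balance;
        Txn pending;                                   // step 1 attaches the txn here
        final Set<String> committed = new HashSet<>(); // idempotence guard
        Account(int balance) { this.balance = balance; }
    }

    final Map<String, Account> accounts = new HashMap<>();

    // Step 1: update the first entry and attach the transaction object.
    void step1(Txn t) {
        Account src = accounts.get(t.src);
        src.balance -= t.amount;
        src.committed.add(t.id);
        src.pending = t;
    }

    // Step 2: update the second entry, recording the txn ID as committed.
    void step2(Txn t) {
        Account dest = accounts.get(t.dest);
        if (dest.committed.add(t.id))   // skip if already applied (re-run safe)
            dest.balance += t.amount;
    }

    // Step 3: clear the pending transaction from the first entry.
    void step3(Txn t) { accounts.get(t.src).pending = null; }

    // Recovery: roll any uncommitted transaction forward.
    void recover() {
        for (Account a : new ArrayList<>(accounts.values())) {
            Txn t = a.pending;
            if (t != null) { step2(t); step3(t); }
        }
    }

    public static void main(String[] args) {
        EntryTxnLogSketch s = new EntryTxnLogSketch();
        s.accounts.put("A", new Account(100));
        s.accounts.put("B", new Account(0));
        s.step1(new Txn("T1", "A", "B", 30)); // a crash could happen here...
        s.recover();                          // ...recovery rolls T1 forward
        System.out.println(s.accounts.get("A").balance); // 70
        System.out.println(s.accounts.get("B").balance); // 30
    }
}
```

As the caveats note, this only rolls forward: once step 1 has run, recovery completes the transaction rather than undoing it.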
How would locking a partition be any different from locking keys? You do not even know that the two accounts being modified live in the same partition. If you have some way of associating the two accounts so they live in the same partition, then you can lock that pseudo-key and still do not need to lock a partition.
As Jonathan suggests, the problem you describe has little to do with the transactional approach for correcting an order applied to the wrong account. From what you describe, wouldn't you also have a corresponding problem with the source account, where orders applied after the incorrect one could cause the source account to exceed its credit risk limit, but as part of the correction process, those subsequent orders may be fine? Fundamentally, you have a temporal problem, where earlier orders affect ones that arrive later, which will impact the case where you remove an order from an account with orders that have arrived after the order to be removed as well as the case where you try to apply an order to an account ahead of other orders. What action does your application take when an account has exceeded its credit risk limit? Do you currently handle the case where an order arrives late?