This comment is relevant to 3.7.1. I suppose things remain the same in 12.1, but I haven't verified yet.
There is no contract between "partition level transactions" and the cache store.
In practice, in the write-through case:
- storeAll() is never called; updated entries are passed to store() one by one.

In the write-behind case:
- it works as documented (as you quoted), without regard to "partition level transaction" boundaries.
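To make the write-through consequence concrete, here is a plain-Java toy (no Coherence API; the keys, values, and the store() failure are all invented for illustration) of the per-entry call pattern: if one store() call fails partway through, the earlier entries of the same partition-level transaction are already persisted.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class WriteThroughSketch {
    // Stand-in for the database behind the cache store.
    static final Map<String, String> db = new LinkedHashMap<>();

    // Models the cache calling store() once per updated entry,
    // instead of storeAll() for the whole partition-level transaction.
    static void store(String key, String value) {
        if (key.endsWith("frag:B")) {
            throw new RuntimeException("simulated store failure");
        }
        db.put(key, value);
    }

    // Applies one "atomic" cache transaction entry by entry and
    // returns the resulting DB state.
    static Map<String, String> run() {
        Map<String, String> txUpdates = new LinkedHashMap<>();
        txUpdates.put("entity:1/frag:A", "v2");
        txUpdates.put("entity:1/frag:B", "v2");
        try {
            for (Map.Entry<String, String> e : txUpdates.entrySet()) {
                store(e.getKey(), e.getValue()); // one call per entry
            }
        } catch (RuntimeException ex) {
            // frag:A is already persisted, frag:B is not: the entity is
            // inconsistent in the DB even though the cache update was atomic.
        }
        return db;
    }

    public static void main(String[] args) {
        System.out.println("db after failure: " + run());
    }
}
```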
That comment is still correct for 12.1 and 18.104.22.168.
I've checked the Coherence APIs and the ReadWriteBackingMap behavior. Although partition level transactions are atomic, the updated entries are added to the write-behind queue one by one, and for each added entry Coherence uses the current time to calculate when that entry will become ripe. So there is no guarantee that all entries in the same partition level transaction will become ripe at the same time.
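A plain-Java toy of that per-entry ripe-time calculation (nothing here is the real ReadWriteBackingMap code; the names and the busy-wait between enqueues are my own stand-ins): because the clock is read at each enqueue, entries of the same transaction end up with different ripe times.

```java
import java.util.ArrayDeque;
import java.util.Queue;

public class WriteBehindRipeSketch {
    // One write-behind queue entry: the ripe time is computed from the
    // clock at the moment this particular entry is enqueued.
    record QueuedEntry(String key, long ripeMillis) {}

    static final long WRITE_DELAY_MILLIS = 1000;

    // Busy-waits until the clock moves past 'prev', standing in for any
    // real delay between successive per-entry enqueues.
    static long nextMillis(long prev) {
        long t;
        do { t = System.currentTimeMillis(); } while (t <= prev);
        return t;
    }

    static Queue<QueuedEntry> enqueueTransaction(String... keys) {
        Queue<QueuedEntry> queue = new ArrayDeque<>();
        long now = 0;
        for (String key : keys) {
            // The clock is read per entry, so entries of the same
            // partition-level transaction get different ripe times.
            now = nextMillis(now);
            queue.add(new QueuedEntry(key, now + WRITE_DELAY_MILLIS));
        }
        return queue;
    }

    public static void main(String[] args) {
        Queue<QueuedEntry> q = enqueueTransaction("frag:A", "frag:B", "frag:C");
        long first = q.peek().ripeMillis();
        long last = first;
        for (QueuedEntry e : q) last = e.ripeMillis();
        System.out.println("ripe-time spread within one transaction: " + (last - first) + " ms");
    }
}
```

The practical consequence is that the store can be called for one fragment's update while a sibling fragment of the same transaction is still sitting unripe in the queue.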
This leads me to another question.
We have a use case where we want to split a large entity stored in Coherence into several smaller fragments. We use EntryProcessors and partition level transactions to guarantee atomicity in operations that need to update more than one fragment of the same entity, which guarantees that all fragments of the same entity are fully consistent in the cache. The cached fragments are then persisted to the database using write-behind.
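For context, the piece that makes a single EntryProcessor invocation cover all fragments is key association: every fragment of one entity has to land in the same partition. Here is a plain-Java toy of that co-location idea (Coherence's real mechanism is the KeyAssociation interface; the key class, accessor, and partition count here are illustrative, though 257 is the documented default partition count):

```java
import java.util.List;

public class FragmentColocationSketch {
    static final int PARTITION_COUNT = 257; // Coherence's default

    // Composite fragment key; association is by entity id, so all
    // fragments of one entity hash to the same partition.
    record FragmentKey(String entityId, String fragmentId) {
        // Stand-in for KeyAssociation.getAssociatedKey()
        String associatedKey() { return entityId; }
    }

    static int partitionOf(FragmentKey key) {
        return Math.floorMod(key.associatedKey().hashCode(), PARTITION_COUNT);
    }

    public static void main(String[] args) {
        List<FragmentKey> frags = List.of(
            new FragmentKey("entity:1", "A"),
            new FragmentKey("entity:1", "B"),
            new FragmentKey("entity:1", "C"));
        // All three fragments report the same partition number.
        frags.forEach(k -> System.out.println(k + " -> partition " + partitionOf(k)));
    }
}
```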
The problem now is how to guarantee that all fragments are fully consistent in the database. If we just rely on the Coherence write-behind mechanism we get eventual consistency in the DB, but in case of a multi-server failure the entity may become inconsistent in the database, which is a risk we wouldn't like to take.
Is there any other option/pattern that would allow us to either store all updates made to the entity, or none at all?
Probably, if in the EntryProcessor we identify which entities were updated and place them in another persistence queue as a whole, we will be able to achieve this, but this is a tricky workaround that we wouldn't like to use.
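For what it's worth, here is a plain-Java toy of that workaround (all class and method names are invented; this is not a Coherence API): the processor enqueues the touched entity's fragments as one unit, and the drainer writes each unit in a single DB transaction, so persistence is all-or-nothing per entity.

```java
import java.util.ArrayDeque;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Queue;

public class EntityPersistQueueSketch {
    // One queued unit = all fragments of one entity, persisted together.
    record EntityUpdate(String entityId, Map<String, String> fragments) {}

    static final Queue<EntityUpdate> persistQueue = new ArrayDeque<>();
    static final Map<String, String> db = new LinkedHashMap<>();

    // Called at the end of the "entry processor": enqueue the entity
    // as a whole instead of letting each fragment ripen independently.
    static void enqueue(String entityId, Map<String, String> fragments) {
        persistQueue.add(new EntityUpdate(entityId, new LinkedHashMap<>(fragments)));
    }

    // Drains the queue; each unit is written in one DB transaction
    // (simulated here by a single all-at-once putAll).
    static void drain() {
        EntityUpdate u;
        while ((u = persistQueue.poll()) != null) {
            db.putAll(u.fragments()); // all fragments of the entity at once
        }
    }

    public static void main(String[] args) {
        Map<String, String> frags = new LinkedHashMap<>();
        frags.put("entity:1/frag:A", "v2");
        frags.put("entity:1/frag:B", "v2");
        enqueue("entity:1", frags);
        drain();
        System.out.println("db: " + db);
    }
}
```

The awkward part, as noted above, is that this reimplements write-behind (queueing, retry, failover of the queue itself) outside Coherence, which is exactly the machinery we were hoping to reuse.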