
    Write-Behind batch behavior in EP partition level transactions

    jcpsantos

      Hi,

       

      We use EntryProcessors to perform updates on multiple entities stored in the same cache partition. According to the documentation, Coherence handles all the updates in a "sandbox" and then commits them atomically to the cache backing map.
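
      For illustration, one way to do this kind of multi-entry update (a minimal sketch; the class and method names are made up, and it assumes the sibling keys are key-associated with the "driver" key so they live in the same partition):

      import com.tangosol.net.BackingMapContext;
      import com.tangosol.util.BinaryEntry;
      import com.tangosol.util.InvocableMap;
      import com.tangosol.util.processor.AbstractProcessor;

      // Hypothetical processor: invoked against one "driver" entry, it enlists
      // sibling entries of the same partition into the same partition level
      // transaction via the backing map context, so all updates commit
      // atomically when process() returns.
      public class MultiEntryUpdateProcessor extends AbstractProcessor {

          private final Object[] siblingKeys; // assumed key-associated with the driver key

          public MultiEntryUpdateProcessor(Object[] siblingKeys) {
              this.siblingKeys = siblingKeys;
          }

          @Override
          public Object process(InvocableMap.Entry entry) {
              BinaryEntry binEntry = (BinaryEntry) entry;
              BackingMapContext ctx = binEntry.getBackingMapContext();

              // update the driver entry
              entry.setValue(updated(entry.getValue()));

              // enlist and update the sibling entries of the same partition
              for (Object key : siblingKeys) {
                  Object binKey = binEntry.getContext().getKeyToInternalConverter().convert(key);
                  InvocableMap.Entry sibling = ctx.getBackingMapEntry(binKey);
                  sibling.setValue(updated(sibling.getValue()));
              }
              return null;
          }

          private Object updated(Object value) {
              return value; // placeholder for the actual domain update
          }
      }

      It would be invoked with something like cache.invoke(driverKey, new MultiEntryUpdateProcessor(siblingKeys)).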

       

      The question is, when using write-behind, does Coherence guarantee that all entries updated in the same "partition level transaction" will be present in the same "storeAll" operation?

       

      Again, according to the documentation, the write-behind thread behavior is the following:

      1. The thread waits for a queued entry to become ripe.
      2. When an entry becomes ripe, the thread dequeues all ripe and soft-ripe entries in the queue.
      3. The thread then writes all ripe and soft-ripe entries either via store() (if there is only the single ripe entry) or storeAll() (if there are multiple ripe/soft-ripe entries).
      4. The thread then repeats step (1).
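
      As an illustration of what we would like to verify, a trivial diagnostic CacheStore (the class name is made up) that just logs how entries arrive from the write-behind thread could look like this:

      import java.util.Map;

      import com.tangosol.net.cache.AbstractCacheStore;

      // Hypothetical diagnostic store: logs whether entries arrive via store()
      // or storeAll(), and how large each storeAll() batch is, so the batching
      // across a partition level transaction can be observed.
      public class LoggingCacheStore extends AbstractCacheStore {

          @Override
          public Object load(Object key) {
              return null; // read side not relevant for this experiment
          }

          @Override
          public void store(Object key, Object value) {
              System.out.println("store() single entry: " + key);
              // real persistence would go here
          }

          @Override
          public void storeAll(Map mapEntries) {
              System.out.println("storeAll() batch of " + mapEntries.size()
                      + " entries: " + mapEntries.keySet());
              // real persistence would go here
          }
      }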

       

      If all entries updated in the same partition level transaction become ripe or soft-ripe at the same instant, they will all be present in the same storeAll operation. If they do not become ripe/soft-ripe at the same instant, they may not all be present.

      So it all depends on how the partition level transaction commit behaves: if all entries get the same update timestamp, they will all become ripe at the same time.

       

      Does anyone know what behavior we can expect here?

       

      Thanks.

        • 1. Re: Write-Behind batch behavior in EP partition level transactions
          alexey.ragozin

          Hi,

           

           This comment applies to 3.7.1. I suppose things remain the same in 12.1, but I have not verified that yet.

           

           There is no contract between "partition level transactions" and the cache store.

           In practice, in the write-through case:

           - storeAll() is never called; updated entries are passed to store() one by one.

           In the write-behind case:

           - it works as documented (as you quoted), without any regard for "partition level transaction" boundaries.

           

          Regards,

          Alexey

          • 2. Re: Write-Behind batch behavior in EP partition level transactions
            jcpsantos

            Hi,

             

            That comment is still correct for 12.1 and 3.7.1.10.

             I've checked the Coherence APIs and the ReadWriteBackingMap behavior, and although partition level transactions are atomic, the updated entries are added to the write-behind queue one by one. For each added entry Coherence uses the current time to calculate when that entry will become ripe, so there is no guarantee that all entries in the same partition level transaction will become ripe at the same time.

             

            This leads me to another question.

             We have a use case where we want to split a large entity we are storing in Coherence into several smaller fragments. We use EntryProcessors and partition level transactions to guarantee atomicity in operations that need to update more than one fragment of the same entity. This guarantees that all fragments of the same entity are fully consistent in the cache. The cached fragments are then persisted to the database using write-behind.

             The problem now is how to guarantee that all fragments are fully consistent in the database. If we just rely on the Coherence write-behind mechanism we get eventual consistency in the database, but in the case of a multi-server failure the entity may become inconsistent in the database, which is a risk we would not like to take.

             

             Is there any other option/pattern that would allow us to either persist all updates made to the entity or none at all?

             Probably, if in the EntryProcessor we identify which entities were updated and place them as a whole in another persistence queue, we will be able to achieve this, but that is a somewhat tricky workaround we would rather not use.
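
             Just to make that idea concrete, a rough sketch of the workaround (the snapshot cache name, the FragmentKey contract and the helper method are all made up; it assumes the snapshot cache runs on the same cache service and the fragment keys are key-associated to the entity id):

             import com.tangosol.net.BackingMapContext;
             import com.tangosol.util.BinaryEntry;
             import com.tangosol.util.InvocableMap;
             import com.tangosol.util.processor.AbstractProcessor;

             // Rough sketch: besides updating the fragment, the processor also writes a
             // consolidated snapshot of the whole entity into a separate cache
             // ("EntitySnapshots", assumed name) enlisted in the same partition level
             // transaction. That cache would have its own write-behind CacheStore, and
             // since the snapshot is a single entry, each store() of it always carries
             // the full, consistent entity.
             public class SnapshotOnUpdateProcessor extends AbstractProcessor {

                 public static final String SNAPSHOT_CACHE = "EntitySnapshots"; // assumed

                 // assumed fragment key contract: every fragment key knows its entity id
                 public interface FragmentKey { Object getEntityId(); }

                 @Override
                 public Object process(InvocableMap.Entry entry) {
                     BinaryEntry binEntry = (BinaryEntry) entry;

                     // 1. update this fragment (domain logic omitted)
                     entry.setValue(entry.getValue());

                     // 2. enlist the snapshot entry (same partition via key association)
                     //    and rebuild the full entity from its fragments
                     Object entityKey = ((FragmentKey) entry.getKey()).getEntityId();
                     BackingMapContext snapCtx =
                             binEntry.getContext().getBackingMapContext(SNAPSHOT_CACHE);
                     Object binKey =
                             binEntry.getContext().getKeyToInternalConverter().convert(entityKey);
                     InvocableMap.Entry snapshot = snapCtx.getBackingMapEntry(binKey);
                     snapshot.setValue(buildSnapshot(binEntry));

                     return null;
                 }

                 private Object buildSnapshot(BinaryEntry fragmentEntry) {
                     return fragmentEntry.getValue(); // placeholder: assemble the full entity here
                 }
             }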

             

            Thanks.