2 Replies Latest reply on Jun 18, 2010 2:21 PM by 701681

    PRP forces use of PublishingCacheStore, how do I write to my database.

      In an active-active PRP setup (where many users can edit the same data in multiple regions), how does one write to a database? (I have a database in each region and want all regions to be in sync.) PRP forces use of PublishingCacheStore, so I cannot use a regular database cache store.

      I can think of a few ways.

      1) Could I just add another "database" publisher? Delving further, it looks like I would need a local-site DatabasePublisher and a remote one that somehow incorporates conflict resolution (since I would not be using a LocalCachePublisher). That means publishing duplicate data over the WAN: one copy for the remote caches (i.e. the normal PRP RemoteInvocationPublisher(LocalCachePublisher)) and one for the remote database (RemoteInvocationPublisher(DatabasePublisher)). (Seems wasteful on bandwidth and complex to incorporate conflict resolution.)

      2) Create another cluster in each region that is a PRP spoke of that region only, and that writes to the database using a normal database cache store. (Seems overkill to have another cluster just for database writes.)

      3) Create a trigger on each cache that writes to another cache, e.g. database-mycachename (in another cache service, to avoid threading issues), in the same cluster, where database-mycachename is backed by a database cache store.
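      Option 3 could be sketched with a standard Coherence MapTrigger along these lines (the class name and cache names are illustrative placeholders, not part of the pattern itself):

```java
import java.io.Serializable;

import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;
import com.tangosol.util.MapTrigger;

// Sketch only: forwards every committed change on the source cache to a
// companion "database-<name>" cache, which would be backed by an ordinary
// database cache store. The target cache must live in a different cache
// service so the put() does not re-enter the service thread that fired
// the trigger.
public class DatabaseForwardingTrigger implements MapTrigger, Serializable {

    private final String targetCacheName;  // e.g. "database-mycachename"

    public DatabaseForwardingTrigger(String targetCacheName) {
        this.targetCacheName = targetCacheName;
    }

    public void process(MapTrigger.Entry entry) {
        NamedCache target = CacheFactory.getCache(targetCacheName);
        if (entry.isPresent()) {
            // insert or update
            target.put(entry.getKey(), entry.getValue());
        } else {
            // the entry was removed from the source cache
            target.remove(entry.getKey());
        }
    }
}
```

      The trigger would be registered on the source cache with something like `cache.addMapListener(new MapTriggerListener(new DatabaseForwardingTrigger("database-mycachename")));` — this sketch assumes a running Coherence cluster and cannot be run standalone.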

      I am sure this issue must have been thought about and solved before; can you advise?

        • 1. Re: PRP forces use of PublishingCacheStore, how do I write to my database.
          I would go with solution one: write a database publisher that is a peer to the remote cluster publisher.
          At first glance I don't think you would have to worry about complicating conflict resolution,
          since any writes seen by the database publisher would, by definition, already have been resolved
          by the CR solution you wrote for publishing to another cluster. CR is done at the target site
          by the "local" batch publisher using the "front door" into Coherence, so all entries that are
          artifacts of CR go through the standard paths if CR decides not to discard the entry coming
          from another cluster but either retain the entry "as is" or merge it with the local entry value.

          Anyway, by the time a database publisher sees something to be published, CR has already been done.

          What is good about defining a database publisher is that it can have its own set of publishing
          attributes tuned for publishing to a DB rather than to a network. It also makes writes to the
          DB asynchronous from writes to the cache.
          • 2. Re: PRP forces use of PublishingCacheStore, how do I write to my database.
            Hi Bob,

            I started out with option 1, but it does not actually work with PRP.

            The PRP event makes it across from Site A to Site B, but the database publisher attached to Site B does not get fired. The only time the database publisher fires on Site B is if the event was generated in Site B.

            I wonder if your internal ping/pong logic (a similar concept to the one you used to have with SafePublishingCacheStore) is stopping the entry from being published to the Site B database publisher.

            Anyhow, I ended up going with option 3:

            The PRP event makes it across from Site A to Site B.
            Conflict resolution is run, and the value is written to my cache.
            A trigger attached to this cache fires and writes to another cache (with high-units=0, so as not to take up extra cluster space); that cache has a normal database cache store attached. This seems to work well.
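            For reference, the companion database-* cache could be wired up with a cache configuration fragment along these lines (scheme, service, and cache-store class names are placeholders, not taken from the thread; sizing/eviction settings are omitted):

```xml
<cache-mapping>
  <cache-name>database-*</cache-name>
  <scheme-name>database-write-scheme</scheme-name>
</cache-mapping>

<distributed-scheme>
  <scheme-name>database-write-scheme</scheme-name>
  <!-- A separate service, so trigger-initiated puts do not re-enter the
       service that fired the trigger -->
  <service-name>DatabaseWriteService</service-name>
  <backing-map-scheme>
    <read-write-backing-map-scheme>
      <internal-cache-scheme>
        <local-scheme/>
      </internal-cache-scheme>
      <cachestore-scheme>
        <class-scheme>
          <!-- Placeholder for a normal database cache store -->
          <class-name>com.example.MyDatabaseCacheStore</class-name>
        </class-scheme>
      </cachestore-scheme>
    </read-write-backing-map-scheme>
  </backing-map-scheme>
  <autostart>true</autostart>
</distributed-scheme>
```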