This discussion is archived
4 Replies Latest reply: May 7, 2013 6:57 AM by 1006314

Update events with same old and new values.

1006314 Newbie
Hello.
I'm using a simple replicated cache and subscribing to its update events with the ObservableMap#addMapListener(MapListener) method.
The problem is that in the received update event the old and new values are identical by the == operator.
Values are updated as follows:
MyEntity e = (MyEntity) cache.get(myKey);
e.setName("new name");
cache.put(myKey, e);

I have this problem only when there is a single node in the cluster; adding new nodes resolves the issue.

How can I fix this behavior for a single-node server?

Currently I'm using a copy constructor to put a new object with the changed fields, so that I get two distinct objects in my listener. Like this:
MyEntity e = new MyEntity((MyEntity) cache.get(myKey));
e.setName("new name");
cache.put(myKey, e);
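The effect of this workaround can be reproduced with a plain java.util.Map standing in for the single-node cache's backing map (a minimal, self-contained sketch; the MyEntity class here is a hypothetical stand-in for the entity in the post):

```java
import java.util.HashMap;
import java.util.Map;

public class CopyOnPutDemo {
    // Hypothetical entity mirroring the MyEntity in the post; the copy
    // constructor is the workaround described above.
    static class MyEntity {
        private String name;
        MyEntity(String name) { this.name = name; }
        MyEntity(MyEntity other) { this.name = other.name; } // copy constructor
        void setName(String name) { this.name = name; }
    }

    // Without a copy, get() hands back the live stored reference, so the
    // "old" value is mutated in place before put() stores it again.
    static boolean putReturnsSameReference(Map<String, MyEntity> cache) {
        MyEntity e = cache.get("myKey");
        e.setName("new name");
        return cache.put("myKey", e) == e; // put returns the previous value
    }

    // With the copy constructor, the entry already in the map is left
    // untouched, so the old and new values are distinct objects.
    static boolean putReturnsDistinctOldValue(Map<String, MyEntity> cache) {
        MyEntity e = new MyEntity(cache.get("myKey"));
        e.setName("new name");
        return cache.put("myKey", e) != e;
    }

    public static void main(String[] args) {
        Map<String, MyEntity> cache = new HashMap<>();
        cache.put("myKey", new MyEntity("old name"));
        System.out.println(putReturnsSameReference(cache));    // true: same object
        System.out.println(putReturnsDistinctOldValue(cache)); // true: old != new
    }
}
```

Note that Map.put returns the previous mapping, which is what makes the identity comparison observable here; in the Coherence case the same two references surface as MapEvent's old and new values.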
-----
Cache configuration:
<cache-config>
<caching-scheme-mapping>
<cache-mapping>
<cache-name>*</cache-name>
<scheme-name>MyReplScheme</scheme-name>
</cache-mapping>
</caching-scheme-mapping>

<caching-schemes>
<replicated-scheme>
<scheme-name>MyReplScheme</scheme-name>
<service-name>MyReplService</service-name>
<backing-map-scheme>
<local-scheme/>
</backing-map-scheme>
</replicated-scheme>
</caching-schemes>
</cache-config>
-----
Coherence version 3.6.0.2.
JDK version: 1.6.0_21 x86, 1.6.0_26 x86_64

Edited by: sigito on 30-Apr-2013 04:55
Added temporary workaround.
  • 1. Re: Update events with same old and new values.
    Jonathan.Knight Expert
    Hi,

    I suspect what you are seeing is the same as you would see with any of the standard Java Map implementations instead of a replicated cache. If you only have a single process, then that process owns the entries in the replicated cache. When you do a get, what you receive is the reference to the actual value stored in the cache. When you mutate that value you are changing the value in the cache, so when you do a put you are putting the very same object back into the cache, which is why == is true. When you have multiple cache nodes, Coherence serializes the values between the nodes, so you do not necessarily get back the same reference.
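    This can be demonstrated without Coherence at all: a plain HashMap hands back the live reference, while a Java serialization round trip (standing in for what Coherence does between nodes) yields a distinct copy. A minimal sketch, with a throwaway Holder class as an assumption:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.util.HashMap;
import java.util.Map;

public class ReferenceVsSerialized {
    static class Holder implements Serializable {
        String name;
        Holder(String name) { this.name = name; }
    }

    // Serialization round trip, standing in for the wire transfer that
    // happens when values travel between cluster nodes.
    static Holder roundTrip(Holder h) throws Exception {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        new ObjectOutputStream(bytes).writeObject(h);
        ObjectInputStream in = new ObjectInputStream(
                new ByteArrayInputStream(bytes.toByteArray()));
        return (Holder) in.readObject();
    }

    public static void main(String[] args) throws Exception {
        Map<String, Holder> map = new HashMap<>();
        map.put("k", new Holder("old"));

        // Single process: get() returns the actual stored reference,
        // so mutating it mutates the "old" value inside the map too.
        Holder live = map.get("k");
        live.name = "new";
        System.out.println(live == map.get("k")); // true: same reference

        // After a serialize/deserialize cycle you get a distinct object
        // with equal content, which is why == is false across nodes.
        Holder copy = roundTrip(map.get("k"));
        System.out.println(copy == map.get("k")); // false: distinct objects
        System.out.println(copy.name);            // new
    }
}
```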

    I hope that makes sense.

    JK
  • 2. Re: Update events with same old and new values.
    1006314 Newbie
    Thanks for your reply, JK.

    The problem is indeed that Coherence doesn't serialize objects on a single node. But this behavior is a little confusing, as the cache works differently in different environments.
    I assumed that in any case I should receive distinct old and new values, and that Coherence should support this behavior.

    So this is not a bug but a feature of a single-node cluster?
  • 3. Re: Update events with same old and new values.
    Jonathan.Knight Expert
    Hi,

    Yes, I suppose you could say this is a "feature" of a single-node cluster. But given that nobody uses Coherence as a single-node cluster, I would say this is not a problem; even your testing should not be done on a single-node cluster. The whole point of Coherence is that it scales out to hold data across many JVMs; if you can hold all your data in a single JVM, then use a HashMap and save the cost and complexity of using Coherence.

    JK
  • 4. Re: Update events with same old and new values.
    1006314 Newbie
    It still seems strange to me.

    Imagine a situation where there is a cluster of 3 Coherence nodes and at some moment two of them fail. Now, with a single node remaining, I need different event-handling logic because of the issue described above.
    We can also start new Coherence nodes on demand, beginning from 1 node, when some limit is reached or an event occurs.
