I'm using a simple replicated cache and subscribing to its update events with the ObservableMap#addMapListener(MapListener) method.
The problem is that in the received update event the old and new values are identical by the == operator.
Values are updated as follows:
MyEntity e = (MyEntity) cache.get(myKey);
// ...fields of e are modified, then it is put back:
cache.put(myKey, e);
I only have this problem when there is a single node in the cluster; adding new nodes resolves the issue.
How can I fix this behavior for a single-node server?
Currently I'm using a copy constructor to put a new object with the changed fields, so that I get two distinct objects in my listener. Like this:
MyEntity e = new MyEntity((MyEntity) cache.get(myKey));
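For illustration, a minimal sketch of what such a copy constructor might look like; the real MyEntity's fields are unknown, so the `name` field here is only an assumption:

```java
// Hypothetical MyEntity with a copy constructor. The actual fields of
// the real class are not shown in the question; "name" is a placeholder.
public class MyEntity {
    private String name;

    public MyEntity(String name) {
        this.name = name;
    }

    // Copy constructor: the copy is a distinct object, so after
    // cache.put(myKey, new MyEntity(old)) the listener's old and new
    // values differ by the == operator.
    public MyEntity(MyEntity other) {
        this.name = other.name;
    }

    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
}
```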
I suspect what you are seeing is the same as what you would see with any of the standard Java Map implementations instead of a replicated cache. If you only have a single process, then that process owns all the entries in the replicated cache. When you do a get, what you receive is a reference to the actual value in the cache. When you mutate that value you are changing the value in the cache, so when you do a put you are putting the same actual Object back into the cache, which is why == is true. When you have multiple cache nodes, Coherence serializes the values between the nodes, so you do not necessarily get back the same reference.
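The analogy with a standard Map can be shown directly; a plain HashMap is roughly what a single-node replicated cache degenerates to (StringBuilder stands in for a mutable entity like MyEntity):

```java
import java.util.HashMap;
import java.util.Map;

// A plain HashMap behaves like a single-node replicated cache:
// get() hands back the stored reference, not a copy.
public class SameReferenceDemo {
    public static void main(String[] args) {
        Map<String, StringBuilder> cache = new HashMap<>();
        cache.put("key", new StringBuilder("v1"));

        StringBuilder e = cache.get("key"); // same object as stored in the map
        e.append("-v2");                    // mutates the cached value in place
        StringBuilder old = cache.put("key", e);

        // The "old" value returned by put is the very object we put back,
        // which is why an update listener would see old == new.
        System.out.println(old == e);
        System.out.println(cache.get("key") == e);
    }
}
```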
So the problem is indeed that Coherence doesn't serialize objects on a single node. But this behavior is a little confusing, as the cache works differently in different environments.
I assumed that in any case I should receive distinct old and new values, and that Coherence would guarantee this behavior.
So this is not a bug but a feature of a single-node cluster?
Yes, I suppose you could say this is a "feature" of a single-node cluster. But given that nobody uses Coherence as a single-node cluster, I would say this is not a problem; even your testing should not be done on a single-node cluster. The whole point of Coherence is that it scales out to hold data across many JVMs; if you can hold all your data in a single JVM, then use a HashMap and save the cost and complexity of Coherence.
Imagine a situation where there is a cluster of 3 Coherence nodes, and at some moment two of them fail. Now, with a single node left, I would need different event-handling logic because of the issue described above.
Also, we can start with 1 node and launch new Coherence nodes on demand when some limit is reached or some event occurs.