899446 wrote: I would guess the effect you see is because the cache keys are being serialized, as the backing map's keys are held in binary format.
We've recently switched some of our caches from Local to Replicated schemes. See config below. I would expect no change in "get" performance, since the data will be local to the member. However, we've seen a slight degradation of performance since making the change. The change is small (microseconds), but being a low-latency application we are feeling the effects.
Is there any reason why this would be the case?
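A minimal replicated-scheme configuration of the kind being described might look like the following (the cache-name pattern, service name, and serializer choice here are illustrative, not the poster's actual configuration):

```xml
<cache-config xmlns="http://xmlns.oracle.com/coherence/coherence-cache-config">
  <caching-scheme-mapping>
    <cache-mapping>
      <!-- illustrative name pattern -->
      <cache-name>example-*</cache-name>
      <scheme-name>example-replicated</scheme-name>
    </cache-mapping>
  </caching-scheme-mapping>
  <caching-schemes>
    <replicated-scheme>
      <scheme-name>example-replicated</scheme-name>
      <service-name>ReplicatedCache</service-name>
      <!-- the POF serializer mentioned in the follow-on question -->
      <serializer>pof</serializer>
      <backing-map-scheme>
        <local-scheme/>
      </backing-map-scheme>
      <autostart>true</autostart>
    </replicated-scheme>
  </caching-schemes>
</cache-config>
```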
A follow-on question: I've specified a POF serializer for the caches. However, I would really prefer not to have the entries stored in a serialized form. Rather, we would see better performance if they were in a deserialized, in-memory state. Is this possible? I understand that removing the <serializer> element will cause the cache to default to using Java Serialization.

Replicated caches keep entry values that local code has already seen in object form. Incoming updates are stored in binary form until they are accessed locally, at which point they are replaced with the Java object form. This way you don't pay to deserialize cache values that are never actually accessed. You should be aware, though, that multiple threads accessing the same cached value get the same Java object reference, so you must not mutate that object; you must clone it before mutating it.
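The clone-before-mutate rule above can be illustrated with a plain `Map` standing in for a Coherence `NamedCache` (the hazard is the same: every local reader of a replicated cache receives the same object reference). The `Quote` class here is purely hypothetical; in Coherence it would typically also implement `PortableObject` or `ExternalizableLite`:

```java
import java.util.HashMap;
import java.util.Map;

public class CloneBeforeMutate {

    // Hypothetical mutable value type.
    static class Quote implements Cloneable {
        double price;
        Quote(double price) { this.price = price; }
        @Override
        public Quote clone() {
            try {
                return (Quote) super.clone();
            } catch (CloneNotSupportedException e) {
                throw new AssertionError(e);
            }
        }
    }

    public static void main(String[] args) {
        Map<String, Quote> cache = new HashMap<>(); // stand-in for NamedCache
        cache.put("ORCL", new Quote(100.0));

        // WRONG: mutating the shared instance in place changes what every
        // other thread sees when it reads "ORCL".
        // cache.get("ORCL").price += 1.0;

        // RIGHT: clone first, mutate the private copy, then put it back.
        Quote copy = cache.get("ORCL").clone();
        copy.price += 1.0;
        cache.put("ORCL", copy);

        System.out.println(cache.get("ORCL").price); // prints 101.0
    }
}
```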
Serialization at write time is mandatory, however, if there are other nodes in the replicated cache service, as the data needs to be sent to them. If there is only one node in the service, values are not serialized until a second node joins. You see, it is about as optimized as you can realistically get...
Thanks Rob. It's a shame we don't get more advanced control over this behaviour - our system is very sensitive to read-latency, but doesn't have such high concerns about writes or storage space.
If you are so conscious about read latency that you would rather take the hit on incoming updates, even ones that will never be looked at, then you could investigate replacing the backing map class with something that eagerly deserializes Binary objects put into its entry values.
But doing so may affect the stability of that replicated cache if you have even a moderate update rate.
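The eager-deserialization idea above can be sketched without Coherence on the classpath. In a real deployment you would extend a Coherence backing-map class and convert `com.tangosol.util.Binary` values; here a `byte[]` produced by plain Java serialization stands in for `Binary`, so the sketch is self-contained and the class name is hypothetical:

```java
import java.io.*;
import java.util.HashMap;
import java.util.Map;

public class EagerDeserializingMap {

    private final Map<Object, Object> backing = new HashMap<>();

    // Pay the deserialization cost once, at write time, so reads never
    // encounter a still-serialized value.
    public Object put(Object key, Object value) {
        Object v = (value instanceof byte[]) ? fromBytes((byte[]) value) : value;
        return backing.put(key, v);
    }

    public Object get(Object key) {
        return backing.get(key);
    }

    // Helpers using plain Java serialization as a stand-in for Binary.
    static byte[] toBytes(Object o) {
        try (ByteArrayOutputStream bos = new ByteArrayOutputStream();
             ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(o);
            oos.flush();
            return bos.toByteArray();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    static Object fromBytes(byte[] b) {
        try (ObjectInputStream ois =
                 new ObjectInputStream(new ByteArrayInputStream(b))) {
            return ois.readObject();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        } catch (ClassNotFoundException e) {
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) {
        EagerDeserializingMap map = new EagerDeserializingMap();
        map.put("greeting", toBytes("hello")); // update arrives "in binary form"
        System.out.println(map.get("greeting")); // already a String: hello
    }
}
```

The trade-off is exactly as described: every incoming update is deserialized whether or not it is ever read, which moves latency from the read path to the update path.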