Well I think the replicated cache gives you (near) zero latency on get operations, as all the data can be retrieved locally (from RAM).
Note that in the case of a replicated cache all the data is copied to every node in the cluster, so puts are not efficient to say the least:
each put must be propagated to every member, and the cost grows with the size of your cluster.
With a distributed cache the data is at most one network hop away. Coherence partitions the data in such a way that a get
operation needs at most one network hop, which is of course slower than the replicated case.
Note that in the case of replication all the data is present on all nodes. This also puts you at a disadvantage in that you can store less
data than in the distributed case, where the whole data set is spread across the whole cluster: think garbage collection times,
which are a function of the live data on the heap.
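To make the trade-off concrete, here is a minimal sketch (plain Java, not the Coherence API) of the rough message counts per operation; the 10-node cluster and single backup copy are illustrative assumptions, not figures from this thread:

```java
// Sketch: rough per-operation message counts for replicated vs. partitioned
// caches, assuming a cluster of N storage nodes and one backup per partition.
public class CacheCostSketch {

    // Replicated cache: a put must reach every other node; a get is local.
    static int replicatedPutMessages(int clusterSize) { return clusterSize - 1; }
    static int replicatedGetHops() { return 0; }

    // Distributed (partitioned) cache: a put goes to the partition owner
    // plus its backups; a get needs at most one hop (zero if you own the key).
    static int distributedPutMessages(int backupCount) { return 1 + backupCount; }
    static int distributedGetHopsWorstCase() { return 1; }

    public static void main(String[] args) {
        int n = 10; // hypothetical 10-node cluster
        System.out.println("replicated put: " + replicatedPutMessages(n) + " messages");
        System.out.println("distributed put: " + distributedPutMessages(1) + " messages");
    }
}
```

So the replicated scheme's put cost scales with cluster size, while the partitioned scheme's put cost is fixed by the backup count.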
You can also use a near cache, which consists of a small local cache and a distributed backing cache. In this manner you get the best of both.
Some more information is provided here: http://middlewaremagic.com/weblogic/?p=7226 and the references therein.
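The near-cache idea can be sketched in plain Java (this is a conceptual illustration, not the Coherence NearCache API, and it omits the event-based invalidation a real near cache does):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: a small local front map backed by a larger "distributed" back map,
// with read-through on a front-cache miss.
public class NearCacheSketch<K, V> {
    private final Map<K, V> front = new HashMap<>(); // fast local subset
    private final Map<K, V> back;                    // full data set

    public NearCacheSketch(Map<K, V> back) { this.back = back; }

    public V get(K key) {
        // Front hit: zero network hops. Miss: fetch from back, cache locally.
        return front.computeIfAbsent(key, back::get);
    }

    public static void main(String[] args) {
        NearCacheSketch<String, Integer> nc = new NearCacheSketch<>(Map.of("a", 1));
        System.out.println(nc.get("a")); // first call fetches from back
        System.out.println(nc.get("a")); // second call is served locally
    }
}
```

Repeated reads of hot keys are then local, while the back cache still holds the whole data set, which is the "best of both" the post describes.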
I guess our situation is perhaps a little different from some of the other use cases out there: we are after extreme low latency, and so we need our get() operations to do zero network hops in all cases. This is why we have traditionally chosen the Replicated Scheme.
In terms of zero-latency reads, I'm aware that a get() operation on a Replicated scheme will lead to some overhead, due to leases and serialization:
Performance - Local vs. Replicated Cache
Re: The effects of "leases" on the read-performance of Replicated Caches
The second link (Re: The effects of "leases" on the read-performance of Replicated Caches) looks like it helped you out.
Also, if you are looking at (near) zero latency when garbage collections occur, you might want to have a look at the Zing JVM.
Latency due to garbage collection (i.e., introduced pause times) could break your (extreme) low-latency demand.
You could try the CQC with the always filter:

NamedCache cache = CacheFactory.getCache("somecache");
ContinuousQueryCache localCache = new ContinuousQueryCache(cache, AlwaysFilter.INSTANCE);

The preceding code will result in a locally materialized view of the cache data that satisfies the specified filter. By default, both keys and values will be cached locally.
If you want to cache only keys and retrieve values from the back cache as needed (which might be the best option if the values are large and accessed infrequently,
or if you only care about having an up-to-date keyset locally), you can pass false as the third argument to the CQC constructor.
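The keys-only mode can be illustrated with a plain-Java sketch (again a conceptual illustration, not the CQC itself; a real CQC also keeps the keyset current via event listeners rather than a one-time snapshot):

```java
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Sketch: only the keyset is materialized locally; every value read
// goes to the back cache, as with a CQC constructed with cacheValues=false.
public class KeysOnlyViewSketch<K, V> {
    private final Set<K> keys = new HashSet<>(); // locally materialized keyset
    private final Map<K, V> back;                // holds the actual values

    public KeysOnlyViewSketch(Map<K, V> back) {
        this.back = back;
        keys.addAll(back.keySet()); // initial snapshot of the keys
    }

    public V get(K key) {
        // Value is not held locally: a known key still costs a back-cache read.
        return keys.contains(key) ? back.get(key) : null;
    }

    public Set<K> localKeys() { return keys; }
}
```

This trades read latency on values for a much smaller local footprint, which matches the "large, infrequently accessed values" case above.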
To get data from the CQC you can use
Iterator<Map.Entry<Integer, Klant>> data = localCache.entrySet().iterator();
"The Continuous Query Cache (CQC) is conceptually very similar to a near cache.
For one, it also has a zero-latency front cache that holds a subset of the data, and a
slower but larger back cache, typically a partitioned cache, that holds all the data.
Second, just like the near cache, it registers a listener with a back cache and updates
its front cache based on the event notifications it receives."
So yes, there is more space required.