Personally I would not do what you are suggesting unless we are talking about a web application with only a few users.
A CQC holds deserialized copies of the underlying data, so the more CQCs you have, the bigger your heap requirements. A CQC also registers a couple of MapListeners with the underlying caches, so the more CQCs you have, the more listeners and the more event processing you will have, especially if the underlying data changes frequently. Presumably you have also thought about how to clean up orphaned CQCs where the client goes away, as HTTP is connectionless.
From your description it sounds like you are looking at something to push changes to the user if the data they are looking at changes. I have seen this done before but instead of a CQC or listener per user the web application just had a single listener to the cache. The web app then kept track of what data a particular user was looking at so when it received an event from the cache it could push that to all the relevant users.
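The single-listener approach described above could be sketched in plain Java like this. Everything here is hypothetical (the class and method names are illustrative, not Coherence API); the one piece of real Coherence wiring, registering a single MapListener via `cache.addMapListener(...)`, is only indicated in comments:

```java
import java.util.*;
import java.util.concurrent.*;

// Sketch: one shared listener for the whole cache, plus a registry that tracks
// which users are looking at which keys. The single MapListener registered with
// the cache (via cache.addMapListener(listener)) would call usersFor(event.getKey())
// to decide which users to push the change to.
public class ViewRegistry {
    // cache key -> set of user ids currently viewing that key
    private final ConcurrentMap<Object, Set<String>> keyToUsers = new ConcurrentHashMap<>();

    public void userViewing(String userId, Object key) {
        keyToUsers.computeIfAbsent(key, k -> ConcurrentHashMap.newKeySet()).add(userId);
    }

    public void userStoppedViewing(String userId, Object key) {
        keyToUsers.computeIfPresent(key, (k, users) -> {
            users.remove(userId);
            return users.isEmpty() ? null : users;
        });
    }

    // Which users should be pushed an event for this key?
    public Set<String> usersFor(Object key) {
        return keyToUsers.getOrDefault(key, Collections.emptySet());
    }
}
```

The point of the design is that event volume into the web app stays constant (one listener) while fan-out to users is a cheap in-memory lookup.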
Thanks for your reply. The problem with a MapListener listening to all the data is that there is a significant amount of it. This is what we were doing originally, but the application could not keep up: the Extend client could not process messages quickly enough, due to a slow connection between regional environments that we cannot do anything about.
Perhaps an alternative would be a map listener (or CQC) that was actively managed based on what the users were looking at (but I think this would introduce a bottleneck?).
-Edit- To make myself clearer: what I mean is a single CQC (or a pool of CQCs) that is released and reinitialized based on the cumulative set of items being viewed by multiple users.
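That "cumulative set" idea could be sketched like this, in plain Java with illustrative names (not Coherence API). The registry keeps the union of keys all users are viewing and reports whether the union actually changed; only then would you tear down and re-create the shared CQC, e.g. with a filter such as `new InFilter(new KeyExtractor(), union)`:

```java
import java.util.*;

// Hypothetical sketch of one shared CQC driven by the cumulative view set.
public class CumulativeViewSet {
    private final Map<String, Set<Object>> userToKeys = new HashMap<>();
    private Set<Object> currentUnion = new HashSet<>();

    // Record what one user is now viewing. Returns true when the overall
    // union changed, i.e. the shared CQC needs rebuilding with a new filter.
    public synchronized boolean updateUser(String userId, Set<Object> viewedKeys) {
        if (viewedKeys.isEmpty()) {
            userToKeys.remove(userId);
        } else {
            userToKeys.put(userId, new HashSet<>(viewedKeys));
        }
        Set<Object> union = new HashSet<>();
        for (Set<Object> keys : userToKeys.values()) {
            union.addAll(keys);
        }
        if (union.equals(currentUnion)) {
            return false; // no rebuild needed
        }
        currentUnion = union;
        return true;
    }

    public synchronized Set<Object> union() {
        return new HashSet<>(currentUnion);
    }
}
```

Returning a "changed?" flag is what avoids the bottleneck concern: most view changes overlap with what other users are already watching, so the expensive CQC rebuild only happens when the union genuinely moves.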
The user base is likely to be less than 100, and if each cache has a maximum of 50 items, from what you say it would probably be sufficient? The data is fairly small and is likely to be distinct between each user (if two caches have the same data, is it two different instances of the deserialized object?)
Edited by: 991865 on 27-Mar-2013 07:30
So 100 CQCs is probably not too excessive depending on the configuration of the process instantiating the CQCs and the cluster size etc.
Each CQC will hold its own set of deserialized keys and values, so yes they are distinct objects, although a CQC of 50 entries would not be very big.
One query I have - you mention that this is a Web Application, but you also mention an Extend client. Is your Web App an Extend client of the main cluster? Is there a reason you did this? Most people would make a Web App a storage-disabled cluster member so it would perform a bit better. Providing the Web App sits on a server that is very close in network terms to the cluster (i.e. on the same switch), I would make it part of the cluster - or is the Web App the thing that is in the "regional environment"?
If you are running CQCs over Extend, there used to be some issues with this if the Extend connection was lost. AFAIK this is supposed to be fixed in later patches, so I would get 18.104.22.168 and make sure you test that the Web App continues to work and properly fails over if you kill its Extend connection. When the CQC fails over it will reinitialize all its data, so you will need to cope with that if you are pushing changes based on the CQC.
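One way to cope with that reinitialization, sketched in plain Java with hypothetical names: keep a snapshot of the CQC's contents, and after a fail-over diff the old snapshot against the freshly reloaded data, so you re-push only real differences to users instead of the entire data set:

```java
import java.util.*;

// Hypothetical sketch: diff a pre-failover snapshot against the CQC's
// reloaded contents so only genuine changes are pushed to users.
public class ResyncDiff {
    public static class Changes {
        public final Map<Object, Object> addedOrUpdated = new HashMap<>();
        public final Set<Object> removed = new HashSet<>();
    }

    public static Changes diff(Map<Object, Object> before, Map<Object, Object> after) {
        Changes c = new Changes();
        // New or changed entries
        for (Map.Entry<Object, Object> e : after.entrySet()) {
            if (!Objects.equals(before.get(e.getKey()), e.getValue())) {
                c.addedOrUpdated.put(e.getKey(), e.getValue());
            }
        }
        // Entries that disappeared while the connection was down
        for (Object key : before.keySet()) {
            if (!after.containsKey(key)) {
                c.removed.add(key);
            }
        }
        return c;
    }
}
```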
To clarify, the webapp is an extend client of the main cluster. This was done "for security reasons" i.e. the initiator of a connection must be from the DMZ not to the DMZ (or something like that...large organisation, no choice). But yes, the webapp is hosted in a regional environment.
We are using 22.214.171.124, but I will check as you suggested.