Coherence's partitioned cache is designed for linear scalability, and it does this quite well. I don't see any reason for performance degradation as data size increases, provided you have enough cores and memory to process the requests and manage the data.
There are some details about sizing in the Administrator's Guide here: http://docs.oracle.com/cd/E24290_01/coh.371/e22838/deploy_checklist.htm#CHDDDHBJ
As a general rule, you can start with 1/3 of your total heap as a rough estimate of the amount of data you can hold. So if your cluster has 100 storage-enabled JVMs of 2GB each, you have a total of 200GB of heap and can store about 66GB of data.
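The arithmetic behind that rule of thumb can be sketched as a small helper (the class and method names here are just for illustration, not a Coherence API):

```java
// Rough cache-capacity estimate using the 1/3-of-total-heap rule of thumb.
public class SizingEstimate {

    /** Approximate bytes of cache data a cluster can hold. */
    static long usableDataBytes(int storageNodes, long heapBytesPerNode) {
        long totalHeap = storageNodes * heapBytesPerNode;
        // ~1/3 of total heap is data; the rest covers backups, indexes,
        // and working headroom for in-cluster processing.
        return totalHeap / 3;
    }

    public static void main(String[] args) {
        long gb = 1024L * 1024 * 1024;
        // 100 storage-enabled JVMs with 2GB heap each.
        long capacity = usableDataBytes(100, 2 * gb);
        System.out.println(capacity / gb + " GB"); // prints "66 GB"
    }
}
```

Treat the result as a starting point only; the factors below can push the usable fraction well under 1/3.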
Things that can affect this are how many indexes you have and how much in-cluster processing you do: a lot of indexes obviously needs more heap, and a lot of in-cluster processing can require more spare heap to run those tasks.
Coherence also has various off-heap storage mechanisms that allow you to scale to more data. Some of these are not very performant, but we have been using the elastic data (flash scheme) functionality in 3.7 for a while now and it appears to work well. We used it to roughly double the amount of data we hold in some of our clusters, and we are about to start testing it with much larger amounts of data.
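For reference, enabling elastic data in 3.7 is mostly a cache-configuration change. A minimal sketch might look like the fragment below (the cache and scheme names are made up, and I've omitted the journal tuning elements; check the cache configuration reference for the full set):

```xml
<caching-scheme-mapping>
  <cache-mapping>
    <cache-name>big-data-*</cache-name>
    <scheme-name>elastic-distributed</scheme-name>
  </cache-mapping>
</caching-scheme-mapping>

<caching-schemes>
  <distributed-scheme>
    <scheme-name>elastic-distributed</scheme-name>
    <backing-map-scheme>
      <!-- Elastic data: keys stay on heap, values journal to flash (SSD). -->
      <flashjournal-scheme/>
    </backing-map-scheme>
    <autostart>true</autostart>
  </distributed-scheme>
</caching-schemes>
```

There is also a ramjournal-scheme variant if you want the journal in off-heap RAM rather than on flash.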