3 Replies Latest reply on Feb 20, 2013 6:23 AM by sdv0967

    Pre-Loading data from database on server start

      Hi All,
      I am using Oracle Coherence 3.7.1.
      What I want is: whenever I start my DefaultCacheServer, data from the database should be loaded into the cache automatically.

      Can someone help me with the configuration/implementation for this?

      On this link: http://docs.oracle.com/cd/E18686_01/coh.37/e18692/cohjdev.htm#BABBEJAC
      it is mentioned that with DBCacheStore you can load data into the cache from the database,
      but that only loads data from the database for keys that are requested and not already present in the cache.
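      To illustrate the read-through behaviour described above, here is a toy sketch (this is not the Coherence API; the class and method names are mine) showing why a plain CacheStore only hits the database for keys that are missing from the cache:

```java
import java.util.Collection;
import java.util.HashMap;
import java.util.Map;

// Toy read-through sketch: getAll only consults the backing store for
// keys that are NOT already cached, which is why a CacheStore alone does
// not pre-load anything at startup.
public class ReadThroughDemo {
    static Map<String, String> cache = new HashMap<>();
    static Map<String, String> store = new HashMap<>();  // stands in for the database
    static int storeHits = 0;

    static Map<String, String> getAll(Collection<String> keys) {
        Map<String, String> result = new HashMap<>();
        for (String k : keys) {
            String v = cache.get(k);
            if (v == null) {          // cache miss: fall through to the store
                v = store.get(k);
                storeHits++;
                cache.put(k, v);      // populate the cache on the way back
            }
            result.put(k, v);
        }
        return result;
    }
}
```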
        • 1. Re: Pre-Loading data from database on server start

          Although there are a few ways to try to get Coherence to load your data "automatically", in all the years I have been building Coherence systems the most reliable way to pre-load a cluster has always been to write a separate process that you run after the cluster has started. This process just connects to the DB, reads the data out, and bulk loads it into your caches.
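          A minimal sketch of such a bulk-load step, assuming the usual batching approach (in a real client the target would be a NamedCache obtained via CacheFactory.getCache() and the source a JDBC ResultSet; here both are plain Maps so the batching logic stands on its own):

```java
import java.util.HashMap;
import java.util.Map;

// Bulk-loader sketch: read key/value pairs from a source and push them to
// the cache in batches via putAll, so each batch is one round trip instead
// of one round trip per entry. BATCH_SIZE is illustrative.
public class BulkLoader {
    static final int BATCH_SIZE = 1000;

    public static int load(Map<Object, Object> source, Map<Object, Object> cache) {
        Map<Object, Object> batch = new HashMap<>();
        int batches = 0;
        for (Map.Entry<Object, Object> e : source.entrySet()) {
            batch.put(e.getKey(), e.getValue());
            if (batch.size() == BATCH_SIZE) {
                cache.putAll(batch);     // flush a full batch
                batch.clear();
                batches++;
            }
        }
        if (!batch.isEmpty()) {          // flush the final partial batch
            cache.putAll(batch);
            batches++;
        }
        return batches;
    }
}
```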

          • 2. Re: Pre-Loading data from database on server start
            Hi Jonathan,
            Thank you for your reply. I was also thinking about building a new process to bulk load the cache from the database.
            But I am still interested in the other ways to get Coherence to load data "automatically".
            • 3. Re: Pre-Loading data from database on server start

              You can do the following (I used this way in several projects successfully):

              1. Register a member listener for your cache. In my case it is a Spring bean configured in the app context.
              2. Register and configure the listener in the app context:
                  <bean id="distPopulationListener" scope="prototype" class="my.cache.population.CachePopulationListener">
                      <constructor-arg name="clusterSize" value="4"/>
                      <constructor-arg name="invocationServiceName" value="PopulationService"/>
                      <property name="populators">
                          <list>
                              <ref bean="esPopulator"/>
                              <ref bean="sePopulator"/>
                              <ref bean="ushPopulator"/>
                          </list>
                      </property>
                  </bean>
              3. And the Populator/CacheStore beans assigned to it:
                  <bean id="sePopulator" class="my.cache.population.DistributedCachePopulator">
                      <constructor-arg name="cacheName" value="my-cache-name"/>
                      <property name="batchSize" value="25000"/>
                      <property name="cacheStore" ref="seCacheStore"/>
                  </bean>
                  <bean id="seCacheStore" class="my.cache.store.SECacheStore"/>
              4. Develop the CachePopulationListener class. It must implement the MemberListener interface and react properly to memberJoined/memberLeft events. In my case, when the number of nodes reaches the configured clusterSize, the listener starts the Populators from its list. The Populators are Invocable and run on separate InvocationService threads.
                   public void memberJoined(MemberEvent event) {
                        // when the number of members has reached the configured clusterSize
                        if (event.getService().getInfo().getServiceMembers().size() >= clusterSize) {
                             Member local = event.getService().getCluster().getLocalMember();
                             Set<Member> localSet = Collections.singleton(local);
                             for (AbstractInvocable proc : populators) {
                                  popService.execute(proc, localSet, null);
                             }
                        }
                   }
              5. Develop the DistributedCachePopulator class. It must implement Invocable. In its run method it takes the keys to populate from its underlying CacheStore and loads only those that belong to the current cache member. I do this in batches:
                   public void populate(NamedCache cache) {
                        List allKeys = getStore().getDataKeys();
                        if (allKeys != null && !allKeys.isEmpty()) {
                             CacheService svc = cache.getCacheService();
                             Member local = svc.getCluster().getLocalMember();
                             PartitionedService psvc = (PartitionedService) svc;
                             List keys = new ArrayList(getBatchSize());
                             for (Object key : allKeys) {
                                  if (psvc.getKeyOwner(key) == local) {
                                       keys.add(key);
                                       if (keys.size() == getBatchSize()) {
                                            cache.getAll(keys); // read-through loads the batch from the store
                                            keys.clear();
                                       }
                                  }
                             }
                             if (!keys.isEmpty()) {
                                  cache.getAll(keys); // load the final partial batch
                             }
                        }
                   }
              This scheme can be used not only on initial cluster startup, but also when too many nodes leave the cluster and the data has to be re-populated automatically.
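              The "start populating once clusterSize members have joined" gate from step 4 can be sketched independently of the Coherence API (the class and method names here are illustrative, not from the listener above):

```java
// Gate that fires the population step exactly once, when cluster membership
// first reaches the configured size. In the real listener the member count
// would come from the MemberEvent's service, and memberLeft could call
// reset() to allow automatic re-population.
public class PopulationGate {
    private final int clusterSize;
    private boolean fired = false;

    public PopulationGate(int clusterSize) {
        this.clusterSize = clusterSize;
    }

    // called from memberJoined; returns true when population should start
    public boolean shouldPopulate(int currentMembers) {
        if (!fired && currentMembers >= clusterSize) {
            fired = true;
            return true;
        }
        return false;
    }

    // called from memberLeft; re-arms the gate so the data can be re-populated
    public void reset() {
        fired = false;
    }
}
```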

              HTH, Denis.