This discussion is archived
8 Replies. Latest reply: May 23, 2013 12:38 PM by user738616

Cache Database failure

rshanker Newbie
Hi all,

I'm running a cache-plus-database configuration. Here is the issue being observed:

1. If I run the default cache server and the cluster program (in separate JVMs), storing to the DB fails with a connection exception.
2. Whereas if I run only the cluster program, without the cache server, I don't face any issues and the data is stored properly. I'm unsure what is going wrong here.

Here is the configuration snippets:

cache-config.xml
<?xml version="1.0"?>

<cache-config xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
              xmlns="http://xmlns.oracle.com/coherence/coherence-cache-config"
              xsi:schemaLocation="http://xmlns.oracle.com/coherence/coherence-cache-config coherence-cache-config.xsd">
  <caching-scheme-mapping>
    <cache-mapping>
      <cache-name>dbexample</cache-name>
      <scheme-name>distributed</scheme-name>
    </cache-mapping>
  </caching-scheme-mapping>

  <caching-schemes>
    <distributed-scheme>
      <scheme-name>distributed</scheme-name>
      <service-name>DistributedCache</service-name>
      <thread-count>4</thread-count>
      <request-timeout>60s</request-timeout>
      <backing-map-scheme>
        <read-write-backing-map-scheme>
          <internal-cache-scheme>
            <local-scheme>
              <scheme-name>SampleMemoryScheme</scheme-name>
            </local-scheme>
          </internal-cache-scheme>
          <cachestore-scheme>
            <class-scheme>
              <class-name>com.coherence.KnDBCacheStore</class-name>
            </class-scheme>
          </cachestore-scheme>
        </read-write-backing-map-scheme>
      </backing-map-scheme>
      <autostart>true</autostart>
    </distributed-scheme>

    <local-scheme>
      <scheme-name>LocalSizeLimited</scheme-name>
      <eviction-policy>LRU</eviction-policy>
      <high-units>1000</high-units>
      <expiry-delay>1h</expiry-delay>
    </local-scheme>
  </caching-schemes>
</cache-config>

==========================

tangosol-coherence-override.xml

<?xml version='1.0'?>

<coherence xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
           xmlns="http://xmlns.oracle.com/coherence/coherence-operational-config"
           xsi:schemaLocation="http://xmlns.oracle.com/coherence/coherence-operational-config coherence-operational-config.xsd">
  <cluster-config>
    <member-identity>
      <cluster-name system-property="tangosol.coherence.cluster">kn_test</cluster-name>
    </member-identity>
    <unicast-listener>
      <well-known-addresses>
        <socket-address id="1">
          <address>192.168.7.3</address>
          <port>8088</port>
        </socket-address>
        <socket-address id="2">
          <address>192.168.7.4</address>
          <port>8088</port>
        </socket-address>
      </well-known-addresses>
    </unicast-listener>
  </cluster-config>

  <configurable-cache-factory-config>
    <init-params>
      <init-param>
        <param-type>java.lang.String</param-type>
        <param-value system-property="tangosol.coherence.cacheconfig">cache-config.xml</param-value>
      </init-param>
    </init-params>
  </configurable-cache-factory-config>
</coherence>


package com.coherence;

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.Collection;
import java.util.Iterator;
import java.util.LinkedList;
import java.util.List;
import java.util.Map;

import com.tangosol.net.cache.CacheStore;
import com.tangosol.util.Base;

public class KnDBCacheStore extends Base implements CacheStore {

     protected Connection conn;
     protected String tableName = "DG.SUBSCRIBERINFO";
     String localIPAddr = "192.168.7.3";
     String localDBDSN = "DG_010031";
     String localDBUid = "dbuser";
     String localDBPwd = "dbuser";
     String localDBPort = "53389";

     private static final String DB_DRIVER = "com.timesten.jdbc.TimesTenDriver";
     String localConnURL = "jdbc:timesten:client:TTC_Server=" + localIPAddr
               + ";TTC_Server_DSN=" + localDBDSN + ";UID=" + localDBUid + ";PWD="
               + localDBPwd + ";TCP_PORT=" + localDBPort + ";TTC_Timeout=180";

     protected void configureConn() {
          try {
               Class.forName(DB_DRIVER);
               System.out.println("DB Connection URL :" + localConnURL);
               conn = DriverManager.getConnection(localConnURL);
          } catch (Exception e) {
               e.printStackTrace();
          }
     }

     /**
     * Obtain the name of the table this CacheStore is persisting to.
     *
     * @return the name of the table this CacheStore is persisting to
     */
     public String getTableName() {
          return tableName;
     }

     /**
     * Obtain the connection being used to connect to the database.
     *
     * @return the connection used to connect to the database
     */
     public Connection getConnection() {
          return conn;
     }

     @Override
     public Object load(Object key) {
          Object value = null;
          Connection conn = getConnection();
          String sqlQry = "SELECT SUBSCRNAME FROM " + getTableName()
                    + " WHERE MDN=?";
          try {
               PreparedStatement pStmt = conn.prepareStatement(sqlQry);
               pStmt.setString(1, String.valueOf(key));
               ResultSet rs = pStmt.executeQuery();
               if (rs.next()) {
                    value = rs.getString("SUBSCRNAME");
               }
          } catch (Exception e) {
               e.printStackTrace();
          }
          return value;
     }

     @Override
     public Map loadAll(Collection arg0) {
          throw new UnsupportedOperationException();
     }

     @Override
     public void erase(Object key) {
          Connection con = getConnection();
          String sSQL = "DELETE FROM " + getTableName() + " WHERE MDN=?";
          try {
               PreparedStatement stmt = con.prepareStatement(sSQL);

               stmt.setString(1, String.valueOf(key));
               stmt.executeUpdate();
               stmt.close();
          } catch (SQLException e) {
               throw ensureRuntimeException(e, "Erase failed: key=" + key);
          }

     }

     @Override
     public void eraseAll(Collection arg0) {
          throw new UnsupportedOperationException();

     }

     @Override
     public void store(Object key, Object value) {
          Connection con = getConnection();
          String sTable = getTableName();
          String sSQL;

          // the following is very inefficient; it is recommended to use
          // DB-specific functionality instead, e.g. REPLACE for MySQL or MERGE for Oracle
          if (load(key) != null) {
               // key exists - update
               sSQL = "UPDATE " + sTable + " SET SUBSCRNAME = ? where MDN = ?";
          } else {
               // new key - insert
               sSQL = "INSERT INTO " + sTable + " (SUBSCRNAME, MDN) VALUES (?,?)";
          }
          try {
               PreparedStatement stmt = con.prepareStatement(sSQL);
               int i = 0;
               stmt.setString(++i, String.valueOf(value));
               stmt.setString(++i, String.valueOf(key));
               stmt.executeUpdate();
               stmt.close();
          } catch (SQLException e) {
               throw ensureRuntimeException(e, "Store failed: key=" + key);
          }

     }

     @Override
     public void storeAll(Map arg0) {
          throw new UnsupportedOperationException();

     }

     /**
     * Iterate all keys in the underlying store.
     *
     * @return a read-only iterator of the keys in the underlying store
     */
     public Iterator<Object> keys() {
          Connection con = getConnection();
          String sSQL = "SELECT MDN FROM " + getTableName();
          List<Object> list = new LinkedList<Object>();

          try {
               PreparedStatement stmt = con.prepareStatement(sSQL);
               ResultSet rslt = stmt.executeQuery();
               while (rslt.next()) {
                    Object oKey = rslt.getString(1);
                    list.add(oKey);
               }
               stmt.close();
          } catch (SQLException e) {
               throw ensureRuntimeException(e, "Iterator failed");
          }

          return list.iterator();
     }

}
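
The inline comment in store() mentions using DB-specific upsert functionality instead of the SELECT-then-INSERT/UPDATE pattern. Purely as an illustration, a method along the following lines could replace that logic; the MERGE/DUAL syntax shown is Oracle-flavoured, and whether TimesTen accepts it exactly as written should be verified against the TimesTen SQL reference. The method name storeWithMerge is made up for this sketch.

     // Hypothetical single-statement upsert for KnDBCacheStore (not part of the posted class).
     public void storeWithMerge(Object key, Object value) {
          String sSQL = "MERGE INTO " + getTableName() + " t "
                    + "USING (SELECT ? AS MDN, ? AS SUBSCRNAME FROM DUAL) s "
                    + "ON (t.MDN = s.MDN) "
                    + "WHEN MATCHED THEN UPDATE SET t.SUBSCRNAME = s.SUBSCRNAME "
                    + "WHEN NOT MATCHED THEN INSERT (MDN, SUBSCRNAME) VALUES (s.MDN, s.SUBSCRNAME)";
          try {
               PreparedStatement stmt = getConnection().prepareStatement(sSQL);
               stmt.setString(1, String.valueOf(key));    // MDN (the cache key)
               stmt.setString(2, String.valueOf(value));  // SUBSCRNAME (the cache value)
               stmt.executeUpdate();
               stmt.close();
          } catch (SQLException e) {
               throw ensureRuntimeException(e, "Store failed: key=" + key);
          }
     }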
  • 1. Re: Cache Database failure
    user639604 Journeyer
    I don't see the configureConn() method being called anywhere, so how could you possibly obtain a valid connection instance for the load/store operations?

    Besides, when you mention "Default cache server and cluster (separate JVM)", that's not quite right.

    There is no such thing as one JVM being "the cluster".

    * A Coherence cluster is a group of nodes (you could even have a one-node cluster).
    * Each started DefaultCacheServer instance counts as one node.
    * One JVM normally starts one DefaultCacheServer and thus counts as one node (you could start multiple nodes within the same JVM if you wanted, but you are better off not doing that for now).

    It's like this.

    1. You start JVM A, the first node. A one-node cluster is formed.
    2. You start JVM B and it joins the cluster; the cluster now contains two nodes.
    3. You shut down JVM A; the cluster becomes a one-node cluster again.
    4. You shut down JVM B; the cluster is gone. (A minimal sketch of this is below.)
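
    To make the node-counting concrete, here is a minimal sketch (not from the original post) of what a client JVM does to become one more member of the cluster; the storage node is simply a separate JVM running com.tangosol.net.DefaultCacheServer, as in the launch command later in this thread. The class name JoinClusterExample is just a placeholder.

    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.Cluster;

    public class JoinClusterExample {
        public static void main(String[] args) {
            // Joining (or forming) the cluster makes this JVM one more node.
            Cluster cluster = CacheFactory.ensureCluster();
            System.out.println("Members in cluster: " + cluster.getMemberSet().size());

            // Leaving the cluster; if this was the last node, the cluster is gone.
            CacheFactory.shutdown();
        }
    }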
  • 2. Re: Cache Database failure
    rshanker Newbie
    Oops, my mistake: the snippet below was missing from the Java program above.

    public KnDBCacheStore() {
        System.out.println("Invoking DB Cache Store Constructor");
        configureConn();
        System.out.println("Acquired Connection " + conn);
    }


    As per the above, the cluster that is started by the default cache server is not working as expected: I'm observing that some entries are getting dropped while others make it into the DB successfully.
    Do we need any extra configuration while running the default cache server against a database?

    Whereas the cluster that is started without the default cache server, just as a Java program, is working fine.

    ~Ravi Shanker
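
    (An editorial aside, not code from the thread: since KnDBCacheStore opens a single shared connection in its constructor, one defensive variant of getConnection() would re-open the connection if it was never established or has been closed. A minimal sketch, assuming the fields and methods shown in the class above:)

    // Assumed defensive variant of getConnection() for KnDBCacheStore.
    public Connection getConnection() {
        try {
            if (conn == null || conn.isClosed()) {
                configureConn(); // re-run the DriverManager.getConnection(...) logic
            }
        } catch (SQLException e) {
            throw ensureRuntimeException(e, "Could not validate DB connection");
        }
        return conn;
    }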
  • 3. Re: Cache Database failure
    user639604 Journeyer
    You might want to explain a little bit what you mean by "the cluster that is started without the default cache server".

    As a Coherence novice, you normally should use the DefaultCacheServer class to join or form the cluster. Are you sure that "cluster started without the default cache server" is part of the Coherence cluster?

    What does the main method of that program look like?
  • 4. Re: Cache Database failure
    user738616 Pro
    Hi,

    You need to ensure that the jar containing the compiled KnDBCacheStore class is on the classpath of all of your cache servers.

    Cheers,
    _NJ
  • 5. Re: Cache Database failure
    rshanker Newbie
    I have shut down the default cache server and run the Java class below with the above configuration.

    The following works fine, entering every record into the DB, but the moment I start the cache server it starts failing.


    package com;

    import java.io.*;
    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;
    import com.common.loader.KnInitializer;

    public class HelloWorld {
        public static void main(String[] args) {
            KnInitializer.getInstance(); // initializes the c3p0 connection pool
            Long key = 919886652000L;
            String valConst = "DB_Example";
            String value = "DB_Example1";

            CacheFactory.ensureCluster();
            NamedCache cache = CacheFactory.getCache("dbexample");
            String cacheKey = String.valueOf(key);
            cache.put(cacheKey, value);
            System.out.println("cache Key " + cacheKey + " is " + (String) cache.get(cacheKey));

            for (int i = 0; i <= 1000; i++) {
                key = key + 1;
                value = valConst + key;
                cacheKey = String.valueOf(key);
                cache.put(cacheKey, value);
                try {
                    Thread.sleep(2000);
                    System.out.println("cache Key " + cacheKey + " is " + (String) cache.get(cacheKey));
                    Thread.sleep(3000);
                } catch (Exception e) {
                    e.printStackTrace();
                }
                System.out.println("cache Key " + cacheKey + " is " + (String) cache.get(cacheKey));
            }
            // CacheFactory.shutdown();
        }
    }
  • 6. Re: Cache Database failure
    user738616 Pro
    Hi,

    Please paste the logs from the DefaultCacheServer, and remember that you need to modify the DefaultCacheServer classpath by adding your classes folder to it.

    HTH

    Cheers,
    _NJ
  • 7. Re: Cache Database failure
    rshanker Newbie
    Hi NJ,

    The default cache server is started with the cache-config.xml and tangosol-coherence-override.xml files (as provided above).
    The failure I'm observing is that, in the cache server instance, no connection is available to the data store.

    Basically, I'm setting a few system property values that are required for the initialization of the c3p0 connection pool manager, and these values are read by the defined cache store. But at run time these values do not seem to be getting set, which I guess is what is leading to the failures.

    /usr/java/jdk1.7.0_09/bin/java -DactiveRelDir="/DG/activeRelease/dat" -DactiveReleaseDir="/DG/activeRelease/dat" -DLog4jPropsFile="/home/RAVI/Coherence/log.properties" -DWatchDelay="600000" com.tangosol.net.DefaultCacheServer

    The defined cache store should read these parameters when the application launches, but it's not working.

    Please correct me if I have done anything wrong.

    ~Ravi Shanker
  • 8. Re: Cache Database failure
    user738616 Pro
    Hi Ravi,

    Not only the parameters, but also the actual CacheStore implementation and the c3p0 connection pool manager must be placed on the classpath of your DefaultCacheServer by using "-cp".

    HTH

    Cheers,
    _NJ
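
    For illustration only, a launch command along these lines would cover both the system properties and the classpath; every jar name and path added below (the classes directory, knstore.jar, c3p0.jar, ttjdbc6.jar) is a placeholder, not a value taken from the thread:

    /usr/java/jdk1.7.0_09/bin/java -cp /home/RAVI/Coherence/classes:/home/RAVI/Coherence/lib/coherence.jar:/home/RAVI/Coherence/lib/knstore.jar:/home/RAVI/Coherence/lib/c3p0.jar:/home/RAVI/Coherence/lib/ttjdbc6.jar -DactiveRelDir="/DG/activeRelease/dat" -DactiveReleaseDir="/DG/activeRelease/dat" -DLog4jPropsFile="/home/RAVI/Coherence/log.properties" -DWatchDelay="600000" -Dtangosol.coherence.cacheconfig=cache-config.xml com.tangosol.net.DefaultCacheServer

    The key point is that the DefaultCacheServer JVM, not just the client JVM, must be able to load KnDBCacheStore, the TimesTen JDBC driver, and c3p0, because the cache store runs on the storage-enabled members.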
