0 Replies. Latest reply: Mar 25, 2014 2:51 AM by michael gerber2

    Extend client startup much slower using F5 BigIP connection load balancing

    michael gerber2

      Hi,

       

      Extend client applications take much longer to start up when they are pointed at a BigIP virtual host for the proxy service than when they connect directly to the proxies. Once a client has started successfully, latency and throughput are fine, essentially identical to a direct connection. Only startup is much slower: an app that normally takes 20s to get from start to ready takes ~70s, and another that normally takes 1min takes over 3min.

       

      The clients use near caches and CQCs.
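
      For context, the near caches presumably wrap the remote scheme along these lines (an illustrative sketch only; the scheme names and high-units value here are assumptions, not taken from our actual config):

                      <near-scheme>
                              <scheme-name>near-distributed</scheme-name>
                              <front-scheme>
                                      <local-scheme>
                                              <!-- front-tier size limit; value is illustrative -->
                                              <high-units>10000</high-units>
                                      </local-scheme>
                              </front-scheme>
                              <back-scheme>
                                      <remote-cache-scheme>
                                              <!-- refers to the remote scheme shown below -->
                                              <scheme-ref>distributed</scheme-ref>
                                      </remote-cache-scheme>
                              </back-scheme>
                      </near-scheme>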

      The test setup has two identical client instances. The control instance points directly to a single proxy node:

       

                      <remote-cache-scheme>

                              <scheme-name>distributed</scheme-name>

                              <service-name>ExtendTcpCacheService</service-name>

                              <initiator-config>

                                      <tcp-initiator>

                                              <remote-addresses>

                                                      <socket-address>

                                                              <address>mycohserver</address>

                                                              <port>9100</port>

                                                      </socket-address>

                                              </remote-addresses>

                                              <connect-timeout>10s</connect-timeout>

                                      </tcp-initiator>

                                      <outgoing-message-handler>

                                              <heartbeat-interval>3s</heartbeat-interval>

                                              <heartbeat-timeout>10s</heartbeat-timeout>

                                              <request-timeout>20s</request-timeout>

                                      </outgoing-message-handler>

                              </initiator-config>

                      </remote-cache-scheme>

       

      The other instance points at a virtual host on the BigIP on port 9099; that virtual host has a single pool member, which is on the same machine as above but exposing port 9099. The client config is as above but with:

       

                                                              <address>myvirtualhost</address>

                                                              <port>9099</port>

       

      replacing the address shown.

      Swapping the remote-addresses of the two client instances shifts the problem from one client to the other, i.e. it does not appear to be an environmental difference.

       

      Both proxy nodes are in the same cluster. The cluster is configured for client-side load balancing (so direct connections to a proxy are never rebalanced).
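
      For reference, client-side load balancing is set on the cluster-side proxy scheme roughly as follows (a sketch; the service name, address, and port here are placeholders rather than our exact config):

                      <proxy-scheme>
                              <service-name>ExtendTcpProxyService</service-name>
                              <acceptor-config>
                                      <tcp-acceptor>
                                              <local-address>
                                                      <address>mycohserver</address>
                                                      <port>9100</port>
                                              </local-address>
                                      </tcp-acceptor>
                              </acceptor-config>
                              <!-- clients pick a proxy themselves; no proxy-side redirection -->
                              <load-balancer>client</load-balancer>
                              <autostart>true</autostart>
                      </proxy-scheme>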

      The cluster nodes and the client apps run on virtual servers (VMware).

       

      I can see in the logs that the connection to the cluster is established very early during application startup and takes only milliseconds:

       

      2014-03-24 16:03:56,252 [Logger@1492560408 3.7.1.5] DEBUG Coherence 2014-03-24 16:03:56.251/5.200 Oracle Coherence GE 3.7.1.5 <D5> (thread=localhost-startStop-1, member=n/a): Connecting Socket to myvirtualhost:9099

      2014-03-24 16:03:56,254 [Logger@1492560408 3.7.1.5] INFO  Coherence 2014-03-24 16:03:56.254/5.203 Oracle Coherence GE 3.7.1.5 <Info> (thread=localhost-startStop-1, member=n/a): Connected Socket to myvirtualhost:9099


      Why is it slower?

      Can I do anything about it?