Extend client applications take much longer to start up when they are pointed at a BigIP virtual host fronting the proxy service than when they connect directly to the proxies. Once a client has started successfully, latency and throughput are fine, essentially identical to a direct connection. Only startup is slower: an app that normally takes 20s to go from start to ready takes ~70s, and another that normally takes 1min takes over 3min.
The clients use near caches and CQCs (continuous query caches).
The test setup has two identical client instances. The control instance points directly to a single proxy node:
The other instance points to a virtual host on the BigIP on port 9099; that virtual host has a single pool member, which is the same machine as above, exposing port 9099. The client configuration is as above but with:
replacing the address shown.
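For reference, the relevant part of the client configuration looks roughly like the following. This is only a sketch: the hostnames are hypothetical placeholders, and the element layout follows the standard Coherence cache configuration schema.

```xml
<!-- Sketch of the Extend client's remote-cache-scheme (hostnames hypothetical).
     The control client puts the proxy host here; the test client substitutes
     the BigIP virtual host (myvirtualhost) for the address shown. -->
<remote-cache-scheme>
  <scheme-name>extend-remote</scheme-name>
  <service-name>ExtendTcpCacheService</service-name>
  <initiator-config>
    <tcp-initiator>
      <remote-addresses>
        <socket-address>
          <address>proxyhost.example.com</address>
          <port>9099</port>
        </socket-address>
      </remote-addresses>
    </tcp-initiator>
  </initiator-config>
</remote-cache-scheme>
```

The only difference between the two clients is the `<address>` value; everything else is identical.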
Swapping the remote-addresses of the two client instances shifts the problem from one client to the other, i.e. it does not appear to be an environmental difference.
Both proxy nodes are in the same cluster. The cluster is set to use client side load balancing (so direct connections to a proxy never get balanced).
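For completeness, client-side load balancing is set in the proxy-scheme, roughly as below. Again a sketch, with element names taken from the standard Coherence cache configuration; the service name is a placeholder.

```xml
<!-- Sketch of the proxy-side configuration (service name hypothetical).
     load-balancer = client leaves balancing to the client's address list,
     so a connection made directly to a proxy is never redirected. -->
<proxy-scheme>
  <service-name>ExtendTcpProxyService</service-name>
  <acceptor-config>
    <tcp-acceptor>
      <local-address>
        <address>0.0.0.0</address>
        <port>9099</port>
      </local-address>
    </tcp-acceptor>
  </acceptor-config>
  <load-balancer>client</load-balancer>
  <autostart>true</autostart>
</proxy-scheme>
```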
The cluster nodes and the client apps run on virtual servers (VMware).
I can see in the logs that the connection to the cluster is established very early during application startup and takes only milliseconds:
2014-03-24 16:03:56,252 [Logger@1492560408 18.104.22.168] DEBUG Coherence 2014-03-24 16:03:56.251/5.200 Oracle Coherence GE 22.214.171.124 <D5> (thread=localhost-startStop-1, member=n/a): Connecting Socket to myvirtualhost:9099
2014-03-24 16:03:56,254 [Logger@1492560408 126.96.36.199] INFO Coherence 2014-03-24 16:03:56.254/5.203 Oracle Coherence GE 188.8.131.52 <Info> (thread=localhost-startStop-1, member=n/a): Connected Socket to myvirtualhost:9099
Why is it slower?
Can I do anything about it?