
SailFin CAFE, like other converged application development frameworks, has changed the converged application development paradigm. If, having used the CAFE APIs so far, one thought application development had never been so fast and easy, things just got better with v1 b28. Communications (conversation, conference, imconversation, imconference, ...) created by applications were managed by the framework and presented to the Communication Bean method when an event occurred. This was perfectly fine if the application component was a Communication Bean, because it is called for action only when an event occurs. But if an HTTP servlet wanted to query details about a communication that was created from a Communication Bean, extensive application code was required, including bookkeeping for the communications the application had created. The communication search API in the CommunicationService was created specifically to address such use cases.

These APIs, introduced in v1 b28, allow application components to search communications in a variety of ways, including by communication name, sender and receiver, type of communication, and so on.

For example, to get all the conversations that were created by this application, one would do something along these lines (the search call shown is illustrative; consult the b28 API documentation for the exact method name and signature):

   Collection<Conversation> comms = communicationService.getCommunications(Conversation.class);

The CommunicationService is available to an HTTP servlet and can be obtained from the servlet context (the attribute name shown here is illustrative; consult the API documentation):

   CommunicationService communicationService =
           (CommunicationService) getServletContext().getAttribute("CommunicationService");

 or by injecting the communication service

@Context CommunicationService communicationService;

There is one communication service per application, and this provides isolation between applications; i.e., the search is valid only for communication objects created from that application, whether through a CommunicationBean or through an HTTP servlet.
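The per-application scoping described above can be pictured with a toy model (purely illustrative; this is not the SailFin CAFE implementation): each application effectively gets its own service instance, so a search never sees communications created by another application.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy model of per-application communication search scoping.
public class ScopedSearch {
    // One store per application: appName -> names of communications it created
    private final Map<String, List<String>> byApp = new HashMap<>();

    public void created(String app, String communicationName) {
        byApp.computeIfAbsent(app, a -> new ArrayList<>()).add(communicationName);
    }

    // A search issued from an application only sees that application's communications.
    public List<String> search(String app) {
        return byApp.getOrDefault(app, new ArrayList<>());
    }
}
```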

For more details, please refer to the API documentation, and try it out with b28.





Converged (HTTP/SIP) applications give users the flexibility of creating or accessing information about their communications (calls/conferences/IM, ...) over the web. To make this possible, a typical converged application contains an entry point for all HTTP requests, usually an HTTP servlet. This servlet returns appropriate responses by accessing the corresponding communication (SIP) sessions. Every communication application that is deployed would need one (or more) HTTP servlets in order to support web clients (as shown in Figure 1). But most of the operations required by web clients are quite similar and do not differ much on the server side with respect to the implementation of the HTTP servlet. Common tasks a web client might perform include setting up a call (CallSetup), querying the status of calls, terminating a call, modifying a call, and so on. From a developer's perspective, it is desirable if the converged application development framework exposes these common tasks, rather than requiring an HTTP servlet to be written for them. It also helps if these are exposed as an API that is portable as well as open-standards compliant.



Figure 1


SailFin CAFE solves this problem by exposing these tasks through a REST API. When a CAFE application is deployed (with some information added to the web.xml descriptor), all the regular tasks of creating and managing communications are exposed as REST URIs, and the required resources are provisioned automatically. Build v1-b24 of CAFE comes with REST resource implementations of Call and Conference. By including the following lines in the web.xml deployment descriptor, the REST resources are provisioned and made available for your CAFE application:

<web-app version="2.5" xmlns="http://java.sun.com/xml/ns/javaee" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd">
    <servlet>
        <servlet-name>CAFE REST Service</servlet-name>
        <servlet-class>com.sun.jersey.spi.container.servlet.ServletContainer</servlet-class>
        <init-param>
            <param-name>com.sun.jersey.config.property.packages</param-name>
            <param-value>org.glassfish.cafe.rest.impl</param-value>
        </init-param>
    </servlet>
    <servlet-mapping>
        <servlet-name>CAFE REST Service</servlet-name>
        <!-- The url-pattern is illustrative; use the pattern appropriate for your application -->
        <url-pattern>/*</url-pattern>
    </servlet-mapping>
</web-app>

The REST resources are implemented using the Jersey JSR 311 implementation that is bundled with SailFin (Figure 2). The deployment descriptor above indicates that the CAFE REST resources available under the org.glassfish.cafe.rest.impl package should be made available through Jersey. Once a CAFE application with such a deployment descriptor is deployed, web clients can access the REST URIs for their application.


Figure 2


For example, to create a call from the web client (assuming the SIP UAs of Alice and Bob have registered):


where contextroot is the context root of the deployed application and callid is the call ID for the call; server/client refers to the nature of the API. Currently only server APIs are available, and these are synchronous: the client has to wait for a response on the same connection.

For a complete list of REST URIs and the content schema please see


The WADL is available here


Currently (at the time of writing this article) only synchronous support is available (server API; the client has to wait for the response on the same connection). Keep watching this space; I will update as soon as we have async APIs.



Lots of fixes have gone into SailFin 2.0; some are related to functionality, whereas others improve performance. The changes sometimes required new user-configurable properties in order to extract the optimal performance or desired behavior, depending on the user's deployment. This article explains some of the properties/attributes that were introduced in SailFin 2.0 to address specific issues.

Note: Not all of these properties have been tested and certified, so use them at your own risk. Please refer to the product documentation if you are using a supported version.

Configuration related to DNS failover functionality:

DNS failover is implemented in SailFin as per RFC 3263: if a SIP message cannot be sent to the first host obtained from a DNS lookup, the next host in the list is used to send the message. In the case of UDP, to detect a message delivery failure at the network layer, we rely on receiving an ICMP error response for the host that is not reachable. If for some reason (a firewall, for example) the ICMP error responses do not arrive, then we have to fall back to a mechanism based on Timer B/F expiry (which defaults to 32 seconds) to perform the retry. For the sake of brevity, let us call the former fail-fast and the latter fail-safe.

To implement the fail-fast solution, SailFin maintains a list of hosts for which an ICMP error was received; before sending a UDP message, the destination address is checked against this list to determine whether it is reachable. If it is not, a 503 is returned, which results in a DNS lookup and a new host being picked. An unreachable host lives in this list for a certain duration, after which it is removed and ready to be used again; this duration defaults to 0 seconds (i.e., fail-fast is turned off). To enable fail-fast DNS failover, configure the stale connections timeout in the configuration:

asadmin set server-config.sip-service.property.udpStaleConnectionsTimeout=5

The above command causes unreachable hosts to be removed from the list after 5 seconds. This property is owned by the network layer and is used by it to track failed targets. Other modules in SailFin (like the resolver manager) have their own mechanisms for tracking failed targets, like the one described below.

For deployments where the ICMP responses do not reach the SailFin instance, one has to rely on the fail-safe approach for accomplishing DNS failover. When sending a UDP message times out with Timer B firing, the target is added to a global quarantine list of failed hosts; this ensures that other requests do not use the same failed target. Again, a host is quarantined only for a certain duration.
The quarantine time is configurable and split into two defaults: one for 503s received from the network layer (as described above) and one for 408s from Timer B/F. The reason is that a 408 already involved a lot of retries, and the expectation is that such a situation will last longer.

The following command sets the quarantine timeout for 503s:

   asadmin set server-config.sip-container.property.defaultQuarantineTime=5

The following command sets the quarantine timeout for 408s:

asadmin set server-config.sip-container.property.timeoutBasedQuarantineTime=5
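The quarantine bookkeeping described above can be sketched as a small data structure (an illustration of the idea, not SailFin's actual implementation): a failed target is avoided until its quarantine period, which depends on the failure cause, expires.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative sketch of a quarantine list: failed targets are skipped
// until their quarantine period (depending on the failure cause) expires.
public class QuarantineList {
    private final long quarantineMillisFor503;
    private final long quarantineMillisFor408;
    private final Map<String, Long> expiry = new ConcurrentHashMap<>();

    public QuarantineList(long secondsFor503, long secondsFor408) {
        this.quarantineMillisFor503 = secondsFor503 * 1000;
        this.quarantineMillisFor408 = secondsFor408 * 1000;
    }

    // Record a failed target; 408 (Timer B/F) failures are quarantined longer.
    public void markFailed(String host, int statusCode, long nowMillis) {
        long duration = (statusCode == 408) ? quarantineMillisFor408 : quarantineMillisFor503;
        expiry.put(host, nowMillis + duration);
    }

    // A host is usable if it was never quarantined or its quarantine has expired.
    public boolean isUsable(String host, long nowMillis) {
        Long until = expiry.get(host);
        if (until == null) return true;
        if (nowMillis >= until) {
            expiry.remove(host); // expired: remove and allow reuse
            return true;
        }
        return false;
    }
}
```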

Refer to issues

Converged Load Balancer configuration (for HTTP): The converged load balancer proxy creates TCP connections from a front-end (the instance which receives the request) to the back-end (the instance that processes it) to proxy the HTTP request. This connection is pooled and re-used once the response has been sent back to the client. Having one connection to proxy all the requests may not scale well, and allowing the proxy to create an unlimited number of connections is also not optimal. So, the number of connections that will be created and pooled can be configured using the following system property (jvm-options) in SailFin.

asadmin create-jvm-options --target <your-config>  "\-Dorg.jvnet.glassfish.comms.clb.proxy.maxfebeconnections=10"

The default value is 20. The value should be an integer: the number of HTTP proxy connections from the front-end to the back-end.
This is a developer-level property and may not be supported officially in the supported version of SailFin.
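The bounded pooling behavior this property caps can be sketched as follows (an illustration of the idea, not the CLB's actual code): at most maxConnections connections ever exist; a connection is borrowed to proxy a request and returned to the pool once the response is sent.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.concurrent.Semaphore;

// Sketch of a bounded front-end-to-back-end proxy connection pool.
public class ProxyConnectionPool {
    private final Semaphore permits;                        // caps total connections in use
    private final Deque<String> idle = new ArrayDeque<>();  // pooled, reusable connections
    private int created = 0;

    public ProxyConnectionPool(int maxConnections) {
        this.permits = new Semaphore(maxConnections);
    }

    // Borrow a pooled connection, or open a new one if under the cap;
    // blocks when maxConnections are already in use.
    public String borrow() throws InterruptedException {
        permits.acquire();
        synchronized (idle) {
            String conn = idle.pollFirst();
            return (conn != null) ? conn : ("conn-" + (++created));
        }
    }

    // Return the connection for reuse once the response has been sent.
    public void release(String conn) {
        synchronized (idle) {
            idle.addFirst(conn);
        }
        permits.release();
    }
}
```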

Converged Load Balancer configuration - Responses over UDP

UDP responses from the back-end are routed back to the client through the front-end. If the deployment topology permits, it would be desirable to send the UDP responses directly back to the client (UA) from the back-end where they are processed; this saves one network hop (from back-end to front-end). The property below can be used to bypass the front-end for responses over the UDP transport.

asadmin set domain.converged-lb-configs.<your-clb-config>.property.sendUDPResponseFromBackend=true

This is a developer level property and may not be supported officially in the supported version of SailFin.

Network/Transaction related  - Using UDP listener address for outgoing requests:

By default SailFin uses the listener port as the source port to send out UDP packets. This can cause issues on certain operating systems. To disable this behavior, set the following system property:

asadmin create-jvm-options --target <your-config> "\-Dorg.jvnet.glassfish.comms.disableUDPSourcePort=true"

This is a developer level property and may not be supported officially in the supported version of SailFin.

Transaction related : Drop INVITE re-transmissions.

The UAC sends an INVITE and SailFin responds with a 100, but before the 100 reaches the UAC, the UAC retransmits the INVITE. Before the retransmitted INVITE reaches SailFin and can be processed, the transaction corresponding to the first INVITE completes and a 200 is sent to the UAC. The retransmitted INVITE therefore results in a new transaction in SailFin, and when it reaches the servlet it ends up creating a new INVITE to the UAS. The property below should be used if the user encounters this case and wants to ensure that the retransmitted INVITE is detected and ignored by SailFin.

asadmin create-jvm-options --target <your-config>


This is a developer level property and may not be supported officially in the supported version of SailFin.
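Conceptually, detecting a retransmission means keying each incoming INVITE by its transaction identity (per RFC 3261, a retransmission carries the same Via branch, Call-ID, and CSeq) and dropping duplicates. A minimal sketch of the idea (not SailFin's actual code):

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative sketch: drop a retransmitted INVITE by remembering the
// transaction keys of INVITEs that have already been seen.
public class RetransmissionFilter {
    private final Set<String> seen = ConcurrentHashMap.newKeySet();

    private static String key(String viaBranch, String callId, long cseq) {
        return viaBranch + "|" + callId + "|" + cseq;
    }

    // Returns true if this INVITE is new and should be processed,
    // false if it is a retransmission and should be ignored.
    public boolean accept(String viaBranch, String callId, long cseq) {
        return seen.add(key(viaBranch, callId, cseq));
    }
}
```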

Please refer to issue

Network Related : SSL handshake timeout.

asadmin set server-config.sip-service.property.sslHandshaketimeout=15

This integer value (in seconds) determines how long the network layer in SailFin waits to complete the handshake with an SSL client. The default value is 10 seconds.

Ignoring user=phone parameter

The following property avoids strict processing of the user=phone parameter.
For more information, please see https://sailfin.dev.java.net/issues/show_bug.cgi?id=1716.

asadmin create-jvm-options --target <your-config>  \-Dorg.glassfish.sip.ignoreUserParameter=true


Microsoft OCS compatibility

This system property switches on some extensions to support Microsoft OCS interoperability. It makes sure that the Call-ID created by SailFin is less than 32 characters.
More information at https://sailfin.dev.java.net/issues/show_bug.cgi?id=1611

asadmin create-jvm-options --target <your-config> \-Dorg.glassfish.sip.ocsInteroperable=true


Optional JSR 289 features - Modification of from/to headers.

System property to enable the optional JSR 289 feature of modifying From/To headers. More information at https://sailfin.dev.java.net/issues/show_bug.cgi?id=1641


asadmin create-jvm-options --target <your-config>  \-Dorg.glassfish.sip.allowFromChange=true



Debug aid : Response tracing.

This is a debug aid that prints debug information about a response created in the SailFin VM (for example, by an application). For the specified response code, SailFin will print debug information, including a stack trace, when the response is created.

asadmin create-jvm-options --target <your-config>  \-Dorg.glassfish.sip.debugResponse=XXX

High availability in SailFin can be achieved by deploying a cluster of instances and configuring the load balancer and the replication modules as per the user's needs. Apart from the basic configuration of these modules, SailFin 2.0 also allows users to separate the intra-cluster traffic (resulting from the load balancer, replication, and group management service modules) from the external traffic, which allows users to maintain and configure their network in the way that best suits their traffic needs. Traffic separation also allows users to plan their network and augment certain parts of it when required. The following steps describe how SailFin 2.0 can be configured on multiple interfaces (IP addresses). The instructions assume that the user wants to separate the cluster-internal traffic (CLB and GMS only) from the external SIP/HTTP traffic (from the UAs).

Machine setup:

In order to separate the traffic, the machines should have at least 2 IP addresses, which ideally would belong to different networks. There are different ways of multi-homing a system, which are out of scope here. For the sake of simplicity, we assume the machine on which this configuration is created has 2 IP addresses which are on different networks (one may not be reachable from the other). We will call the first IP the external IP and the second one the internal IP. The objective is to expose the external IP (through a h/w load balancer) to the UAs, so that all the traffic from the UAs goes through it. The internal IP is used only by the SailFin cluster instances for intra-cluster communication.

On some machines (especially ones that are dual-stack enabled), it is mandatory to configure the multicast routing rule, e.g.:

# route add -net <multicast network> netmask <netmask> dev eth2

Configuration :

Create a cluster of N instances where each instance runs on a separate machine (N is 3 in the example below). Let us call the cluster mh-cluster.

The following commands have to be executed to achieve traffic separation for mh-cluster.

Step 1:

Create the property tokens for the external listener (corresponding to the external IP), which would be the public address of the machine. Tokens are used because the external address of every machine is different; they are resolved from the machine-specific values that we configure later.


These listeners exist by default in the configuration; we are just modifying the address property.

> asadmin set --user admin --port 4848 --passwordfile passwordfile  mh-cluster-config.sip-service.sip-listener.sip-listener-1.address=\${EXTERNAL_LISTENER_ADDRESS}

> asadmin set --user admin --port 4848 --passwordfile passwordfile  mh-cluster-config.http-service.http-listener.http-listener-1.address=\${EXTERNAL_LISTENER_ADDRESS}


> asadmin set --user admin --port 4848 --passwordfile passwordfile  mh-cluster-config.sip-service.sip-listener.sip-listener-2.address=\${EXTERNAL_LISTENER_ADDRESS}


 > asadmin set --user admin --port 4848 --passwordfile passwordfile  mh-cluster-config.http-service.http-listener.http-listener-2.address=\${EXTERNAL_LISTENER_ADDRESS}


Step 2:

Set the listener type of the public listeners to "external". This denotes that these listeners should be used only for handling UA traffic and not by the CLB for proxying.

> asadmin set --user admin --port 4848 --passwordfile passwordfile  mh-cluster-config.http-service.http-listener.http-listener-1.type=external

 > asadmin set --user admin --port 4848 --passwordfile passwordfile  mh-cluster-config.sip-service.sip-listener.sip-listener-1.type=external

> asadmin set --user admin --port 4848 --passwordfile passwordfile  mh-cluster-config.http-service.http-listener.http-listener-2.type=external

 > asadmin set --user admin --port 4848 --passwordfile passwordfile  mh-cluster-config.sip-service.sip-listener.sip-listener-2.type=external

Step 3:

Create the system properties corresponding to the tokens that will be used for IP address resolution in the respective instances.


INTERNAL_LISTENER_ADDRESS would be used by the internal listeners that are created in the next step.

> asadmin create-system-properties --user admin --port 4848 --passwordfile passwordfile --target mh-cluster EXTERNAL_LISTENER_ADDRESS=


> asadmin create-system-properties --user admin --port 4848 --passwordfile  passwordfile --target server EXTERNAL_LISTENER_ADDRESS=

Step 4 :

Create new listeners that will be used by the CLB for proxying front-end-to-back-end traffic; these are marked as cluster-internal by setting the type of the listener to "internal".

> asadmin create-http-listener --user admin --port 4848 --passwordfile passwordfile --target mh-cluster --listeneraddress --defaultvs server --listenerport 28080 internal-http-listener

> asadmin create-sip-listener --user admin --port 4848 --passwordfile  passwordfile --target mh-cluster --siplisteneraddress --siplistenerport 25060 internal-sip-listener

Modify the address attribute so that it points to the internal address property

> asadmin set --user admin --port 4848 --passwordfile passwordfile  mh-cluster-config.sip-service.sip-listener.internal-sip-listener.address=\${INTERNAL_LISTENER_ADDRESS}

> asadmin set --user admin --port 4848 --passwordfile passwordfile  mh-cluster-config.http-service.http-listener.internal-http-listener.address=\${INTERNAL_LISTENER_ADDRESS}


> asadmin set --user admin --port 4848 --passwordfile passwordfile  mh-cluster-config.sip-service.sip-listener.internal-sip-listener.type=internal

>asadmin set --user admin --port 4848 --passwordfile  passwordfile  mh-cluster-config.http-service.http-listener.internal-http-listener.type=internal

Step 5:

Configure the GMS bind address so that GMS communication happens through a specific interface.

# Note that this workaround is required because the GMS in DAS does not bind to the specified address if this (default-cluster) is not present.

> asadmin set --user admin --port 4848 --passwordfile passwordfile  default-cluster.property.gms-bind-interface-address=\${INTERNAL_LISTENER_ADDRESS}

> asadmin set --user admin --port 4848 --passwordfile passwordfile  mh-cluster.property.gms-bind-interface-address=\${INTERNAL_LISTENER_ADDRESS}

Step 6:

Configure the IP addresses of the cluster instances

> asadmin create-system-properties --user admin --port 4848 --passwordfile passwordfile --target instance101 EXTERNAL_LISTENER_ADDRESS=

 >asadmin create-system-properties --user admin --port 4848 --passwordfile passwordfile --target instance102 EXTERNAL_LISTENER_ADDRESS=

> asadmin create-system-properties --user admin --port 4848 --passwordfile passwordfile --target instance103 EXTERNAL_LISTENER_ADDRESS=


Once all the above commands have executed successfully, restart the node agents and the cluster for the changes to take effect. A cluster restart is required because dynamically changing the type attribute of a listener is not supported.


Verify (using netstat) if the listeners are bound to the correct IPs.

Step 7 (optional) :

There might be a h/w load balancer fronting the entire SailFin cluster, typically used for spraying the SIP traffic across the individual instances. When a request is sent out from SailFin, it is the address of this h/w load balancer that has to be put in the Contact and Via headers; this enables the client to reach the load balancer when it sends a response after address resolution.


This address of the load balancer has to be configured in the cluster so that the instances can pick it up when they create an outgoing request. One way to do this would be to configure it under the sip-container external-sip-address attribute, but this would mean that only one load balancer can front all the listeners. To make this configuration more flexible, in 2.0 every listener (that is external) can take the external-sip-address and external-sip-port attributes.


This can be configured in the following way:

> asadmin set --user admin --port 4848 --passwordfile passwordfile  mh-cluster-config.sip-service.sip-listener.sip-listener-1.external-sip-address=<your h/w load balancer IP>

> asadmin set --user admin --port 4848 --passwordfile passwordfile  mh-cluster-config.sip-service.sip-listener.sip-listener-1.external-sip-port=<your h/w load balancer port>



Posted by rampsarathy Sep 18, 2008
Throughput, stress, and longevity metrics of the SailFin SIP container depend on a few factors that can be controlled and configured by the end user, and getting the right size for the buffers used internally is one of them. The bulk of the work in processing a SIP message involves reading the message (bytes) out of the socket channel and parsing it to frame a valid SIP message. In a typical IMS (IP Multimedia Subsystem) setup, the application server receives a lot of traffic from a CSCF (Call Session Control Function), spread across a few TCP socket channels. Under these conditions, it is very important to allocate the right buffers for reading/writing the SIP messages. The following is my understanding of how these buffers are allocated.

The Grizzly socket connector used by SailFin associates a byte buffer with each thread (a WorkerThread from the server's thread pool), and this byte buffer is used to read data from the socket channel. It is important to tune this buffer appropriately so that it does not starve or overflow. The SailFin-specific filter that reads the data from the socket channel reconfigures this byte buffer's size (initially allocated by Grizzly) to be equal to the socket's receive buffer size. This ensures that we are able to read whatever has been buffered in the underlying socket's buffer. The following APIs return the socket's receive buffer size:

((SocketChannel) channel).socket().getReceiveBufferSize();   // TCP
((DatagramChannel) channel).socket().getReceiveBufferSize(); // UDP

This is a platform-specific operation and depends on the socket buffer sizes configured in the operating system. For example, on a SUSE Linux system the TCP buffer size of the OS can be configured using the following command (sets it to 64 MB):

sysctl -w net.ipv4.tcp_mem="67108864 67108864 67108864"

(Please note that even though you set the OS TCP buffers to 64 MB, you might see a different value from the Java API that gets the receive buffer size.) The receive buffer size optimization is done automatically, and it is therefore not possible to override it in SailFin. There is an attribute in domain.xml that controls the byte buffer size:

<request-processing header-buffer-length-in-bytes="8192">

But the network manager in SailFin overrides this by tuning the byte buffer to the socket's receive buffer size. So, remember to tune your OS buffers if your load is high.

The byte buffers that are used to send responses and new SailFin requests (when acting as a UAC) are picked up from a pool. This pool of buffers is initially empty when SailFin starts up and fills up as the demand for buffers (load) increases. It is an unbounded pool and cannot itself be configured, but the size of the byte buffers in the pool can be. Since these buffers hold a SIP request/response message, their optimum size depends on the application. One byte buffer is used per outgoing message, and during times of high load the number of buffers in the pool can be in the order of thousands. The default size is 8K, and it can be configured using the following property in domain.xml:

<connection-pool send-buffer-size-in-bytes="8192"/>

That is pretty much how SailFin uses and manages the byte buffers.
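As a quick way to see the platform-dependent defaults described above, the receive buffer sizes can be queried on freshly opened (unconnected) channels; a small self-contained sketch:

```java
import java.nio.channels.DatagramChannel;
import java.nio.channels.SocketChannel;

public class BufferSizes {
    public static void main(String[] args) throws Exception {
        // Freshly opened channels report the OS-default receive buffer sizes
        try (SocketChannel tcp = SocketChannel.open();
             DatagramChannel udp = DatagramChannel.open()) {
            System.out.println("TCP receive buffer: " + tcp.socket().getReceiveBufferSize());
            System.out.println("UDP receive buffer: " + udp.socket().getReceiveBufferSize());
        }
    }
}
```

The values printed will vary by OS and kernel tuning; they are the starting point SailFin's filter uses when resizing the per-thread read buffer.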
