If you do not know what OpenDS is, you can learn about it at its website, http://www.opends.org. In brief: OpenDS is a high-performance, feature-rich, pure-Java directory server under active development by Sun Microsystems.

Today I grabbed OpenDS 1.3 build 1 to see what is new and to check its replication and fail-over recovery. You can grab a copy at https://www.opends.org/promoted-builds/latest/. The first thing I noticed is the new control panel, which replaces the old status panel. You can see a screenshot of the control panel right here.


Although the control panel has a good set of features and functionality, and it is very good to have a built-in LDAP browser and management utility shipped with OpenDS, the control panel is not user friendly enough yet. I think we will see some changes to it in the near future, for example a better menu design and a tab-based content pane instead of opening new windows. To run the control panel, execute the control-panel script from the bat or bin directory.


Down to business: I thought I could test OpenDS replication and fail-over recovery with some simple Java code and the new control panel application. To install OpenDS in a specific directory, extract the zip file into that directory and run the setup script. I installed the first instance of the OpenDS server in /home/masoud/opends1.3/OpenDS-1.3.0-inst01. The installation process is quite simple: you just execute the setup script, which opens a GUI setup application that guides you through the installation. The following screenshots show the installation process for the first instance.
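In command-line terms, the steps for the first instance look roughly like this (the zip file name is an assumption; use whatever the promoted-builds page gave you):

```shell
# Create the target directory and unpack the build into it.
mkdir -p /home/masoud/opends1.3/OpenDS-1.3.0-inst01
cd /home/masoud/opends1.3/OpenDS-1.3.0-inst01
unzip ~/Downloads/OpenDS-1.3.0.zip   # assumed download location

# Launch the GUI installer from the extracted directory.
./setup
```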

Welcome page:
Server Settings page: I used admin as the password
Topology Options
Directory Data
Installation review page
Installation finished

The installation application will open the control-panel application; the control panel needs administration credentials to connect to the directory server. The administration credentials are cn=Directory Manager as the bind DN and admin as the password. (If you used anything else, use your own credentials.)
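These same credentials are what we will later hand to JNDI from Java. A minimal sketch of packaging them into a JNDI environment (the host name and port 1389 are assumptions; use whatever the setup application picked for your instance):

```java
import java.util.Hashtable;
import javax.naming.Context;

public class LdapEnv {

    // Builds the JNDI environment for a simple bind with the given credentials.
    public static Hashtable<String, String> build(String url, String bindDn, String password) {
        Hashtable<String, String> env = new Hashtable<String, String>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
        env.put(Context.PROVIDER_URL, url);
        env.put(Context.SECURITY_AUTHENTICATION, "simple");
        env.put(Context.SECURITY_PRINCIPAL, bindDn);
        env.put(Context.SECURITY_CREDENTIALS, password);
        return env;
    }

    public static void main(String[] args) {
        // Host and port are assumptions for this demo setup.
        Hashtable<String, String> env =
                build("ldap://localhost:1389", "cn=Directory Manager", "admin");
        System.out.println(env.get(Context.SECURITY_PRINCIPAL));
    }
}
```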

Now we should install the second directory server instance; it will form a replication group with the first instance. I extracted the zip file into /home/masoud/opends1.3/OpenDS-1.3.0-inst02 and then executed the setup script to commence the installation. The following screenshots show the installation process:

Welcome page:
Server Settings page: I used admin as the password. As you can see, the port numbers are different because the default ports are in use, so the setup application tries new port numbers instead.

Topology Options: Here we add this server instance to a replication topology which already has one member. We connect this instance to another instance in the topology by providing the setup application with the host name, administration port, and administration credentials of that server. In my case both instances are installed on the same machine.
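For the record, the same enrollment can also be done after installation with the dsreplication tool that ships in each instance's bin directory. The option names below are from memory and the ports are assumptions for this two-instances-on-localhost setup, so check dsreplication --help against your build first:

```shell
# Enable replication between the two instances (admin ports assumed
# to be 4444 and 4445; adjust to what setup chose for you).
bin/dsreplication enable \
  --host1 localhost --port1 4444 \
  --bindDN1 "cn=Directory Manager" --bindPassword1 admin \
  --host2 localhost --port2 4445 \
  --bindDN2 "cn=Directory Manager" --bindPassword2 admin \
  --adminUID admin --adminPassword admin \
  --baseDN "dc=example,dc=com"
```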


Global Administration: administration credentials that can be used to manage the whole replication topology. I used admin/admin as the username/password.


Data Replication: As we want a replica of our remote server, we should select "Create local instance of existing base DNs and...." and then select the base DNs which we want to replicate.



Review: review the installation settings, and if you find anything wrong you can go back and fix it.


With both installation tasks finished, we have our replication topology installed and configured.

So far, we should have two control panels open. Each one manages one of our installations, and when it comes to data management, if we change data in one control panel we can see the change in the other.

To test the replication configuration: in one of the control-panel applications, under the Directory Data tab, select Manage Entries and delete some entries; now go to the other control panel and you will see those entries are gone. To make the fail-over recovery test clearer, stop one server, delete some entries on the other server, then start the server you stopped; you should see that all the deleted entries disappear from the restarted server as soon as it starts.

Directory Data tab:
Deleting some entries:
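The same check can be scripted with the ldapdelete and ldapsearch tools bundled in each instance's bin directory. The ports and the entry DN below are assumptions; substitute your own:

```shell
# Delete an entry on the first instance (LDAP port 1389 assumed).
bin/ldapdelete --port 1389 --bindDN "cn=Directory Manager" --bindPassword admin \
  "uid=user.0,ou=People,dc=example,dc=com"

# Search for it on the second instance (port 2389 assumed); once
# replication has propagated the change, nothing should come back.
bin/ldapsearch --port 2389 --bindDN "cn=Directory Manager" --bindPassword admin \
  --baseDN "dc=example,dc=com" "(uid=user.0)"
```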

We have a replication topology configured and working; what can we do with this topology from a Java application? The answer is simple: since we cannot afford to have our client applications stop working because a directory server machine is down, or a router which routes clients to one of the servers has failed, and so on, we need our Java application to use any available server instead of depending on a single server and failing when that server is down.

The following sample code shows how we can use these two instances to prevent our client application from stopping when one instance is down.

import java.util.Hashtable;
import java.util.Timer;
import java.util.TimerTask;
import javax.naming.Context;
import javax.naming.NamingEnumeration;
import javax.naming.NamingException;
import javax.naming.directory.Attribute;
import javax.naming.directory.Attributes;
import javax.naming.directory.InitialDirContext;

public class Main {

    public static void main(String[] args) {

        final Hashtable<String, String> env = new Hashtable<String, String>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
        // A space-separated list of LDAP URLs: JNDI tries each server in order
        // until one responds. The hosts and ports here are assumptions; use
        // the ports your two instances actually listen on.
        env.put(Context.PROVIDER_URL, "ldap://localhost:1389 ldap://localhost:2389");
        env.put(Context.SECURITY_PRINCIPAL, "cn=Directory Manager");
        env.put(Context.SECURITY_CREDENTIALS, "admin");
        env.put(Context.SECURITY_AUTHENTICATION, "simple");

        Timer t = new Timer();
        TimerTask ts = new TimerTask() {

            public void run() {
                try {
                    // Bind to whichever server answers and read the root DSE.
                    InitialDirContext ctx = new InitialDirContext(env);
                    Attributes attrs = ctx.getAttributes("");
                    NamingEnumeration<? extends Attribute> enm = attrs.getAll();
                    while (enm.hasMore()) {
                        System.out.println(enm.next());
                    }
                    ctx.close();
                } catch (NamingException ex) {
                    ex.printStackTrace();
                }
            }
        };
        t.schedule(ts, 1000, 1000);
    }
}


The output of the sample code is as follows, as long as at least one of the servers is up and running.

objectClass: top, ds-root-dse
objectClass: top, ds-root-dse
objectClass: top, ds-root-dse
objectClass: top, ds-root-dse
objectClass: top, ds-root-dse
objectClass: top, ds-root-dse
objectClass: top, ds-root-dse
objectClass: top, ds-root-dse
objectClass: top, ds-root-dse
objectClass: top, ds-root-dse

During the sample code execution, you can stop one of the servers and expect to see the same output you saw when both servers were running. Start the stopped server and stop the other one to verify that downtime of either server does not affect the application. Although this shows some level of high availability, we still have a load-balancing problem in our way: the current sample code puts all of the incoming load on the first server listed in the provider URL.
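One cheap way to spread the load, assuming there are many independent clients, is to have each client shuffle the server list before building the provider URL, so different clients favor different servers first. A sketch (the host names and ports are the assumed ones from this setup):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class ProviderUrls {

    // Joins the given LDAP URLs in random order into a single JNDI
    // provider URL; JNDI tries them left to right, so each client
    // ends up preferring a randomly chosen server.
    public static String shuffled(List<String> urls) {
        List<String> copy = new ArrayList<String>(urls);
        Collections.shuffle(copy);
        StringBuilder sb = new StringBuilder();
        for (String url : copy) {
            if (sb.length() > 0) {
                sb.append(' ');
            }
            sb.append(url);
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        String providerUrl = shuffled(Arrays.asList(
                "ldap://localhost:1389", "ldap://localhost:2389"));
        // Both servers are always present; only their order varies.
        System.out.println(providerUrl.contains("ldap://localhost:1389"));
        System.out.println(providerUrl.contains("ldap://localhost:2389"));
    }
}
```

This only randomizes which server gets a client's first attempt; a real deployment would put a proper load balancer or a directory proxy in front of the topology.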

To learn about OpenDS internals you can take a look at http://www.opends.org, and to learn more about OpenDS replication internals and how they work, see https://www.opends.org/wiki/page/Replication. You can access the OpenDS documentation at https://www.opends.org/wiki/page/OpenDSUserDocumentation.