GlassFish 3.1 supports creating and managing instances on multiple hosts from a central location (the DAS). The server software uses SSH to communicate with the remote systems where the instances reside, and Joe's blog contains useful information on setting up SSH in a way that GlassFish can take advantage of. In this blog I talk about managing those instances when the user sets the master password to something other than the default.

It is recommended that users change the default master password for security reasons. Since GlassFish never transmits the master password or the associated file over the network, the user must take action on the remote hosts before the system can manage the instances from a central location. Commands such as start-instance do not have a mechanism that allows the user to enter the master password, but they do look for a master-password file in the agent directory of the node associated with that instance. This means that each instance on that node uses the same master password. We have updated the change-master-password command so that it can create the master-password file for a node; commands with the --savemasterpassword option will create or update that file.

Let's look at an example. In this case, I create a new domain with the master password set to 'welcome1' and start the domain. I create an SSH node for the remote host I plan to use for the instances. I then create an instance on that node using the create-instance command, which I run on the DAS. Note that I can create the instance from the DAS, but I cannot start it unless the master password for the instance matches the master password for the DAS. At that point I have to go to the instance machine and run the change-master-password command with the --savemasterpassword option set to true so that the master-password file is created in the node's agent directory. Once I do that, I can go back to the DAS machine and manage the instance. Since the master password is associated with the node, I can then create additional instances from the DAS machine and start or stop them without having to go to the remote host. The commands that need to be run are below.

1) Create and start a domain with the master password set to "welcome1" using the commands below. Note that I did not set a password for the admin user.

asadmin create-domain --savemasterpassword true domain2
asadmin start-domain domain2
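
If you want to script this step, asadmin can also read the master password from a file passed with the global --passwordfile option instead of prompting; the AS_ADMIN_MASTERPASSWORD entry supplies the master password. A minimal sketch (the file path here is just illustrative):

echo "AS_ADMIN_MASTERPASSWORD=welcome1" > /tmp/masterpw.txt
asadmin --passwordfile /tmp/masterpw.txt create-domain --savemasterpassword true domain2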

2) Create an SSH node

 asadmin create-node-ssh --nodehost glassfish1.sfbay.sun.com --installdir /export/glassfish3 node2
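
To confirm that the DAS can actually reach the new node over SSH, the ping-node-ssh subcommand is a quick test (assuming your build includes it):

asadmin ping-node-ssh node2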

3) Create an instance from the DAS.  This creates the instance configuration information and the instance file system.

asadmin create-instance --node node2 ins2

4) At this point the instance is created, but it cannot be started by the start-instance command because there is no master-password file in the agent directory for that node. That file must exist and it must contain the same password as the master password on the DAS. If I try to start the instance before creating that file, I get the following error:

asadmin start-instance ins2
remote failure: Could not start instance ins2 on node node2 (glassfish1.sfbay.sun.com).

Command failed on node node2 (glassfish1.sfbay.sun.com): The Master Password is required to start the domain. No console, no prompting possible. You should either create the domain with --savemasterpassword=true or provide a password file with the --passwordfile option. Command start-local-instance failed.

To complete this operation run the following command locally on host glassfish1.sfbay.sun.com from the GlassFish install location /export/glassfish3:

 asadmin  start-local-instance --node node2 --sync normal ins2
Command start-instance failed.

Go to the instance machine (glassfish1.sfbay.sun.com in this case)  and create the master password file for node2 by typing the following command.

asadmin change-master-password --savemasterpassword true --nodedir /export/glassfish3/glassfish/nodes node2

Important note: At the prompt I have to enter the old master password ('welcome1'), which is what I had set when I created domain2 on the DAS. It is not the default master password 'changeit', because the keystore was copied over when the instance was created and it is encrypted with the master password from the DAS. So the passwords are the same, but since start-instance doesn't have an option to take the master password, it looks for a file called master-password in the node's agent directory to access the keystores. Once that file is created, start-instance can be run centrally (from the DAS).

5) Start the instance from the DAS

asadmin start-instance ins2

At this point you can create additional instances from the DAS and start them without going to the instance machine. 
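
For example (ins3 is just an illustrative instance name):

asadmin create-instance --node node2 ins3
asadmin start-instance ins3
asadmin stop-instance ins3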

A slightly different scenario is below. In this case I will begin by creating a domain with the master password set to 'welcome1' as in the previous example and create an SSH node pointing to the remote host where the instance will run, but this time I will create the instance locally on the instance machine. At some future time I will want to manage the instance from the DAS, so I still need the master-password file created in the node's agent directory.

On DAS machine:

1) Create and start a domain with the master password set to "welcome1" using the commands below.

asadmin create-domain --savemasterpassword true domain2
asadmin start-domain domain2

2) Create an SSH node pointing to the remote host where the instances will run.

asadmin create-node-ssh --nodehost glassfish1.sfbay.sun.com --installdir /export/glassfish3 node2

Now we move to the instance machine and create the instance locally. Since there is no master-password file for the node yet, we need to create one; the create-local-instance command can do that for us.

asadmin --host DASHost create-local-instance --node node2 --savemasterpassword true insL2

In this case, the master password for the keystore in the instance is the default, 'changeit'. Nothing was copied over from the DAS, so the password is whatever is on the instance machine. Again, once the master-password file has been created with a password that matches the one on the DAS, instance insL2 can be administered from the DAS. Additional instances can then be created, started, and stopped from the DAS machine.
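
To line the two up, run the same change-master-password command as in step 4 of the previous example on the instance machine, this time entering 'changeit' as the current master password and 'welcome1' as the new one:

asadmin change-master-password --savemasterpassword true --nodedir /export/glassfish3/glassfish/nodes node2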

If the master password is changed on the DAS, then you must go to each instance machine and run the change-master-password command as in step 4 above to reset the master-password file for each node.
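
A sketch of that sequence, assuming the same hosts and paths as the examples above. On the DAS machine:

asadmin change-master-password --savemasterpassword true domain2

Then on each instance machine:

asadmin change-master-password --savemasterpassword true --nodedir /export/glassfish3/glassfish/nodes node2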


One of the main features in GlassFish 3.1 is clustering, and for m2 we have added support for creating and starting instances on remote hosts. The underlying GlassFish 3.1 code uses SSH to connect to the remote hosts and introduces the concept of a node, which the system uses to determine where instances will be created or started. At this time the only connection type supported is SSH. Users now have a few new commands to manage nodes.

  • create-node-ssh creates a node that describes the host where the instance will run and the location of the GlassFish installation.
  • delete-node-ssh and list-nodes are useful for deleting and listing nodes, respectively.

Below is a simple example of creating a cluster, creating an instance, and starting the instance, all from the administration host, or the DAS (Domain Administration Server).

First, some assumptions about the setup for GlassFish. For m2, users will have to install and start GlassFish on all hosts that are part of the cluster; we do not currently support installing or starting GlassFish on a remote host, and this is planned for a future release. Second, SSH needs to be set up on both hosts, as it is the underlying mechanism used to run commands on the remote hosts. Currently we have only tested on UNIX (Mac OS, Ubuntu, and OpenSolaris), but for m3 we will be including Windows as a tested platform. There are many blogs that talk about setting up SSH, so I won't go into all the details here; I found this blog useful. To summarize how I set up the authentication key: I used ssh-keygen -t dsa to create the key file in my .ssh dir. Note: a limitation for m2 is that we don't support encrypted key files, so you must not set a passphrase when creating keys. I then used scp to copy the key file id_dsa.pub to the host I want to log in to; I put it in the .ssh dir and called it authorized_keys2. Also, I had the same username on both systems, which further simplified things. At that point I can ssh into the remote host without supplying a password. This is a good test to see if you are set up correctly before you try the commands below.
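
Concretely, the key setup I just described looks like this, using bar as the remote host (adjust host and user names for your systems, and press Enter at the passphrase prompt since encrypted key files are not supported in m2):

ssh-keygen -t dsa
scp ~/.ssh/id_dsa.pub bar:.ssh/authorized_keys2
ssh bar

The final ssh command should log you in without a password if everything is set up correctly.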

In this example, we will create a cluster with two machines, foo and bar. foo will be the DAS, which has all the information about the servers running in the cluster. Recall that in this release we have introduced a new CLI command, create-node-ssh, to create a node, which is used to locate the host for a particular instance.

create-node-ssh has three required parameters:

  1. --nodehost: the name of the host where the instance lives
  2. --nodehome: the GlassFish installation directory on that host
  3. the node name: the name of the node being created (passed as an operand, as in the example below)

All other parameters default to reasonable values. We default the SSH port to 22; if no username is provided, we default to the user running the command, and we look for the key file in the home directory of that user. All instances are now required to reference a node element, which GlassFish uses to determine where the instance will be created or started. This means that we have added a --node option to the create-instance command. As a convenience, we have a default node for localhost, so if the node option is not specified when the instance is created, a reference is automatically added to the localhost node. The localhost node contains only a node name of localhost; we can get the GlassFish installation directory from the server.
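
If your remote host doesn't match those defaults, the SSH settings can be supplied explicitly. In the final 3.1 syntax the relevant options are --sshport, --sshuser, and --sshkeyfile (option names may differ slightly in m2), for example:

asadmin create-node-ssh --nodehost bar --nodehome /home/cmott/glassfishv3/glassfish --sshport 2222 --sshuser cmott --sshkeyfile /home/cmott/.ssh/id_dsa nodebar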


Let's see how this works. All commands are run on the DAS machine, and as long as there is SSH access to the other host we will be able to create and start instances.


Install and start GlassFish 3.1 m2 on foo and bar.  

On host foo (the DAS) we run all the commands. 

$asadmin create-cluster c1

Command create-cluster executed successfully.

$asadmin create-node-ssh --nodehost=bar --nodehome=/home/cmott/glassfishv3/glassfish nodebar

Command create-node-ssh executed successfully.

$asadmin list-nodes 

Command list-nodes executed successfully.

$asadmin create-instance --cluster=c1 --node=nodebar instance1

Command create-instance executed successfully.

$asadmin create-instance --cluster=c1 instance2

Command create-instance executed successfully.

$asadmin list-instances
instance2 not running
instance1 not running

Command list-instances executed successfully.

$asadmin start-cluster c1

Command start-cluster executed successfully.

$asadmin list-instances

instance2 running
instance1 running

Command list-instances executed successfully.


Notice that when creating instance2 I did not specify a node, so the default localhost node is used. In a future release of GlassFish, create-node-ssh will test whether a connection can be made to the remote host when the node is created; if the host is not reachable, the user can still create the node by setting the --force option to true.



This blog highlights some of the changes that are part of GlassFish v3 logging. Since Prelude I have added three asadmin commands related to logging, and I have updated the set-log-level command and changed its syntax; see below for details. The new commands are:

    * asadmin rotate-log
    * asadmin list-logger-levels
    * asadmin set-log-level

The first command rotates the log files immediately. Users can now use native scheduling software such as cron to rotate the logs. We still support rotating logs based on file size or elapsed time since the last rotation, and this new command allows more flexibility when rotating log files. The second command lists the loggers and their current levels. This command reports on all the known loggers that are listed in the logging.properties file. Note that in some cases the loggers may not yet have been created by the respective containers; however, they will still appear in the list. The third command, asadmin set-log-level, sets the level for one or more loggers. For example, to set the web container logger level to WARNING, simply type: asadmin set-log-level javax.enterprise.system.container.web=WARNING. This command updates the logging.properties file, which means that the values are persisted across server restarts. Use asadmin list-logger-levels to get the names of the loggers.
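
For example, to rotate the server log nightly with cron, a crontab entry along these lines works (the asadmin path here is illustrative):

0 0 * * * /path/to/glassfishv3/bin/asadmin rotate-log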

Finally, I've added a property to the logging.properties file in the domain config directory that controls the number of messages written to the server log file at a time. GlassFish v3 logging code writes messages to a queue instead of directly to the server log file; log messages are taken off the queue and written to the server log file as cycles become available. The property name is com.sun.enterprise.server.logging.GFFileHandler.flushFrequency. This property controls the number of messages that are taken off the queue and written to the file at a time. The actual number written is the value of this property or less, depending on the number of messages in the queue. The default value is 1.
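
For example, to have up to five messages drained from the queue per write, set the following in logging.properties (the value is just illustrative):

com.sun.enterprise.server.logging.GFFileHandler.flushFrequency=5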

