Radu Dobrinescu's article shows how WebLogic Scripting Tool (WLST) automation, together with Docker virtualization, can significantly improve the management of WebLogic development environments, and may inspire a new strategy for managing these complex environments, or for the change management of an organization's WebLogic-based systems.
Introduction
Among the main challenges in developing and administering J2EE applications are managing several sets of libraries and application versions, and configuring and then sharing different development environment setups. Applications can have many requirements and dependencies, such as specific WebLogic domain definitions with more or fewer resources, sub-component versions, and JDK and binary versions. Keeping track of all these while also making them easily available within the organization can be difficult.
Wouldn't it be great to be able to pack an entire development server configuration, including the desired JDK and binary versions, along with any custom files? Or to be able to maintain a copy of a baseline image of this configuration and keep track of the differences in subsequent versions of this image?
The Docker virtualization solution based on Linux containers promises just that. And combined with the power of WLST for automating almost every task in WebLogic administration, the task of generating, maintaining and sharing various configurations of a WebLogic environment becomes much easier—and even enjoyable.
This article aims to show how WLST automation together with Docker virtualization can significantly improve the management of WebLogic development environments, and may inspire a new strategy for managing these complex environments, or for the change management of an organization's WebLogic-based systems.
In the first part of the article, a complete WebLogic environment will be automatically built from scratch using the automation tools provided by the Docker engine (specifically, the Dockerfile). Thanks to WLST offline capabilities, these actions include every installation step, from fulfilling operating system requirements to WebLogic domain creation. This complete environment will be stored as a Docker image in a local repository on the Docker host machine. Modifications to the image will be stored in subsequent versions, and will be easily made available from a central private (or public) Docker repository.
In the second part of the article, WebLogic servers are dockerized, meaning that the services are started in their own isolated Docker containers, which in turn are based on the initially built image. Each WebLogic instance will run in its own container and, besides starting an administration server and a managed server, the extreme scalability of WebLogic 12c will be demonstrated by showing how easy it is to add a new instance to an existing WebLogic cluster simply by starting up a new Docker container.
An important advantage of Docker containers over conventional VMs is that containers share the operating system with the host and do not need to boot their own operating system when started up. However, Docker containers still ensure (a certain degree of) isolation; this affects WebLogic clustering, since it will be similar to having each instance of a cluster running on its own physical machine. The third part of the article shows how the WebLogic managed servers connect to the admin server and how their lifecycle can be controlled with Docker commands, by starting and stopping the containers.
Building Docker Images for WebLogic Development Environments
Conventionally, building a new development environment requires a lot of time for preparing the operating system, installing and configuring WebLogic, and deploying dependencies. Distributing the exact same environment is yet another challenge. It is a good idea, then, to build the environment once, pack it in a Docker image and make it available in a centralized repository. Perhaps the best thing about Docker images is that they work on a layering concept, so they can be created based on other existing images. Building an image can be automated through a Dockerfile, which is a set of instructions that the Docker engine executes in order to generate the desired image.
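For instance, once an image exists locally, its layer stack can be inspected with the docker history command (shown here with the image name used later in this article); each line of the output corresponds to one layer, typically one Dockerfile instruction:
# docker history radudobrinescu/otn_build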
To build the image, it is enough to copy all the necessary files to one directory on the host machine:

The files are as follows:
- Installation files for JDK and WebLogic:
fmw_12.1.3.0.0_wls.jar
jdk-7u75-linux-x64.tar.gz
- WLST scripts and properties file:
createDomain.py
setEnvironment.sh
startWLS12c_admin.py
startWLS12c_managed.py
startWLS12c.sh
NewDomain.properties
- Files required for the silent install of the WebLogic Server:
oraInst.loc
responseFile
The content of a Dockerfile is a series of instructions that the Docker engine interprets, executing several commands in order to build the desired image starting from a predefined one, specified in the FROM clause. So, using the Dockerfile below, a new image containing a WebLogic 12c domain will be built starting from the official Oracle Linux 6.6 image, which is publicly available on the Docker Hub.
################################ Dockerfile #######################################
FROM oraclelinux:6.6
MAINTAINER Radu Dobrinescu - <radu.dobrinescu@oraclemiddlewareblog.com>
RUN groupadd oinstall -g 501 && \
useradd -m -u 501 -g oinstall -d /home/oracle oracle && \
mkdir -p /oracle/ && \
mkdir -p /oracle/fmwhome/wlst_custom/ && \
chown -R oracle:oinstall /oracle/ && \
chown -R oracle:oinstall /oracle/fmwhome && \
chown -R oracle:oinstall /home/oracle && \
chown -R oracle:oinstall /oracle/fmwhome/wlst_custom
ENV SCRIPT_PATH /oracle/fmwhome/wlst_custom/
ADD jdk-7u75-linux-x64.tar.gz /oracle/fmwhome/
COPY oraInst.loc setEnvironment.sh responseFile /oracle/
COPY startWLS12c.sh NewDomain.properties *.py $SCRIPT_PATH
RUN chown oracle:oinstall /oracle/oraInst.loc && \
chown -R oracle:oinstall $SCRIPT_PATH && \
chmod +x $SCRIPT_PATH/startWLS12c.sh && \
chmod +x $SCRIPT_PATH/startWLS12c_managed.py && \
chmod +x $SCRIPT_PATH/startWLS12c_admin.py
USER oracle
ENV JAVA_HOME=/oracle/fmwhome/jdk1.7.0_75/ PATH=$PATH:/oracle/fmwhome/jdk1.7.0_75/bin MW_HOME=/oracle/fmwhome/wls12c/ SCRIPT_PATH=/oracle/fmwhome/wlst_custom/
ENV CONFIG_JVM_ARGS -Djava.security.egd=file:/dev/./urandom
COPY fmw_12.1.3.0.0_wls.jar /oracle/
RUN java -jar /oracle/fmw_12.1.3.0.0_wls.jar -silent -invPtrLoc /oracle/oraInst.loc -responseFile /oracle/responseFile && \
. /oracle/setEnvironment.sh && \
java -Djava.security.egd=file:/dev/./urandom weblogic.WLST $SCRIPT_PATH/createDomain.py && \
rm -rf /oracle/fmw_12.1.3.0.0_wls.jar
##################################################################################
Basically, what the Docker engine does when building from the Dockerfile above is:
- Download the base oraclelinux:6.6 image (if not already available in the local registry).
- Create the oracle user and oinstall group in the operating system, as well as custom directories. These will own the installed software and the related installation directories.
- Copy the JDK, a response file for the silent installation, and an oraInst.loc file specifying the oraInventory location, as well as custom scripts to start up the services. The JDK archive is automatically extracted by the ADD instruction, so the compressed archive itself never ends up in the image.
- Set environment variables required for installation, such as JAVA_HOME and the MW_HOME where WebLogic server should be installed.
- Execute a silent installation of the WebLogic server.
- Run the WLST offline script to create the new domain (a minimal sketch of such a script follows this list).
- Clean up the image by removing the installation files to make the total size of the image as small as possible.
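The createDomain.py script itself is not listed in this article. The sketch below shows what a minimal WLST offline domain-creation script might look like; the property names are taken from the variables that startWLS12c_managed.py later reads from NewDomain.properties, while the template path, the server template name and the attribute calls are assumptions based on standard WLST offline usage:
#################### createDomain.py (illustrative sketch) #######################
# NOTE: a sketch, not the article's actual script. Property names come from
# NewDomain.properties; the server template name is assumed.
import os

SCRIPT_PATH = os.environ['SCRIPT_PATH']
loadProperties(SCRIPT_PATH + '/NewDomain.properties')

# Create the domain offline, starting from the basic WebLogic template
readTemplate(os.environ['MW_HOME'] + '/wlserver/common/templates/wls/wls.jar')

# Configure the administration server and the domain credentials
cd('/Servers/AdminServer')
set('ListenPort', int(AdminServerPort))
cd('/Security/base_domain/User/weblogic')
cmo.setName(AdminUser)
cmo.setPassword(AdminServerPassword)

# Create a server template and a dynamic cluster, so that new instances
# can be added later simply by increasing the dynamic server count
cd('/')
cl = create(DynamicClusterName, 'Cluster')
st = create('wls-server-template', 'ServerTemplate')
st.setCluster(cl)
cd('/Clusters/' + DynamicClusterName)
dyn = create(DynamicClusterName, 'DynamicServers')
dyn.setServerTemplate(st)
dyn.setServerNamePrefix(ManagedServerPrefix)
dyn.setMaximumDynamicServerCount(0)   # grown at runtime by startWLS12c_managed.py
dyn.setCalculatedListenPorts(false)

# Write the domain to disk and clean up
setOption('OverwriteDomain', 'true')
writeDomain(DomainPath + DomainName)
closeTemplate()
exit()
##################################################################################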
Once the files are in place, the image is built by running:
# docker build -t <IMAGE_NAME>:[TAG] <PATH_TO_FILES>
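For example, assuming the files were collected under /home/oracle/wls12c-build (a hypothetical path), the image used throughout this article would be built with:
# docker build -t radudobrinescu/otn_build /home/oracle/wls12c-build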

The build process should be quite fast, showing a confirmation of each step executed, and should end in a confirmation that the image has been successfully built:
Successfully built 5296ac8fa62e
The new image can then be checked in the local Docker repository:

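For reference, the listing would look roughly like this (the image ID matches the build confirmation above; size and timestamp are illustrative):
# docker images
REPOSITORY                 TAG      IMAGE ID       CREATED          VIRTUAL SIZE
radudobrinescu/otn_build   latest   5296ac8fa62e   2 minutes ago    2.2 GB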
The image should serve as a baseline for the new WebLogic environments. Each server in the domain will run within a Docker container that is based on this image. Any changes in the WebLogic domain configuration that need to be persisted should be saved in a subsequent version of the image, either by creating a new Dockerfile that includes these changes, or by committing the changes to a new image, if they have been executed within a running container, as shown in the following section about container lifecycle.
When the image reaches a final version, it can be shared publicly or privately by being pushed to a Docker repository. Users can then retrieve the image locally by running a simple "docker pull IMAGE_NAME" command. More on the push and pull commands for working with images and repositories can be found in the Docker documentation. This makes it possible to distribute entire WebLogic domains, with all dependencies, simply by distributing the Docker image.
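For example, sharing the image built above and retrieving it on another Docker host could look like this (for a private registry, the repository name would be prefixed with the registry host):
# docker push radudobrinescu/otn_build
# docker pull radudobrinescu/otn_build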
Running WebLogic in Docker Containers
The Docker image is a read-only template for the WebLogic environment, while Docker containers are the writable, runnable instances created from this image. Each container represents an isolated platform in which a WebLogic server instance has everything it needs in order to run.
For example, to start the administration server of the WebLogic domain that was created during the image build, it is enough to start a Docker container with the following command:
# docker run -itd -e SERVER_TYPE=AdminServer --name AdminServer -p 7001:7001 radudobrinescu/otn_build bash -c "/oracle/fmwhome/wlst_custom/startWLS12c.sh && /bin/bash"
The container will execute the startWLS12c.sh script, which, based on the SERVER_TYPE environment variable value, will call the startWLS12c_admin.py script, which in turn will start the administration server. The container is started in the background using the "-d" flag, and an intuitive name is attached with the "--name" parameter. The Docker engine will assign a new IP address to each newly started container, in a bridged network that has been configured as part of the Docker installation on the host machine. So, to make the WebLogic server running inside the container visible to the outside world, the "-p 7001:7001" parameter has been used to map port 7001 of the host machine to port 7001 within the container.
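The startWLS12c.sh dispatcher itself is not listed in the article; a minimal sketch of how it might look, given the behavior just described, is shown below (the sourcing of setEnvironment.sh mirrors the Dockerfile; everything else is an assumption):
#################### startWLS12c.sh (illustrative sketch) ########################
#!/bin/bash
# NOTE: a sketch, not the article's actual script.
# Prepare JAVA_HOME, CLASSPATH and the WebLogic environment
. /oracle/setEnvironment.sh

# Dispatch to the right WLST script based on the SERVER_TYPE variable
# passed to the container with "docker run -e SERVER_TYPE=..."
if [ "$SERVER_TYPE" = "AdminServer" ]; then
    java weblogic.WLST $SCRIPT_PATH/startWLS12c_admin.py
else
    java weblogic.WLST $SCRIPT_PATH/startWLS12c_managed.py
fi
##################################################################################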
Once the Admin server has booted, the administration console will be accessible at
http://<DOCKER_HOST_IP_ADDRESS>:7001/console

The container will live only for as long as the command it is executing is running. As soon as the shell opened within the container is exited, the container will stop and the administration console will no longer be accessible. The container will still be present on the machine, in a stopped state. All containers, regardless of their current state, can be viewed by running "docker ps -a":

WebLogic Clustering with Docker Containers; Extreme Scaling with WebLogic 12c
Similar to the AdminServer, new Docker containers will be instantiated for running WebLogic managed servers. However, there are two important differences in the "docker run" command that starts a managed server container: the SERVER_TYPE environment variable should be set to "ManagedServer", and a new environment variable, ADMIN_HOSTNAME, must be defined. This will be used by the custom script to connect to the AdminServer container in order to add new managed servers to the domain configuration and, eventually, to boot these newly created servers. Since Docker assigns a new IP address each time a container is started, the ADMIN_HOSTNAME value will be set to an alias pointing to the AdminServer container, set up by the --link parameter.
A new container running a managed server can then be started:
docker run -itd -e SERVER_TYPE=ManagedServer -e ADMIN_HOSTNAME=AdminServerHost --link AdminServer:AdminServerHost -p 7101:7101 --name ManagedServer1 radudobrinescu/otn_build bash -c "/oracle/fmwhome/wlst_custom/startWLS12c.sh && /bin/bash"
The same startWLS12c.sh script will now call startWLS12c_managed.py, based on the value of the SERVER_TYPE variable. This WLST script takes advantage of the new features introduced in WebLogic 12c to maximize the scalability of the cluster by creating and starting up new managed servers to join the existing cluster.

These new 12c features, such as dynamic clusters and per-domain node manager, make it very easy to add new managed servers that are defined by a server template and to have them take over some of the load by being started automatically.
The container uses the Docker linking system to establish the connection between the managed server and the admin server, specified by "--link AdminServer:AdminServerHost". Basically, this creates environment variables in the managed server container and resolves the 'AdminServerHost' hostname to the IP of the AdminServer container. All containers will be assigned an IP from the same virtual network, so managed servers within the cluster can communicate for features like session replication, Java Naming and Directory Interface (JNDI) tree replication, etc.
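To see what the link actually provides, the environment of the managed server container can be inspected; legacy links expose the alias as a set of variables named after it (illustrative output; the actual address depends on the Docker bridge network):
# docker exec ManagedServer1 env | grep ADMINSERVERHOST
ADMINSERVERHOST_PORT_7001_TCP_ADDR=172.17.0.2
ADMINSERVERHOST_PORT_7001_TCP_PORT=7001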
The startWLS12c_managed.py script will first connect to the admin server to add a new machine defined by the hostname of the container, and will start a node manager process listening on the calculated IP address of the container. The domain is then enrolled with the local node manager; each container will act as a separate machine, so it will run its own node manager process. It is then sufficient to increment the value of the dynamic cluster’s MaximumDynamicServerCount MBean to have a new managed server instance created in the cluster, based on the template that has been defined in the domain creation script. Once the domain configuration is saved and activated, the new managed server can be started via the node manager.
Additional managed servers can be added to the cluster by starting similar containers, simply incrementing the mapped port and assigning an intuitive name to each new container:
docker run -itd -e SERVER_TYPE=ManagedServer -e ADMIN_HOSTNAME=AdminServerHost --link AdminServer:AdminServerHost -p 7102:7102 --name ManagedServer2 radudobrinescu/otn_build bash -c "/oracle/fmwhome/wlst_custom/startWLS12c.sh && /bin/bash"

#################### startWLS12c_managed.py #######################################
import socket
import os
import time

SCRIPT_PATH = os.environ['SCRIPT_PATH']
loadProperties(SCRIPT_PATH + '/NewDomain.properties')

hostname = socket.gethostname()
admin_url = os.environ["ADMIN_HOSTNAME"] + ':' + AdminServerPort
# The IP address assigned to this container by the Docker engine
ip_addr = os.popen("/sbin/ifconfig eth0 | grep 'inet addr:' | cut -d: -f2 | awk '{ print $1}'").read().strip()

# Register this container as a new machine in the domain
def addMachine():
    edit()
    startEdit()
    cd('/')
    create(hostname,'UnixMachine')
    cd('Machines/'+hostname)
    cd('NodeManager/'+hostname)
    set('ListenAddress',ip_addr)
    set('NMType','Plain')
    save()
    activate()

# Update the node manager listen address of an existing machine,
# since the container receives a new IP address at each startup
def changeMachineIP():
    edit()
    startEdit()
    cd('/')
    cd('Machines/'+hostname+'/NodeManager/'+hostname)
    set('ListenAddress',ip_addr)
    save()
    activate()

# Grow the dynamic cluster by one server
def createMS():
    edit()
    startEdit()
    cd('/')
    cd('Clusters/'+DynamicClusterName+'/DynamicServers/'+DynamicClusterName)
    serverCount = cmo.getMaximumDynamicServerCount()+1
    set('MaximumDynamicServerCount',serverCount)
    print "The number of Dynamic Servers in the cluster has been increased to: "+str(serverCount)
    save()
    activate()

# Connect to the local node manager, starting it first if necessary
def startNM():
    try:
        nmConnect(AdminUser,AdminServerPassword,ip_addr,5556,DomainName,DomainPath+DomainName,'plain')
    except:
        print "It seems Node Manager is not running. Starting Node Manager..."
        startNodeManager(NodeManagerHome=DomainPath+DomainName+'/nodemanager',PropertiesFile=DomainPath+DomainName+'/nodemanager/nodemanager.properties',verbose=false)
        print "Waiting 60 seconds for Node Manager to start..."
        time.sleep(60)
        nmConnect(AdminUser,AdminServerPassword,ip_addr,5556,DomainName,DomainPath+DomainName,'plain')

print "IP address of this Docker container is "+ip_addr
connect(AdminUser,AdminServerPassword,'http://'+admin_url)
nmEnroll(DomainPath+DomainName,DomainPath+DomainName+'/nodemanager')
cd('/')
cd('Machines')
machines = ls()
if (machines.find(hostname) != -1):
    print "Machine with name "+hostname+" already exists in this domain"
    changeMachineIP()
    startNM()
    cd('/')
    cd('Clusters/'+DynamicClusterName+'/DynamicServers/'+DynamicClusterName)
    serverCount = cmo.getMaximumDynamicServerCount()
    print serverCount
    start(ManagedServerPrefix+str(serverCount),'Server')
else:
    print "Adding a new machine named "+hostname+" to the domain..."
    addMachine()
    print "Adding a new Dynamic Server to the <"+DynamicClusterName+"> cluster"
    createMS()
    cd('/')
    cd('Clusters/'+DynamicClusterName+'/DynamicServers/'+DynamicClusterName)
    serverCount = cmo.getMaximumDynamicServerCount()
    print serverCount
    startNM()
    start(ManagedServerPrefix+str(serverCount),'Server')
exit()
##################################################################################
Container Lifecycle
Once the servers in the domain are up and running, they can be centrally controlled from the Docker command line, simply by stopping or starting the respective Docker containers. Since the containers have already been created, they can be referenced by name.
To stop the managed server: docker stop ManagedServer1

The managed server shows up as shutdown in the console:

The AdminServer container can be shut down and restarted in the same way: docker stop AdminServer.

To start the servers back up, there is no need to specify the container parameters a second time. It is enough to issue "docker start AdminServer" to boot up the admin server.

Once the admin server finishes booting, the administration console is again accessible:

Starting up the managed server again is just as simple: docker start ManagedServer1



Two things are worth mentioning here:
- Any changes in the domain configuration that are made during a container's lifetime are persisted only at the container level. This means that if the container is stopped and started, modifications are preserved in the container. But if the container is removed and recreated ("docker rm" and "docker run"), the changes are lost and everything reverts to the settings in the initial image. However, all changes can be saved to a new image by committing the container with the "docker commit" command. For example, any change to the domain configuration (e.g., a new deployment) can be preserved in a new image by running "docker commit AdminServer radudobrinescu/otn_build:2.0":

Note: The "latest" tag has nothing to do with the actual time of the image creation; it is the default tag assigned to an image when no other is explicitly specified.
- At each startup, a container is assigned a new IP address by the Docker engine. However, since the connection between the managed servers and the admin server is defined via the linking system (similar to name resolution), connectivity between them is always ensured.
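This can be verified after a restart: with legacy links, the Docker engine updates the alias entry in the managed server container's /etc/hosts file to point at the admin server's new address (illustrative output):
# docker exec ManagedServer1 cat /etc/hosts | grep AdminServerHost
172.17.0.5      AdminServerHost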
Conclusion
The proposed solution should ease the management of Fusion Middleware environments, and could represent a first step toward implementing an auto-scalable solution for running middleware applications on top of the WebLogic 12c platform. Other benefits of this approach are that development and testing environments are easy to generate based on specific organizational demands, and applications can be tested in various setups by simply starting up Docker containers from specific images. Load testing the applications would also benefit from this setup, as scaling the WebLogic cluster for additional load comes down to spawning new Docker containers that start up WebLogic instances which automatically join the cluster.
About the Author
Radu Dobrinescu is a certified expert in Oracle Fusion Middleware, and a project-based consultant with hands-on experience with business-critical production environments. His specialties include Oracle WebLogic Server (11g, 12c); Oracle Coherence; High Availability, Failover and Disaster Recovery Architectures; Fusion Middleware Performance Tuning; and more.
This article represents the expertise, findings, and opinion of the author. It has been published by Oracle in this space as part of a larger effort to encourage the exchange of such information within this Community, and to promote evaluation and commentary by peers. This article has not been reviewed by the relevant Oracle product team for compliance with Oracle's standards and practices, and its publication should not be interpreted as an endorsement by Oracle of the statements expressed therein.