WebLogic Scripting for Ansible [Article]

Version 16

    Oracle ACE Director René van Wijk, an expert in Fusion Middleware and other technologies, offers code samples galore in this detailed guide to writing WLST scripts for Ansible, the open source automation tool.


     

    by René van Wijk

     

    In order to write WebLogic scripts for automation tools such as Ansible, we need to take care of idempotency. Before we proceed with the scripts, we give a short introduction to the WebLogic Scripting Tool (WLST).

     

    Introducing WLST

     

    To enable JMX clients to control MBean life cycles, WebLogic MBeans contain operations that follow the design pattern for Java bean factory methods: for each child, a parent MBean contains a create<MBEAN_NAME> and a destroy<MBEAN_NAME> operation, where <MBEAN_NAME> is the short name of the MBean's type (the short name is the MBean's unqualified type name without the MBean suffix; for example, createCluster). The parent also contains a lookup<MBEAN_NAME> operation. The DomainMBean is an example of a parent MBean. To create a cluster, we call createCluster(String name). To see whether a cluster has already been created, we use lookupCluster(String name). Instead of using lookup<MBEAN_NAME>, we can also use getPath and getMBean. In this case, we have to understand how WebLogic registers the MBeans, i.e., how the object names are structured. For example, to obtain an instance of the ClusterMBean, we can use:

     

    cluster_bean_path = getPath('com.bea:Name=' + application_cluster_name + ',Type=Cluster');
    cluster = getMBean(cluster_bean_path);
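    The lookup-then-create pattern described above recurs in all the scripts that follow, so it can be factored into a small helper. A minimal sketch (the ensure_mbean name is illustrative; the lookup and create operations are passed in as callables, so inside WLST they would be, for example, cmo.lookupCluster and cmo.createCluster):

```python
def ensure_mbean(name, lookup, create):
    # Return (mbean, created): look the MBean up first and only create it
    # when it does not exist yet, so repeated runs leave the domain unchanged.
    existing = lookup(name)
    if existing is not None:
        return existing, False
    return create(name), True
```

    Inside a WLST session, a call would look like ensure_mbean(application_cluster_name, cmo.lookupCluster, cmo.createCluster).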

     

    The WebLogic Scripting Tool (WLST) can be used as the command-line equivalent of the WebLogic Server Administration Console (WLST online) or as the command-line equivalent of the Configuration Wizard (WLST offline). WLST offline has a few restrictions, though, one of them being that "offline edits are ignored by running servers", so in the following we will be using WLST online.

    The Administration Server provides some common access points:

     

    • The DomainRuntimeServiceMBean provides a common access point for navigating to all runtime and configuration MBeans in the domain as well as to MBeans that provide domain-wide services (such as controlling and monitoring the life cycles of servers and message-driven EJBs and coordinating the migration of migratable services).
    • The EditServiceMBean provides the entry point for managing the configuration of the current WebLogic Server domain.

     

    These access points can be used in Java classes, as was done in the post Automatic Scaling and Deployment Plans. Note that in the example presented in that post, we interact with the MBeans directly, without a proxy; a proxy can be created by using MBeanServerInvocationHandler (newProxyInstance), an example of which can be found here. The advantage of the proxy approach is that we can access MBeans as if they were local objects.

     

    The proxy approach is what the WebLogic Scripting Tool offers out of the box, i.e., once we are connected to the Administration Server, for example by using...

     

    def connect_to_admin_server():
        print 'CONNECTING TO ADMIN SERVER';
        
        if os.path.isfile(admin_server_config_file) and os.path.isfile(admin_server_key_file):
            print '- using config and key file';
            connect(userConfigFile=admin_server_config_file, userKeyFile=admin_server_key_file, url=admin_server_url);
        else:
            print '- using username and password';
            connect(admin_username, admin_password, admin_server_url);

    ...we can interact with the access points directly.

    [weblogic@machine1 bin]$ ./wlst.sh
    # connect to the admin server
    wls:/offline> connect(admin_username, admin_password, admin_server_url);
    
    # once connected the current management object (cmo) is initially a proxy of the DomainMBean
    wls:/tryout_domain/serverConfig> print cmo;
    [MBeanServerInvocationHandler]com.bea:Name=tryout_domain,Type=Domain
    
    # from the DomainMBean proxy, we can obtain (by using straightforward accessor methods) the configured clusters
    wls:/tryout_domain/serverConfig> clusters = cmo.getClusters();
    
    # again by using accessor methods we can obtain information about the cluster and also other MBeans that are linked to the cluster
    wls:/tryout_domain/serverConfig> for cluster in clusters:
    ... print cluster.getName();
    ... print cluster.getDynamicServers().getMaximumDynamicServerCount();
    ... print cluster.getDynamicServers().getServerTemplate().getStagingMode();
    ...
    application_cluster
    2
    stage
    
    # in Java the above would look like (using mbean server weblogic.management.mbeanservers.domainruntime)
    ObjectName domainRuntimeServiceName = new ObjectName("com.bea:Name=DomainRuntimeService,Type=weblogic.management.mbeanservers.domainruntime.DomainRuntimeServiceMBean");
    DomainRuntimeServiceMBean domainRuntimeService = (DomainRuntimeServiceMBean) MBeanServerInvocationHandler.newProxyInstance(connection, domainRuntimeServiceName, DomainRuntimeServiceMBean.class, false);
    DomainMBean domainConfiguration = domainRuntimeService.getDomainConfiguration();
    ClusterMBean[] clusters = domainConfiguration.getClusters();
    for (ClusterMBean cluster: clusters) {
        System.out.println(cluster.getName());
        System.out.println(cluster.getDynamicServers().getMaximumDynamicServerCount());
        System.out.println(cluster.getDynamicServers().getServerTemplate().getStagingMode());
    }
    # note that MBeanServerInvocationHandler is an instance of weblogic.management.jmx.MBeanServerInvocationHandler
    
    # in the same way as using cmo, we can also use the domainRuntimeService
    wls:/tryout_domain/serverConfig> print domainRuntimeService
    [MBeanServerInvocationHandler]com.bea:Name=DomainRuntimeService,Type=weblogic.management.mbeanservers.domainruntime.DomainRuntimeServiceMBean
    
    # the domainRuntimeService is also an access point to the domain runtime
    wls:/tryout_domain/serverConfig> domain_runtime = domainRuntimeService.getDomainRuntime();
    wls:/tryout_domain/serverConfig> print domain_runtime;
    [MBeanServerInvocationHandler]com.bea:Name=tryout_domain,Type=DomainRuntime
    
    # from the domain runtime we can obtain the ServerLifeCycleRuntimeMBean (that provides methods that transition servers from one state to another)
    wls:/tryout_domain/serverConfig> server_life_cycles = domain_runtime.getServerLifeCycleRuntimes();
    wls:/tryout_domain/serverConfig> for server_life_cycle in server_life_cycles:
    ... print server_life_cycle.getName() + ', ' + server_life_cycle.getState();
    ...
    server_2, SHUTDOWN
    AdminServer, RUNNING
    server_1, SHUTDOWN
    
    # in Java the above would look like (using mbean server weblogic.management.mbeanservers.domainruntime)
    DomainRuntimeMBean domainRuntime = domainRuntimeService.getDomainRuntime();
    ServerLifeCycleRuntimeMBean[] serverLifeCycles = domainRuntime.getServerLifeCycleRuntimes();
    for (ServerLifeCycleRuntimeMBean serverLifeCycle: serverLifeCycles) {
        System.out.println(serverLifeCycle.getName() + ", " + serverLifeCycle.getState());
    }
    
    # in the same way as using domainRuntimeService, we can also use the editService
    wls:/tryout_domain/serverConfig> print editService;
    [MBeanServerInvocationHandler]com.bea:Name=EditService,Type=weblogic.management.mbeanservers.edit.EditServiceMBean
    
    # obtain the ConfigurationManagerMBean
    wls:/tryout_domain/serverConfig> config = editService.getConfigurationManager();
    wls:/tryout_domain/serverConfig> print config;
    [MBeanServerInvocationHandler]com.bea:Name=ConfigurationManager,Type=weblogic.management.mbeanservers.edit.ConfigurationManagerMBean
    
    # use the ConfigurationManagerMBean's startEdit() operation to start an edit session
    wls:/tryout_domain/serverConfig> domain = config.startEdit(60000,120000);
    
    # create a cluster
    wls:/tryout_domain/serverConfig> cluster = domain.createCluster('some_cluster');
    wls:/tryout_domain/serverConfig> cluster.setClusterMessagingMode('unicast');
    wls:/tryout_domain/serverConfig> cluster.setWeblogicPluginEnabled(java.lang.Boolean('true'));
    
    # save the changes
    wls:/tryout_domain/serverConfig> config.save();
    
    # use the ConfigurationManagerMBean's activate() operation to activate the saved changes
    wls:/tryout_domain/serverConfig> activate = config.activate(120000);
    
    # by using the ActivationTaskMBean we get information about the change
    wls:/tryout_domain/serverConfig> print activate.getDetails();
    Activation Task started at 1445334386652
    User that initiated this task weblogic
    Changes that are being activated are:
        weblogic.management.mbeanservers.edit.internal.ChangeImpl@4cfabd4e
        weblogic.management.mbeanservers.edit.internal.ChangeImpl@535f766b
    Status of this activation per server:
        ServerName : AdminServer
        Status : COMMITTED
    
    # in Java the above would look like (using mbean server weblogic.management.mbeanservers.edit)
    ObjectName editServiceName = new ObjectName("com.bea:Name=EditService,Type=weblogic.management.mbeanservers.edit.EditServiceMBean");
    EditServiceMBean editService = (EditServiceMBean) MBeanServerInvocationHandler.newProxyInstance(editConnection, editServiceName, EditServiceMBean.class, false);
    ConfigurationManagerMBean configurationManager = editService.getConfigurationManager();
    DomainMBean domainEdit = configurationManager.startEdit(60000, 120000);
    if (domainEdit.lookupCluster("some_cluster") == null) {
        ClusterMBean clusterEdit = domainEdit.createCluster("some_cluster");
        clusterEdit.setClusterMessagingMode("unicast");
        clusterEdit.setWeblogicPluginEnabled(true);
    }
    configurationManager.save();
    Change[] changes = configurationManager.getUnactivatedChanges();
    if (changes.length > 0) {
        ActivationTaskMBean activationTask = configurationManager.activate(120000);
        System.out.println(activationTask.getDetails());
    } else {
        configurationManager.cancelEdit();
    }
        
    # the domainRuntimeService is also an entry point to the domain configuration    
    wls:/tryout_domain/serverConfig> domain_config = domainRuntimeService.getDomainConfiguration();
    wls:/tryout_domain/serverConfig> clusters = domain_config.getClusters();
    wls:/tryout_domain/serverConfig> for cluster in clusters:
    ... print cluster.getName();
    ...
    application_cluster
    some_cluster

    To edit the domain, we can also use the WLST Editing Commands, for example:

    def start_edit_mode():
        print 'START EDIT MODE';
        edit();
        startEdit();
    
    def save_and_activate_changes():
        print 'SAVE AND ACTIVATE CHANGES';
        save();
        activate(block='true');

    Note that in order to edit configuration beans, we must be connected to the Administration Server, and we must navigate to the edit tree and start an edit session, as shown above in the start_edit_mode method.
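    The edit/save/activate sequence above can also be wrapped so that a failed change cancels the edit session instead of leaving it open. A minimal sketch (the with_edit_session name and the wlst adapter argument are illustrative; inside wlst.sh the commands edit, startEdit, save, activate, and cancelEdit are simply globals, so a tiny adapter object can forward to them):

```python
def with_edit_session(change_fn, wlst):
    # Run change_fn inside an edit session and activate the result;
    # on any failure, cancel the edit so no half-applied change remains.
    wlst.edit()
    wlst.startEdit()
    try:
        change_fn()
        wlst.save()
        wlst.activate(block='true')
    except:
        wlst.cancelEdit('y')  # discard the unsaved changes
        raise
```

    Passing the WLST commands in explicitly also makes the wrapper testable outside a running domain.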

     

    WLST scripts

     

    To reflect a form of idempotency in WLST scripts, we first check whether a certain MBean already exists, and create it only when it does not.

     

    Cluster

     

    To create a cluster, we can use...

     

    def create_cluster():
        print 'CREATING CLUSTER';
        
        if cmo.lookupCluster(application_cluster_name) is None:
            print '- creating cluster ' + application_cluster_name;
            cluster = cmo.createCluster(application_cluster_name);
            cluster.setClusterMessagingMode('unicast');
            cluster.setWeblogicPluginEnabled(java.lang.Boolean('true'));
        else:
            print '- cluster ' + application_cluster_name + ' already exists and will not be created.';

    ...in which we set the messaging mode to unicast, and enable the WebLogic Plugin setting.

     

    Machine

     

    Machines can be created in a similar way by using the MachineMBean. In the example below, we use the UnixMachineMBean, which extends the MachineMBean with properties specific to the UNIX platform. To create machines, we can use...

    def create_machines():
        print 'CREATING MACHINES';
        
        for i in range(len(machine_listen_addresses)):
            machine_name = 'machine_' + machine_listen_addresses[i].split('.')[0];
            if cmo.lookupMachine(machine_name) is None:
                print '- creating machine ' + machine_name;
                machine = cmo.createUnixMachine(machine_name);
                machine.setPostBindUIDEnabled(java.lang.Boolean('true'));
                machine.setPostBindUID(machine_user_id);
                machine.setPostBindGIDEnabled(java.lang.Boolean('true'));
                machine.setPostBindGID(machine_group_id);
                machine.getNodeManager().setListenAddress(machine_listen_addresses[i]);
                machine.getNodeManager().setNMType(node_manager_mode);
            else:
                print '- machine ' + machine_name + ' already exists and will not be created.';

    ...in which we set some Unix-specific settings, and configure the node manager associated with the machine.
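    The naming convention used in create_machines can be isolated into a one-liner, which is handy when other scripts (for example, the server creation below) need to refer to the same machine names. A sketch (the helper name is illustrative):

```python
def machine_name_for(listen_address):
    # The machine name used by create_machines: 'machine_' plus the first
    # label of the listen address (the short host name when a fully
    # qualified name is given).
    return 'machine_' + listen_address.split('.')[0]
```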

     

    Server

     

    To create WebLogic Servers, we use the ServerMBean (which extends the ServerTemplateMBean). Here, we need the previously created cluster and machines: we obtain the cluster by using getPath and getMBean, and the machines from the current management object (cmo).

     

    def create_servers():
        print 'CREATING SERVERS';
        
        cluster_bean_path = getPath('com.bea:Name=' + application_cluster_name + ',Type=Cluster');
        cluster = getMBean(cluster_bean_path);    
        machines = cmo.getMachines();
        
        for i in range(len(machines)):
            for j in range(number_of_managed_servers_per_machine):
                managed_server_listen_port = managed_server_listen_port_start + j;
                managed_server_server_name = 'server_' + repr(i) + '_' + repr(j);
                if cmo.lookupServer(managed_server_server_name) is None:
                    print '- creating managed server ' + managed_server_server_name;
                    server = cmo.createServer(managed_server_server_name);
                    server.setListenPort(managed_server_listen_port);
                    server.setListenAddress(machine_listen_addresses[i]);
                    server.setCluster(cluster);
                    server.setMachine(machines[i]);
                    overload_protection = server.getOverloadProtection();
                    overload_protection.setFailureAction('force-shutdown');
                    overload_protection.setPanicAction('system-exit');
                    overload_protection.createServerFailureTrigger();
                    overload_protection.getServerFailureTrigger().setMaxStuckThreadTime(600);
                    overload_protection.getServerFailureTrigger().setStuckThreadCount(0);
                    server_log = server.getLog();
                    server_log.setRotationType('bySize');
                    server_log.setFileMinSize(5000);
                    server_log.setNumberOfFilesLimited(java.lang.Boolean('true'));
                    server_log.setFileCount(10);
                    server_log.setLogFileSeverity('Info');
                    server_log.setStdoutSeverity('Error');
                    server_log.setDomainLogBroadcastSeverity('Error');
                    web_server_log = server.getWebServer().getWebServerLog();
                    web_server_log.setLoggingEnabled(java.lang.Boolean('false'));
                    web_server_log.setRotationType('bySize');
                    web_server_log.setFileMinSize(5000);
                    web_server_log.setNumberOfFilesLimited(java.lang.Boolean('true'));
                    web_server_log.setFileCount(10);
                    if configure_ssl == 'yes':
                        managed_server_ssl_listen_port = managed_server_ssl_listen_port_start + j;
                        server.setKeyStores(config_type);
                        server.setCustomIdentityKeyStoreFileName(key_store_file_name);
                        server.setCustomIdentityKeyStoreType(store_type);
                        server.setCustomIdentityKeyStorePassPhrase(key_store_pass_phrase);
                        server.setCustomTrustKeyStoreFileName(trust_store_file_name);
                        server.setCustomTrustKeyStoreType(store_type);
                        server.setCustomTrustKeyStorePassPhrase(trust_store_pass_phrase);
                        ssl = server.getSSL();
                        ssl.setEnabled(java.lang.Boolean('true'));
                        ssl.setListenPort(managed_server_ssl_listen_port);
                        ssl.setServerPrivateKeyAlias(private_key_alias);
                        ssl.setServerPrivateKeyPassPhrase(private_key_pass_phrase);
                        ssl.setHostnameVerificationIgnored(java.lang.Boolean('true'));
                        ssl.setHostnameVerifier(None);
                        ssl.setTwoWaySSLEnabled(java.lang.Boolean('false'));
                        ssl.setClientCertificateEnforced(java.lang.Boolean('false'));
                else:
                    print '- server ' + managed_server_server_name + ' already exists and will not be created.';

     

    ...in which we configure overload protection by using the OverloadProtectionMBean, configure logging by using the LogMBean, and, when needed, configure SSL by using the SSLMBean.
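    The name/address/port layout that create_servers builds can be computed up front, which makes it easy to verify (or unit test) the topology before touching the domain. A sketch that mirrors the loops above (the plan_servers name is illustrative):

```python
def plan_servers(machine_listen_addresses, servers_per_machine, listen_port_start):
    # Reproduce the layout built by create_servers: names of the form
    # server_<machine index>_<server index>, with the same port range
    # repeated on every machine (only the listen addresses differ).
    plan = []
    for i in range(len(machine_listen_addresses)):
        for j in range(servers_per_machine):
            plan.append(('server_%d_%d' % (i, j),
                         machine_listen_addresses[i],
                         listen_port_start + j))
    return plan
```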

     

    Dynamic cluster

     

    In the same manner as we created a 'normal' cluster, we can also create a dynamic cluster. For example:

     

    def create_dynamic_cluster():
        print 'CREATING DYNAMIC CLUSTER';
    
        cluster = cmo.lookupCluster(application_cluster_name)
        if cluster is None:
            print '- creating dynamic cluster ' + application_cluster_name;
            cluster = cmo.createCluster(application_cluster_name);
            cluster.setClusterMessagingMode('unicast');
            cluster.setWeblogicPluginEnabled(java.lang.Boolean('true'));
            server_template_name = application_cluster_name + '_server_template';
            print '- creating server template ' + server_template_name
            cmo.createServerTemplate(server_template_name);
            server_template = cmo.lookupServerTemplate(server_template_name);
            server_template.setListenPort(managed_server_listen_port_start);
            server_template.setCluster(cluster);
            overload_protection = server_template.getOverloadProtection();
            overload_protection.setFailureAction('force-shutdown');
            overload_protection.setPanicAction('system-exit');
            overload_protection.createServerFailureTrigger();
            overload_protection.getServerFailureTrigger().setMaxStuckThreadTime(600);
            overload_protection.getServerFailureTrigger().setStuckThreadCount(0);
            server_log = server_template.getLog();
            server_log.setRotationType('bySize');
            server_log.setFileMinSize(5000);
            server_log.setNumberOfFilesLimited(java.lang.Boolean('true'));
            server_log.setFileCount(10);
            server_log.setLogFileSeverity('Info');
            server_log.setStdoutSeverity('Error');
            server_log.setDomainLogBroadcastSeverity('Error');
            web_server_log = server_template.getWebServer().getWebServerLog();
            web_server_log.setLoggingEnabled(java.lang.Boolean('false'));
            web_server_log.setRotationType('bySize');
            web_server_log.setFileMinSize(5000);
            web_server_log.setNumberOfFilesLimited(java.lang.Boolean('true'));
            web_server_log.setFileCount(10);
            if configure_ssl == 'yes':
                server_template.setKeyStores(config_type);
                server_template.setCustomIdentityKeyStoreFileName(key_store_file_name);
                server_template.setCustomIdentityKeyStoreType(store_type);
                server_template.setCustomIdentityKeyStorePassPhrase(key_store_pass_phrase);
                server_template.setCustomTrustKeyStoreFileName(trust_store_file_name);
                server_template.setCustomTrustKeyStoreType(store_type);
                server_template.setCustomTrustKeyStorePassPhrase(trust_store_pass_phrase);
                ssl = server_template.getSSL();
                ssl.setEnabled(java.lang.Boolean('true'));
                ssl.setListenPort(managed_server_ssl_listen_port_start);
                ssl.setServerPrivateKeyAlias(private_key_alias);
                ssl.setServerPrivateKeyPassPhrase(private_key_pass_phrase);
                ssl.setHostnameVerificationIgnored(java.lang.Boolean('true'));
                ssl.setHostnameVerifier(None);
                ssl.setTwoWaySSLEnabled(java.lang.Boolean('false'));
                ssl.setClientCertificateEnforced(java.lang.Boolean('false'));
            
            print '- adding server template to the cluster ' + application_cluster_name;
            cluster.getDynamicServers().setServerTemplate(server_template);
            dynamic_server_count = len(machine_listen_addresses) * number_of_managed_servers_per_machine;
            cluster.getDynamicServers().setMaximumDynamicServerCount(dynamic_server_count);
            cluster.getDynamicServers().setMachineNameMatchExpression('machine*');
            cluster.getDynamicServers().setServerNamePrefix('server_');
            cluster.getDynamicServers().setCalculatedListenPorts(java.lang.Boolean('true'));
            cluster.getDynamicServers().setCalculatedMachineNames(java.lang.Boolean('true'));
            cluster.getDynamicServers().setCalculatedListenPorts(java.lang.Boolean('true'));
        else:
            print '- cluster ' + application_cluster_name + ' already exists and will not be created.';        
            print '- checking if the number of dynamic servers needs to be changed';
            dynamic_server_count = len(machine_listen_addresses) * number_of_managed_servers_per_machine;
            if cluster.getDynamicServers().getMaximumDynamicServerCount() != dynamic_server_count:
                print '- changing the number of dynamic servers';
                cluster.getDynamicServers().setMaximumDynamicServerCount(dynamic_server_count);

     

    The main difference is that we also include a server template. Note that the number of servers in the dynamic cluster is controlled by the maximum dynamic server count, which can be changed after the cluster has been created. Keep in mind that the number of servers cannot be lowered while all the servers are running; in other words, servers in a dynamic cluster can only be removed when they are shut down.
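    That shrink restriction can be captured in a pre-check before changing the maximum dynamic server count. A sketch, under the assumption that the highest-numbered dynamic servers are the ones that disappear when the count is lowered (the helper name and that ordering assumption are illustrative):

```python
def can_shrink_dynamic_cluster(server_states, new_count):
    # server_states is the list of server states ordered by server index.
    # Refuse the change when any server beyond the new count is not yet
    # shut down, since running servers cannot be removed from the cluster.
    for state in server_states[new_count:]:
        if state != 'SHUTDOWN':
            return False
    return True
```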

     

    Coherence cluster

     

    A Coherence cluster is represented as a single system-level resource, which can be created by using the CoherenceClusterSystemResourceMBean.

    def create_coherence_cluster_system_resource():
        print 'CREATING COHERENCE CLUSTER SYSTEM RESOURCE';
        
        coherence_cluster_system_resource = cmo.lookupCoherenceClusterSystemResource(coherence_cluster_system_resource_name);
        if coherence_cluster_system_resource is None:
            print '- creating coherence cluster system resource ' + coherence_cluster_system_resource_name;
            coherence_cluster_system_resource = cmo.createCoherenceClusterSystemResource(coherence_cluster_system_resource_name);
            coherence_cluster_resource = coherence_cluster_system_resource.getCoherenceClusterResource();
            coherence_cluster_params = coherence_cluster_resource.getCoherenceClusterParams();
            coherence_cluster_params.setClusteringMode('unicast');
            coherence_cluster_params.setUnicastListenAddress('localhost');
            coherence_cluster_params.setUnicastListenPort(coherence_cluster_listen_port);
            coherence_cluster_params.setUnicastPortAutoAdjust(java.lang.Boolean('true'));
            coherence_cluster_params.setMulticastListenAddress('231.1.1.1');
            coherence_cluster_params.setMulticastListenPort(7777);
            cd('/CoherenceClusterSystemResources/' + coherence_cluster_system_resource_name + '/CoherenceClusterResource/' + coherence_cluster_system_resource_name + '/CoherenceClusterParams/' + coherence_cluster_system_resource_name + '/CoherenceClusterWellKnownAddresses/' + coherence_cluster_system_resource_name);
            for i in range(len(well_known_addresses)):            
                well_known_address_name = 'wka_' + well_known_addresses[i].split('.')[0];
                print '- creating well known address ' + well_known_address_name;
                cmo.createCoherenceClusterWellKnownAddress(well_known_address_name);
                address = cmo.lookupCoherenceClusterWellKnownAddress(well_known_address_name);
                address.setListenAddress(well_known_addresses[i]);
                address.setListenPort(coherence_cluster_listen_port);
            cd('/');
            
            print '- associate the coherence cluster system resource ' + coherence_cluster_system_resource_name + ' to the coherence cluster ' + coherence_cluster_name;
            coherence_cluster_bean_path = getPath('com.bea:Name=' + coherence_cluster_name + ',Type=Cluster');
            coherence_cluster_bean = getMBean(coherence_cluster_bean_path);
            coherence_cluster_bean.setCoherenceClusterSystemResource(coherence_cluster_system_resource);
            if use_coherence_web == 'yes':
                coherence_cluster_bean.getCoherenceTier().setCoherenceWebLocalStorageEnabled(java.lang.Boolean('true'));
                coherence_cluster_bean.getCoherenceTier().setLocalStorageEnabled(java.lang.Boolean('false'));
            else:
                coherence_cluster_bean.getCoherenceTier().setCoherenceWebLocalStorageEnabled(java.lang.Boolean('false'));          
                coherence_cluster_bean.getCoherenceTier().setLocalStorageEnabled(java.lang.Boolean('true'));
    
            print '- associate the coherence cluster system resource ' + coherence_cluster_system_resource_name + ' to the application cluster ' + application_cluster_name;
            application_cluster_bean_path = getPath('com.bea:Name=' + application_cluster_name + ',Type=Cluster');
            application_cluster_bean = getMBean(application_cluster_bean_path);
            application_cluster_bean.setCoherenceClusterSystemResource(coherence_cluster_system_resource);
            application_cluster_bean.getCoherenceTier().setCoherenceWebLocalStorageEnabled(java.lang.Boolean('false'));
            application_cluster_bean.getCoherenceTier().setLocalStorageEnabled(java.lang.Boolean('false'));
        else:
            print '- coherence cluster system resource ' + coherence_cluster_system_resource_name + ' already exists and will not be created.';
            print '- checking if new well known addresses need to be created.';
            cd('/CoherenceClusterSystemResources/' + coherence_cluster_system_resource_name + '/CoherenceClusterResource/' + coherence_cluster_system_resource_name + '/CoherenceClusterParams/' + coherence_cluster_system_resource_name + '/CoherenceClusterWellKnownAddresses/' + coherence_cluster_system_resource_name);
            for i in range(len(well_known_addresses)):            
                well_known_address_name = 'wka_' + well_known_addresses[i].split('.')[0]
                if cmo.lookupCoherenceClusterWellKnownAddress(well_known_address_name) is None:
                    print '- creating well known address ' + well_known_address_name;
                    cmo.createCoherenceClusterWellKnownAddress(well_known_address_name);
                    address = cmo.lookupCoherenceClusterWellKnownAddress(well_known_address_name);
                    address.setListenAddress(well_known_addresses[i]);
                    address.setListenPort(coherence_cluster_listen_port);
                else:
                    print '- well known address ' + well_known_address_name + ' already exists and will not be created.';
            cd('/');

     

    Here, we set the cluster mode to unicast, and add well known addresses. Note that well known addresses must use the same port as the unicast port (which in the script above is represented by the coherence_cluster_listen_port variable).

     

    In general, a Coherence cluster is made up of a data tier and an application tier. The data tier in the script above is a WebLogic cluster defined by the coherence_cluster_name variable; the application tier is a WebLogic cluster defined by the application_cluster_name variable. The clusters for the data tier and the application tier are created in the same manner (only the name differs). When creating the Coherence cluster, we also set the Coherence tier for the clusters, i.e., we define whether the cluster members are storage-enabled nodes, by using the CoherenceTierMBean.

     

    Java EE

     

    When dealing with Java EE applications (such as the one presented here), we also need to create resources such as JDBC data sources and JMS administered objects (or, in WebLogic terms, JMS resources).

     

    JDBC data source

     

    To create JDBC data sources, we use the JDBCSystemResourceMBean, from which we obtain a JDBCDataSourceBean instance (which contains, among others, connection pool settings and driver parameters). By considering some of the notes in the Tuning Data Source Connection Pools document, we can come up with something like the following to configure data sources:

     

    def create_data_source(targets):
        print 'CREATING DATA SOURCES';
    
        for i in range(len(data_source_names)):
            print '- creating jdbc system resource';
            if cmo.lookupJDBCSystemResource(data_source_names[i]) is None:
                data_source = cmo.createJDBCSystemResource(data_source_names[i]);
                data_source_targets = data_source.getTargets();
                data_source_targets.append(targets);
                data_source.setTargets(data_source_targets);
                jdbc_resource = data_source.getJDBCResource();
                jdbc_resource.setName(data_source_names[i]);
                data_source_params = jdbc_resource.getJDBCDataSourceParams();
                names = [data_source_jndi_names[i]];
                data_source_params.setJNDINames(names);
                if data_source_safe_transaction == 'yes':
                    data_source_params.setGlobalTransactionsProtocol('LoggingLastResource');
                else:
                    data_source_params.setGlobalTransactionsProtocol('EmulateTwoPhaseCommit');            
                driver_params = jdbc_resource.getJDBCDriverParams();
                driver_params.setUrl(data_source_url);
                driver_params.setDriverName(data_source_driver);
                driver_params.setPassword(data_source_passwords[i]);
                driver_properties = driver_params.getProperties();
                driver_properties.createProperty('user');
                user_property = driver_properties.lookupProperty('user');
                user_property.setValue(data_source_users[i]);
                connection_pool_params = jdbc_resource.getJDBCConnectionPoolParams();
                connection_pool_params.setTestTableName(data_source_test);
                connection_pool_params.setConnectionCreationRetryFrequencySeconds(30);
                connection_pool_params.setStatementCacheSize(0);
                if data_source_grid_link == 'yes':
                    oracle_params = jdbc_resource.getJDBCOracleParams();
                    oracle_params.setFanEnabled(java.lang.Boolean('true'));
                    oracle_params.setOnsNodeList(data_source_ons_node_list);
                    oracle_params.setOnsWalletFile('');
                    oracle_params.unSet('OnsWalletPassword');
                    oracle_params.unSet('OnsWalletPasswordEncrypted');
                    oracle_params.setActiveGridlink(java.lang.Boolean('true'));
            else:
                print ' - jdbc system resource ' + data_source_names[i] + ' already exists and will not be created.';

     

    Here, a small note on statement caching is in order (we turned the WebLogic statement cache off by setting the statement cache size to zero). The statement cache size is the number of statements to store in the cache for each connection. When the WebLogic statement cache is turned on, the cached statement objects are very large (around 5MB each). With a lot of database connections this leads to huge memory consumption (number_of_data_sources * number_of_connections * number_of_statements), with the result that the garbage collector works overtime. While the WebLogic statement cache is slightly faster than the Oracle statement cache, the Oracle statement cache uses far less memory (because it can cache more efficiently), so it is generally recommended to turn off the WebLogic statement cache and turn on the Oracle statement cache using the connection property oracle.jdbc.implicitstatementcachesize (more information on statement caching can be found in the Database JDBC Developer's Guide).
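    To get a feel for the numbers, a quick back-of-the-envelope calculation helps (plain Python, and the 5MB per cached statement as well as the pool sizes below are illustrative assumptions, not measurements):

```python
# Rough worst-case estimate of WebLogic statement cache memory consumption.
# All figures are illustrative assumptions.
MB = 1024 * 1024

def statement_cache_memory(num_data_sources, connections_per_pool,
                           statements_per_connection, bytes_per_statement=5 * MB):
    """Worst-case heap retained by the WebLogic statement cache."""
    return (num_data_sources * connections_per_pool *
            statements_per_connection * bytes_per_statement)

# For example: 4 data sources, 50 connections each, cache size 10
total = statement_cache_memory(4, 50, 10)
print('%.1f GB' % (total / float(1024 ** 3)))  # prints "9.8 GB"
```

    Even a modest configuration ends up retaining gigabytes of heap, which is why the scripts above set the statement cache size to zero.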

     

    When working with data sources, it can make sense to add work managers to the environment as well, so that threads do not have to wait (polling and burning CPU resources) for a connection from the pool, but are queued instead. Some tuning recommendations concerning work managers are presented here. To create work managers (and related constraints and request classes), we can use the SelfTuningMBean, for example:

     

    def create_work_manager(targets):
        print 'CREATING WORK MANAGERS';
        self_tuning = cmo.getSelfTuning();
        for i in range(len(data_source_names)):
            work_manager_name = data_source_names[i] + '_work_manager';
            if self_tuning.lookupWorkManager(work_manager_name) is None:
                print '- creating work manager ' + work_manager_name + '.';
                work_manager = self_tuning.createWorkManager(work_manager_name);
                work_manager_targets = work_manager.getTargets();
                work_manager_targets.append(targets);
                work_manager.setTargets(work_manager_targets);
            
                max_threads_constraint_name = data_source_names[i] + '_max_threads_constraint';
                print '- creating max threads constraint ' + max_threads_constraint_name + '.';
                max_threads_constraint = self_tuning.createMaxThreadsConstraint(max_threads_constraint_name);
                max_threads_constraint.setTargets(work_manager_targets);
                max_threads_constraint.setConnectionPoolName(data_source_names[i]);
                max_threads_constraint.unSet('Count');
                
                capacity_name = data_source_names[i] + '_capacity';
                print '- creating capacity ' + capacity_name + '.';
                capacity = self_tuning.createCapacity(capacity_name);
                capacity.setTargets(work_manager_targets);
                capacity.setCount(200);
        
                print '- setting work manager constraints.';
                work_manager.setMaxThreadsConstraint(max_threads_constraint);
                work_manager.setCapacity(capacity);
                work_manager.setIgnoreStuckThreads(java.lang.Boolean('false'));
            else:
                print ' - work manager ' + work_manager_name + ' already exists and will not be created.';

     

    In the example above, we couple a work manager with a max threads constraint and a capacity. The max threads constraint is defined in terms of the availability of a resource that requests depend upon, in this case the data source connection pool: by leaving the Count unset and setting the ConnectionPoolName, the number of concurrent threads executing requests from the constrained work set is limited to the number of connections in the pool.
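    Every function above follows the same lookup-then-create discipline, which is what makes the scripts idempotent: running them a second time leaves the domain unchanged. Stripped of the MBean machinery, the pattern looks like this (a plain-Python sketch in which a dict stands in for the parent MBean, not actual WLST):

```python
# Plain-Python model of the lookup-then-create pattern used in the
# WLST functions above; the dict stands in for the parent MBean.
def ensure_resource(parent, name, factory):
    """Create the named resource only if it does not exist yet."""
    if parent.get(name) is None:
        parent[name] = factory(name)
        return True   # created on this run
    return False      # already present: a no-op, hence idempotent

domain = {}
created = ensure_resource(domain, 'application_wm', lambda n: {'name': n})
created_again = ensure_resource(domain, 'application_wm', lambda n: {'name': n})
# created is True, created_again is False: the second run changed nothing
```

    This no-op-on-rerun behavior is exactly what Ansible expects from the tasks that invoke these scripts.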

     

    JMS resources

    An enterprise messaging system enables applications to asynchronously communicate with one another through the exchange of messages. A message is a request, report, and/or event that contains information needed to coordinate communication between different applications. A message provides a level of abstraction, allowing us to separate the details about the destination system from the application code. The Java Message Service (JMS) is a standard API for accessing enterprise messaging systems that is implemented by industry messaging providers. Specifically, JMS:

     

    • Enables Java applications that share a messaging system to exchange messages.
    • Simplifies application development by providing a standard interface for creating, sending, and receiving messages.

     

    WebLogic JMS accepts messages from producer applications and delivers them to consumer applications. The major components of the WebLogic JMS architecture include:

     

    • A JMS server is an environment-related configuration entity that acts as a management container for the JMS queue and topic resources defined within the JMS modules targeted to that JMS server. A JMS server's primary responsibility for its targeted destinations is to maintain information on which persistent store is used for any persistent messages that arrive on the destinations, and to maintain the states of durable subscribers created on the destinations. We can configure one or more JMS servers per domain, and a JMS server can manage one or more JMS modules.
    • JMS modules contain configuration resources, such as standalone queue and topic destinations, distributed destinations, and connection factories.
    • Client JMS applications that either produce messages to destinations or consume messages from destinations.
    • JNDI (Java Naming and Directory Interface), which provides a server lookup facility.
    • WebLogic persistent storage (a server instance's default store, a user-defined file store, or a user-defined JDBC-accessible store) for storing persistent message data.

     

    To set up a messaging system with persistent messaging, we can use:

     

    def create_messaging_resources(targets):
        print 'CREATING MESSAGING RESOURCES';
        
        print '- creating file store';
        file_store_name = application_cluster_name + '_filestore';
        if cmo.lookupFileStore(file_store_name) is None:
            file_store = cmo.createFileStore(file_store_name);
            file_store.setDirectory(domain_application_home);
            file_store_targets = file_store.getTargets();
            file_store_targets.append(targets);
            file_store.setTargets(file_store_targets);
        else:
            print ' - file store ' + file_store_name + ' already exists and will not be created.';
        
        print '- creating jms server';
        jms_server_name = application_cluster_name + '_jms_server';
        if cmo.lookupJMSServer(jms_server_name) is None:
            jms_server = cmo.createJMSServer(jms_server_name);
            file_store = cmo.lookupFileStore(file_store_name);
            jms_server.setPersistentStore(file_store);
            jms_server_targets = jms_server.getTargets();
            jms_server_targets.append(targets);
            jms_server.setTargets(jms_server_targets);
        else:
            print ' - jms server ' + jms_server_name + ' already exists and will not be created.';
    
        print '- creating jms system module';
        if cmo.lookupJMSSystemResource(jms_system_resource_name) is None:
            module = cmo.createJMSSystemResource(jms_system_resource_name);
            module_targets = module.getTargets();
            module_targets.append(targets);
            module.setTargets(module_targets);
            module.createSubDeployment(sub_deployment_name);
            sub_deployment_targets = [];
            jms_server = cmo.lookupJMSServer(jms_server_name);
            sub_deployment_targets.append(ObjectName(repr(jms_server.getObjectName())));
            cd('/JMSSystemResources/'+ jms_system_resource_name +'/SubDeployments/' + sub_deployment_name);
            set('Targets', jarray.array(sub_deployment_targets, ObjectName));
            cd('/');
        else:
            print ' - jms system module ' + jms_system_resource_name + ' already exists and will not be created.';
        
        module = cmo.lookupJMSSystemResource(jms_system_resource_name);
        resource = module.getJMSResource();
        print '- creating connection factories';
        for i in range(len(connection_factory_names)):
            if resource.lookupConnectionFactory(connection_factory_names[i]) is None:
                resource.createConnectionFactory(connection_factory_names[i]);
                connection_factory = resource.lookupConnectionFactory(connection_factory_names[i]);
                connection_factory.setJNDIName(connection_factory_jndi_names[i]);
                connection_factory.setDefaultTargetingEnabled(java.lang.Boolean('true'));
                #connection_factory.getDefaultDeliveryParams().setDefaultUnitOfOrder('.System');
                connection_factory.getTransactionParams().setTransactionTimeout(0);
                connection_factory.getTransactionParams().setXAConnectionFactoryEnabled(java.lang.Boolean('true'));
                connection_factory.getLoadBalancingParams().setLoadBalancingEnabled(java.lang.Boolean('true'));
                connection_factory.getLoadBalancingParams().setServerAffinityEnabled(java.lang.Boolean('false'));
                connection_factory.getSecurityParams().setAttachJMSXUserId(java.lang.Boolean('false'));
                connection_factory.getClientParams().setClientIdPolicy('Restricted');
                connection_factory.getClientParams().setSubscriptionSharingPolicy('Exclusive');
                connection_factory.getClientParams().setMessagesMaximum(10);
            else:
                print ' - connection factory ' + connection_factory_names[i] + ' already exists and will not be created.';
    
        print '- creating uniform distributed queues';
        for i in range(len(distributed_queue_names)):
            if resource.lookupUniformDistributedQueue(distributed_queue_names[i]) is None:
                resource.createUniformDistributedQueue(distributed_queue_names[i]);
                distributed_queue = resource.lookupUniformDistributedQueue(distributed_queue_names[i]);
                distributed_queue.setJNDIName(distributed_queue_jndi_names[i]);
                distributed_queue.setLoadBalancingPolicy('Round-Robin');
                distributed_queue.setSubDeploymentName(sub_deployment_name);
                distributed_queue.setForwardDelay(30);
                
                if resource.lookupUniformDistributedQueue(distributed_error_queue_names[i]) is None:
                    resource.createUniformDistributedQueue(distributed_error_queue_names[i]);
                    distributed_error_queue = resource.lookupUniformDistributedQueue(distributed_error_queue_names[i]);
                    distributed_error_queue.setJNDIName(distributed_error_queue_jndi_names[i]);
                    distributed_error_queue.setLoadBalancingPolicy('Round-Robin');
                    distributed_error_queue.setSubDeploymentName(sub_deployment_name);
                else:
                    print ' - uniform error distributed queue ' + distributed_error_queue_names[i] + ' already exists and will not be created.';
                
                distributed_error_queue = resource.lookupUniformDistributedQueue(distributed_error_queue_names[i]);
                distributed_queue.getDeliveryFailureParams().setRedeliveryLimit(2);
                distributed_queue.getDeliveryFailureParams().setExpirationPolicy('Redirect');
                distributed_queue.getDeliveryFailureParams().setErrorDestination(distributed_error_queue);
                distributed_queue.getDeliveryParamsOverrides().setRedeliveryDelay(120);
            else:
                print ' - uniform distributed queue ' + distributed_queue_names[i] + ' already exists and will not be created.';

     

    Here, we create a user-defined file store by using the FileStoreMBean, and use the JMSServerMBean to create a JMS server that is coupled to the created user-defined file store. Next, we create a JMS system resource by using the JMSSystemResourceMBean, and create a SubDeploymentMBean that we target at the created JMS server. To configure a connection factory and a distributed queue, we first obtain a proxy for the JMSBean, on which we call createConnectionFactory(String name) and createUniformDistributedQueue(String name) to create a connection factory and a uniform distributed queue, respectively. For the connection factory we have also set client policies, which become important when using durable subscriptions (more information on client policies can be found here). The Tuning WebLogic JMS document provides information on getting the most out of WebLogic JMS.

     

    When using Message Unit-of-Order, it makes sense to set up a migration service for the JMS server and the corresponding persistent store. Note that Message Unit-of-Order is a WebLogic Server feature that enables a stand-alone message producer, or a group of producers acting as one, to group messages into a single unit with respect to the processing order. This single unit is called a Unit-of-Order and requires that all messages from that unit be processed sequentially in the order they were created. This means that when we lose the server that was processing a message in the sequence, the remaining messages cannot be processed until that server is back up again. To overcome this problem, we need to be able to migrate crucial service components to another server. To set up service migration for the JMS server and persistent store, we can use:

     

    def configure_migration_service():
        print 'CONFIGURING MIGRATION SERVICE';
        
        print '- determining candidate machines and servers.';
        cluster_bean_path = getPath('com.bea:Name=' + application_cluster_name + ',Type=Cluster');
        cluster = getMBean(cluster_bean_path);
        servers = cluster.getServers();
        candidate_servers = [];
        candidate_machines = [];
        for server in servers:
            server_object_name = ObjectName(repr(server.getObjectName()));
            candidate_servers.append(server_object_name);
            machine = server.getMachine();
            machine_object_name = ObjectName(repr(machine.getObjectName()));
            if machine_object_name not in candidate_machines:
                candidate_machines.append(machine_object_name);
    
        print '- configuring migration service on cluster ' + application_cluster_name;
        cluster.setMigrationBasis('consensus');
        cluster.setAdditionalAutoMigrationAttempts(3);
        cluster.setMillisToSleepBetweenAutoMigrationAttempts(180000);
        cluster.getDatabaseLessLeasingBasis().setMemberDiscoveryTimeout(30);
        cluster.getDatabaseLessLeasingBasis().setLeaderHeartbeatPeriod(10);
        cd('/Clusters/' + application_cluster_name);
        set('CandidateMachinesForMigratableServers',jarray.array(candidate_machines, ObjectName));
        cd('/');
    
        print '- configuring migratable targets';
        migratable_target = cmo.getMigratableTargets()[0];
        if jms_migration_exactly_once == 'yes':
            migratable_target.setMigrationPolicy('exactly-once');
        else:
            migratable_target.setMigrationPolicy('failure-recovery');
        cd('/MigratableTargets/' + migratable_target.getName());
        set('ConstrainedCandidateServers',jarray.array(candidate_servers, ObjectName));
        cd('/');
        
    def create_messaging_resources(targets):
        print 'CREATING MESSAGING RESOURCES';
        
        migratable_target = cmo.getMigratableTargets()[0];    
        print '- creating file store';
        file_store_name = application_cluster_name + '_filestore';
        if cmo.lookupFileStore(file_store_name) is None:
            file_store = cmo.createFileStore(file_store_name);
            file_store.setDirectory(domain_application_home);
            file_store_targets = file_store.getTargets();
            file_store_targets.append(migratable_target);
            file_store.setTargets(file_store_targets);
        else:
            print ' - file store ' + file_store_name + ' already exists and will not be created.';
        
        print '- creating jms server';
        jms_server_name = application_cluster_name + '_jmsserver';
        if cmo.lookupJMSServer(jms_server_name) is None:
            jms_server = cmo.createJMSServer(jms_server_name);
            file_store = cmo.lookupFileStore(file_store_name);
            jms_server.setPersistentStore(file_store);
            jms_server_targets = jms_server.getTargets();
            jms_server_targets.append(migratable_target);
            jms_server.setTargets(jms_server_targets);
        else:
            print ' - jms server ' + jms_server_name + ' already exists and will not be created.';
        ...

     

    Here, we set up migration using the ClusterMBean, and set up a migratable target that will be used for targeting the JMS server and corresponding persistent store. Of course, we can also use more than one migratable target, for example:

     

    def configure_migration_service():
        ...
        print '- configuring migratable targets';
        migratable_targets = cmo.getMigratableTargets();
        for migratable_target in migratable_targets:
            if jms_migration_exactly_once == 'yes':
                migratable_target.setMigrationPolicy('exactly-once');
            else:
                migratable_target.setMigrationPolicy('failure-recovery');
            cd('/MigratableTargets/' + migratable_target.getName());
            set('ConstrainedCandidateServers',jarray.array(candidate_servers, ObjectName));
            cd('/');
            
    def create_messaging_resources(targets):
        print 'CREATING MESSAGING RESOURCES';
        
        migratable_targets = cmo.getMigratableTargets();
        for i in range(len(migratable_targets)):
            file_store_name = application_cluster_name + '_filestore_' + repr(i);
            if cmo.lookupFileStore(file_store_name) is None:
                print '- creating file store ' + file_store_name;
                file_store = cmo.createFileStore(file_store_name);
                file_store.setDirectory(domain_application_home);
                file_store_targets = file_store.getTargets();
                file_store_targets.append(migratable_targets[i]);
                file_store.setTargets(file_store_targets);
            else:
                print ' - file store ' + file_store_name + ' already exists and will not be created.';
        
            jms_server_name = application_cluster_name + '_jmsserver_' + repr(i);
            if cmo.lookupJMSServer(jms_server_name) is None:
                print '- creating jms server ' + jms_server_name;
                jms_server = cmo.createJMSServer(jms_server_name);
                file_store = cmo.lookupFileStore(file_store_name);
                jms_server.setPersistentStore(file_store);
                jms_server_targets = jms_server.getTargets();
                jms_server_targets.append(migratable_targets[i]);
                jms_server.setTargets(jms_server_targets);
            else:
                print ' - jms server ' + jms_server_name + ' already exists and will not be created.';
        ...

     

    New Features

     

    Some notable new features are domain partitioning and configurable elasticity (and, of course, Java EE 7 compatibility).

     

    Domain partitioning

    To create domain partitions, we can use:

     

    def create_partition(targets):
        print 'CREATING DOMAIN PARTITION';
        
        print '- creating virtual targets';
        for i in range(len(virtual_target_names)):
            virtual_target_name = virtual_target_names[i];
            virtual_target_uri = virtual_target_uris[i];
            if cmo.lookupVirtualTarget(virtual_target_name) is None:
                print ' - creating virtual target ' + virtual_target_name + ' with uri ' + virtual_target_uri + '.';
                virtual_target = cmo.createVirtualTarget(virtual_target_name);
                virtual_target.setUriPrefix(virtual_target_uri);
                virtual_target_targets = virtual_target.getTargets();
                virtual_target_targets.append(targets);
                virtual_target.setTargets(virtual_target_targets);
                
                web_server_log = virtual_target.getWebServer().getWebServerLog();
                web_server_log.setLoggingEnabled(java.lang.Boolean('false'));
                web_server_log.setRotationType('bySize');
                web_server_log.setFileMinSize(5000);
                web_server_log.setFileCount(10);
            else:
                print ' - virtual target ' + virtual_target_name + ' with uri ' + virtual_target_uri + ' already exists and will not be created.';
            
        print '- creating resource group template';
        if cmo.lookupResourceGroupTemplate(resource_template_name) is None:
            resource_group_template = cmo.createResourceGroupTemplate(resource_template_name);
        else:
            print ' - resource group template ' + resource_template_name + ' already exists and will not be created.';
        
        print '- creating partitions';
        for i in range(len(partition_names)):
            partition_name = partition_names[i];
            virtual_target_name = virtual_target_names[i];
            resource_group_name = resource_group_names[i];
            if cmo.lookupPartition(partition_name) is None:
                print ' - creating partition ' + partition_name + ': virtual target ' + virtual_target_name + ', resource group ' + resource_group_name + '.';
                partition = cmo.createPartition(partition_name);
                virtual_target = cmo.lookupVirtualTarget(virtual_target_name);
                available_targets = partition.getAvailableTargets();
                available_targets.append(virtual_target);
                partition.setAvailableTargets(available_targets);
                default_targets = partition.getDefaultTargets();
                default_targets.append(virtual_target);
                partition.setDefaultTargets(default_targets);
                
                if security_realm_name == 'default':
                    security_realm = cmo.getSecurityConfiguration().getDefaultRealm();
                    partition.setRealm(security_realm);
                else:
                    security_realm = cmo.getSecurityConfiguration().lookupRealm(security_realm_name);
                    partition.setRealm(security_realm);
                
                resource_group = partition.createResourceGroup(resource_group_name);
                resource_group_template = cmo.lookupResourceGroupTemplate(resource_template_name);
                resource_group.setResourceGroupTemplate(resource_group_template);
                
                system_file_system = partition.getSystemFileSystem();
                system_file_system.setRoot(domain_configuration_home + '/partitions/' + partition_name + '/system');
                system_file_system.setCreateOnDemand(java.lang.Boolean('true'));
                system_file_system.setPreserved(java.lang.Boolean('false'));
            else:
                print ' - partition ' + partition_name + ' already exists and will not be created.';

     

    A domain partition is an administrative and runtime slice of a WebLogic domain that is dedicated to running application instances and related resources for a tenant. The partition has a virtual target and a resource group (which we base on a resource group template). A virtual target represents a target for a resource group, both at the domain level and in a domain partition. It defines the access points to resources, such as one or more host names, a URI prefix, and the managed servers or cluster to which the virtual target is itself targeted. Virtual targets are similar to virtual hosts on WebLogic Server; like virtual hosts, virtual targets provide a separate HTTP server (web container) on each target. A resource group template is a named, domain-level collection of deployable resources intended to be used as a pattern by (usually) multiple resource groups. Each resource group that refers to a given template will have its own runtime copies of the resources defined in the template.

     

    Note that we can also couple a domain partition to a separate security realm. To create a new security realm, we can use:

     

    def create_security_realm():
        print 'CREATING SECURITY REALM';
        
        security_configuration = cmo.getSecurityConfiguration();    
        if security_configuration.lookupRealm(security_realm_name) is None:
            security_configuration.createRealm(security_realm_name);
            realm = security_configuration.lookupRealm(security_realm_name);
            realm.setDeployCredentialMappingIgnored(java.lang.Boolean('true'));
            realm.createAuthenticationProvider('DefaultAuthenticator', 'weblogic.security.providers.authentication.DefaultAuthenticator');
            authenticator = realm.lookupAuthenticationProvider('DefaultAuthenticator');
            authenticator.setControlFlag('SUFFICIENT');
            realm.createAuthenticationProvider('DefaultIdentityAsserter', 'weblogic.security.providers.authentication.DefaultIdentityAsserter');
            realm.createAuthorizer('XACMLAuthorizer', 'weblogic.security.providers.xacml.authorization.XACMLAuthorizer');
            realm.createAdjudicator('DefaultAdjudicator', 'weblogic.security.providers.authorization.DefaultAdjudicator');
            realm.createRoleMapper('XACMLRoleMapper', 'weblogic.security.providers.xacml.authorization.XACMLRoleMapper');
            realm.createCredentialMapper('DefaultCredentialMapper', 'weblogic.security.providers.credentials.DefaultCredentialMapper');
            realm.createCertPathProvider('WebLogicCertPathProvider', 'weblogic.security.providers.pk.WebLogicCertPathProvider');
            cert_path_provider = realm.lookupCertPathProvider('WebLogicCertPathProvider');
            realm.setCertPathBuilder(cert_path_provider);
            realm.createPasswordValidator('SystemPasswordValidator', 'com.bea.security.providers.authentication.passwordvalidator.SystemPasswordValidator');
            password_validator = realm.lookupPasswordValidator('SystemPasswordValidator');
            password_validator.setMinPasswordLength(8);
            password_validator.setMinNumericOrSpecialCharacters(1);
        else:
            print '- security realm ' + security_realm_name + ' already exists and will not be created.';

     

    We can also add resource management to a domain partition. Resource managers ensure fair allocation and reduce contention for shared resources (such as memory and CPU). For a shared resource we can define a trigger. A trigger defines a static constraint value for the allowed usage of a resource (open files, retained heap, or CPU time). When the consumption of that resource exceeds the constraint value, a specified action (notify, slow, or shutdown) is performed.
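    The trigger semantics can be illustrated with a small model (plain Python, purely illustrative; the resource names and constraint values below are assumptions and this is not the partition resource-management API):

```python
# Illustrative model of resource-manager triggers: each trigger is a static
# constraint on a resource plus the action taken when usage exceeds it.
def evaluate_triggers(usage, triggers):
    """Return the (resource, action) pairs fired for the current usage."""
    actions = []
    for resource, constraint, action in triggers:
        if usage.get(resource, 0) > constraint:
            actions.append((resource, action))
    return actions

# Hypothetical trigger set for one partition
triggers = [
    ('heap-retained-mb', 2048, 'notify'),
    ('heap-retained-mb', 3072, 'slow'),
    ('cpu-utilization',  90,   'shutdown'),
]
fired = evaluate_triggers({'heap-retained-mb': 2500, 'cpu-utilization': 40}, triggers)
# fires only the 'notify' trigger for retained heap
```

    Escalating triggers on the same resource (notify, then slow, then shutdown) let a partition degrade gracefully before being stopped.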

     

    Resource management can be enabled in the Java Virtual Machine by using -XX:+UnlockCommercialFeatures -XX:+FlightRecorder -XX:+ResourceManagement -XX:ResourceManagementSampleInterval=30000. -XX:+ResourceManagement enables the use of resource management during the runtime of the application. We can additionally add the -XX:ResourceManagementSampleInterval=<VALUE> parameter, which controls the sampling interval for resource management measurements (in milliseconds). More information about the Java Virtual Machine parameters can be found here.

     

    To couple resources (such as JDBC data sources and JMS administered objects) to the domain partition, we use the created resource group template, for example:

    def create_messaging_resources(resource_template):
        print 'CREATING MESSAGING RESOURCES';
        
        print '- creating file store';
        file_store_name = application_cluster_name + '_filestore';
        if resource_template.lookupFileStore(file_store_name) is None:
            file_store = resource_template.createFileStore(file_store_name);
            file_store.setDirectory(domain_application_home);
        else:
            print ' - file store ' + file_store_name + ' already exists and will not be created.';
        
        print '- creating jms server';
        jms_server_name = application_cluster_name + '_jmsserver';
        if resource_template.lookupJMSServer(jms_server_name) is None:
            jms_server = resource_template.createJMSServer(jms_server_name);
            file_store = resource_template.lookupFileStore(file_store_name);
            jms_server.setPersistentStore(file_store);
        else:
            print ' - jms server ' + jms_server_name + ' already exists and will not be created.';
    
        print '- creating jms system module';
        if resource_template.lookupJMSSystemResource(jms_system_resource_name) is None:
            module = resource_template.createJMSSystemResource(jms_system_resource_name);
            module.createSubDeployment(sub_deployment_name);
            sub_deployment_targets = [];
            jms_server = resource_template.lookupJMSServer(jms_server_name);
            sub_deployment_targets.append(ObjectName(repr(jms_server.getObjectName())));
            cd('/ResourceGroupTemplates/' + resource_template_name + '/JMSSystemResources/'+ jms_system_resource_name +'/SubDeployments/' + sub_deployment_name);
            set('Targets', jarray.array(sub_deployment_targets, ObjectName));
            cd('/');
        else:
            print ' - jms system module ' + jms_system_resource_name + ' already exists and will not be created.';
        
        module = resource_template.lookupJMSSystemResource(jms_system_resource_name);
        resource = module.getJMSResource();
        print '- creating connection factories';
        for i in range(len(connection_factory_names)):
            if resource.lookupConnectionFactory(connection_factory_names[i]) is None:
                resource.createConnectionFactory(connection_factory_names[i]);
                connection_factory = resource.lookupConnectionFactory(connection_factory_names[i]);
                connection_factory.setJNDIName(connection_factory_jndi_names[i]);
                connection_factory.setDefaultTargetingEnabled(java.lang.Boolean('true'));
                connection_factory.getTransactionParams().setTransactionTimeout(0);
                connection_factory.getTransactionParams().setXAConnectionFactoryEnabled(java.lang.Boolean('true'));
                connection_factory.getLoadBalancingParams().setLoadBalancingEnabled(java.lang.Boolean('true'));
                connection_factory.getLoadBalancingParams().setServerAffinityEnabled(java.lang.Boolean('false'));
                connection_factory.getSecurityParams().setAttachJMSXUserId(java.lang.Boolean('false'));
                connection_factory.getClientParams().setClientIdPolicy('Restricted');
                connection_factory.getClientParams().setSubscriptionSharingPolicy('Exclusive');
                connection_factory.getClientParams().setMessagesMaximum(10);
            else:
                print ' - connection factory ' + connection_factory_names[i] + ' already exists and will not be created.';
    
        print '- creating uniform distributed queues';
        for i in range(len(distributed_queue_names)):
            if resource.lookupUniformDistributedQueue(distributed_queue_names[i]) is None:
                resource.createUniformDistributedQueue(distributed_queue_names[i]);
                distributed_queue = resource.lookupUniformDistributedQueue(distributed_queue_names[i]);
                distributed_queue.setJNDIName(distributed_queue_jndi_names[i]);
                distributed_queue.setLoadBalancingPolicy('Round-Robin');
                distributed_queue.setSubDeploymentName(sub_deployment_name);
                distributed_queue.setForwardDelay(30);
                
                if resource.lookupUniformDistributedQueue(distributed_error_queue_names[i]) is None:
                    resource.createUniformDistributedQueue(distributed_error_queue_names[i]);
                    distributed_error_queue = resource.lookupUniformDistributedQueue(distributed_error_queue_names[i]);
                    distributed_error_queue.setJNDIName(distributed_error_queue_jndi_names[i]);
                    distributed_error_queue.setLoadBalancingPolicy('Round-Robin');
                    distributed_error_queue.setSubDeploymentName(sub_deployment_name);
                else:
                    print ' - uniform error distributed queue ' + distributed_error_queue_names[i] + ' already exists and will not be created.';
                
                distributed_error_queue = resource.lookupUniformDistributedQueue(distributed_error_queue_names[i]);
                distributed_queue.getDeliveryFailureParams().setRedeliveryLimit(2);
                distributed_queue.getDeliveryFailureParams().setExpirationPolicy('Redirect');
                distributed_queue.getDeliveryFailureParams().setErrorDestination(distributed_error_queue);
                distributed_queue.getDeliveryParamsOverrides().setRedeliveryDelay(120);
            else:
                print ' - uniform distributed queue ' + distributed_queue_names[i] + ' already exists and will not be created.';
    
    def create_data_source(resource_template):
        print 'CREATING DATA SOURCES';
    
        for i in range(len(data_source_names)):
            print ' - creating jdbc system resource';
            if resource_template.lookupJDBCSystemResource(data_source_names[i]) is None:
                data_source = resource_template.createJDBCSystemResource(data_source_names[i]);            
                jdbc_resource = data_source.getJDBCResource();
                jdbc_resource.setName(data_source_names[i]);
                data_source_params = jdbc_resource.getJDBCDataSourceParams();
                names = [data_source_jndi_names[i]];
                data_source_params.setJNDINames(names);
                if data_source_safe_transaction == 'yes':
                    data_source_params.setGlobalTransactionsProtocol('LoggingLastResource');
                else:
                    data_source_params.setGlobalTransactionsProtocol('EmulateTwoPhaseCommit');            
                driver_params = jdbc_resource.getJDBCDriverParams();
                driver_params.setUrl(data_source_url);
                driver_params.setDriverName(data_source_driver);
                driver_params.setPassword(data_source_passwords[i]);
                driver_properties = driver_params.getProperties();
                driver_properties.createProperty('user');
                user_property = driver_properties.lookupProperty('user');
                user_property.setValue(data_source_users[i]);
                connection_pool_params = jdbc_resource.getJDBCConnectionPoolParams();
                connection_pool_params.setTestTableName(data_source_test);
                connection_pool_params.setConnectionCreationRetryFrequencySeconds(30);
                connection_pool_params.setStatementCacheSize(0);
                if data_source_grid_link == 'yes':
                    oracle_params = jdbc_resource.getJDBCOracleParams();
                    oracle_params.setFanEnabled(java.lang.Boolean('true'));
                    oracle_params.setOnsNodeList(data_source_ons_node_list);
                    oracle_params.setOnsWalletFile('');
                    oracle_params.unSet('OnsWalletPassword');
                    oracle_params.unSet('OnsWalletPasswordEncrypted');
                    oracle_params.setActiveGridlink(java.lang.Boolean('true'));
            else:
                print ' - jdbc system resource ' + data_source_names[i] + ' already exists and will not be created.';
                
    def create_work_manager():
        print 'CREATING WORK MANAGERS';    
        partitions = cmo.getPartitions();
        for partition in partitions:
            self_tuning = partition.getSelfTuning();
            for i in range(len(data_source_names)):
                work_manager_name = partition.getName() + '_' + data_source_names[i] + '_work_manager';
                if self_tuning.lookupWorkManager(work_manager_name) is None:
                    print '- creating work manager ' + work_manager_name + '.';
                    work_manager = self_tuning.createWorkManager(work_manager_name);
                
                    max_threads_constraint_name = partition.getName() + '_' + data_source_names[i] + '_max_threads_constraint';
                    print '- creating max threads constraint ' + max_threads_constraint_name + '.';
                    max_threads_constraint = self_tuning.createMaxThreadsConstraint(max_threads_constraint_name);
                    # note that resource names are decorated with $partition_name
                    partition_data_source_name = data_source_names[i] + '$' + partition.getName();
                    max_threads_constraint.setConnectionPoolName(partition_data_source_name);
                    max_threads_constraint.unSet('Count');
                    #max_threads_constraint.setCount(15);
                    
                    capacity_name = partition.getName() + '_' + data_source_names[i] + '_capacity';
                    print '- creating capacity ' + capacity_name + '.';
                    capacity = self_tuning.createCapacity(capacity_name);
                    capacity.setCount(200);
            
                    print '- setting work manager constraints.';
                    work_manager.setMaxThreadsConstraint(max_threads_constraint);
                    work_manager.setCapacity(capacity);
                    work_manager.setIgnoreStuckThreads(java.lang.Boolean('false'));
                else:
                    print ' - work manager ' + work_manager_name + ' already exists and will not be created.';

     

    In this way, we create the necessary resources (such as JDBC data sources and JMS administrative objects) needed by the applications only once and couple them to a resource template, which, as the name suggests, is a template for the resource group within a partition. When deploying applications, we 'scope' the application to a particular resource group within a partition, for example, javaee7_resource_group in javaee7_partition or loadtest_resource_group in loadtest_partition. The targets (clusters and managed servers) are determined from the virtual target that is coupled to the partition. So we have a sort of triplet, consisting of a virtual target (targeted to clusters and managed servers), a resource group (which can be based on a template that contains the necessary resources), and a partition (the administrative object that brings it all together).

     

    Resources are coupled to a resource template in the same way as to the 'global' domain: for the domain we use an editable DomainMBean instance and call create<MBEAN_NAME>; when working with resource templates we use the ResourceGroupTemplateMBean and call create<MBEAN_NAME>.
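    The lookup-then-create pattern used throughout these scripts can be factored into a small helper. The sketch below is plain Python so it can run outside WLST; ensure_resource and FakeParent are our own illustrations, not part of the WLST API. In a real script the parent would be the DomainMBean or ResourceGroupTemplateMBean and the lookup/create callables would be, for example, parent.lookupCluster and parent.createCluster.

```python
# Idempotent create: call the factory method only when lookup returns None,
# mirroring the lookup<MBean>/create<MBean> pattern used throughout the scripts.
# ensure_resource is a hypothetical helper, not part of the WLST API.
def ensure_resource(name, lookup, create):
    existing = lookup(name)
    if existing is not None:
        # already exists; leave it untouched so the script stays idempotent
        return existing
    return create(name)

# A tiny stand-in for a parent MBean (DomainMBean or ResourceGroupTemplateMBean),
# used here only to demonstrate the pattern outside a running domain.
class FakeParent(object):
    def __init__(self):
        self.children = {}
    def lookupCluster(self, name):
        return self.children.get(name)
    def createCluster(self, name):
        self.children[name] = object()
        return self.children[name]

parent = FakeParent()
first = ensure_resource('application_cluster', parent.lookupCluster, parent.createCluster)
second = ensure_resource('application_cluster', parent.lookupCluster, parent.createCluster)
# the second call finds the existing cluster, so only one child is ever created
```

    Running the helper twice with the same name leaves the configuration unchanged, which is exactly the idempotency Ansible expects from a task.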

     

    Configurable elasticity

    By using dynamic clusters, we can add a form of elasticity to the WebLogic environment (as was done in the post Automatic Scaling and Deployment Plans, but now configured by using the diagnostic framework), for example:

     

    def create_wldf_system_resource():
        print 'CREATING WLDF SYSTEM RESOURCE';
        
        if cmo.lookupWLDFSystemResource(wldf_system_resource_name) is None:
            wldf_system_resource = cmo.createWLDFSystemResource(wldf_system_resource_name);
            wldf_system_resource.setDescription('a useful description');
            
            targets = [];
            cluster_bean_path = getPath('com.bea:Name=' + application_cluster_name + ',Type=Cluster');
            cluster = getMBean(cluster_bean_path);
            targets.append(ObjectName(repr(cluster.getObjectName())));
    
            cd('/WLDFSystemResources/'+ wldf_system_resource_name);
            set('Targets', jarray.array(targets, ObjectName));
            cd('/');
        else:
            print '- wldf system resource ' + wldf_system_resource_name + ' already exists and will not be created.';
        
    def create_scale_up_action():
        print 'CREATING SCALE UP ACTION';
        wldf_system_resource_bean_path = getPath('com.bea:Name=' + wldf_system_resource_name + ',Type=WLDFSystemResource');
        wldf_system_resource = getMBean(wldf_system_resource_bean_path);
        wldf_resource = wldf_system_resource.getWLDFResource();
        watch_notification = wldf_resource.getWatchNotification();
        
        if watch_notification.lookupScaleUpAction(scale_up_action_name) is None:
            print ('- creating scale up action');
            watch_notification.createScaleUpAction(scale_up_action_name);
            scale_up_action = watch_notification.lookupScaleUpAction(scale_up_action_name);
            scale_up_action.setEnabled(java.lang.Boolean('true'));
            scale_up_action.setClusterName(application_cluster_name);
            scale_up_action.setScalingSize(1);
            scale_up_action.setTimeout(0);
            
            print ('- creating scale up policy');
            watch_notification.createWatch(scale_up_policy_name);
            scale_up_policy = watch_notification.lookupWatch(scale_up_policy_name);
            scale_up_policy.setEnabled(java.lang.Boolean('true'));
            scale_up_policy.setExpressionLanguage('EL');
            scale_up_policy.setRuleType('Harvester');
            instance_name_pattern = get_instance_name_patterns()[0];
            scale_up_rule_expression = smart_rule_name + '("' + application_cluster_name + '","' + instance_name_pattern + '","' + attribute_expression + '","' + scale_up_comparison_operator + '","' + scale_up_threshold + '","' +  percentage_of_servers + '","' + sampling_rate + '","' + sample_retention_period + '")';
            scale_up_policy.setRuleExpression(scale_up_rule_expression);
            scale_up_policy.setAlarmType('AutomaticReset');
            scale_up_policy.setAlarmResetPeriod(120000);    
            scale_up_notifications = scale_up_policy.getNotifications();
            scale_up_notifications.append(scale_up_action);
            scale_up_policy.setNotifications(scale_up_notifications);
            scale_up_policy.getSchedule().setMinute('*');
            scale_up_policy.getSchedule().setSecond('*/' + policy_schedule_seconds);
        else:
            print '- scale up action ' + scale_up_action_name + ' already exists and will not be created.';
            
    def create_scale_down_action():
        print 'CREATING SCALE DOWN ACTION';
        wldf_system_resource_bean_path = getPath('com.bea:Name=' + wldf_system_resource_name + ',Type=WLDFSystemResource');
        wldf_system_resource = getMBean(wldf_system_resource_bean_path);
        wldf_resource = wldf_system_resource.getWLDFResource();
        watch_notification = wldf_resource.getWatchNotification();
        
        if watch_notification.lookupScaleDownAction(scale_down_action_name) is None:
            print ('- creating scale down action');
            watch_notification.createScaleDownAction(scale_down_action_name);
            scale_down_action = watch_notification.lookupScaleDownAction(scale_down_action_name);
            scale_down_action.setEnabled(java.lang.Boolean('true'));
            scale_down_action.setClusterName(application_cluster_name);
            scale_down_action.setScalingSize(1);
            scale_down_action.setTimeout(0);    
            
            print ('- creating scale down policy');
            watch_notification.createWatch(scale_down_policy_name);
            scale_down_policy = watch_notification.lookupWatch(scale_down_policy_name);
            scale_down_policy.setEnabled(java.lang.Boolean('true'));
            scale_down_policy.setExpressionLanguage('EL');
            scale_down_policy.setRuleType('Harvester');
            instance_name_pattern = get_instance_name_patterns()[0];
            scale_down_rule_expression = smart_rule_name + '("' + application_cluster_name + '","' + instance_name_pattern + '","' + attribute_expression + '","' + scale_down_comparison_operator + '","' + scale_down_threshold + '","' +  percentage_of_servers + '","' + sampling_rate + '","' + sample_retention_period + '")';
            scale_down_policy.setRuleExpression(scale_down_rule_expression);
            scale_down_policy.setAlarmType('AutomaticReset');
            scale_down_policy.setAlarmResetPeriod(120000);    
            scale_down_notifications = scale_down_policy.getNotifications();
            scale_down_notifications.append(scale_down_action);
            scale_down_policy.setNotifications(scale_down_notifications);
            scale_down_policy.getSchedule().setMinute('*');
            scale_down_policy.getSchedule().setSecond('*/' + policy_schedule_seconds);
        else:
            print '- scale down action ' + scale_down_action_name + ' already exists and will not be created.';

     

    Here, a managed server is added to the dynamic cluster (and started) when a certain threshold is reached. The scaling rule expressions are created using the following parameters:

     

    smart_rule_name='wls:ClusterGenericMetricRule';
    instance_name_pattern='com.bea:ServerRuntime=server_1,Name=server_1_/SessionTest,Type=WebAppComponentRuntime,ApplicationRuntime=SessionTest';
    attribute_expression='OpenSessionsCurrentCount';
    scale_up_comparison_operator='>';
    scale_down_comparison_operator='<';
    scale_up_threshold='1000';
    scale_down_threshold='10';
    percentage_of_servers='100';
    sampling_rate='15 seconds';
    sample_retention_period='300 seconds';
    policy_schedule_seconds='15';
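    To see how these parameters end up in the policy, the snippet below assembles the scale-up rule expression the same way create_scale_up_action does. It is plain string concatenation and can be run outside WLST; the resulting string has the same shape as the WatchRule shown in the log excerpt further down (modulo the server name in the instance pattern).

```python
smart_rule_name = 'wls:ClusterGenericMetricRule'
application_cluster_name = 'application_cluster'
instance_name_pattern = 'com.bea:ServerRuntime=server_1,Name=server_1_/SessionTest,Type=WebAppComponentRuntime,ApplicationRuntime=SessionTest'
attribute_expression = 'OpenSessionsCurrentCount'
scale_up_comparison_operator = '>'
scale_up_threshold = '1000'
percentage_of_servers = '100'
sampling_rate = '15 seconds'
sample_retention_period = '300 seconds'

# Assemble the smart-rule expression as create_scale_up_action does:
# rule_name("cluster","instance pattern","attribute","operator","threshold",
#           "percentage of servers","sampling rate","retention period")
arguments = [application_cluster_name, instance_name_pattern, attribute_expression,
             scale_up_comparison_operator, scale_up_threshold, percentage_of_servers,
             sampling_rate, sample_retention_period]
scale_up_rule_expression = smart_rule_name + '(' + ','.join('"' + a + '"' for a in arguments) + ')'
```

    Every argument of the smart rule is passed as a quoted string, including the numeric threshold and the time periods.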

     

    The threshold in this case is based on the OpenSessionsCurrentCount attribute. The difficult part is determining the instance_name_pattern (note that this is the ObjectName of a particular runtime MBean). We can construct the ObjectName automatically by using:

     

    def get_instance_name_patterns():
        instance_name_patterns = [];    
        server_runtimes = domainRuntimeService.getServerRuntimes();
        for server_runtime in server_runtimes:
            application_runtimes = server_runtime.getApplicationRuntimes();    
            for application_runtime in application_runtimes:
                if application_runtime.getName() == application_name:
                    component_runtimes = application_runtime.getComponentRuntimes();
                    for component_runtime in component_runtimes:
                        # we are interested in the web component of the application
                        if 'WebAppComponentRuntime' in str(component_runtime.getObjectName()):                    
                            instance_name_pattern_array = [];
                            object_name_parts = str(component_runtime.getObjectName()).split(',');
                            # lose the location part of the objectname as the harvester is in the ServerRuntime namespace (location is only applicable to the DomainRuntime)                    
                            for object_name_part in object_name_parts:
                                if 'Location' not in object_name_part:
                                    instance_name_pattern_array.append(object_name_part);
                            instance_name_pattern = ','.join(instance_name_pattern_array);
                            instance_name_patterns.append(instance_name_pattern);
        return instance_name_patterns;

     

    This obtains the ObjectNames for the web application defined by the application_name variable.
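    The Location-stripping step can be illustrated outside a running domain. The sample ObjectName below is taken from the WatchData in the log excerpt further down; the snippet applies the same split-and-filter logic as get_instance_name_patterns:

```python
# ObjectName as reported in the DomainRuntime namespace (contains a Location key)
object_name = ('com.bea:ApplicationRuntime=SessionTest,Location=server_2,'
               'Name=server_2_/SessionTest,ServerRuntime=server_2,'
               'Type=WebAppComponentRuntime')

# Drop the Location key, as get_instance_name_patterns does: the harvester works
# in the ServerRuntime namespace, where Location is not part of the name.
parts = [part for part in object_name.split(',') if 'Location' not in part]
instance_name_pattern = ','.join(parts)
```

    The resulting pattern contains only the keys that are valid in the ServerRuntime namespace.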

     

    To put limits (minimum and maximum) on the number of servers in the dynamic cluster, we set the MaxDynamicClusterSize and MinDynamicClusterSize attributes on the dynamic cluster, for example:

     

    print '- adding server template to the cluster ' + application_cluster_name;
    cluster.getDynamicServers().setServerTemplate(server_template);
    dynamic_server_count = len(machine_listen_addresses) * number_of_managed_servers_per_machine;
    max_dynamic_server_count = len(machine_listen_addresses) * maximum_number_of_managed_servers_per_machine;
    cluster.getDynamicServers().setDynamicClusterSize(dynamic_server_count);
    cluster.getDynamicServers().setMinDynamicClusterSize(dynamic_server_count);
    cluster.getDynamicServers().setMaxDynamicClusterSize(max_dynamic_server_count);
    cluster.getDynamicServers().setDynamicClusterCooloffPeriodSeconds(30);
    cluster.getDynamicServers().setMachineNameMatchExpression('machine*');
    cluster.getDynamicServers().setServerNamePrefix('server_');
    cluster.getDynamicServers().setCalculatedListenPorts(java.lang.Boolean('true'));
    cluster.getDynamicServers().setCalculatedMachineNames(java.lang.Boolean('true'));

     

    Note that DynamicClusterSize represents the current number of servers in the dynamic cluster (this is also the attribute that is changed by the scale up and scale down actions). We also set a cool-off period for the elastic scaling operations; during this period the DynamicClusterSize attribute cannot be changed.
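    The effect of these limits on the minimum side can be sketched in a few lines of plain Python (clamp_cluster_size is our own illustration, not a WLST call): a scale action that would take DynamicClusterSize outside the [MinDynamicClusterSize, MaxDynamicClusterSize] range does not shrink or grow the cluster, as in the final log excerpt below where a scale down triggered at the minimum fails.

```python
# Hypothetical illustration (not a WLST API) of how scale actions are bounded
# by MinDynamicClusterSize and MaxDynamicClusterSize.
def clamp_cluster_size(current, delta, minimum, maximum):
    proposed = current + delta
    if proposed < minimum or proposed > maximum:
        return current  # scaling request refused; cluster size unchanged
    return proposed

# With 2 machines, 2 managed servers per machine and a maximum of 3 per machine:
dynamic_server_count = 2 * 2      # DynamicClusterSize and MinDynamicClusterSize
max_dynamic_server_count = 2 * 3  # MaxDynamicClusterSize
```

    A scale up from 4 to 5 is allowed, while a scale down at the minimum of 4, or a scale up at the maximum of 6, leaves the cluster size as it is.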

     

    By using The Grinder, we can put some load on the system, for example, with the following script to generate sessions:

     

    from net.grinder.script.Grinder import grinder
    from net.grinder.script import Test
    from net.grinder.plugin.http import HTTPRequest
    from HTTPClient import NVPair
     
    protectedResourceTest = Test(1, "Request resource")
    authenticationTest = Test(2, "POST to login.jsp")
    class TestRunner:
        def __call__(self):
            request = protectedResourceTest.wrap(HTTPRequest(url="http://machine2.com:8880/SessionTest/test"))
            result = request.GET()
            result = maybeAuthenticate(result)
            result = request.GET()
            grinder.logger.info(result.text)
     
    def maybeAuthenticate(lastResult):
        if lastResult.statusCode == 401 or lastResult.text.find("login.jsp") != -1:
            grinder.logger.info("Challenged, authenticating")
            authenticationFormData = (NVPair("j_username", "employee"), NVPair("j_password", "welcome1"))
            request = authenticationTest.wrap(HTTPRequest(url="%s/j_security_check" % lastResult.originalURI))
            return request.POST(authenticationFormData)
        # not challenged: hand the original response back to the caller
        return lastResult

     

    When the load test is run, the following is observed in the logging:

     

    # SCALE UP TRIGGERED
    ####<Oct 15, 2015 5:39:15 PM CEST> <Notice> <Diagnostics> <machine1.com> <AdminServer> <[ACTIVE] ExecuteThread: '5' for queue: 'weblogic.kernel.Default (self-tuning)'> <weblogic> <> <694f2778-9444-4e04-9109-1d561215b3f4-00000038> <1444923555842> <[severity-value: 32] [rid: 0] [partition-id: 0] [partition-name: DOMAIN] > <BEA-320068> <Watch "session_test_scale_up_policy" in module "session_test_module" with severity "Notice" on server "AdminServer" has triggered at Oct 15, 2015 5:39:15 PM CEST. Notification details: 
    WatchRuleType: Harvester 
    WatchRule: wls:ClusterGenericMetricRule("application_cluster","com.bea:ServerRuntime=server_2,Name=server_2_/SessionTest,Type=WebAppComponentRuntime,ApplicationRuntime=SessionTest","OpenSessionsCurrentCount",">","1000","100","15 seconds","300 seconds") 
    WatchData: com.bea:ApplicationRuntime=SessionTest,Location=server_2,Name=server_2_/SessionTest,ServerRuntime=server_2,Type=WebAppComponentRuntime//OpenSessionsCurrentCount = [867, 997, 916, 1097, 988, 1120, 1068, 1239, 1110, 1287, 1165, 1409, 1269, 1464, 1361, 1726, 1602, 1909, 1753, 2115]  
    WatchAlarmType: AutomaticReset 
    WatchAlarmResetPeriod: 120000 
    > 
    ####<Oct 15, 2015 5:39:15 PM CEST> <Info> <DiagnosticsWatch> <machine1.com> <AdminServer> <[ACTIVE] ExecuteThread: '5' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <694f2778-9444-4e04-9109-1d561215b3f4-00000039> <1444923555850> <[severity-value: 64] [rid: 0] [partition-id: 0] [partition-name: DOMAIN] > <BEA-320204> <Executing action session_test_scale_up with timeout period 0 seconds.> 
    ####<Oct 15, 2015 5:39:15 PM CEST> <Info> <DiagnosticsWatch> <machine1.com> <AdminServer> <[ACTIVE] ExecuteThread: '5' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <694f2778-9444-4e04-9109-1d561215b3f4-00000039> <1444923555851> <[severity-value: 64] [rid: 0] [partition-id: 0] [partition-name: DOMAIN] > <BEA-320220> <Scale up action session_test_scale_up invoked for cluster application_cluster with scaling size 1 and timeout period of 0 seconds.> 
    ####<Oct 15, 2015 5:39:19 PM CEST> <Info> <Health> <machine1.com> <AdminServer> <weblogic.GCMonitor> <<anonymous>> <> <694f2778-9444-4e04-9109-1d561215b3f4-00000006> <1444923559447> <[severity-value: 64] [rid: 0:1] [partition-id: 0] [partition-name: DOMAIN] > <BEA-310002> <45% of the total memory in the server is free.> 
    ####<Oct 15, 2015 5:39:20 PM CEST> <Info> <Elasticity> <machine1.com> <AdminServer> <[ACTIVE] ExecuteThread: '9' for queue: 'weblogic.kernel.Default (self-tuning)'> <LCMUser> <> <694f2778-9444-4e04-9109-1d561215b3f4-0000003a> <1444923560045> <[severity-value: 64] [rid: 0] [partition-id: 0] [partition-name: DOMAIN] > <BEA-2162747> <wf0078: Scale-up operation on cluster application_cluster started.> 
    ####<Oct 15, 2015 5:39:20 PM CEST> <Info> <Elasticity> <machine1.com> <AdminServer> <[ACTIVE] ExecuteThread: '9' for queue: 'weblogic.kernel.Default (self-tuning)'> <LCMUser> <> <694f2778-9444-4e04-9109-1d561215b3f4-0000003a> <1444923560127> <[severity-value: 64] [rid: 0] [partition-id: 0] [partition-name: DOMAIN] > <BEA-2162703> <wf0078: Cluster application_cluster will be expanded by 1 servers.> 
    ####<Oct 15, 2015 5:39:20 PM CEST> <Info> <Elasticity> <machine1.com> <AdminServer> <[ACTIVE] ExecuteThread: '9' for queue: 'weblogic.kernel.Default (self-tuning)'> <LCMUser> <> <694f2778-9444-4e04-9109-1d561215b3f4-0000003a> <1444923560129> <[severity-value: 64] [rid: 0] [partition-id: 0] [partition-name: DOMAIN] > <BEA-2162719> <Adjusting maximum servers count by requested amount 1 for cluster application_cluster.> 
    ####<Oct 15, 2015 5:39:20 PM CEST> <Info> <Elasticity> <machine1.com> <AdminServer> <[ACTIVE] ExecuteThread: '9' for queue: 'weblogic.kernel.Default (self-tuning)'> <LCMUser> <> <694f2778-9444-4e04-9109-1d561215b3f4-0000003a> <1444923560129> <[severity-value: 64] [rid: 0] [partition-id: 0] [partition-name: DOMAIN] > <BEA-2162714> <Updating max servers count for application_cluster by 1 to 6.> 
    ####<Oct 15, 2015 5:39:20 PM CEST> <Info> <JMX> <machine1.com> <AdminServer> <[ACTIVE] ExecuteThread: '9' for queue: 'weblogic.kernel.Default (self-tuning)'> <LCMUser> <> <694f2778-9444-4e04-9109-1d561215b3f4-0000003a> <1444923560136> <[severity-value: 64] [rid: 0] [partition-id: 0] [partition-name: DOMAIN] > <BEA-149512> <JMX Connector Server started at service:jmx:iiop://machine1.com:7001/jndi/weblogic.management.mbeanservers.editsession.DOMAIN._Elasticity_NamedEdit_application_cluster_4.> 
    ####<Oct 15, 2015 5:39:28 PM CEST> <Info> <JMX> <machine1.com> <AdminServer> <[ACTIVE] ExecuteThread: '9' for queue: 'weblogic.kernel.Default (self-tuning)'> <LCMUser> <> <694f2778-9444-4e04-9109-1d561215b3f4-0000003a> <1444923568499> <[severity-value: 64] [rid: 0] [partition-id: 0] [partition-name: DOMAIN] > <BEA-149513> <JMX Connector Server stopped at service:jmx:iiop://machine1.com:7001/jndi/weblogic.management.mbeanservers.editsession.DOMAIN._Elasticity_NamedEdit_application_cluster_4.> 
    ####<Oct 15, 2015 5:40:01 PM CEST> <Info> <Server> <machine1.com> <AdminServer> <[ACTIVE] ExecuteThread: '10' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <694f2778-9444-4e04-9109-1d561215b3f4-0000003e> <1444923601168> <[severity-value: 64] [rid: 0] [partition-id: 0] [partition-name: DOMAIN] > <BEA-002635> <The server "server_6" connected to this server.> 
    ####<Oct 15, 2015 5:40:01 PM CEST> <Info> <JMX> <machine1.com> <AdminServer> <[ACTIVE] ExecuteThread: '4' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <694f2778-9444-4e04-9109-1d561215b3f4-0000003f> <1444923601305> <[severity-value: 64] [rid: 0] [partition-id: 0] [partition-name: DOMAIN] > <BEA-149506> <Established JMX Connectivity with server_6 at the JMX Service URL of service:jmx:t3://192.168.101.110:9007/jndi/weblogic.management.mbeanservers.runtime.> 
    ####<Oct 15, 2015 5:40:02 PM CEST> <Info> <Elasticity> <machine1.com> <AdminServer> <[ACTIVE] ExecuteThread: '9' for queue: 'weblogic.kernel.Default (self-tuning)'> <LCMUser> <> <694f2778-9444-4e04-9109-1d561215b3f4-0000003a> <1444923602521> <[severity-value: 64] [rid: 0] [partition-id: 0] [partition-name: DOMAIN] > <BEA-2162710> <wf0078: Cluster application_cluster scale up, server server_6 successfully started.> 
    ####<Oct 15, 2015 5:40:02 PM CEST> <Info> <Elasticity> <machine1.com> <AdminServer> <[ACTIVE] ExecuteThread: '9' for queue: 'weblogic.kernel.Default (self-tuning)'> <LCMUser> <> <694f2778-9444-4e04-9109-1d561215b3f4-0000003a> <1444923602522> <[severity-value: 64] [rid: 0] [partition-id: 0] [partition-name: DOMAIN] > <BEA-2162711> <wf0078: Scale up work for cluster application_cluster completed, started 1 new instances.> 
    ####<Oct 15, 2015 5:40:02 PM CEST> <Info> <DiagnosticsWatch> <machine1.com> <AdminServer> <[ACTIVE] ExecuteThread: '5' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <694f2778-9444-4e04-9109-1d561215b3f4-00000039> <1444923602835> <[severity-value: 64] [rid: 0] [partition-id: 0] [partition-name: DOMAIN] > <BEA-320221> <Scale up task session_test_scale_up for cluster application_cluster complete, status: SUCCESS> 
    
    # STOP LOAD TEST
    ...
    # SCALE DOWN TRIGGERED
    ####<Oct 15, 2015 5:58:15 PM CEST> <Notice> <Diagnostics> <machine1.com> <AdminServer> <[ACTIVE] ExecuteThread: '6' for queue: 'weblogic.kernel.Default (self-tuning)'> <weblogic> <> <694f2778-9444-4e04-9109-1d561215b3f4-00000063> <1444924695407> <[severity-value: 32] [rid: 0] [partition-id: 0] [partition-name: DOMAIN] > <BEA-320068> <Watch "session_test_scale_down_policy" in module "session_test_module" with severity "Notice" on server "AdminServer" has triggered at Oct 15, 2015 5:58:15 PM CEST. Notification details: 
    WatchRuleType: Harvester 
    WatchRule: wls:ClusterGenericMetricRule("application_cluster","com.bea:ServerRuntime=server_2,Name=server_2_/SessionTest,Type=WebAppComponentRuntime,ApplicationRuntime=SessionTest","OpenSessionsCurrentCount","<","10","100","15 seconds","300 seconds") 
    WatchData: com.bea:ApplicationRuntime=SessionTest,Location=server_2,Name=server_2_/SessionTest,ServerRuntime=server_2,Type=WebAppComponentRuntime//OpenSessionsCurrentCount = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]  
    WatchAlarmType: AutomaticReset 
    WatchAlarmResetPeriod: 120000 
    > 
    ####<Oct 15, 2015 5:58:15 PM CEST> <Info> <DiagnosticsWatch> <machine1.com> <AdminServer> <[ACTIVE] ExecuteThread: '6' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <694f2778-9444-4e04-9109-1d561215b3f4-00000064> <1444924695408> <[severity-value: 64] [rid: 0] [partition-id: 0] [partition-name: DOMAIN] > <BEA-320204> <Executing action session_test_scale_down with timeout period 0 seconds.> 
    ####<Oct 15, 2015 5:58:15 PM CEST> <Info> <DiagnosticsWatch> <machine1.com> <AdminServer> <[ACTIVE] ExecuteThread: '6' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <694f2778-9444-4e04-9109-1d561215b3f4-00000064> <1444924695408> <[severity-value: 64] [rid: 0] [partition-id: 0] [partition-name: DOMAIN] > <BEA-320215> <Scale down action session_test_scale_down invoked for cluster application_cluster with scaling size 1 and timeout period of 0 seconds.> 
    ####<Oct 15, 2015 5:58:19 PM CEST> <Info> <Health> <machine1.com> <AdminServer> <weblogic.GCMonitor> <<anonymous>> <> <694f2778-9444-4e04-9109-1d561215b3f4-00000006> <1444924699491> <[severity-value: 64] [rid: 0:1] [partition-id: 0] [partition-name: DOMAIN] > <BEA-310002> <43% of the total memory in the server is free.> 
    ####<Oct 15, 2015 5:58:20 PM CEST> <Info> <Elasticity> <machine1.com> <AdminServer> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <LCMUser> <> <694f2778-9444-4e04-9109-1d561215b3f4-00000066> <1444924700446> <[severity-value: 64] [rid: 0] [partition-id: 0] [partition-name: DOMAIN] > <BEA-2162748> <wf0082: Scale-down operation on cluster application_cluster started.> 
    ####<Oct 15, 2015 5:58:20 PM CEST> <Info> <Elasticity> <machine1.com> <AdminServer> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <LCMUser> <> <694f2778-9444-4e04-9109-1d561215b3f4-00000066> <1444924700510> <[severity-value: 64] [rid: 0] [partition-id: 0] [partition-name: DOMAIN] > <BEA-2162705> <wf0082: Cluster application_cluster will be scaled down by 1 servers.> 
    ####<Oct 15, 2015 5:58:20 PM CEST> <Warning> <LifeCycle> <machine1.com> <AdminServer> <[STANDBY] ExecuteThread: '1' for queue: 'weblogic.kernel.Default (self-tuning)'> <LCMUser> <> <694f2778-9444-4e04-9109-1d561215b3f4-00000066> <1444924700532> <[severity-value: 16] [rid: 0:25] [partition-id: 0] [partition-name: DOMAIN] > <BEA-000000> <No OTD registered with Lifecycle module> 
    ####<Oct 15, 2015 5:58:28 PM CEST> <Info> <JMX> <machine1.com> <AdminServer> <[ACTIVE] ExecuteThread: '7' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <694f2778-9444-4e04-9109-1d561215b3f4-00000067> <1444924708000> <[severity-value: 64] [rid: 0] [partition-id: 0] [partition-name: DOMAIN] > <BEA-149507> <JMX Connectivity has been discontinued with the Managed Server server_3.> 
    ####<Oct 15, 2015 5:58:28 PM CEST> <Info> <Server> <machine1.com> <AdminServer> <[ACTIVE] ExecuteThread: '7' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <694f2778-9444-4e04-9109-1d561215b3f4-00000067> <1444924708001> <[severity-value: 64] [rid: 0] [partition-id: 0] [partition-name: DOMAIN] > <BEA-002634> <The server "server_3" disconnected from this server.> 
    ####<Oct 15, 2015 5:58:28 PM CEST> <Info> <Elasticity> <machine1.com> <AdminServer> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <LCMUser> <> <694f2778-9444-4e04-9109-1d561215b3f4-00000066> <1444924708538> <[severity-value: 64] [rid: 0] [partition-id: 0] [partition-name: DOMAIN] > <BEA-2162717> <wf0082: Scale down of server application_cluster completed successfully for cluster server_3.> 
    ####<Oct 15, 2015 5:58:28 PM CEST> <Info> <Elasticity> <machine1.com> <AdminServer> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <LCMUser> <> <694f2778-9444-4e04-9109-1d561215b3f4-00000066> <1444924708539> <[severity-value: 64] [rid: 0] [partition-id: 0] [partition-name: DOMAIN] > <BEA-2162721> <wf0082: Scale down work for cluster application_cluster completed, stopped 1 instances.> 
    ####<Oct 15, 2015 5:58:28 PM CEST> <Info> <DiagnosticsWatch> <machine1.com> <AdminServer> <[ACTIVE] ExecuteThread: '6' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <694f2778-9444-4e04-9109-1d561215b3f4-00000064> <1444924708955> <[severity-value: 64] [rid: 0] [partition-id: 0] [partition-name: DOMAIN] > <BEA-320216> <Scale down task session_test_scale_down for cluster application_cluster complete, status: SUCCESS> 
    
    # SCALE DOWN TRIGGERED, FAILS AS THE MINIMUM OF SERVERS IS REACHED IN THE CLUSTER
    ####<Oct 15, 2015 6:00:15 PM CEST> <Notice> <Diagnostics> <machine1.com> <AdminServer> <[ACTIVE] ExecuteThread: '6' for queue: 'weblogic.kernel.Default (self-tuning)'> <weblogic> <> <694f2778-9444-4e04-9109-1d561215b3f4-0000006f> <1444924815430> <[severity-value: 32] [rid: 0] [partition-id: 0] [partition-name: DOMAIN] > <BEA-320068> <Watch "session_test_scale_down_policy" in module "session_test_module" with severity "Notice" on server "AdminServer" has triggered at Oct 15, 2015 6:00:15 PM CEST. Notification details: 
    WatchRuleType: Harvester 
    WatchRule: wls:ClusterGenericMetricRule("application_cluster","com.bea:ServerRuntime=server_2,Name=server_2_/SessionTest,Type=WebAppComponentRuntime,ApplicationRuntime=SessionTest","OpenSessionsCurrentCount","<","10","100","15 seconds","300 seconds") 
    WatchData: com.bea:ApplicationRuntime=SessionTest,Location=server_2,Name=server_2_/SessionTest,ServerRuntime=server_2,Type=WebAppComponentRuntime//OpenSessionsCurrentCount = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]  
    WatchAlarmType: AutomaticReset 
    WatchAlarmResetPeriod: 120000 
    > 
    ####<Oct 15, 2015 6:00:15 PM CEST> <Info> <DiagnosticsWatch> <machine1.com> <AdminServer> <[ACTIVE] ExecuteThread: '6' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <694f2778-9444-4e04-9109-1d561215b3f4-00000070> <1444924815431> <[severity-value: 64] [rid: 0] [partition-id: 0] [partition-name: DOMAIN] > <BEA-320204> <Executing action session_test_scale_down with timeout period 0 seconds.> 
    ####<Oct 15, 2015 6:00:15 PM CEST> <Info> <DiagnosticsWatch> <machine1.com> <AdminServer> <[ACTIVE] ExecuteThread: '6' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <694f2778-9444-4e04-9109-1d561215b3f4-00000070> <1444924815431> <[severity-value: 64] [rid: 0] [partition-id: 0] [partition-name: DOMAIN] > <BEA-320215> <Scale down action session_test_scale_down invoked for cluster application_cluster with scaling size 1 and timeout period of 0 seconds.> 
    ####<Oct 15, 2015 6:00:20 PM CEST> <Info> <Elasticity> <machine1.com> <AdminServer> <[ACTIVE] ExecuteThread: '7' for queue: 'weblogic.kernel.Default (self-tuning)'> <LCMUser> <> <694f2778-9444-4e04-9109-1d561215b3f4-00000072> <1444924820468> <[severity-value: 64] [rid: 0] [partition-id: 0] [partition-name: DOMAIN] > <BEA-2162748> <wf0083: Scale-down operation on cluster application_cluster started.> 
    ####<Oct 15, 2015 6:00:20 PM CEST> <Warning> <Elasticity> <machine1.com> <AdminServer> <[ACTIVE] ExecuteThread: '7' for queue: 'weblogic.kernel.Default (self-tuning)'> <LCMUser> <> <694f2778-9444-4e04-9109-1d561215b3f4-00000072> <1444924820576> <[severity-value: 16] [rid: 0] [partition-id: 0] [partition-name: DOMAIN] > <BEA-2162704> <wf0083: Cluster application_cluster contains only 2 running Managed Servers which is less or equal to the minimum number of servers (2). The cluster will not be scaled down.> 
    ####<Oct 15, 2015 6:00:20 PM CEST> <Error> <MgmtOrchestration> <machine1.com> <AdminServer> <[ACTIVE] ExecuteThread: '7' for queue: 'weblogic.kernel.Default (self-tuning)'> <LCMUser> <> <694f2778-9444-4e04-9109-1d561215b3f4-00000072> <1444924820578> <[severity-value: 8] [rid: 0] [partition-id: 0] [partition-name: DOMAIN] > <BEA-2192046> <Workflow wf0083 failed in step <[ElasticScalingWorkflow_application_cluster_8] wf0083-0 (InstanceChooser)> during execute operation. Requested resolution is FAIL. Reason: wf0083: Cluster application_cluster contains only 2 running Managed Servers which is less or equal to the minimum number of servers (2). The cluster will not be scaled down.> 
    ####<Oct 15, 2015 6:00:20 PM CEST> <Info> <DiagnosticsWatch> <machine1.com> <AdminServer> <[ACTIVE] ExecuteThread: '6' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <694f2778-9444-4e04-9109-1d561215b3f4-00000070> <1444924820737> <[severity-value: 64] [rid: 0] [partition-id: 0] [partition-name: DOMAIN] > <BEA-320216> <Scale down task session_test_scale_down for cluster application_cluster complete, status: FAIL>
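
    The failed run above shows the guard that elasticity applies before stopping instances: with only two running Managed Servers and a configured minimum of two, the scale-down request is rejected. The check can be sketched in plain Python (the function name and signature are illustrative, not a WebLogic API):

```python
def can_scale_down(running_servers, minimum_servers, scaling_size=1):
    # A scale-down of `scaling_size` instances is only allowed when the
    # cluster stays at or above its configured minimum afterwards; this
    # mirrors the condition reported in messages BEA-2162704/BEA-2192046.
    return running_servers - scaling_size >= minimum_servers

# With 3 running servers and a minimum of 2, stopping one is allowed...
print(can_scale_down(3, 2))   # True
# ...but at the minimum of 2 the request is rejected, as in the log above.
print(can_scale_down(2, 2))   # False
```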

     

    By using the Admin Console, we get insight into the number of sessions and how they are distributed over the servers.

     

    [Image: scale_up.png]

     

    When the load test is stopped, the number of servers is decreased (also see the log snippets presented above).

     

    [Image: scale_down.png]

     

    Combining the WLST Scripts

     

    Based on what we need for the environment, we can combine the scripts presented above. For example, to create a runtime environment that contains a cluster that spans multiple machines and has a data source and JMS resources, we would have:

     

    connect_to_admin_server();
    
    start_edit_mode();
    
    create_dynamic_cluster();
    create_machines();
    
    save_and_active_changes();
    
    start_edit_mode();
    
    target_bean_path = getPath('com.bea:Name=' + application_cluster_name + ',Type=Cluster');
    target = getMBean(target_bean_path);
    create_messaging_resources(target);
    create_data_source(target);
    create_work_manager(target);
    
    save_and_active_changes();
    
    create_users_and_groups();
    
    print 'DISCONNECT FROM THE ADMIN SERVER';
    disconnect();
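
    Each of the helper functions in this script follows the idempotent lookup-before-create pattern from the introduction: a parent MBean exposes lookup<MBEAN_NAME> and create<MBEAN_NAME>, and we only create a child when the lookup returns nothing. A self-contained sketch of that pattern (the FakeDomain class is an illustrative stand-in for a real DomainMBean, not WebLogic code):

```python
def ensure_child(parent, kind, name):
    # Idempotent create: reuse the child when lookup<MBean> finds it,
    # otherwise call create<MBean> -- re-running the script is then a no-op.
    child = getattr(parent, 'lookup' + kind)(name)
    if child is None:
        child = getattr(parent, 'create' + kind)(name)
    return child

class FakeDomain:
    # Minimal stand-in for a DomainMBean, for illustration only.
    def __init__(self):
        self.clusters = {}
    def lookupCluster(self, name):
        return self.clusters.get(name)
    def createCluster(self, name):
        self.clusters[name] = object()
        return self.clusters[name]

domain = FakeDomain()
first = ensure_child(domain, 'Cluster', 'application_cluster')
second = ensure_child(domain, 'Cluster', 'application_cluster')
print(first is second)   # True: the second run reuses the existing cluster
```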

     

    When using a cluster that also includes migration (the cluster must contain configured servers when we want to use migratable targets on the cluster), we would have:

     

    connect_to_admin_server();
    
    start_edit_mode();
    
    create_cluster();
    create_machines();
    create_servers();
    configure_migration_service();
    
    save_and_active_changes();
    
    if isRestartRequired():
        restart_admin_server();
        connect_to_admin_server();
    
    start_edit_mode();
    
    target_bean_path = getPath('com.bea:Name=' + application_cluster_name + ',Type=Cluster');
    target = getMBean(target_bean_path);
    create_messaging_resources(target);
    create_data_source(target);
    
    save_and_active_changes();
    
    print 'DISCONNECT FROM THE ADMIN SERVER';
    disconnect();

     

    In this case, we also check whether an Administration Server restart is required by using isRestartRequired.
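
    The restart handling around isRestartRequired can be expressed as a small helper that takes the WLST actions as callables; the function and argument names here are hypothetical, chosen only to show the control flow:

```python
def activate_with_restart_check(is_restart_required, restart_admin_server,
                                connect_to_admin_server):
    # After activating changes, some non-dynamic attributes only take
    # effect after an Administration Server restart; when WLST reports
    # this, restart and reconnect before continuing with the next edit.
    if is_restart_required():
        restart_admin_server()
        connect_to_admin_server()
        return True
    return False

actions = []
restarted = activate_with_restart_check(
    lambda: True,
    lambda: actions.append('restart'),
    lambda: actions.append('connect'))
print(restarted, actions)   # True ['restart', 'connect']
```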

     

    To create a domain partition, we would have:

     

    connect_to_admin_server();
    
    start_edit_mode();
    
    create_dynamic_cluster();
    create_machines();
    
    save_and_active_changes();
    
    if security_realm_name != 'default':
        start_edit_mode();
        create_security_realm();
        save_and_active_changes();
    
    start_edit_mode();
    
    cluster_bean_path = getPath('com.bea:Name=' + application_cluster_name + ',Type=Cluster');
    cluster = getMBean(cluster_bean_path);
    create_partition(cluster);
    
    save_and_active_changes();
    
    if isRestartRequired():
        restart_admin_server();
        connect_to_admin_server();
    
    start_edit_mode();
    
    resource_template_bean_path = getPath('com.bea:Name=' + resource_template_name + ',Type=ResourceGroupTemplate');
    resource_template = getMBean(resource_template_bean_path);
    create_messaging_resources(resource_template);
    create_data_source(resource_template);
    
    save_and_active_changes();
    
    print 'DISCONNECT FROM THE ADMIN SERVER';
    disconnect();

     

    When this is the first domain partition that is created, we have to restart the Administration Server; the activation reports: The following non-dynamic attribute(s) have been changed on MBeans that require server re-start: MBean Changed : com.bea:Name=tryout_domain,Type=SecurityConfiguration Attributes changed : AdministrativeIdentityDomain.

     

    Ansible

     

    Based on the scripts presented above, we can create templates based on defined variables. To set up a WebLogic runtime environment, we use the following playbook structure:

     

    /etc/ansible
        /roles
            /wls_common
                /files
                /templates
                    /weblogic
                        create_deployment_environment_py.j2
                /vars
                    main.yml
            /wls_deploy_config
                /tasks
                    main.yml
        wls_admin_server.yml

     

    And the following inventory file:

     

    [admin_server]
    machine1.com
    
    [managed_servers]
    machine2.com
    
    [http_servers]
    machine2.com
    
    [database_server]
    controlnode.com

     

    The variables are set in the .../wls_common/vars/main.yml file, for example:

     

    ##### domain parameters #####
    application_cluster_name:                      "application_cluster"
    machine_listen_addresses:                      "machine1.com,machine2.com"
    machine_user_id:                               "{{ oracle_install_user }}"
    machine_group_id:                              "{{ oracle_install_group }}"
    number_of_managed_servers_per_machine:         "2"
    maximum_number_of_managed_servers_per_machine: "3"
    managed_server_listen_port_start:              "9001"
    
    jms_system_resource_name:                      "jms_system_resource_module"
    sub_deployment_name:                           "{{ jms_system_resource_name }}_sub_deployment"
    connection_factory_names:                      "connection_factory"
    connection_factory_jndi_names:                 "jms/ConnectionFactory"
    distributed_queue_names:                       "company_distributed_queue"
    distributed_queue_jndi_names:                  "jms/CompanyQueue"
    distributed_error_queue_names:                 "company_distributed_error_queue"
    distributed_error_queue_jndi_names:            "jms/CompanyErrorQueue"
    
    data_source_names:                             "data_source"
    data_source_jndi_names:                        "jdbc/exampleDS"
    data_source_users:                             "example"
    data_source_passwords:                         "example"
    data_source_url:                               "jdbc:oracle:thin:@//database-server:1521/orcl12"
    data_source_driver:                            "oracle.jdbc.OracleDriver"
    data_source_test:                              "SQL ISVALID"
    data_source_safe_transaction:                  "no"
    data_source_grid_link:                         "no"
    data_source_ons_node_list:                     "database-server1:6200,database-server2:6200"
    
    configure_ssl:                                 "no"
    managed_server_ssl_listen_port_start:          "10001"
    config_type:                                   "CustomIdentityAndCustomTrust"
    store_type:                                    "jks"
    key_store_file_name:                           "/home/weblogic/certs/machine1.keystore"
    key_store_pass_phrase:                         "somepassword"
    trust_store_file_name:                         "/home/weblogic/certs/machine1.truststore"
    trust_store_pass_phrase:                       "somepassword"
    private_key_alias:                             "machine1.com"
    private_key_pass_phrase:                       "somepassword"
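
    The comma-separated values above are what the template turns into parallel lists via .split(','); resource names and JNDI names are then paired by position. A plain-Python sketch of that expansion (the pair_resources helper is illustrative, not part of the scripts):

```python
# Values mirror the vars file above; multiple resources would be given as
# comma-separated strings, e.g. "queue_a,queue_b".
distributed_queue_names = 'company_distributed_queue'.split(',')
distributed_queue_jndi_names = 'jms/CompanyQueue'.split(',')

def pair_resources(names, jndi_names):
    # Each resource name must line up with exactly one JNDI name,
    # otherwise the template would create mismatched resources.
    if len(names) != len(jndi_names):
        raise ValueError('each name needs a matching JNDI name')
    return list(zip(names, jndi_names))

print(pair_resources(distributed_queue_names, distributed_queue_jndi_names))
# [('company_distributed_queue', 'jms/CompanyQueue')]
```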

     

    To include this in the .../wls_common/templates/weblogic/create_deployment_environment_py.j2 template, we use:

     

    import os;
    import socket;
    
    print 'DEFINE VARIABLES FOR DOMAIN CREATION';
    application_cluster_name='{{ application_cluster_name }}';
    machine_listen_addresses='{{ machine_listen_addresses }}'.split(',');
    machine_user_id='{{ machine_user_id }}';
    machine_group_id='{{ machine_group_id }}';
    number_of_managed_servers_per_machine={{ number_of_managed_servers_per_machine }};
    maximum_number_of_managed_servers_per_machine={{ maximum_number_of_managed_servers_per_machine }};
    managed_server_listen_port_start={{ managed_server_listen_port_start }};
    
    jms_system_resource_name='{{ jms_system_resource_name }}';
    sub_deployment_name='{{ sub_deployment_name }}';
    connection_factory_names='{{ connection_factory_names }}'.split(',');
    connection_factory_jndi_names='{{ connection_factory_jndi_names }}'.split(',');
    distributed_queue_names='{{ distributed_queue_names }}'.split(',');
    distributed_queue_jndi_names='{{ distributed_queue_jndi_names }}'.split(',');
    distributed_error_queue_names='{{ distributed_error_queue_names }}'.split(',');
    distributed_error_queue_jndi_names='{{ distributed_error_queue_jndi_names }}'.split(',');
    
    data_source_names='{{ data_source_names }}'.split(',');
    data_source_jndi_names='{{ data_source_jndi_names }}'.split(',');
    data_source_users='{{ data_source_users }}'.split(',');
    data_source_passwords='{{ data_source_passwords }}'.split(',');
    data_source_url='{{ data_source_url }}';
    data_source_driver='{{ data_source_driver }}';
    data_source_test='{{ data_source_test }}';
    data_source_safe_transaction='{{ data_source_safe_transaction }}';
    data_source_grid_link='{{ data_source_grid_link }}';
    data_source_ons_node_list='{{ data_source_ons_node_list }}';
    
    configure_ssl='{{ configure_ssl }}';
    managed_server_ssl_listen_port_start='{{ managed_server_ssl_listen_port_start }}';
    config_type='{{ config_type }}';
    store_type='{{ store_type }}';
    key_store_file_name='{{ key_store_file_name }}';
    key_store_pass_phrase='{{ key_store_pass_phrase }}';
    trust_store_file_name='{{ trust_store_file_name }}';
    trust_store_pass_phrase='{{ trust_store_pass_phrase }}';
    private_key_alias='{{ private_key_alias }}';
    private_key_pass_phrase='{{ private_key_pass_phrase }}';
    
    ...
    
    connect_to_admin_server();
    
    start_edit_mode();
    
    create_dynamic_cluster();
    create_machines();
    
    save_and_active_changes();
    
    start_edit_mode();
    
    target_bean_path = getPath('com.bea:Name=' + application_cluster_name + ',Type=Cluster');
    target = getMBean(target_bean_path);
    create_messaging_resources(target);
    create_data_source(target);
    
    save_and_active_changes();
    
    print 'DISCONNECT FROM THE ADMIN SERVER';
    disconnect();

     

    To run the WLST script, we use the following playbook.

     

    # /etc/ansible/roles/wls_deploy_config/tasks/main.yml
    
    - name: create create_deployment_environment.py
      template:
        src=roles/wls_common/templates/weblogic/create_deployment_environment_py.j2
        dest="{{ create_deploy_domain_py }}"
        owner="{{ oracle_install_user }}"
        group="{{ oracle_install_group }}"
        mode=0644
      register: script_created
    
    - name: create deployment environment
      shell: /bin/su - "{{ oracle_install_user }}" -c "{{ wlst_sh }} -loadProperties {{ environment_properties }} {{ create_deploy_domain_py }}"
      register: created_deployment_environment
      when: script_created|changed
    
    - name: check if the weblogic template is present
      stat:
        path="{{ wls_template_file }}"
        get_md5=no
        get_checksum=no
      register: weblogic_template
      when: script_created|changed and created_deployment_environment|success
    
    - name: remove old template
      shell: /bin/rm -f {{ wls_template_file }}
      when: script_created|changed and created_deployment_environment|success and weblogic_template.stat.exists
    
    - name: create template
      shell: /bin/su - "{{ oracle_install_user }}" -c "{{ pack_sh }} -managed=true -domain={{ domain_configuration_home }} -template={{ wls_template_file }} -template_name={{ domain_name }}"
        creates="{{ wls_template_file }}"
      when: script_created|changed and created_deployment_environment|success

     

    This creates the WLST script create_deployment_environment.py. When the script has changed, for example, because a parameter has changed, the script is run. To be able to roll out the domain on other machines, a WebLogic template is created.

     

    To run the playbook, we can use:

     

    - hosts: admin_server
      user: root
    
      roles:
        - wls_common
        - wls_install
        - wls_basic_config
        - wls_deploy_config

     

    The wls_install and wls_basic_config roles are described in the post Fun with Ansible.

     

    When we run the playbook, the following output is observed:

     

    [root@controlnode ansible]# ansible-playbook wls_admin_server.yml
    
    PLAY [admin_server] ***********************************************************
    
    GATHERING FACTS ***************************************************************
    ok: [machine1.com]
    ...
    TASK: [wls_basic_config | start admin server] *********************************
    ok: [machine1.com]
    
    TASK: [wls_deploy_config | create create_deployment_environment.py] ***********
    changed: [machine1.com]
    
    TASK: [wls_deploy_config | create deployment environment] *********************
    changed: [machine1.com]
    
    TASK: [wls_deploy_config | check if the weblogic template is present] *********
    ok: [machine1.com]
    
    TASK: [wls_deploy_config | remove old template] *******************************
    changed: [machine1.com]
    
    TASK: [wls_deploy_config | create template] ***********************************
    changed: [machine1.com]
    
    PLAY RECAP ********************************************************************
    machine1.com               : ok=27   changed=4    unreachable=0    failed=0
    
    # check
    wls:/tryout_domain/serverConfig> config = domainRuntimeService.getDomainConfiguration();
    wls:/tryout_domain/serverConfig> cluster = config.lookupCluster('application_cluster');
    wls:/tryout_domain/serverConfig> print cluster.getDynamicServers().getMaximumDynamicServerCount();
    4

     

    When we run the playbook again, without any changes, the following output is observed:

     

    [root@controlnode ansible]# ansible-playbook wls_admin_server.yml
    
    PLAY [admin_server] ***********************************************************
    
    GATHERING FACTS ***************************************************************
    ok: [machine1.com]
    ...
    TASK: [wls_basic_config | start admin server] *********************************
    ok: [machine1.com]
    
    TASK: [wls_deploy_config | create create_deployment_environment.py] ***********
    ok: [machine1.com]
    
    TASK: [wls_deploy_config | create deployment environment] *********************
    skipping: [machine1.com]
    
    TASK: [wls_deploy_config | check if the weblogic template is present] *********
    skipping: [machine1.com]
    
    TASK: [wls_deploy_config | remove old template] *******************************
    skipping: [machine1.com]
    
    TASK: [wls_deploy_config | create template] ***********************************
    skipping: [machine1.com]
    
    PLAY RECAP ********************************************************************
    machine1.com               : ok=23   changed=0    unreachable=0    failed=0

     

    To change the number of servers in the dynamic cluster, we set number_of_managed_servers_per_machine in the variable file to a different value and run the playbook again, for example:

    [root@controlnode ansible]# ansible-playbook wls_admin_server.yml
    
    PLAY [admin_server] ***********************************************************
    
    GATHERING FACTS ***************************************************************
    ok: [machine1.com]
    ...
    TASK: [wls_basic_config | start admin server] *********************************
    ok: [machine1.com]
    
    TASK: [wls_deploy_config | create create_deployment_environment.py] ***********
    changed: [machine1.com]
    
    TASK: [wls_deploy_config | create deployment environment] *********************
    changed: [machine1.com]
    
    TASK: [wls_deploy_config | check if the weblogic template is present] *********
    ok: [machine1.com]
    
    TASK: [wls_deploy_config | remove old template] *******************************
    changed: [machine1.com]
    
    TASK: [wls_deploy_config | create template] ***********************************
    changed: [machine1.com]
    
    PLAY RECAP ********************************************************************
    machine1.com               : ok=27   changed=4    unreachable=0    failed=0
    
    # check
    wls:/tryout_domain/serverConfig> config = domainRuntimeService.getDomainConfiguration();
    wls:/tryout_domain/serverConfig> cluster = config.lookupCluster('application_cluster');
    wls:/tryout_domain/serverConfig> print cluster.getDynamicServers().getMaximumDynamicServerCount();
    4
    wls:/tryout_domain/serverConfig> print cluster.getDynamicServers().getMaximumDynamicServerCount();
    6

     

    Now, we can create and change WebLogic domains in a jiffy.

     

    References

     

    [1] Java API Reference for Oracle WebLogic Server.

     

    [2] WLST Command Reference for WebLogic Server

     

    [3] Ansible Documentation.

     

    About the Author

     

    Oracle ACE Director René van Wijk works with numerous technologies, including Oracle Coherence, Oracle WebLogic, Hibernate, Java Virtual Machine, JBoss, and Spring. A graduate of the Delft University of Technology, René transfers his knowledge and experience regularly through training, publications and presentations at seminars and conferences.

     


     

    This article represents the expertise, findings, and opinions of the author. It has been published by Oracle in this space as part of a larger effort to encourage the exchange of such information within this Community, and to promote evaluation and commentary by peers. This article has not been reviewed by the relevant Oracle product team for compliance with Oracle's standards and practices, and its publication should not be interpreted as an endorsement by Oracle of the statements expressed therein.