
Last week, Oracle Product Director Will Lyons announced the long-expected release of Oracle WebLogic 14.1.1. You can find all details about this release at https://bit.ly/2UNSfB7 and on my blog: https://bit.ly/3dZNOdS, so I won't go into all the nice new features here.

 

Instead, I wanted to test whether this version is ready to run on a Kubernetes platform. In my case the obvious choice was the Oracle Kubernetes Engine (OKE), although Red Hat OpenShift and Microsoft Azure Kubernetes Service could also be options.

 

 

Ingredients for running WebLogic 14.1.1 domain

The following components made up my toolset for building and running this newest version:

 

WebLogic 14 docker image

To get my private image into my private registry, I first built the image locally.

There is a shell script provided, but let's look into the Dockerfiles first.

I chose the generic Dockerfile and the downloaded WebLogic installation package.

So I built my image with the shell script; the only change I had to make in the script was replacing the docker command with podman:

# ################## #
# BUILDING THE IMAGE #
# ################## #
echo "Building image '$IMAGE_NAME' ..."

# BUILD THE IMAGE (replace all environment variables)
BUILD_START=$(date '+%s')
podman build --force-rm=$NOCACHE --no-cache=$NOCACHE $PROXY_SETTINGS -t $IMAGE_NAME -f Dockerfile.$DISTRIBUTION . || {
  echo "There was an error building the image."
  exit 1
}
BUILD_END=$(date '+%s')
BUILD_ELAPSED=`expr $BUILD_END - $BUILD_START`

 

 

And run the script:

./buildDockerImage.sh -v 14.1.1.0 -g

 

After the build is done, I used podman to inspect the image:

In this example you also see my private OCI Registry (OCIR) copy of the image, which I will cover next.
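The listing itself isn't reproduced here, but as a minimal sketch of what I mean (image name taken from the build above):

podman images
podman inspect oracle/weblogic:14.1.1.0-generic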

 

Push to OCIR

I wanted this image in my OCI container registry, so I set up a push to OCIR. First you need to generate an auth token in OCI:

- Go to User Settings

- Under Resources you can find how to generate one

- Be sure to save the generated auth token, because you won't be able to retrieve it again

- Use the token as the password to log in, and wrap it in double quotes!

 

podman login fra.ocir.io
Username: frce4kd4ndqf/oracleidentitycloudservice/mschildmeijer@qualogy.com
Password:********
Login Succeeded!

 

podman images

 

 

Now tag my WebLogic 14 container image for pushing:

 

podman tag oracle/weblogic:14.1.1.0-generic fra.ocir.io/frce4kd4ndqf/oracle/weblogic:14.1.1.0-generic
podman push fra.ocir.io/frce4kd4ndqf/oracle/weblogic:14.1.1.0-generic

 

Result:

WebLogic 14 on Kubernetes

Next, to install WebLogic on Kubernetes, the following actions need to be done:

  1. Configure Helm for installing the WebLogic Kubernetes Operator 2.5 and pushing it to my OCIR. I used Helm 2 here; however, Helm 3 is also supported on OKE.

 

 

cat << EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: helm-user-cluster-admin-role
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: default
  namespace: kube-system
EOF

 

2. Get the Helm client and initialize it with:

helm init

3. Pull the WebLogic Kubernetes Operator image

 

podman pull oracle/weblogic-kubernetes-operator:2.5.0

 

 

 

4. Tag and push it to OCIR

 

podman tag oracle/weblogic-kubernetes-operator:2.5.0 fra.ocir.io/frce4kd4ndqf/oracle/weblogic-kubernetes-operator:2.5.0
podman push fra.ocir.io/frce4kd4ndqf/oracle/weblogic-kubernetes-operator:2.5.0

 

 

5. Add the GitHub chart repo to Helm

 

 

helm repo add weblogic-operator https://oracle.github.io/weblogic-kubernetes-operator/chart

 

 

 

helm repo list

 

 


 

 

 

helm install weblogic-operator/weblogic-operator --name weblogic-operator

 

 

etc...

 

 

helm status weblogic-operator

 

gives you the current status of your deployment.

 

WebLogic Domain

The scripts provided on GitHub take care of a lot; however, be aware of some extra actions:

  • Creation of an NFS share on your OKE nodes; there is no solid instruction for this in the Oracle GitHub docs
  • Permissions to execute scripts within your containers
  • Adjusting parameters in the YAMLs to your own needs

 

NFS on OKE

If you want to make use of persistent volumes on OKE, you need to create an NFS share. A good instruction can be found here: https://bit.ly/2JEPX0G

However, you need to set up console connections for your nodes, which is easy to do.

Click on each of your compute instances; under Resources you can find the Console Connections link.

It gives you the detailed command to connect; however, I got a login prompt, and for the standard opc user there is no known password.

You can find how to reset your opc user password here: https://bit.ly/2X8FYIW (section "Resetting the OPC user password via the console"). After this you can log in.

Once logged in, you can set up NFS.

 

WebLogic Domain provisioning

Now, after all of the above is done, it's time to provision your WebLogic domain. In the cloned repository you can find the scripts in weblogic-kubernetes-operator/kubernetes/samples/scripts.

 

Domain on Persistent Volume

First I created a persistent volume and claim to store the WebLogic artifacts, using the YAML input file:

# The version of this inputs file.  Do not modify.
version: create-weblogic-sample-domain-pv-pvc-inputs-v1

# The base name of the pv and pvc
baseName: weblogic-fourteen

# Unique ID identifying a domain.
# If left empty, the generated pv can be shared by multiple domains
# This ID must not contain an underscope ("_"), and must be lowercase and unique across all domains in a Kubernetes cluster.
domainUID:

# Name of the namespace for the persistent volume claim
namespace: default

# Persistent volume type for the persistent storage.
# The value must be 'HOST_PATH' or 'NFS'.
# If using 'NFS', weblogicDomainStorageNFSServer must be specified.
weblogicDomainStorageType: HOST_PATH

# The server name or ip address of the NFS server to use for the persistent storage.
# The following line must be uncomment and customized if weblogicDomainStorateType is NFS:
#weblogicDomainStorageNFSServer: 10.0.10.2

# Physical path of the persistent storage.
# When weblogicDomainStorageType is set to HOST_PATH, this value should be set the to path to the
# domain storage on the Kubernetes host.
# When weblogicDomainStorageType is set to NFS, then weblogicDomainStorageNFSServer should be set
# to the IP address or name of the DNS server, and this value should be set to the exported path
# on that server.
# Note that the path where the domain is mounted in the WebLogic containers is not affected by this
# setting, that is determined when you create your domain.
# The following line must be uncomment and customized:
weblogicDomainStoragePath: /scratch

# Reclaim policy of the persistent storage
# The valid values are: 'Retain', 'Delete', and 'Recycle'
weblogicDomainStorageReclaimPolicy: Retain

# Total storage allocated to the persistent storage.
weblogicDomainStorageSize: 10Gi

 

 ./create-pv-pvc.sh -i create-pv-pvc-inputs.yaml -o pvc -e

This creates the PV and PVC YAML files, plus the objects in your K8s cluster.

 

Now create the jobs which will be used by the operator to create the Admin Server and Managed Server pods.

In the create-weblogic-domain/domain-home-on-pv directory you find the necessary scripts, which you can amend for your own needs.

I changed the name of the domain and the location of my WebLogic image.

First you need to create two secrets; a sketch of both commands follows after the create-domain command below:

- one for pulling the image from OCIR

- one for the WebLogic domain credentials (a helper script for this is provided with the operator samples)

Then create the domain job:

./create-domain.sh -i create-domain-inputs.yaml -o weblogic-fourteen -e
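The two secret commands themselves aren't shown above; as a sketch of what they could look like, with the secret name, domainUID and the credentials helper script as assumptions on my side:

# Image pull secret for OCIR (the name must match imagePullSecretName in the domain inputs)
kubectl create secret docker-registry ocir-pull-secret \
  --docker-server=fra.ocir.io \
  --docker-username='frce4kd4ndqf/oracleidentitycloudservice/mschildmeijer@qualogy.com' \
  --docker-password='<auth token>' \
  --docker-email=mschildmeijer@qualogy.com

# WebLogic domain credentials, using the helper script from the operator samples
./create-weblogic-credentials.sh -u weblogic -p '<password>' -d weblogic-fourteen -n default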





 

After this the pods were created; however, they remained in status Init:Error.

kubectl logs <pod> didn't give much information; a way to find out why the pods were failing was to do this:

 

kubectl get pods
kubectl describe po < name of the pod>

 

Look at the part showing the container ID of the init container that is failing, then:

kubectl logs <pod-name> -c <init-containerid>

 

There I found that the NFS share was not writable for the scripts to build up the domain. To resolve, I changed the values in the create-domain-job-template.yaml

The spec/initContainers user value was 0 (root), which I changed to the opc user ID (1000).
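As an illustration of the kind of change I mean in create-domain-job-template.yaml, assuming the relevant field is the init container's securityContext (container name and exact structure may differ in your version of the template):

spec:
  initContainers:
    - name: fix-pvc-owner        # name may differ in your template
      securityContext:
        runAsUser: 1000          # was 0 (root); the opc user id makes the NFS share writable
        runAsGroup: 1000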

After this my WebLogic domain was created successfully!

 

Leftovers

This proved that WebLogic 14.1.1 runs just as well as the 12c versions. It also proved that Podman can be used instead of the Docker CLI; the commands are more or less the same.

Now this is still an empty domain, so a few leftovers for me:

- Deploying applications

- Using Istio with JMS

- Use GraalVM instead of Oracle Java

- Deploy polyglot applications

- Configure ELK

- Configure Prometheus and Grafana

In this surreal time of COVID-19, a lot of conferences where I was going to speak this year were cancelled, so I decided to write my presentation in the form of an article. I'm not sure whether I'll be able to tell this story somewhere in the world this year, so here it is!

 

Delivering high quality software in a 24*7 environment is a challenge for a lot of teams these days. Teams of multinationals need to deliver more functionality more quickly. Some of these teams are able to deliver new software every hour. How is it possible to meet these demands?


DevOps

You must have heard the term many times these days. Maybe you already work like this in your team. Anyway, methods of working are changing rapidly, and the DevOps way of working is what a lot of teams aim for, either to get started or to improve their own way of working and meet the high demands of their clients, business teams or the goals set by their company.


Other ways of Development & Operations

Besides the methods of working, teams are also facing new technologies and innovations of how to develop, build, deploy, operate and monitor applications. A lot of companies these days shift from on-premises to “Cloud Native” applications. This means, for instance, that a build pipeline of an application might look a little different from traditional applications. Also take into consideration that the application landscape is being redesigned into a (partially) microservices or serverless landscape, supported by container-native platforms and private or commercial cloud vendors.


DevOps challenges – the “Ops” in DevOps

I won’t go through all aspects of DevOps, but one challenge DevOps teams face is the Ops part of it. These are business-critical applications that don’t allow any downtime or performance loss. A lot of teams are not well focused on the Ops part. This leads to high application error rates: 32% of production apps have problems, and even worse, these are often only discovered when they are reported by customers.

 

There are several reasons for this to happen:

  • Lack of continuity and/or automation
  • Lack of visibility

 

CALMSS Model

To become more successful in DevOps, Forrester developed a model for DevOps teams, to help them achieve their goals. The so-called CALMSS model:

 

 

 

This whitepaper will focus on the technology part, which supports certain highlighted aspects of the CALMSS model, especially the “Oracle way” of how to interpret these.

Over the last few years, Oracle has done a major job of adopting and keeping up with Cloud Native technologies and embedding them in its current cloud offerings, which will be discussed in the coming sections.

 

Solutions for Cloud Native Deployments

Through the years, the industry has developed some solutions for automating deployments as much as possible, in line with the DevOps way of working.

 

 

Oracle Container Pipeline – A Cloud Native Container Pipeline

To reach a high level of automation, it is essential to automate your software delivery from development to operations. Especially for the cloud, Oracle has some technologies to support this. The ingredients you may need are the following: Versioning & Container Registry, Containers & Orchestration Engine, Provisioning, Container Pipelines, and Packaging & Deployments. Having these ingredients will enable your team to reach a higher level of delivery, because less manual work needs to be done once it's implemented. Of course, the implementation itself takes time and investment, but it will pay off later, when you are up to speed with it.

 

 

 

 

Versioning & Container Registry

For setting up a continuous pipeline, in this case a container pipeline, you need a mechanism to version your source code and register your application containers in a registry. There are various source code repositories:

  • Git (commonly used with GitHub)
  • SubVersion
  • Bitbucket

The most common is GitHub. There are even movements to let GitHub be the “source of truth” and implement a GitOps methodology. With this, you declaratively describe the entire desired state of your system in Git. Everything is controlled and automated with YAMLs and pull requests.

Container registries are used to store container images created during the application development process. Container images placed in the registry can be used during application development. Companies often use a public registry, but it is recommended to have a private registry. Within the Oracle Cloud, you can set up your own private registry, with benefits like high availability and managed maintenance.

 

 

 

 

 

Set up your private container registry at Oracle Cloud Infrastructure

To be able to push and pull your containers to a private OCI registry, the following steps need to be applied:

  • Set up an auth token in your Cloud Console

 

 

  • From your OCI client, log in to your OCI registry with the Docker interface (fra is the Frankfurt region):
docker login fra.ocir.io

Credentials to be filled in. You can find your user details in your cloud user settings:

<tenancy-namespace>/oracleidentitycloudservice/<username>.


Then you can either pull an image or use your locally built image, and push it to your private registry.

Tag the image for pushing:

docker tag <docker hub name>/helloworld:latest
<region-key>.ocir.io/<tenancy-namespace>/<repo-name>/<image-name>:<tag>

Region, tenancy, etc. should match your own situation.

And push:

docker push
<region-key>.ocir.io/<tenancy-namespace>/<repo-name>/<image-name>:<tag>

 

You can verify the push afterwards.

 

 

 

Containers & Orchestration Engine

From bare metal to virtualization to containerization: containers have gained significant popularity over the last few years. There are many container technologies to choose from, but there is a general lack of knowledge about the subtle differences between these technologies, and when to use what. Docker has become the most popular and more or less the de facto standard for container engines and runtimes, though there are more, such as:

  • CoreOS - Rocket
  • Linux Containers (LXC). Docker was built on top of it.
  • Kata containers

The technology for containers has actually been around for a long time, since 1979, hidden in UNIX and later in Linux as chroot, where the root directory of a process and its children is isolated to a new location in the filesystem. This was the beginning of process isolation: segregating file access for each process.

So, some basic characteristics of containers are:

  • Container: configurable unit for a small set of services/applications, based on lightweight images
  • Containers share the OS kernel – no hypervisor (except for Kata Containers, which have their own kernel)
  • Isolated resources (CPU, memory, storage)
  • Application & infrastructure software packaged as a whole

Orchestration

In a container-based landscape, there can be a large number of containers, which leads to questions such as: how to manage and structure these? An orchestration platform can be the solution, and Kubernetes is such a platform. It manages storage, compute resources and networking, where "Infrastructure as Code" is the way to do lifecycle management. The orchestration platforms present these days are:

  • Docker Swarm
  • Kubernetes, on premises or as managed cloud engines by Microsoft (AKS), Google (GKE), IBM/Red Hat, Amazon and Oracle (OKE)
  • Red Hat OpenShift

Kubernetes might now be considered as a standard platform and adopted technology to orchestrate containers. Once initiated as an internal project at Google, Kubernetes is now a framework for building distributed platforms. It manages and orchestrates container processes. It takes care of the lifecycle management of runtime containers.

 

 

 

Some of the basic concepts of Kubernetes are listed below (a short kubectl sketch follows the list):

  • Master: controls the Kubernetes nodes
  • Node: performs the tasks requested and assigned by the master
  • Pod: the scheduling unit for a group of one or more containers
  • Replication controller: keeps a set of identical pods running across the cluster
  • Service: the work definition and stable connection point between containers and pods
  • kubelet: reads container manifests and watches the containers' lifecycle
  • kubectl: the command-line configuration tool for Kubernetes
  • etcd: key-value store holding all cluster configuration
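To make a few of these concepts concrete, a short kubectl sketch (resource names are placeholders):

kubectl get nodes                 # the worker nodes managed by the master
kubectl get pods                  # pods: the scheduling unit for one or more containers
kubectl get services              # services: the stable connection points in front of pods
kubectl describe pod <pod-name>   # detailed pod state as reported by the kubelet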

 

 

 

Provisioning

It’s important to provision your “Infrastructure as Code”. Now, every cloud vendor has a portal where you can set up a Kubernetes cluster in minutes, but when you want to do it repeatedly and in a more automated way, it’s recommended to use the Terraform provider. Terraform is a tool to provision cloud environments. These scripts can be easily integrated in your container pipeline, as seen in this whitepaper. For a detailed setup of Terraform, look at: https://community.oracle.com/blogs/mnemonic/2018/09/23/oracle-kubernetes-engine-setup-fast-with-terraform

 

 

 

 

 

 

Packaging & Deployments

In the open-source community, it’s all about adoption. If there is a fine technology or good initiative, it will be embraced and finally become some sort of standard. Speaking of packaging and deployment, Helm has become a widely used tool for this. It’s a release and package management tool for Kubernetes and can be integrated with CI build tools (Maven, Jenkins, and Wercker).

This is a simple setup of the Helm components

 

 

Helm workflow according to V3
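The diagram itself isn't reproduced here; as a rough sketch of that V3 workflow in commands (repository, chart and release names are just examples):

helm repo add myrepo https://example.com/charts   # register a chart repository
helm repo update                                  # refresh the local chart index
helm install myrelease myrepo/mychart             # Helm 3 style: release name first, no Tiller
helm upgrade myrelease myrepo/mychart             # roll out a new chart or values version
helm status myrelease                             # inspect the deployed release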

 

 

 

 

Container Pipelines

Setting up a Cloud Native container pipeline can be done with different technologies, and there are a lot of open-source technologies to facilitate this. A few well-known ones are:

  • Jenkins X and Tekton: you could run the default Jenkins in the cloud, but Jenkins X and Tekton are cloud native
  • Knative, originated by Google
    1. A Kubernetes-native OSS framework for creating (CI/CD) systems
  • Spinnaker: an open-source, multi-cloud continuous delivery platform initiated in 2014 by Netflix
  • OpenShift Pipelines
  • Azure DevOps Pipelines
  • Oracle Container Pipelines (Wercker): a Docker- and Kubernetes-native CI/CD platform for Kubernetes and microservices deployments
    1. Formerly Wercker, acquired by Oracle
    2. Partially open source (the CLI)

 

Melting it all together: Oracle Container Pipelines

Oracle Container Pipelines is fully web based and integrated with tools like GitHub/GitLab or Bitbucket. It contains all the build workflows, and has dependencies, environment settings and permissions.

Before Oracle acquired it, it was called Wercker. It is a Container-native Open Source CI/CD Automation platform for Kubernetes & Microservice Deployments. Every Artifact can be a packaged Docker Container.

At its base, it is a CI/CD tool designed specifically for container development. It's pretty much codeless, meant to work with containers, and can be used with any stack in any environment. Its central concept is pipelines, which operate on code and artifacts to generate a container; a pipeline generates an artifact, which can be packaged into a container. The aim is to work directly with containers through the entire process, from development to operations. This means the code is containerized from the beginning, with a development environment that's almost the same as the production one.

 

 

Oracle Container Pipelines works with the following concepts:

  • Organizations
    • This is the team, group or department grouped together to work on a certain project, as a unit in Wercker. It hosts applications, and users can be part of one or more organizations.
  • Applications
    • These contain the build workflows and consist of dependencies, environment configuration and permissions. Applications are linked to a versioning system, usually a project on GitHub, GitLab or Bitbucket, or your own SCM.
  • Steps
    • Stages in the pipeline with an isolated script or compiled binary for accomplishing specific automation tasks.
    • Such as install, build, configure, test. You can add an npm-install, a Maven build or a Python script to test your build.
    • Or a Docker action (push, pull, etc.).
  • Pipelines (a pipeline consists of steps)
    • A series of steps that are triggered on a Git push, or on the completion of another pipeline. This is more or less a GitOps approach.
    • Workflows are sets of chained and branched pipelines that form multi-stage, multi-branch, complex CI/CD flows. These are configured in the Web UI and depend on the wercker.yaml where the pipelines are configured, but they are not part of that YAML. Variations are based on branch. Workflows can be executed in parallel.
  • wercker.yaml: the central file that must be in the root of your Git repository; it defines the build of your application using the steps and pipelines you configured in Wercker

 

 

example wercker.yaml
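The screenshot of the example isn't reproduced here; as a minimal sketch of what a wercker.yaml can look like (box, step and pipeline contents are illustrative, not taken from the original example):

box: maven:3-jdk-8            # the base container the pipeline runs in
build:
  steps:
    - script:
        name: maven package
        code: mvn -q package
deploy:
  steps:
    - internal/docker-push:
        username: $DOCKER_USER
        password: $DOCKER_PASSWORD
        repository: $DOCKER_REPO
        tag: latest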

 

Configure Wercker for OKE

To be able to deploy applications to your managed Kubernetes engine, you need to set some configuration items on the configuration tab of your Wercker application, in order to access your OKE:

 

 

 

 

Blend in Helm and Terraform in your Wercker Workflow

Building an entire Oracle Kubernetes Engine cluster and deploying your applications can be achieved by creating pipelines that consist of all the necessary steps, such as:

  • Using a lightweight image for the build
  • Performing all the steps needed: Terraform commands, Helm commands, specific kubectl commands or scripts
  • Configuring API keys (in the case of a Terraform build of OKE), OCI and Kubernetes details

 

 

 

 

Terraform provision

Set up a temp Terraform box

 

Provision Kubernetes OKE cluster

 

Helm Steps

The Helm steps follow basically the same pattern as the other steps:

  • Push a built container image to your container registry
  • Set up a temp box for the Helm install
  • Fetch the Helm repo and generate charts to install your application container

 

 

Example of the running pipeline

 

These are more or less the steps to take in order to set up a container-based pipeline where you:

  • Provision Infrastructure with Terraform
  • Do Helm initialization and repository Fetch
  • Install application containers with Helm

 

Conclusions

Wercker (or Oracle Container Pipelines) is, in my opinion, a good option for your containerized pipelines, with lots of options for different methods and technologies. It requires some work to set up, but components can be integrated at different kinds of levels.

To me, it is currently unclear how this will evolve, especially with other more well-known platforms such as Jenkins-X and Tekton. I will closely follow the different solutions!

 

 

 

Besides all the exciting stuff about containers, I still sometimes work with my hands in the mud. Today I encountered some strange behaviour when starting WebLogic, which I wanted to share with you, because the cause is hard to find.

If you read the error message, you might think this is an easy one; lots of blog posts and solutions have been written about it. However, not everything is what it looks like.

 

Executing the startWebLogic.sh script creates a lock file for the AdminServer, located in <DOMAIN_HOME>/servers/AdminServer/tmp and usually called <WebLogic Server Name>.lok, in this case AdminServer.lok.

This file is claimed by the Java process that WebLogic uses to start, and it prevents duplicate startups. If such a process is already running, you might get the above error. The usual solution is to stop the duplicate process, remove the file and start again.

However, in this case there was no process running... The file was created, but WebLogic failed to start. Removing the file did not help, and every time I tried to start, the file was created again.

So I started to investigate other start scripts, such as the NodeManager... same results.

 

Further investigation:

 

The domain home was located on an NFS share, with separate Admin and Managed Server homes. I suspected it had something to do with NFS.

 

So I created a small test program to see if this was the case, a file called TestLock.java:

 

import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;

public class TestLock
{
  public static void main(String[] args)
  {
    try
    {
      // Create a test file and try to acquire an exclusive lock on it,
      // the same kind of lock WebLogic takes on its .lok files.
      File f = new File(".homelock");
      f.createNewFile();
      FileOutputStream fos = new FileOutputStream(f);
      fos.getChannel().lock();
    }
    catch (IOException e)
    {
      e.printStackTrace();
    }
  }
}

 

 

I compiled it to a runnable class and ran it.
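For completeness, the compile and run commands are just the standard JDK tools:

javac TestLock.java
java TestLock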

This resulted in an error. Testing on another NFS share did not give me this error.

Diagnosing with the Linux command dmesg gave a lot of output, but this was the applicable part:

 

 

After the storage admin resolved the issues with lockd and statd, file locking was available again and WebLogic could start up normally.

In previous posts I wrote about how to transform a traditional application server such as WebLogic into a containerized platform, based on Docker containers managed by a Kubernetes cluster. The point is that there hasn't been much effort yet in looking at how a huge and complex environment such as Oracle SOA Suite could fit into a container-based strategy; it's more or less a lift and shift of the current platform to run as Kubernetes-managed containers.

 

There are ways to run a product such as Oracle SOA Suite in Kubernetes; here's the way I did it.

 

Oracle SOA Suite on OKE

 

Other than the standard PaaS service Oracle provides, the SOA Cloud Service, this implementation is based on IaaS on the Oracle Cloud Infrastructure, where I configured a Kubernetes cluster as described in previous posts. However, this can also be done on an on-premises infrastructure.

 

 

Ingredients

 

The following parts are involved in setting up a SOA Suite domain based on version 12.2.1.3:

 

  • Docker Engine
  • Kubernetes base setup
  • Oracle Database Image
  • Oracle SOA Suite Docker image (either self-built or from the Oracle Container Registry)
  • Github
  • A lot of patience

 

Set up the SOA Suite repository

 

A SOA Suite installation requires a repository, which can be an Oracle database or some other flavour, to dehydrate SOA instance data and store metadata from composite deployments. I used a separate namespace to set up a database in Kubernetes.

 

The database I created uses the image container-registry.oracle.com/database/enterprise:12.1.0.2, so I used the database YAML I obtained, where I had to add an ephemeral-storage setting, because after the first deployment Kubernetes reported exhausted ephemeral storage. I solved it with the setting sketched below.
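Purely as an illustration, an ephemeral-storage request/limit on the database container could look like this (the values here are mine, not from the original YAML):

resources:
  requests:
    ephemeral-storage: "4Gi"
  limits:
    ephemeral-storage: "8Gi"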

  • Create a namespace for the database
  • Create a secret to be able to pull images from the container registry
kubectl create secret docker-registry ct-registry-pull-secret \
  --docker-server=container-registry.oracle.com \
  --docker-username=********* \
  --docker-password=********* \
  --docker-email=mschildmeijer@qualogy.com

 

  • Apply the database YAML to Kubernetes. To see the progress of the database creation, you can look into the pod:
kubectl get pods -n database-namespace
NAME                        READY     STATUS    RESTARTS   AGE
database-7b45749f44-kjr97   1/1       Running   0          6d

kubectl exec -ti database-7b45749f44-kjr97 /bin/bash -n database-namespace

 

Or use

kubectl logs database-7b45749f44-kjr97 -n database-namespace

 

So far so good. The only thing left is to create a service for the database to be exposed:

kubectl expose deployment database --type=LoadBalancer --name=database-svc -n database-namespace

 

 

 

 

Repository Creation with RCU

 

To do this, running a temporary pod from the SOA Suite image was sufficient to run RCU from it:

kubectl run rcu --generator=run-pod/v1 --image container-registry.oracle.com/middleware/soasuite:12.2.1.3 --overrides='{ "apiVersion": "v1", "spec": { "imagePullSecrets": [{"name": "regsecret"}] } }' -- sleep infinity

 

And run rcu from it:
kubectl exec -ti rcu /bin/bash


/u01/oracle/oracle_common/bin/rcu \
  -silent \
  -createRepository \
  -databaseType ORACLE \
  -connectString 130.61.65.56:1521/OraPdb.my.domain.com \
  -dbUser sys \
  -dbRole sysdba \
  -useSamePasswordForAllSchemaUsers true \
  -selectDependentsForComponents true \
  -schemaPrefix FMW1 \
  -component SOAINFRA \
  -component UCSUMS \
  -component ESS \
  -component MDS \
  -component IAU \
  -component IAU_APPEND \
  -component IAU_VIEWER \
  -component OPSS  \
  -component WLS  \
  -component STB

 

Nevertheless, this is not completely silent, as you have to fill in your passwords manually.

 

 

Creation of the SOA domain

 

I used the WebLogic Kubernetes Operator GIT repository to create my SOA domain and changed it to what I needed.

General steps to take:

  • Install the WebLogic Kubernetes Operator
  • Create persistent volumes and claims (PV/PVC)
  • Create a domain:
    • namespace
      • secrets:
        • RCU secrets
        • WebLogic domain secrets

Use the scripts and tools provided by Oracle.

  • Roll out the domain

 

Install the WebLogic Operator

I used Helm to do this. In the GitHub repository, there are charts available at kubernetes/charts/weblogic-operator. Specify in the values.yaml which namespaces need to be managed.

The SOA domain namespace needs to be managed:

domainNamespaces:
  - "default"
  - "domain-namespace-soa"

 

Use the latest operator image:

# image specifies the docker image containing the operator code.
image: "oracle/weblogic-kubernetes-operator:2.2.1"

 

And install:

helm install kubernetes/charts/weblogic-operator   --name weblogic-operator --namespace weblogic-operator-namespace   --set "javaLoggingLevel=FINE" --wait

 

Persistent volumes and claims (PV/PVC)

When running a WebLogic domain in Kubernetes pods, two different models can be chosen:

  • Domain in image, where all artifacts are stored in the container
  • Domain on a persistent volume, where domain artifacts can be stored statefully

In the Git repository there are some ready-to-go scripts for creating PVs:

kubernetes/samples/scripts/create-weblogic-domain-pv-pvc/
create-pv-pvc-inputs.yaml
create-pv-pvc.sh
pvc-template.yaml
pv-template.yaml

 

Now provide your own specifics in the input file, such as:

# The version of this inputs file.  Do not modify.
version: create-weblogic-sample-domain-pv-pvc-inputs-v1
# The base name of the pv and pvc
baseName: soasuite
# Unique ID identifying a domain. 
# If left empty, the generated pv can be shared by multiple domains
# This ID must not contain an underscope ("_"), and must be lowercase and unique across all domains in a Kubernetes cluster.
domainUID: soa-domain1
# Name of the namespace for the persistent volume claim
namespace: domain-namespace-soa
# Persistent volume type for the persistent storage.
# The value must be 'HOST_PATH' or 'NFS'. 
# If using 'NFS', weblogicDomainStorageNFSServer must be specified.
weblogicDomainStorageType: HOST_PATH
# The server name or ip address of the NFS server to use for the persistent storage.
# The following line must be uncomment and customized if weblogicDomainStorateType is NFS:
#weblogicDomainStorageNFSServer: nfsServer
# Physical path of the persistent storage.
# When weblogicDomainStorageType is set to HOST_PATH, this value should be set the to path to the
# domain storage on the Kubernetes host.
# When weblogicDomainStorageType is set to NFS, then weblogicDomainStorageNFSServer should be set
# to the IP address or name of the DNS server, and this value should be set to the exported path
# on that server.
# Note that the path where the domain is mounted in the WebLogic containers is not affected by this
# setting, that is determined when you create your domain.
# The following line must be uncomment and customized:
weblogicDomainStoragePath: /u01/soapv
# Reclaim policy of the persistent storage
# The valid values are: 'Retain', 'Delete', and 'Recycle'
weblogicDomainStorageReclaimPolicy: Retain
# Total storage allocated to the persistent storage.
weblogicDomainStorageSize: 20Gi

and run it:

./create-pv-pvc.sh -i create-pv-pvc-inputs.yaml -o soapv -e

 

Here -o is just a local path on your client where the generated YAMLs are put, and -e also creates the objects in the Kubernetes cluster.

Secrets

Create the WebLogic domain access secret:

kubectl -n domain-namespace-soa \
        create secret generic domain1-soa-credentials \
        --from-literal=username=weblogic \
        --from-literal=password=*****

 

Create the SOA repository access secrets:

./create-rcu-credentials.sh -u fmw1_opss -p qualogy123 -a sys -q qualogy123 -d soa-domain-3 -n domain-namespace-soa -s opss-secret
secret "opss-secret" created
secret "opss-secret" labeled

 

Do this for all the SOA Suite schemas.

Domain rollout

Because domain rollout is a complicated process, it is all enclosed in a pod which runs several jobs to configure the domain.

The only thing you have to do is fill in some inputs in a YAML file; a shell script then creates a job which applies all your values.

Some important ones:

# Port number for admin server
adminPort: 7001
# Name of the Admin Server
adminServerName: admin-server
# Unique ID identifying a domain.
# This ID must not contain an underscope ("_"), and must be lowercase and unique across all domains in a Kubernetes cluster.
domainUID: soa-domain-1 (no underscores!)
# Home of the WebLogic domain
# If not specified, the value is derived from the domainUID as /shared/domains/<domainUID>
domainHome: /u01/domains/soa-domain-1
# Determines which WebLogic Servers the operator will start up
# Legal values are "NEVER", "IF_NEEDED", or "ADMIN_ONLY"
serverStartPolicy: IF_NEEDED
# Cluster name
clusterName: soa-cluster-1
# Number of managed servers to generate for the domain
configuredManagedServerCount: 3
# Number of managed servers to initially start for the domain
initialManagedServerReplicas: 2
# Base string used to generate managed server names
managedServerNameBase: soa-ms
# Port number for each managed server
managedServerPort: 8001
# WebLogic Server Docker image.
# The operator requires WebLogic Server 12.2.1.3.0 with patch 29135930 applied.
# The existing WebLogic Docker image, `store/oracle/weblogic:12.2.1.3`, was updated on January 17, 2019,
# and has all the necessary patches applied; a `docker pull` is required if you already have this image.
# Refer to [WebLogic Docker images](../../../../../site/weblogic-docker-images.md) for details on how
# to obtain or create the image.
image: container-registry.oracle.com/middleware/soasuite:12.2.1.3
# Image pull policy
# Legal values are "IfNotPresent", "Always", or "Never"
imagePullPolicy: IfNotPresent
# Name of the Kubernetes secret to access the Docker Store to pull the WebLogic Server Docker image
# The presence of the secret will be validated when this parameter is enabled.
imagePullSecretName: ct-registry-pull-secret
# Boolean indicating if production mode is enabled for the domain
productionModeEnabled: true
# Name of the Kubernetes secret for the Admin Server's username and password
# The name must be lowercase.
# If not specified, the value is derived from the domainUID as <domainUID>-weblogic-credentials
weblogicCredentialsSecretName: domain1-soa-credentials
# Whether to include server .out to the pod's stdout.
# The default is true.
includeServerOutInPodLog: true
# The in-pod location for domain log, server logs, server out, and node manager log files
# If not specified, the value is derived from the domainUID as /shared/logs/<domainUID>
logHome: /u01/domains/logs/soa-domain-3
# Port for the T3Channel of the NetworkAccessPoint
t3ChannelPort: 30012
# Public address for T3Channel of the NetworkAccessPoint.  This value should be set to the
# kubernetes server address, which you can get by running "kubectl cluster-info".  If this
# value is not set to that address, WLST will not be able to connect from outside the
# Name of the domain namespace
namespace: domain-namespace-soa
#Java Option for WebLogic Server
javaOptions: -Dweblogic.StdoutDebugEnabled=false
# Name of the persistent volume claim
# If not specified, the value is derived from the domainUID as <domainUID>-weblogic-sample-pvc
persistentVolumeClaimName: soa-domain1-soasuite-pvc
# Mount path of the domain persistent volume.
domainPVMountPath: /u01/domains
# Mount path where the create domain scripts are located inside a pod
#
# The `create-domain.sh` script creates a Kubernetes job to run the script (specified in the
# `createDomainScriptName` property) in a Kubernetes pod to create a WebLogic home. Files
# in the `createDomainFilesDir` directory are mounted to this location in the pod, so that
# a Kubernetes pod can use the scripts and supporting files to create a domain home.
createDomainScriptsMountPath: /u01/weblogic
#
# RCU configuration details
#

# The schema prefix to use in the database, for example `SOA1`.  You may wish to make this
# the same as the domainUID in order to simplify matching domains to their RCU schemas.
rcuSchemaPrefix: FMW1

# The database URL
rcuDatabaseURL: 130.61.65.56:1521/ORAPDB.MY.DOMAIN.COM

# The kubernetes secret containing the database credentials
rcuCredentialsSecret: opss-secret
rcuCredentialsSecret: iau-secret
rcuCredentialsSecret: iauviewer-secret
rcuCredentialsSecret: iauappend-secret
rcuCredentialsSecret: wls-secret
rcuCredentialsSecret: soainfra-secret
rcuCredentialsSecret: mds-secret
rcuCredentialsSecret: wls-secret
rcuCredentialsSecret: wlsruntime-secret
rcuCredentialsSecret: stb-secret

 

Execute the script

./create-domain.sh -i create-domain-inputs-soa.yaml -o wlssoa -e -v

 

 

You can follow it using the logs:

kubectl logs -f soa-domain-2-create-fmw-infra-sample-domain-job-572r6 -n domain-namespace-soa

 

So this is basically what it takes; next time I will do a deeper dive into how to manage a SOA Suite domain in Kubernetes.

 

Unfortunately, at the moment the internal configuration does not complete entirely successfully...

 

Could be the case as described in MOS Doc ID 2284797.1

 

Update 7 August 2019

Indeed, as I expected, the above issue was due to the choice of password; it had to follow the structure described in that MOS document. So no actual Kubernetes issue.

 

To be continued!!!

Traditionally, when you install Oracle WebLogic, you simply download the necessary WebLogic and FMW Infrastructure jar files, spin up one or multiple servers, install the software and configure your domain, either automated or manually.

Depending on which cloud strategy you choose, you could either:

 

  • Use pure IaaS, so in essence this means you obtain compute power, storage and network from a certain cloud provider, which can be any you choose: AWS, Azure, Google or maybe Oracle.
  • Use PaaS, where your application server platform and generic middleware are part of the cloud subscription. In this scenario you might choose Oracle's Java Cloud Service or some other PaaS such as SOA Cloud Service, depending on your needs. Looking at Java-based applications, other vendors also offer Java in the cloud:

        But when you come from the WebLogic application server, the first obvious choice seems to be the Java Cloud Service. This is only one stage of a future roadmap for your application landscape, because the applications you develop can still be monoliths. I will come to that later.

 

Along with a strategy of "breaking up the monoliths", DevOps and cloud, containerizing your infrastructure is inevitable for the future state of your application landscape. Oracle Product Development stated during Oracle OpenWorld that, regarding containerization, they will follow the strategy of the Cloud Native Computing Foundation, which means that products like WebLogic will be developed with container technology such as Docker, CoreOS and Kubernetes in mind.

 

 

Install WebLogic on a Kubernetes platform

 

To install a WebLogic domain on a Kubernetes platform in the cloud, I used the Oracle Kubernetes Engine, which is very easy to set up through the OCI console.

  1. Log in to your overall Cloud Dashboard and select Compute in the left pane; this brings you into the Oracle Cloud Infrastructure dashboard
  2. Click on Developer and create the OKE cluster. Be sure that Helm and Tiller are ticked

 

You have to create a compartment in OCI before creating the OKE cluster. You can find out how here: https://docs.cloud.oracle.com/iaas/Content/Identity/Tasks/managingcompartments.htm

After that, in your root compartment, you have to create a policy that allows OKE to manage resources, so select in the left pane:

Identity --> Policies, and create the following policy:
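The policy statement itself was only in a screenshot; if I recall correctly it is the standard OKE service policy, something like the statement below (treat it as an assumption and check the OKE documentation):

allow service OKE to manage all-resources in tenancy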

 

Now the base platform is ready, and we use a linux client to access, manage and build further on our Kubernetes platform.

For that we need to obtain the kubeconfig file from the OKE and place it on our client:

 

Locally, we create a hidden directory:

 

mkdir -p $HOME/.kube

 

Next, you need to install the OCI command line (https://docs.cloud.oracle.com/iaas/Content/API/SDKDocs/cliinstall.htm) and then configure it for use with your cloud tenant:

 

oci setup config

 

This sets some basic config, generates an API key pair and creates the config file. You will have to upload the public key to your console.

Then create the kubeconfig file locally:

Then, see if the cluster is accessible:
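Both commands were only shown in screenshots; as a sketch, with the cluster OCID as a placeholder:

oci ce cluster create-kubeconfig --cluster-id ocid1.cluster.oc1..<your-cluster-ocid> \
    --file $HOME/.kube/config --region eu-frankfurt-1
export KUBECONFIG=$HOME/.kube/config
kubectl get nodes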

 

The Kubernetes Operator for WebLogic

Before we install our WebLogic domain into Kubernetes, a so-called operator is required. An operator is an extended API on top of the basic K8s APIs you get when you set up a K8s cluster. A WebLogic platform has so many specifics that can't be managed by the standard K8s APIs, so the operator takes care of that. Operations such as WebLogic clustering, shared domain artifacts, T3 and RMI channel access, and lots more are handled by this operator.

To obtain the operator, clone it from github to a directory you prefer:

git clone https://github.com/oracle/weblogic-kubernetes-operator.git 

 

Go to the directory weblogic-kubernetes-operator, where we will install the operator using helm

 

Install the operator using Helm

Before we install the operator, a role binding needs to be set up for Helm in the K8s cluster:

cat << EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: helm-user-cluster-admin-role
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: default
  namespace: kube-system
EOF

 

Next, from the cloned Git repository you can install the operator using Helm:
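The command itself was only in a screenshot; it boils down to the same Helm 2 style install used elsewhere in this article (the namespace name is my choice):

helm install kubernetes/charts/weblogic-operator \
  --name weblogic-operator \
  --namespace weblogic-operator-namespace \
  --wait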

 

And after a while the operator pod comes online and is running.

 

Some operational actions can be done with Helm. You can inspect the values from your Helm chart and see how it's implemented.

So, from the directory where the Helm chart is located, inspect the chart and see what's implemented:
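The output was only in screenshots; the commands boil down to something like this (Helm 2 syntax):

helm inspect values kubernetes/charts/weblogic-operator   # show the chart's configurable values
helm list                                                  # list the deployed releases
helm status weblogic-operator                              # see what's implemented for this release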

The operator pod as seen from your K8s dashboard:

 

 

 

For more about the how and why of operators, I advise you to read https://www.qualogy.com/techblog/oracle/whitepaper-kubernetes-and-the-oracle-cloud and the Helm documentation, which is available at https://docs.helm.sh/

 

 

Preparing and creating a WebLogic domain

Before preparing, you should know what kind of domain model you would like to choose: the domain in a Docker image or the domain on a persistent volume.

If you really have to preserve state, or make log files accessible outside your domain, you should use the one on a persistent volume.

 

Create a Persistent volume

Use an input file, changing the values to match the domain, namespace, etc. to be created:

 

 

# Copyright 2018, Oracle Corporation and/or its affiliates.  All rights reserved.
# Licensed under the Universal Permissive License v 1.0 as shown at http://oss.oracle.com/licenses/upl.

# The version of this inputs file.  Do not modify.
version: create-weblogic-sample-domain-pv-pvc-inputs-v1

# The base name of the pv and pvc
baseName: weblogic-storage

# Unique ID identifying a domain. 
# If left empty, the generated pv can be shared by multiple domains
# This ID must not contain an underscope ("_"), and must be lowercase and unique across all domains in a Kubernetes cluster.
domainUID:

# Name of the namespace for the persistent volume claim
namespace: wls-domain-namespace-1

# Persistent volume type for the persistent storage.
# The value must be 'HOST_PATH' or 'NFS'. 
# If using 'NFS', weblogicDomainStorageNFSServer must be specified.
weblogicDomainStorageType: HOST_PATH

# The server name or ip address of the NFS server to use for the persistent storage.
# The following line must be uncomment and customized if weblogicDomainStorateType is NFS:
#weblogicDomainStorageNFSServer: nfsServer

# Physical path of the persistent storage.
# When weblogicDomainStorageType is set to HOST_PATH, this value should be set the to path to the
# domain storage on the Kubernetes host.
# When weblogicDomainStorageType is set to NFS, then weblogicDomainStorageNFSServer should be set
# to the IP address or name of the DNS server, and this value should be set to the exported path
# on that server.
# Note that the path where the domain is mounted in the WebLogic containers is not affected by this
# setting, that is determined when you create your domain.
# The following line must be uncomment and customized:
weblogicDomainStoragePath: /u01

# Reclaim policy of the persistent storage
# The valid values are: 'Retain', 'Delete', and 'Recycle'
weblogicDomainStorageReclaimPolicy: Retain

# Total storage allocated to the persistent storage.
weblogicDomainStorageSize: 10Gi

Now in that directory there is a create script to execute:
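The run itself was only in a screenshot; it follows the same pattern shown earlier in this article (the output directory name is my choice):

./create-pv-pvc.sh -i create-pv-pvc-inputs.yaml -o pv-output -e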

 

 

Now both generated YAML files can be applied to the K8s WebLogic namespace:
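The apply step was also only in a screenshot; it looks something like this, where the generated file names and sub-directory are assumptions based on the baseName in the inputs:

kubectl apply -f pv-output/pv-pvcs/weblogic-storage-pv.yaml
kubectl apply -f pv-output/pv-pvcs/weblogic-storage-pvc.yaml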

And in our namespace the storage is created:

 

 

Generating yaml files and create the domain

 

Oracle provides several solutions on GitHub to create domains. The domain model I chose was the one with a persistent volume to store logs and other artifacts. The storage class I used was the oci storage class.

The generation took care of the following:

 

 

  • Creation of a Job for the entire domain creation
  • Creation of a ConfigMap in K8s for parameterizing the WebLogic domain (based on the YAML inputs)
  • Generation of a domain YAML file out of the input and template YAMLs
  • Starting up scheduled pods for creating the entire domain
  • Final creation of the WebLogic Admin Server and Managed Server pods, started using the included scripts

And the pod which creates the domain

 

 

 

When this job has finished the final empty WebLogic domain will be created.

 

The road to transformation

 

Now the question is whether this is already production-worthy. My opinion is no, because these setups are based on what's on GitHub at the moment, so I'd recommend starting lightweight and setting some of these models up to try them out. To set up an enterprise-ready WebLogic Kubernetes platform, aspects such as automation, load balancers, networking, and so on also need to be sorted out, in a way that lets WebLogic act in a containerized world.

 

In one of my next articles I will look at migrating an existing WebLogic domain to Kubernetes.

Companies are on the verge of making important decisions regarding containerization of their IT landscape, and whether, how and when they should move to the cloud.

 

This whitepaper helps companies make the right decisions regarding the Container Orchestration Platform, and how it works together with the Oracle Cloud Infrastructure. It contains a lot of tips, takeaways and things to consider. So you can make the optimal choice for your company’s strategy.

 

Download the whitepaper here: https://bit.ly/2BFvKUK

When you look at AWS, it offers the possibility to create an Elasticsearch cluster rather easily. With a few simple clicks you have a cluster up and running.

Unfortunately, Oracle Cloud Infrastructure doesn't have this feature yet, but that doesn't mean you can't set up Elasticsearch on OCI. Using the magic Terraform wand, magic is about to happen...

 

Get the Terraform Elasticsearch OCI plugin

 

On GitHub there is an OCI ELK plugin available at https://github.com/cloud-partners/oci-elasticsearch, so perform an easy:

 

git clone https://github.com/cloud-partners/oci-elasticsearch.git

 

for a local repository download, which leaves you with a bunch of files:

 

In the env-vars file you will have to set your region, compartment and user ID, and also your fingerprint and SSH key pair, which was generated with:

ssh-keygen -b 2048 -t rsa

I left the key pair in the .ssh directory, so the env-vars would look something like this:
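The env-vars screenshot isn't reproduced here; conceptually it exports the usual OCI provider variables, roughly like this (exact variable names may differ slightly in the module):

export TF_VAR_tenancy_ocid="ocid1.tenancy.oc1..<your tenancy>"
export TF_VAR_compartment_ocid="ocid1.compartment.oc1..<your compartment>"
export TF_VAR_user_ocid="ocid1.user.oc1..<your user>"
export TF_VAR_fingerprint="<fingerprint of your API key>"
export TF_VAR_private_key_path="$HOME/.oci/oci_api_key.pem"
export TF_VAR_region="eu-frankfurt-1"
export TF_VAR_ssh_public_key=$(cat $HOME/.ssh/id_rsa.pub)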

 

Some other files required my attention:

  • variables.tf, where I can specify the VCN, shapes and VM image location; I chose shape VM.Standard1.8

Other files specify the load balancer, compute nodes, storage, etc.

 

Next, source the env-vars script; another way is to put it into your .bash_profile:

 

. ./env-vars
terraform plan

 

Immediately it returned some errors like:

error: oci_core instance.Bastion Host: "image": required field is not set 

 

To debug this error, I set the terraform loglevel to trace

export TF_LOG=TRACE

 

Still, this did not bring me a lot of new information, until I ran terraform version, which showed that my current Terraform version was outdated for this provider.

To upgrade, the binary had to be downloaded and replaced:

https://releases.hashicorp.com/terraform/0.11.10/terraform_0.11.10_linux_amd64.zip

After download

sudo cp -p terraform /usr/local/bin/

 

and in the directory of the oci elk provider:

terraform init

which upgraded the provider.

Finally, the version had to be set explicitly in the provider.tf:

 

When ready, terraform plan and terraform apply went well and gave me a fully running ELK cluster on OCI.

 

In part 2, I will dive deeper into how to set up Elasticsearch, Kibana and Logstash.

Kubernetes is becoming the de facto standard when it comes to managing and scaling your container platform. You might consider containers the next-gen infrastructure platform, as a follow-up on virtual machines, where every application process or infrastructure component can run in a Docker container: autonomous, lightweight and independent, as an application or a piece of runtime platform software (such as a Java JDK).

However, in the greater whole, Docker containers don't stand on their own and they need some management; they need to be orchestrated and configured in a meaningful way. One of those platforms is Kubernetes, developed by Google, and since its development more and more technologies have embraced Kubernetes as the orchestration platform for containers.

 

Oracle these days is aiming to get customers into the cloud, so they developed a Kubernetes cloud solution called OKE, which stands for Oracle Kubernetes Engine and is available on Oracle Cloud Infrastructure.

 


 

In here you can configure your way through to set up a Kubernetes engine.

 


 

The good news is that there is an automated way to do this, using Terraform.

 

Terraform is a solution which fits perfectly in a DevOps methodology, where Infrastructure as Code and automation are keywords that support the DevOps way of working. Every configuration aspect to set up networks, load balancers, VMs, containers, etc. can be rolled out using Terraform, especially on cloud infrastructures.

Oracle has supported Terraform for its cloud infrastructure since April 2017.

 

Schematically, the Terraform provider works as shown below.

Now, the steps to roll this out are pretty simple; however, there are some aspects to consider:

 

  • You need an OCI user ID and to obtain an API key by generating a public/private key pair

To set this up, use a local Linux server:

  • Generate the key pair and convert it to PEM format
  • Extract the fingerprint
  • Add an API key to your OCI user ID and paste the public key contents into it (a sketch of the commands follows below)

 

Generate and extract the fingerprint
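The screenshot isn't reproduced here; the commands are the standard ones from the OCI documentation (paths are the usual defaults):

mkdir -p ~/.oci
openssl genrsa -out ~/.oci/oci_api_key.pem 2048
chmod 600 ~/.oci/oci_api_key.pem
openssl rsa -pubout -in ~/.oci/oci_api_key.pem -out ~/.oci/oci_api_key_public.pem
# fingerprint to register in the OCI console together with the public key
openssl rsa -pubout -outform DER -in ~/.oci/oci_api_key.pem | openssl md5 -c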

 

Configure Terraform

 

Luckily for me, there is an OKE Terraform installer present on GitHub (see https://github.com/oracle/terraform-kubernetes-installer), so I just had to clone this repository locally.

Next, run some preliminary commands:

cd terraform-kubernetes-installer
terraform init

To roll out OKE, the TFVars file needed the OCI configuration:

 

tenancy_ocid = "<the cloud tenancy OCID>"
compartment_ocid = "<the compartment OCID>"   # you need to create this in the OCI console
fingerprint = "<extracted from the public/private key pair>"
private_key_path = "/home/oracle/.oci/oci_api_key.pem"
user_ocid = "<the OCI user OCID>"
region = "<the region of your OCI tenancy, like eu-frankfurt-1>"

 

Compartment creation is done in the OCI menu -> Identity -> Compartments.

 

Now, when executing the Terraform configuration, the TF variables need to be exported, so it's best to do this in the .bash_profile.
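As a sketch, the exports in .bash_profile follow the standard TF_VAR_ convention and mirror the values above:

export TF_VAR_tenancy_ocid="the cloud tenancy id"
export TF_VAR_compartment_ocid="the compartment id"
export TF_VAR_user_ocid="the OCI user id"
# ...and so on for fingerprint, private_key_path and region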

Before applying, evaluate the plan

terraform plan

...etc

Finally apply the configuration to OKE

 

terraform apply

 

It took some time, but after a while my Kubernetes master and worker nodes were created in OCI. This proved that setting up with Terraform is a fast and simple way, if you know the non-obvious parts (tenancy OCID, user OCID, etc.).

 

In this post I am exploring the sense of configuring and running an Oracle SOA Suite 12.2 domain on Docker, managed by Kubernetes, to discover whether SOA Suite is a good candidate to run on Docker.

The server where I will install it is running Oracle Linux 6.8; unfortunately, Kubernetes is supported on Oracle Linux 7, so my next post will handle that subject.

First of all here are my install bits and experiences.

 

 

Setting up Docker

 

 

Before installing, I had to add a YUM repo to get the right Docker package:

 

export DOCKERURL="https://yum.dockerproject.org/repo/main/oraclelinux/6"
 rm /etc/yum.repos.d/docker*.repo
yum-config-manager --add-repo "$DOCKERURL/oraclelinux/docker-ce.repo"
yum-config-manager --enable docker-ce-stable-17.05

 

The installation took place on a Linux VM running Oracle Linux 6.8. I used the YUM repository to install the appropriate version of docker:

I was lucky, it was already at the latest version.

Next, I wanted to pull the images from the Oracle Container Registry, so I logged in with Docker:

docker login container-registry.oracle.com

providing my OPN username and password.

 

Next, I considered a new place to store the Docker data, because /var/lib/docker is mounted under "/", which is not a good idea in my opinion:

  • Back up the current dir:
tar -zcC /var/lib docker > /u02/pd0/var_lib_docker-backup-$(date +%s).tar.gz

  • Move it to a new filesystem with sufficient space:

mkdir /u02/pd0/docker
mv /var/lib/docker /u02/pd0/docker
  • Link it to the original location:
ln -s /u02/pd0/docker /var/lib/docker

Quick and a little dirty, but sufficient.

 

First of all, a bridge network must be created, to enable containers to connect with each other, like the SOA containers with their dehydration store:

docker network create -d bridge SOANet

 

Creating database container

Now, on https://container-registry.oracle.com/ there are some instructions on how to set up a SOA Docker environment; however, these instructions are not totally correct.

When creating the SOA Suite database, the parameters in the db.env.list were not correct:

 

ORACLE_SID=<db sid="">
ORACLE_PDB=<pdb id="">
ORACLE_PWD=<password>

 

These weren't picked up; the properties were ignored and a default dummy name like ORCL was used.

The correct prefix should be DB_ instead of ORACLE_:

DB_SID=soadb
DB_PDB=soapdb
DB_PWD=*******

 

After that, I started the Docker database container, and the database came up with the right SID and service name.

docker start soadb
docker ps

Some verification:

- Login with SQL*Plus

- Showed Listener Status

 

Create AdminServer Container

Before creating the AdminServer, I first obtained the image from the registry:

docker pull container-registry.oracle.com/middleware/soasuite:12.2.1.3

 

Here too, specific parameters had to be set, and I again encountered some flaws in the original instructions:

  • Although a password was set for the DB admin, the container didn't pick it up, and I had to set it manually in the database ("alter user sys....")
  • The SID and service names were not correct; for the PDB I had to configure the connection including the domain name, so this was finally the correct setup:
CONNECTION_STRING=soadb:1521/soapdb.localdomain
RCUPREFIX=SOA1
DB_PASSWORD=******
DB_SCHEMA_PASSWORD=*****
ADMIN_PASSWORD=******
MANAGED_SERVER=soa_server1
DOMAIN_TYPE=soa

 

Next, run the creation of the domain and start the AdminServer through WLST:

docker run -i -t  --name soaadminserver --network=SOANet -p 7001:7001 -v /u02/scratch/DockerVolume/SOAVolume/SOA:/u01/oracle/domains   --env-file ./adminserver.env.list oracle/soa:12.2.1.3

 

When this was finished, I could log in to the WebLogic console. The listen address of the AdminServer was empty, so I guess the updListenAdress.py script did not do its work, and I changed it manually.

 

Starting the managed server

The image already configures a managed server in the domain, so the next step is to spin up a container for the SOA managed server:

docker run -i -t  --name soa_server1 --network=SOANet -p 8001:8001   --volumes-from soaadminserver   --env-file ./soaserver.env.list oracle/soa:12.2.1.3 "/u01/oracle/dockertools/startMS.sh"

 

The SOA managed server came up after a while; the status in the console, however, was SHUTDOWN, because the start script does not use the Node Manager. By checking the log, I could follow the startup sequence.

 

docker exec -it soaadminserver bash -c "tail -f  /u01/oracle/user_projects/domains/InfraDomain/logs/ms.log"

After that, I logged in to the container and started the Node Manager using the startNodemanager.sh script, to be able to start managed servers through the console and get health information.
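A sketch of how that could look from the host; the exact location of the start script inside the image and the choice of container are assumptions:

# open a shell in the managed server container and start the Node Manager
# (the /u01/oracle/dockertools path is an assumption based on the startMS.sh location above)
docker exec -it soa_server1 bash -c "/u01/oracle/dockertools/startNodemanager.sh"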

 

Conclusion

Just some thoughts and doubts that came up; please correct me if I'm wrong.

Now the million-dollar question is: is SOA Suite fit for a container platform? It does run, although I haven't tested it extensively yet. In the end, setting it up is rather simple.

Apart from some flaws during setup, you might ask yourself: what are we doing differently here, compared to spinning up servers and/or VMs?

 

Well, first of all, the Docker containers still run on a server, but we skipped the entire server configuration and, based on a pre-baked image, we could bring up an environment rather quickly.

But looking at it from an application perspective, we are still not doing anything containerized. Even in a Docker container, monoliths can exist.

So the complete story is: at the platform level we have containers; at the application level, not yet.

At this year's Developer Tour in Latin America I was selected to speak in Argentina, which would be a great adventure for me. As I had never been in the southern hemisphere, I was really excited and honoured to go. After a very long flight from Amsterdam to Buenos Aires, almost 14 hours, I landed early in the morning in Buenos Aires, where it was wintertime. For me a big switch, as in the Netherlands it was around 35 °C when I left. But who's complaining.

 

Packed with my suitcase and my Oracle Management Cloud bible I entered Buenos Aires... what a city and what contrasts. Beautiful art and architecture, but also a lot of poverty.

I like the urban lifestyle, so I found my way around Buenos Aires and visited some hotspots every tourist must see. If you ever plan to go, wear some good shoes, because the streets are sometimes hard to walk on.

Nevertheless, I could breathe the South American lifestyle while seeing the tango danced live on the streets:

 

IMG_20180807_173001.jpg

 

The conference day

 

The conference took place on my birthday, the 9th of August 2018, at the UADE, one of the many universities of Buenos Aires.

IMG_20180807_114614.jpg

 

A 20-minute walk from my hotel brought me there, and around 9:30 AM the conference was opened in the main auditorium. My session was planned at 11:05 AM, but due to some delay it began a bit later, so I followed some other sessions. Although the majority of the sessions were in Spanish, which isn't my strongest language, I could follow some of it, and was lucky that the slides were in English. As I was on the Analytics track, I followed a session from Edelweiss Kammerman about data visualization with the Oracle Data Warehouse Cloud Service, and the session before mine from Diego Sanchez, also about the Management Cloud, regarding problem detection and analysis.

 

 

Security Analytics with the Oracle Management Cloud

 

As I was on the Analytics track, I emphasized the analytics capabilities of the Oracle Management Cloud, where machine learning, anomaly detection and data visualization are important topics.

Machine learning capabilities are essential for this solution; in OMC the following are used:

 

  • Anomaly Detection
    • See the abnormal symptoms. We're not interested in what's going OK, but in the exceptions.
  • Clustering
    • Reduce billions of data points to a manageable and understandable pattern. This requires high-end technology analysis.
  • Correlation
    • Correlate seemingly different events with each other into a commonly recognized pattern, for instance by linking them on a common attribute such as an order ID, a personal ID and so on.

 

 

The battle against attacks always lags behind

 

Let's face it: SOCs have a hard time defending against all kinds of hostile actions, which can come from the outside world or from the inside through suspected fraud by employees. Often, some of the damage has already been done by the time they come into action.

The Security Monitoring and Analytics module of OMC can help make their life a bit easier by doing the following:

  • Intelligently monitor security events
  • Investigate using Log Analytics
  • Understand and interpret attack chains
  • Automatically remediate to reduce exposure
  • Continually harden systems in response to a threat or weakness

Now, a well-known pattern of attack is the Cyber Kill Chain, where through certain steps hostile parties can infiltrate systems without anyone noticing. And don't think of the stereotype of young guys or girls in their attic trying to hack. No, we are talking about highly sophisticated attacks, initiated by machines and well-organized groups, possibly governments or criminal organizations.

 

2018-08-10 14_31_48-OMC_analytics_machinelearning.pptx [Protected View] - PowerPoint.png

 

A typical SMA Dashboard Identifying attacks

 

 

Also, when you place part of your IT in the cloud, you can easily integrate your access broker or identity management systems into the Oracle Management Cloud.

2018-08-10 14_32_17-OMC_analytics_machinelearning.pptx [Protected View] - PowerPoint.png

 

 

 

The SMA engine works with machine learning models and rules in order to detect any security threat, as I already explained in an earlier blog. But the fact that SMA works closely together with the Log Analytics module makes it a strong and well-integrated solution for any enterprise to use in its everlasting battle against attacks.

 

The closing Speakers, ACE and DevChamp dinner

As is tradition, the event was closed with a nice dinner at a restaurant to try out the Argentinian meat culture, where I met some colleagues whom I had not yet had the chance to meet.

By surprise, Jennifer Nicholson from the ACE program announced a new Java Developer Champion: Hillmer Chona. Congratulations and well done!

 

 

Big thanks

 

Finally, I would like to thank the Argentinian Oracle User Group for the organization, and I hope to see you again next year.

Monitoring your IT landscape is in many cases an underestimated topic, even though IT departments spend time doing it. There are hardly any standards on how to approach it, and in a typical IT operations organization, every team uses its own tools. This can be scripts, monitoring software from different vendors, or some freeware/open source tooling.

Companies that run Oracle often use Oracle Enterprise Manager Cloud Control (EMC), which is an agent-based, centralized repository that gathers diagnostic information from all connected systems: database, middleware and applications. It's a very broad and complete solution to monitor and manage IT systems.

But often this platform is owned by one team, typically a DBA team, because EMC finds its origin in the Oracle Database, and they also use it to monitor their databases. For a company running Oracle SOA Suite, BPM or any other Fusion Middleware product, however, it is not a common habit to use EMC to monitor their FMW applications. The reasons why are:

  • There are no management packs licensed
  • There is no or not enough knowledge how to implement and use these management packs
  • The team who owns the platform does not allow other teams to use the EMC

Management Packs are layers for specific tasks or platforms that extend the management capabilities of EMC.

 

To overcome these ownership issues, the Oracle Management Cloud (OMC) can help. Teams can order their own subscriptions, or do it as a joint company effort to monitor their applications in the cloud. Although OMC is not a replacement for EMC, you might notice some similarities, especially in the Infrastructure Monitoring and Compliance Management modules.

But unlike EMC, OMC is a more coherent solution where the different modules work closely together, more or less out of the box. Application Performance Monitoring, for example, uses Log Analytics to drill down deep into application issues.

 

Now, what if your company uses EMC but wants to make use of some of the features of OMC? Or rather, why would a company want that? Well, a good use case is a company that wants to try out OMC, or wants to use one of its modules such as Log Analytics. But how do you get all the information from your on-premises EMC to OMC?

You can make use of the:

 

 

Data Collector

The OMC comes with a variety of agents:

  • The Gateway Agent - An in-between agent for when your systems are not supposed to be exposed to the outside world. A Gateway Agent can be placed in your DMZ; it gathers all information from all your OMC-connected applications and DBs and pushes it outside the datacenter to the OMC.
  • The Cloud Agent - An agent installed to collect server information and gather log files for Log Analytics.
  • The APM Agent - An agent specifically used for application performance diagnostics. It has to be implemented in the application server infrastructure and can be a Java, Node.js, Apple or Android, or .NET agent.
  • The Data Collector - This agent can be used to collect all the data from your EMC and have it shown in OMC in the Infrastructure Monitoring module.

 

 

Data Collector Implementation

 

To implement the Data Collector, you need to locate your EMC system. This can be one host, or maybe separate hosts if the OMS application and the OMS database run on separate servers. It is sufficient to install it only on the OMS application host (OMS = Oracle Management Service, the engine that runs EMC).

 

First step is to download the Data Collector Agent from your OMC platform:

Transfer the package to your EMC host and unzip it in a directory.

The next task is to modify the agent.rsp file. Modify the following:

 

TENANT_ID=<YOUR OMC TENANT>

UPLOAD_ROOT=https://<youromc>.europe.oraclecloud.com/

AGENT_REGISTRATION_KEY=***********************   (to be found in Administration --> Agents --> Registration Keys)

AGENT_BASE_DIRECTORY=/u01/app/omcagent

DATA_COLLECTOR_USERNAME=omc_collector

DATA_COLLECTOR_USER_PASSWORD=**************

OMR_USERNAME=sys

OMR_USER_PASSWORD=***********

OMR_HOST_USERNAME=oracle

OMR_STAGE_DIR=/u01/app/omcstage

 

OMR is the EMC repository (Oracle Management Repository). The Data Collector schema will be installed in this repository.

Next, just run the AgentInstall.sh script and, when it is finished, start the agent from your omcagent directory:

/u01/app/omc/agent_inst/bin/omcli start agent

/u01/app/omc/agent_inst/bin/omcli status agent

 

 

 

From here, your databases and systems monitored in EMC will be shown in OMC, but only the basic information. If you want to see more, you will have to specify a JSON file and register the databases against OMC. Oracle provides various types of JSON scripts for the various database flavours.
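As an illustration of that last step, registering such a JSON definition typically goes through omcli on the agent host; the file name below is a placeholder, and the property layout should be taken from one of Oracle's sample files:

# register a database entity definition (JSON based on one of Oracle's samples) with the agent
/u01/app/omc/agent_inst/bin/omcli add_entity agent my_database_entity.json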

For some years now I have attended the annual partner forum, and although I had a very busy time at my customers, plus the week after I had to present at the Analytics and Data Summit, I still decided to attend this forum. After leaving my "Oracle Red Stack" car behind at the airport, I flew to Budapest on Monday morning.

DYEWX8dWkAAxFyf.jpg

Arriving at my final destination, the Boscolo Autograph hotel, I was amazed by this very nice hotel, which happened to be very comfortable and luxurious, with very nice architecture on the inside. My compliments to Jurgen and his team for booking this hotel for the conference.

Day 1: ACE Sessions

This afternoon is reserved for ACEs and ACE Directors from partners who would like to speak about a customer success story, or who have made a great contribution to some Oracle product. One of my favorite sessions was the one from the https://twitter.com/JarvisPizzeria team, who are very active with their blogs about the Oracle PCS. They deserved their community award!

I was surprised by the many countries from which partners were attending this forum. Luis Weir, together with Phil Wilkins, spoke passionately about their favourite subject, Oracle API CS, integrated in a full solution with an Oracle JET application, an integration layer with OIC, and a SaaS layer consisting of solutions such as Oracle Taleo.

The other sessions were also very interesting to hear; it is always a pleasure to listen to Simon Haslam. He spoke about provisioning the Oracle Cloud in a secure way, which was very useful to hear; I recognized what he said about using the Oracle Cloud GUI: provisioning one instance is not such an issue, but doing 20 ends up in a lot of filling in and typing.

 

Furthermore, I was very pleased that the Oracle Management Cloud has been adopted in the PaaS portfolio and was an important topic this year. The last ACE session was about a customer case describing how the analysis capabilities were used to search for root causes in badly performing applications.

Unfortunately I was running out of time, otherwise I would definitely have spoken about my experiences with the Oracle Management Cloud.

Finally Jurgen handed out his Community Awards, congratulations to all the winners!

 

 

Day 2: General Sessions

 

This day was filled with keynotes and presentations from various members of Oracle's product management. The opening keynote was held by Jean-Marc Gottero, giving a high-level overview of the position of the Oracle Cloud in EMEA and the role of partners.

 

The entire day was filled with presentations, so here are some highlights that popped up out of my memory.

 

Ed Zou made an important announcement in line with the Autonomous Database: Autonomous PaaS Services. This was very fresh news, so there is not much to elaborate on yet, but again I was very pleased to hear that the Oracle Management Cloud will play a very important role in this solution. The exact release date of these services is not known, nor is it known exactly what will be in them. He also talked about other new areas Oracle will cover, such as blockchain, intelligent bots, AI and machine learning.

 

Pyounguk Cho presented (in his enthusiastic style) the various cloud platforms from an AppDev perspective, such as JCS, ACCS and Stack Manager; in fact, about Enterprise Java these days and its position in the cloud. Highlights for development, infrastructure, data management, security and monitoring were important topics.

JCS now comes with a concept called quick-start instances, comparable with the quick-start package for WebLogic that developers can download to set up a WebLogic domain fast and easily.

Other new features like snapshot and cloning and the integration of the Identity Cloud in JCS passed the stage.

Another topic was the Application Container Cloud, a Docker-based polyglot platform, which showed us compatibility with non-Java runtimes such as Python, Node.js and PHP, and the ease of building and deploying these native applications using their corresponding Docker images. These different applications can run within one single tenant and are exposed to end users through Oracle's cloud load balancer. The elasticity feature is also very nice; customers can use it to scale up or down according to their needs.

Oracle's messaging and streaming platform, the Event Hub Cloud Service, based on Apache Kafka, showed us the need for a replacement of traditional message brokers and data integrators.

Finally, the Stack Manager was discussed, a platform for managing all these different cloud solutions in one place, where customers can group and manage their services as one atomic stack.

 

Robert Wunderlich and Jakub Nesetril spoke about API management. A funny detail I found was that Robert mentioned the Rabobank and their current path to API management and Open Banking. This is especially important for partners, to know their position and how they can fit in when bringing these solutions to customers.

They announced some upcoming partnerships with companies active in the API management market.

Better authentication integration was one of the topics, explaining how OAuth can be better configured in the API platform, plus integration with other technology partners and their solutions.

 

Later that day I went off for a video interview about, as you might guess, my favourite topic: the Management Cloud. This will be published soon, along with the other interviews held at the forum.

 

The evening program was very well organized, with a nice dinner and networking event held at the "Kiosk" in Budapest.

 

Day 3:

 

Unfortunately I could not attend the entire day because of my flight schedule, though I could still attend a few sessions on my favourite topics.

 

It was nice seeing Maciej Gruszka again, as he spoke about developing microservices and serverless applications and new patterns of development, compared against the monolithic applications with a significant footprint managed by application servers. Patterns such as microservices will become more common among architects as the viable architecture. More vendors these days are designing applications called functions or serverless applications and have implemented them on their clouds. Maciej showed containerized environments with Kubernetes as the scheduling infrastructure for Docker-based applications running in the Oracle Cloud, and the Managed Kubernetes service as the core engine for running microservices frameworks and serverless applications.

 

 

 

Pyounguk Cho continued where he left off the day before, showing the broad options of JCS. Another good session was the one from Jurgen Fleisher, from product development of the Oracle Management Cloud. The presentation was not entirely new to me, but hearing it from another perspective still gave me new insights and inspiration. He gave a good explanation of what machine learning means nowadays, and the role the Oracle Management Cloud can play in any IT organization when it comes to monitoring, analysis, DevOps and autonomous platforms.

 

Key takeaways

 

Because I'm deep into Oracle technology, I already knew some of the content. However, some new topics also passed by, and I think it's very important to put them in the right place relative to what I already heard. Furthermore, as an Oracle partner it's essential to really partner with Oracle and to exchange and absorb knowledge and experience.

For me, attending this event is a must.

 

Finally, lots of thanks to Jurgen and everyone in his team responsible for this excellently organized forum.

I've become a huge fan of the Oracle Management Cloud. Why? Because Oracle has broadened its limits: OMC doesn't just monitor Oracle-based systems and applications, it has plugins for many non-Oracle technologies, which makes OMC very flexible and enterprise-worthy as a complete monitoring solution.

 

Security Monitoring and Analytics (SMA)

Oracle also realized that customers have great concerns about security in general, but even more in the cloud, so they've put up a cloud service with really powerful capabilities.

One of these powerful modules inside the Management Cloud is Security Monitoring & Analytics, or SMA. With this module any SIEM or SOC can detect, identify and monitor the following:

  • Security threats from inside and outside the company
  • Fraud detection
  • Compliance violations

Inside SMA

When you are in SMA, it pretty much looks like the other OMC components, but its focus is on security. When you log in to OMC, you can click on the SMA module if you have the proper cloud subscriptions. Entering the first dashboard, you immediately see an overview of your users' activity and their possibly risky actions.

 

Here you will start on the main SMA landing page showing the "Users" dashboard, but you can configure dashboards for yourself if you want. On this page you see:

1. Users – shows the total number of risky users

2. Threats - shows total, critical, high, medium and low risk threats

3. Assets – shows the total number of risky assets

Clicking on the threats, you can get more details on a person's actions, which come out of the analysis of the identity management logs or via user data upload. You can see the company, the manager, and specific user details and status such as lockouts, locations, email addresses and so on. To dig deeper, you can identify a kill chain. A kill chain is a series of executions which might lead to some kind of destruction or illegal access/actions.

  • Threats by category – Threats are categorized by the SMA engine into different kill-chain steps, such as:
    • reconnaissance --> research, identification and selection of targets
    • infiltration --> infiltrate these targets
    • lateral movement --> move through the system in search of keys/access points

It's obvious that this user has been the target of a hostile attack executing this kill chain.

  • Top Risky Assets by Threats – Detects whether a certain asset, which can be any system, host or database, is being targeted more than usual.

 

Clicking on the threats, you can clearly see what is happening; the kill chain is clearly exposed. But how can we see this?

 

Based on the kill chain components, we identify:

  • An anomaly (WebAccessAnomaly), detected by an analytics machine learning model that saw the user going to a URL that was not expected based on the peer-group baseline of visited websites. This user visited a site and downloaded malware onto his machine, which could have triggered this attack.
  • An attack detected by the rule "MultipleFailedLogin", which gets triggered when five or more failed login attempts on different accounts are seen.
  • An infiltration attack detected by "TargetedAccountAttack".

Furthermore, some infiltration attacks are captured by the "BruteForceAttackSuccess" rule, which gets triggered by five failed login attempts on the same account followed by a successful login within a one-minute period. The conclusion is that the attacker has gained the user's credentials. But that is still not the end... Again an anomaly is captured, this time by the PrivSQLAnomaly rule on a database: a SQL anomaly detection showing that the attacker is doing unauthorized or anomalous transactions on the associated asset FINDB. SMA's SQLAnomaly detection caught this.

Looking at the kill chain, the last action detected is the lateral movement, flagged by the MultipleUserCreation rule: the attacker created 3 or more users in the Oracle database within a 5-minute period. Immediately you can see that a kill chain (anomaly -> recon -> infiltration -> lateral movement) attack is in progress. The attacker attacked a critical asset (the finance host and FINDB) via this user. With SMA you see not only point threats but the entire kill chain view, which gives faster insight into what is happening.

 

(original source: OMC SMA and Configuration and Compliance - DemoScript)

 

 

Machine Learning

Machine learning in SMA helps identify attacks and threats. If you look at the PrivSQLAnomaly, you see that, based on an analysis of log data, a pattern is recognized which falls outside the normal ranges. In this example you see an action of a certain user which is not within the normal range, given the function of this user. Further investigation shows that this user has visited a hostile website, from which malware was installed on the user's computer. Combining the WebAccessAnomaly with some Log Analytics query results, another user, separate from the one we already had an eye on, also shows up. In this case we can take some preventive actions to prevent another kill chain, such as:

 

  • Force a password reset on all the compromised accounts.
  • Cut off access of the two users from the rest of the network.
  • Trigger malware scans/removal on the user machines.
  • Blacklist the malicious website and add it to your web-filtering solution.

Rules and Models

The mechanisms described above are based on rules and models. Potential security actions have to be detected and reported; within SMA you can define rules for that purpose. These rules apply to the systems or applications that need to be alerted on in case of a security breach.

These rules are used to detect any suspicious action and can be configured at any desired level, for instance: within which time window an event must happen, how many times, and what action has to follow when it is detected.

 

Models

To detect anomalies, machine learning models are used. These models are used along with the log analysis and can be:

  • Peer models - based on an organization or group
  • SQL models - based on analysis of database actions
  • User models - based on analysis of individual users

In combination with the log and data analysis, which comes from log files or uploaded files, more and more suspicious patterns can be identified and recognized, in order to report, alert, and take the necessary actions.

 

 

Based on further analysis, the attacker created multiple users in a short period of time, so the security officer can identify what is going on and what kind of attacks have been carried out on which systems.

 

 

Conclusions

The above is just an example of the broad capabilities Oracle SMA has. I haven't seen any other product yet with these powerful capabilities, and even better, it can be positioned enterprise-wide, and not only for Oracle systems.

I used the OMC demo site and collateral, plus some hands-on labs at Oracle OpenWorld, which really amazed me about this powerful solution!

I have worked with WebLogic for 18 years now, and every year a new roadmap with the new and coming features of Oracle WebLogic is presented during Oracle OpenWorld. While everybody is at this moment already back to business as usual, I'd like to give an overview of the existing and upcoming features discussed last year in San Francisco.

 

 

Everything is "Serverless" - "NoSQL" - "Low Code" - "SOA is dead" - "Micro-everything and death to the Monolith!"

Of course these terms do not fully represent what they appear to at first sight, but still, when you're from the "old school" server/sysadmin world, I can understand it is sometimes dazzling and hard to put them all together. But when you look deeper, you will discover the relationships between these terms and, more specifically, what they mean.

 

WebLogic Server "Current" and "Next"

 

Nowadays, we no longer speak only of WebLogic Server but also of the Java Cloud Service, which is WebLogic as PaaS. In this post I will give my view of the new and coming features.

WebLogic Server will still exist as the key Java application server from Oracle; however, it will be the "next generation" application server where old and new concepts go hand in hand. Especially the move to the cloud, which has already been happening for a few years, will be emphasized more and more by Oracle. However, whether we are speaking of WebLogic Server or the Java Cloud Service, the features are pretty much the same, so when I speak of WebLogic Server, it also covers the Java Cloud Service.

 

Current WebLogic Server versions

Generally speaking, the current most important and most used versions are:

  • 10.3.6 ( 11gR1 including all patchlevels ) which came out in 2009
  • 12.1.3 (12cR1 including all patchlevels) which came out in 2011
  • 12.2 (12cR2 including all patchlevels) which came out in 2015

 

12.2 made an important step towards continuous availability and multitenancy:

 

  • Multidatacenter availability with Oracle Traffic Director and Coherence and automated failover with SiteGuard

 

 

  • Cross Domain Transaction Recovery

  • Federated caching with Coherence in Multidatacenters
  • Zero Downtime Deployments with automated rollout and error rollback

  • Auto-scaling features:
    • Automated elasticity for clusters with managed server life cycle control,
    • Rules-based decisions based on capacity, demand or schedule,
    • WLDF Watches and Notifications changed to Policies and Actions

And under the hood, more new and improved features regarding JDBC, REST, JMS and deployment.

 

WebLogic and Java EE8 Certification

Java EE 8 came out in late 2017 and will be supported in WebLogic this year, 2018. Where in Java EE 7 the focus was on productivity, in EE 8 the focus is more on simplicity. Some of the most important changes:

  • Servlet 4.0: Servlet is one of the most used APIs, now with support for the newest HTTP/2 protocol for better web performance
  • JAX-RS 2.1 for RESTful web services
  • Further "lightweight" web improvements
  • Still full Java EE transactional support (JMS, JDBC, RMI)
  • Better integration with microservices technology

 

What does this mean for WebLogic? The current latest version is still on Java EE 7 and JDK 8. The next major version is planned to come out late 2018; my expectation is that it will be around September. In line with the already existing 13c releases, I expect the versioning will be the same for WebLogic, but more important is that it will fully support Java EE 8 and JDK 9.

 

WebLogic Patchsets

Several patch sets were released in 2017:

  • PS1 – bug fixes and feature completion of Continuous Availability best practices
  • PS2 – bug fixes and feature completion of Docker image updates and  App2Cloud migration tooling
  • PS3 – bug fixes and feature completion of Secured production mode and Zero Downtime patching improvements

 

WebLogic Multitenancy

Although containerized platforms such as Docker support WebLogic, the strategy for WebLogic itself will also move more towards containers instead of a platform.

 

WebLogic/JCS, Docker and Kubernetes

 

WebLogic is already certified with Docker, and sample Dockerfiles are available on GitHub. It supports multiple topologies and can be used both on premises and in the cloud.

Kubernetes orchestration is on the way to be certified.

Supported versions for Docker are WebLogic 12cR1 and 12cR2 with Docker 1.9, running on Oracle Linux 6 or 7.

 

Supported topologies:

  • Non clustered domain in docker on a single host
  • Clustered domain in Docker on a single host
  • Clustered domain in Docker on multiple hosts

 

wlsdocker.png

 

An announcement was made about the orchestration technology for Docker, the Kubernetes platform, which will be supported somewhere during 2018. Samples are already available. Support includes the tools which come with Kubernetes, plus Prometheus and Grafana for graphical monitoring dashboards. The WebLogic team has developed a tool to export WLDF watches, smart rules and policies, so that these metrics can be picked up by Prometheus and represented in a Grafana dashboard. Auto scaling with WLS dynamic clustering will also be supported.

 

Coherence "Next"

Coherence, which became an integrated part of WebLogic, also got some new and improved features, such as:

  • Docker support
  • Coherence RX, an add-on open source API for Coherence
  • Dynamic Active Persistence Quorum Policy, a built-in policy to ensure an adequate number of cluster storage members are available for recovery
  • Federated Cache improvements to support multi-datacenter topologies
  • Improved proxy metrics
  • Zero Downtime Patching, following WLS
  • Incremental snapshots
  • HotCache multi-threading, JMX monitoring and multitenancy support
  • Coherence *JS, JavaScript support
  • And Coherence is available in the Oracle Cloud, where it can be chosen as an extra container in the Java Cloud Service.

 

Conclusion

 

Is it because I'm getting older? The world of IT seems to go faster and faster, which makes exploring new technologies and methods more and more interesting. I sometimes consider writing a new book, but with the frequency of innovations, what's hot today is not tomorrow. This overview doesn't include all the new and improved features, but it gives you an idea of which direction we are going.

 

Have an interesting and very good 2018!!

One of the services delivered by the Oracle Management Cloud is the monitoring of infrastructure. In this case, infrastructure spans from the host to the software platform. In this blog post I will try to explain how you can effectively get behind issues in your Oracle SOA Suite operational environment. OMC can monitor some parts specific to SOA Suite, in fact all the engines which are enclosed in a running SOA Suite environment. These are:

  • The BPEL engine
  • The Mediator engine
  • The Decision Service engine, or Business Rules engine

 

I simulated a test which processed a lot of transactions through the payment process of my company. SOA Suite was handling this payment process through an OSB service in the front end and an enrichment through a SOA composite which did a validation based on some rules.

The company deployed a new release and adapter to get more benefit out of it.

During the tests, I suddenly received an alert from OMC that the error rate on the BPEL engine had increased. When I looked into the performance table of my soainfra entity, I found the following:

You can see the errors/min in this screenshot.

By clicking on the SOA Composite field, I could detect which composite was causing the error: the ValidatePayment composite.

 

Now I had to find out why this composite was causing the error, so I jumped into the other OMC feature, Log Analytics. The best logs to look at in this case are the FMW diagnostics logs from the Oracle Diagnostic Logging framework. So I chose to set the entity to the running SOA server:

The logs were then filtered for the SOA-specific operations. In the right-hand field I chose FMW Diagnostics Log from the pie chart.

To group the log messages, I clustered them by choosing the Cluster visualization option.

Then I could very easily find out that there was a JCA adapter issue; further investigation pointed out that while deploying the SOA composite, some EIS JCA adapter settings were changed.

 

After resolving this, the issues were gone. But it proved the power of the Oracle Management Cloud: in minimal time I discovered what was going on and could solve it!