


As Oracle announced at Oracle OpenWorld 2019 (why does that seem like ages ago?), WebLogic 14 was on its way, and later that year, when this release arrived, it was also announced that this version is certified to run with GraalVM. That makes WebLogic more than just the Java EE application server it used to be, because other application technologies such as JavaScript, Ruby or Python could now, in theory, also run on WebLogic.


This made me curious, and I decided to research and develop a way to roll out a WebLogic platform running with GraalVM on Oracle's managed Kubernetes engine.


Requirements and necessary steps

To run it on Kubernetes, I had to prepare the following:


  • Obtain the GraalVM EE software package
  • Build a GraalVM EE Docker Image and push it to my private container registry
  • Build a WebLogic 14 image using the GraalVM EE image and push it to my private registry
  • Create a WebLogic 14 domain on Kubernetes using the WebLogic Kubernetes Operator





GraalVM EE software


GraalVM is partially open source: that part is the Community Edition (GraalVM CE). For use with WebLogic, however, GraalVM EE is required. At the time of writing this blog the version is 19.3.2, and the package is downloaded as V995577-01.tar.gz.

Because there is no GraalVM EE container image publicly available, I built one myself, using the Docker build tools.


Build a layered container Image structure


Flow of building and pushing images



As can be seen in this flow, several images need to be built to run it all on Kubernetes.


Build the GraalVM Image

The GraalVM image uses the Oracle Linux 8 Slim distro, which caused some issues for me. The aim is to keep your images as lightweight as possible; however, I needed to install a few extra packages:

- coreutils

- zip

- diffutils --> the introspection job of the WebLogic Kubernetes Operator uses the diff command to inspect the generated SerializedSystemIni.dat; if it is not there, the job fails.


Dockerfile Graalvm

FROM oraclelinux:8-slim

# Note: If you are behind a web proxy, set the build variables for the build:
#       E.g.:  docker build --build-arg "https_proxy=..." --build-arg "http_proxy=..." --build-arg "no_proxy=..." ...

ENV GRAALVM_PKG=V995577-01.tar

RUN microdnf remove -y coreutils-single
RUN microdnf update -y oraclelinux-release-el8 \
    && microdnf --enablerepo ol8_codeready_builder install bzip2-devel coreutils diffutils ed gcc gcc-c++ gcc-gfortran gzip file fontconfig less libcurl-devel make openssl openssl-devel readline-devel shadow-utils tar zip\
    vi which xz-devel zlib-devel findutils glibc-static libstdc++ libstdc++-devel libstdc++-static zlib-static \
    && microdnf clean all

RUN fc-cache -f -v

# Assumption: the 'gu' helper script ships alongside this Dockerfile;
# its source path was missing in the original post.
ADD gu /usr/local/bin/gu
COPY V995577-01.tar /tmp
# Note: JAVA_HOME must point to the extracted GraalVM directory; the ENV line
# setting it is assumed and not shown here.
#RUN gunzip /opt/${GRAALVM_PKG} | tar x -C /opt/* \
RUN mv /tmp/V995577-01.tar /opt && tar -xvf /opt/V995577-01.tar \
    # Set alternative links
    && mkdir -p "/usr/java" \
    && ln -sfT "$JAVA_HOME" /usr/java/default \
    && ln -sfT "$JAVA_HOME" /usr/java/latest \
    && for bin in "$JAVA_HOME/bin/"*; do \
    base="$(basename "$bin")"; \
[ ! -e "/usr/bin/$base" ]; \
    alternatives --install "/usr/bin/$base" "$base" "$bin" 20000; \
    done \
    && chmod +x /usr/local/bin/gu

CMD java -version



Tag and push it:
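A sketch of the tag and push, with a hypothetical OCIR path:

docker tag graalvm-ee:19.3.2 fra.ocir.io/<tenancy>/graalvm-ee:19.3.2
docker push fra.ocir.io/<tenancy>/graalvm-ee:19.3.2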


Build the WebLogic 14 GraalVM Image

Next, the WebLogic 14.1.1 image needs to be built using our previous GraalVM image. To make use of the GraalVM image, specify it as the builder:

FROM <your GraalVM image> as builder


Dockerfile (part of it, not all)


#FROM oracle/jdk:11 as builder
FROM <your GraalVM image> as builder

# Labels
# ------
LABEL "provider"="Oracle"                                               \
      "maintainer"="Monica Riccelli <>"       \
      "issues"=""         \
      "port.admin.listen"="7001"

# Common environment variables required for this build (do NOT change)
# --------------------------------------------------------------------
ENV ORACLE_HOME=/u01/oracle \
    USER_MEM_ARGS=""

# Setup filesystem and oracle user
# Adjust file permissions, go to /u01 as user 'oracle' to proceed with WLS installation
# ------------------------------------------------------------
RUN mkdir -p /u01 && \
    chmod a+xr /u01 && \
    adduser -b /u01 -d /u01/oracle -m -s /bin/bash oracle

# Environment variables required for this build (do NOT change)
# -------------------------------------------------------------

# Copy packages
# -------------
COPY $FMW_PKG install.file oraInst.loc /u01/
RUN chown oracle:oracle -R /u01

# Install
# ------------------------------------------------------------
USER oracle

RUN cd /u01 && ${JAVA_HOME}/bin/jar xf /u01/$FMW_PKG && cd - && \
    ${JAVA_HOME}/bin/java -jar /u01/$FMW_JAR -silent -responseFile /u01/install.file -invPtrLoc /u01/oraInst.loc -jreLoc $JAVA_HOME -ignoreSysPrereqs -force -novalidation ORACLE_HOME=$ORACLE_HOME INSTALL_TYPE="WebLogic Server" && \
    rm /u01/$FMW_JAR /u01/$FMW_PKG /u01/install.file && \
    rm -rf /u01/oracle/cfgtoollogs


The WebLogic image, now with GraalVM


After you've done this, you can follow the steps described in New Oracle WebLogic 14.1.1 on Oracle Kubernetes Engine, but be sure to specify your own built-and-pushed WebLogic GraalVM image when creating your domain resource:
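In the domain resource YAML this boils down to the image fields; a sketch with a hypothetical registry path:

  image: "fra.ocir.io/<tenancy>/weblogic-graalvm:14.1.1"
  imagePullSecrets:
    - name: ocir-pull-secret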


After setting up, all is running fine in Kubernetes.



Testing WebLogic / GraalVM

I did not have a representative application available. Of course, some Java applications will run on 14, but it would also be nice to see a JavaScript or a Python application run.


On GitHub I found a test application that ran on Payara, so I borrowed it, built it with Maven and deployed it. Thanks to Jeroen Ninck Blok!




So this is a very simple example, but in the coming time I will try to set up some more complicated test cases.



Amid all the rapid changes in the world of IT regarding how infrastructure and application landscapes are designed, developed, built and operated, a lot still needs to become clear about the future. Will customers still use WebLogic, are smaller-footprint servers like Payara the future, or will everything run in containers? It remains a very interesting development to research.

In my series of blogs around WebLogic, containers and Kubernetes, I'd like to tell you about "old meets new". Well, "old" is perhaps an inappropriate term here; what I actually mean is how WebLogic's relation to traditional infrastructure, such as servers and VMs, holds up from a container-based perspective, on a container orchestration platform such as Kubernetes and Oracle's cloud implementation of it, the Oracle Container Engine for Kubernetes (OKE).




Kubernetes as a platform knows all about its pods, services, policies, persistent volumes and so on, but as the demands of what to containerize grew, that was no longer sufficient. If you have a stateless web app to control, in a lightweight container, Kubernetes handles it well. But an entire database or application server platform in a container is something different. Specific tasks and details regarding all kinds of configuration and operations can never be handled by Kubernetes itself; a VM or server can't do that either.

This is where Operators come in.

Operators:

  • They extend the K8S API
  • They configure & manage more complex instances
  • They bring experience-based operational knowledge to Kubernetes



This schema represents the basic components of how an operator works; an operator uses components such as:

  • The Operator Framework SDK
  • Custom Resource Definitions (CRD) - for adding your own custom objects as if they were native Kubernetes objects
  • ConfigMaps - configuration properties injected into pods
  • Additional built-in tools to manage, build, package and deploy an environment in Kubernetes
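As a concrete example: once the WebLogic operator discussed below is installed, its custom resource can be queried like any built-in object (the CRD name is taken from the operator's docs):

kubectl get crd domains.weblogic.oracle
kubectl explain domain.spec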



Operators have been written for many open source and commercial products, and Oracle has one for Oracle WebLogic, which simplifies WebLogic management in Kubernetes by managing the overall WebLogic environment through Kubernetes APIs, covering:

    • Load Balancer, Network,
    • Ingress Controllers,
    • Security,
    • HA restart, upgrade, scaling
    • Persistent storage

And it ensures WebLogic best practices regarding configuration and administration are followed.



This schema represents some of the WebLogic Kubernetes Operator's management tasks.


Upgrading the WebLogic Operator

The WebLogic Operator source code is available on GitHub, and the operator images are pushed to Docker Hub and Oracle's own container registry.



The version I was running on my cluster was 2.5.0 which I installed with Helm. As you might know, Helm is a package and release management tool for Kubernetes.


First of all, check the status of the currently deployed operator:

helm status weblogic-operator --namespace weblogic-operator-namespace

Check the deployed version in the namespace:

helm list --namespace weblogic-operator-namespace


The newest version, released in May 2020, is 3.0.0-rc1, so let's upgrade. In my custom-values.yaml I point to the newest version from the container registry:
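A minimal sketch of the relevant line in my custom-values.yaml; the exact repository path depends on where you pull from:

image: "oracle/weblogic-kubernetes-operator:3.0.0-rc1"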


Note: this could also be from Docker Hub.


And do the upgrade:

helm upgrade   --reuse-values   --set "domainNamespaces={sample-domains-ns1}"   --set "javaLoggingLevel=FINE"   --wait   weblogic-operator kubernetes/charts/weblogic-operator


Where kubernetes/charts/weblogic-operator is the location of the operator Helm chart containing my custom values.

After pulling the image, the new version was installed:



And to complete it, I tested some lifecycle functions, such as restarting the domain:


kubectl patch domain soaosbdomain -n soa-ns --type='json' -p='[{"op": "replace", "path": "/spec/serverStartPolicy", "value": "NEVER" }]'
kubectl patch domain soaosbdomain -n soa-ns --type='json' -p='[{"op": "replace", "path": "/spec/serverStartPolicy", "value": "IF_NEEDED" }]'

This triggers the domain to stop and start. I admit it's not a great test for a live production situation, but for me it served as a quick check.


I haven't tested any new features yet; that might come at some later stage.




Operators are everywhere in container landscapes these days, but they don't make life less complex. However, they are necessary in an environment as complex as a WebLogic platform can be. I hope this helps you a bit further in your journey.

Last week, Oracle Product Director Will Lyons announced the long-expected release of Oracle WebLogic 14.1.1. All details about this release can be found in the announcement and on my blog, so I won't go into all the nice new features here.


Rather, I now wanted to test whether this version was ready to run on a Kubernetes platform, and in this case the obvious choice was the Oracle Kubernetes Engine (OKE), although Red Hat OpenShift and Microsoft Azure Kubernetes Service would also have been options.



Ingredients for running WebLogic 14.1.1 domain

The following components made up my toolset to build this newest version:


WebLogic 14 docker image

To get my private image pushed to my private registry, I first built it locally.

There is a shell script provided, but let's look into the Dockerfiles.

I chose the generic Dockerfile and the downloaded WebLogic package.

So, with the shell script I built my Docker image; I only had to change the docker command in the script into the podman command:

# ################## #
# ################## #
echo "Building image '$IMAGE_NAME' ..."

# BUILD THE IMAGE (replace all environment variables)
BUILD_START=$(date '+%s')
podman build --force-rm=$NOCACHE --no-cache=$NOCACHE $PROXY_SETTINGS -t $IMAGE_NAME -f Dockerfile.$DISTRIBUTION . || {
  echo "There was an error building the image."
  exit 1
}
BUILD_END=$(date '+%s')



And run the script:

./buildDockerImage.sh -v 14.1.1.0 -g


After the build was done, I used podman to inspect the image; in this example you also see my private OCI registry version, which I will cover next:


Push to OCIR

I wanted this image in my OCI container registry, so I set up a push to OCIR. But first you need to create an auth token in OCI:

- Go to User Settings

- Under Resources you can find where to generate one

- Be sure to save the token, because you won't be able to see it again

- Use the token to log in, and use double quotes!


podman login fra.ocir.io
Username: frce4kd4ndqf/oracleidentitycloudservice/
Login Succeeded!


podman images



Now tag my WebLogic 14 container image for pushing:


podman tag oracle/weblogic:<tag> fra.ocir.io/<tenancy>/weblogic:<tag>
podman push fra.ocir.io/<tenancy>/weblogic:<tag>



WebLogic 14 on Kubernetes

Next, to install WebLogic on Kubernetes, the following actions need to be done:

  1. Configure Helm for installing the WebLogic Kubernetes Operator 2.5, and push the operator image to my OCIR. I used Helm 2 here, although for OKE, Helm 3 is also supported.



cat << EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: helm-user-cluster-admin-role
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: default
  namespace: kube-system
EOF


2. Get the Helm client and configure it with:

helm init

3. Pull the WebLogic Kubernetes Operator image


podman pull oracle/weblogic-kubernetes-operator:2.5.0




4. Tag and push it to OCIR


podman tag oracle/weblogic-kubernetes-operator:2.5.0 fra.ocir.io/<tenancy>/weblogic-kubernetes-operator:2.5.0
podman push fra.ocir.io/<tenancy>/weblogic-kubernetes-operator:2.5.0



5. Add the GitHub repo to Helm



helm repo add weblogic-operator <chart repository URL>




helm repo list







helm install weblogic-operator/weblogic-operator --name weblogic-operator






helm status weblogic-operator


gives you the current status of your deployment.


WebLogic Domain

With the provided scripts from GitHub, a lot of stuff is taken care of; however, beware of some extra actions:

  • Creation of NFS on your OKE nodes; there is no solid instruction in the Oracle GitHub docs
  • Permissions to execute things within your containers
  • Adjusting parameters in the YAMLs to your own needs



If you want to make use of persistent volumes on OKE, you need to create an NFS share; a good instruction can be found online.

However, you also need to set up console connections for your nodes; this is easy to do.

Click on each of your compute instances; under Resources you can find the link.

It gives you the detailed command to connect. However, I did get a login prompt, and for the standard opc user there is no known password.

You can reset the opc user password via the console (see the section Resetting the OPC user password via the console); after this you can log in.

Once logged in, you can set up NFS.


WebLogic Domain provisioning

Now, after all of the above is done, it's time to provision your WebLogic domain. In the cloned repository you can find the scripts in weblogic-kubernetes-operator/kubernetes/samples/scripts.


Domain on Persistent Volume

First I create a persistent volume and claim to store WebLogic artifacts, using the YAML input file:

# The version of this inputs file.  Do not modify.
version: create-weblogic-sample-domain-pv-pvc-inputs-v1

# The base name of the pv and pvc
baseName: weblogic-fourteen

# Unique ID identifying a domain.
# If left empty, the generated pv can be shared by multiple domains
# This ID must not contain an underscore ("_"), and must be lowercase and unique across all domains in a Kubernetes cluster.

# Name of the namespace for the persistent volume claim
namespace: default

# Persistent volume type for the persistent storage.
# The value must be 'HOST_PATH' or 'NFS'.
# If using 'NFS', weblogicDomainStorageNFSServer must be specified.
weblogicDomainStorageType: HOST_PATH

# The server name or ip address of the NFS server to use for the persistent storage.
# The following line must be uncommented and customized if weblogicDomainStorageType is NFS:

# Physical path of the persistent storage.
# When weblogicDomainStorageType is set to HOST_PATH, this value should be set to the path to the
# domain storage on the Kubernetes host.
# When weblogicDomainStorageType is set to NFS, then weblogicDomainStorageNFSServer should be set
# to the IP address or name of the DNS server, and this value should be set to the exported path
# on that server.
# Note that the path where the domain is mounted in the WebLogic containers is not affected by this
# setting, that is determined when you create your domain.
# The following line must be uncommented and customized:
weblogicDomainStoragePath: /scratch

# Reclaim policy of the persistent storage
# The valid values are: 'Retain', 'Delete', and 'Recycle'
weblogicDomainStorageReclaimPolicy: Retain

# Total storage allocated to the persistent storage.
weblogicDomainStorageSize: 10Gi


./create-pv-pvc.sh -i create-pv-pvc-inputs.yaml -o pvc -e

This creates the PV and PVC YAML files, plus the objects in your K8S cluster.


Now create the jobs which will be used by the operator to create the AdminServer and Managed Server pods.

In the create-weblogic-domain/domain-home-on-pv directory you find the necessary scripts, which you can amend to your own needs.

I changed the name of the domain and the location of my WebLogic image:

First you need to create two secrets (a sketch of both commands follows after the domain-creation command below):

- one for pulling the image

- one for the WebLogic credentials; a script for this can be found in the operator samples

Then create the domain job:

./create-domain.sh -i create-domain-inputs.yaml -o weblogic-fourteen -e
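For reference, creating the two secrets could look like this; a sketch with hypothetical names, where create-weblogic-credentials.sh comes from the operator samples:

kubectl create secret docker-registry ocir-pull-secret \
  --docker-server=fra.ocir.io \
  --docker-username='<tenancy>/<username>' \
  --docker-password='<auth token>'

./create-weblogic-credentials.sh -u weblogic -p <password> -d <domainUID> -n default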


After this the pods were created; however, they remained in status Init:Error.

kubectl logs <pod> didn't give much information; a way to find out why the pods were failing was this:


kubectl get pods
kubectl describe po < name of the pod>


Look at the part with the container ID that is in init status:

kubectl logs <pod-name> -c <init-containerid>


There I found that the NFS share was not writable for the scripts that build up the domain. To resolve this, I changed the values in create-domain-job-template.yaml.

The spec/initContainers runAsUser value was 0, which I changed to the opc user ID (1000), as sketched below.
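A sketch of the relevant fragment in create-domain-job-template.yaml (initContainer name as in the sample template; treat it as illustrative):

spec:
  initContainers:
    - name: fix-pvc-owner
      securityContext:
        runAsUser: 1000   # was 0; changed to the opc user ID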

After this, my WebLogic domain was created successfully!



This proved that WebLogic 14.1.1 runs just as well as the 12c versions. It also proved that Podman can be used instead of the Docker CLI; the commands are more or less the same.

Now this is an empty domain, so a few leftovers remain for me:

- Deploying applications

- Using Istio with JMS

- Use GraalVM instead of Oracle Java

- Deploy polyglot applications

- Configure ELK

- Configure Prometheus and Grafana

In this surrealistic time of the COVID-19 virus, a lot of conferences where I was going to speak this year were cancelled, so I decided to write my presentation in the form of an article. I'm not sure whether I'll be able to tell this story somewhere in the world this year, so here it is!


Delivering high-quality software in a 24*7 environment is a challenge for a lot of teams these days. Teams at multinationals need to deliver more functionality, more quickly. Some of these teams are able to deliver new software every hour. How is it possible to meet these demands?


You must have heard the term DevOps many times these days. Maybe you already work like this in your team. Either way, methods of working are changing rapidly, and the DevOps way of working is what a lot of teams aim for, either to get started or to improve their own way of working and meet the high demands of their clients, business teams, or goals set by their company.

Other ways of Development & Operations

Besides the methods of working, teams are also facing new technologies and innovations of how to develop, build, deploy, operate and monitor applications. A lot of companies these days shift from on-premises to “Cloud Native” applications. This means, for instance, that a build pipeline of an application might look a little different from traditional applications. Also take into consideration that the application landscape is being redesigned into a (partially) microservices or serverless landscape, supported by container-native platforms and private or commercial cloud vendors.

DevOps challenges – the “Ops” in DevOps

I won’t go through all aspects of DevOps, but one challenge DevOps teams face is the Ops part of it. These are business critical applications, that don’t allow any downtime or performance loss. A lot of teams are not well focused on the Ops part. This leads to error rates of applications being high: 32% of production apps have problems and even worse: that are often only discovered when they are reported by customers.


There are several reasons for this to happen:

  • Lack of continuity and/or automation
  • Lack of visibility



To become more successful in DevOps, Forrester developed a model for DevOps teams, to help them achieve their goals: the so-called CALMSS model.




This whitepaper will focus on the technology part, which supports certain highlighted aspects of the CALMSS model, especially the "Oracle way" of interpreting these.

Over the last few years, Oracle has done a major job of adopting and keeping up with cloud native technologies and embedding them in their current cloud offerings, which will be discussed in the coming part.


Solutions for Cloud Native Deployments

Through the years, the industry has developed some solutions for automating deployments as much as possible, in line with the DevOps way of working.



Oracle Container Pipeline – A Cloud Native Container Pipeline

To meet a high level of automation, it is essential to automate your software delivery from development to operations. Especially for the cloud, Oracle has some technologies to support this. The ingredients you may need are the following: versioning & container registry, containers & orchestration engine, provisioning, container pipelines, and packaging & deployments. Having these ingredients will enable your team to reach a higher level of delivery, because less manual work needs to be done once it's implemented. Of course, the implementation itself takes time and investment, but it becomes beneficial later, when you are up to speed with it.





Versioning & Container Registry

For setting up a continuous pipeline, in this case a container pipeline, you need a mechanism to version your source code and register your application containers in a registry. There are various source code repositories:

  • Git (commonly used with GitHub)
  • SubVersion
  • Bitbucket

The most common is GitHub. There are even movements to let GitHub be the “source of truth” and implement a GitOps methodology. With this, you declaratively describe the entire desired state of your system in Git. Everything is controlled and automated with YAMLs and pull requests.

Container registries are used to store container images created during the application development process. Container images placed in the registry can be used during application development. Companies often use a public registry, but having a private registry is recommended. Within the Oracle Cloud, you can set up your own private registry, with benefits like high availability and maintenance.






Set up your private container registry at Oracle Cloud Infrastructure

To be able to push and pull your containers to a private OCI registry, the following steps need to be applied:

  • Set up an auth token in your Cloud Console



  • From your OCI client, log in to your OCI registry with the Docker interface (fra is the Frankfurt region):
docker login fra.ocir.io

Credentials need to be filled in; you can find your user details in your cloud user settings:


Then you can take an image, either pulled or locally built, and push it to your private registry.

Tag the image for pushing:

docker tag <docker hub name>/helloworld:latest fra.ocir.io/<tenancy>/helloworld:latest

Region, tenancy, etc. should be adjusted to your own situation.

And push:

docker push fra.ocir.io/<tenancy>/helloworld:latest


You can verify the push afterwards.




Containers & Orchestration Engine

From bare metal to virtualization to containerization: containers have gained significant popularity over the last few years. There are many container technologies to choose from, but there is a general lack of knowledge about the subtle differences between these technologies, and about when to use what. Docker has become the most popular and more or less the de facto standard for container engines and runtimes, though there are more, such as:

  • CoreOS - Rocket
  • Linux Containers (LXC). Docker was built on top of it.
  • Kata containers

The technology for containers had been around for a long time, since 1979, hidden in UNIX and later in Linux as chroot, where the root directory of a process and its children is isolated to a new location in the filesystem. This was the beginning of process isolation: segregating file access per process.

So, some basic characteristics of containers are:

  • Container: configurable unit for a small set of services/applications; lightweight images
  • Shares the OS kernel, no hypervisor (except Kata Containers, which have their own kernel)
  • Isolated resources (CPU, memory, storage)
  • Application & infrastructure software as a whole


In a container-based landscape there can be a large number of containers, which raises the question of how to manage and structure them. An orchestration platform can be the solution, and Kubernetes is such a platform. It manages storage, compute resources and networking, with "Infrastructure as Code" as the way to do lifecycle management. The orchestration platforms around these days are:

  • Docker Swarm
  • Kubernetes, on premises or as cloud engines by Microsoft (AKS), Google (GKE), IBM/Red Hat, Amazon and Oracle (OKE)
  • Red Hat OpenShift

Kubernetes can now be considered the standard platform and adopted technology for orchestrating containers. Once initiated as an internal project at Google, Kubernetes is now a framework for building distributed platforms. It manages and orchestrates container processes, and takes care of the lifecycle management of runtime containers.




Some of the basic concepts of Kubernetes are:

  • Master: controls Kubernetes nodes
  • Node: performs tasks requested and assigned by the master
  • Pod: scheduling entity for a group of one or more containers
  • Replication controller: controls identical pods across a cluster
  • Service: work definitions and connections between containers and pods
  • kubelet: reads container manifests to watch containers' lifecycles
  • kubectl: command line configuration tool for Kubernetes
  • etcd: key-value store holding all cluster configs





It’s important to provision your “Infrastructure as Code”. Now, every Cloud vendor has a portal where you can set up a Kubernetes Cluster in minute, but when you want to do it repeatedly and more automated, it’s recommended to use the Terraform Provider. Terraform is a tool to provision Cloud environments. These scripts can be easily integrated in your container pipeline, as seen in this whitepaper. For a detailed setup of Terraform, look at: http://







Packaging & Deployments

In the open source community, it's all about adoption: if there is a fine technology or a good initiative, it gets embraced and eventually becomes some sort of standard. Speaking of packaging and deployment, Helm has become a widely used tool for this. It's a release and package management tool for Kubernetes, and can be integrated with CI build tools (Maven, Jenkins, and Wercker).

This is a simple setup of the Helm components



Helm workflow according to V3





Container Pipelines

Setting up a cloud native container pipeline can be done with different technologies. There are a lot of open source technologies to facilitate this. A few well-known ones are:

  • JenkinsX and Tekton: you could run the default Jenkins in the cloud, but JenkinsX and Tekton are cloud native
  • Knative, originated by Google
    1. A K8S-native OSS framework for creating (CI/CD) systems
  • Spinnaker: an open source, multi-cloud continuous delivery platform initiated in 2014 by Netflix
  • OpenShift Pipelines
  • Azure DevOps Pipelines
  • Oracle Container Pipelines (Wercker): a Docker- and Kubernetes-native CI/CD platform for Kubernetes and microservices deployment
    1. Formerly Wercker, acquired by Oracle
    2. Partially open source (the CLI)


Melting this all together: Oracle Container Pipelines

Oracle Container Pipelines is fully web based and integrates with tools like GitHub, GitLab or Bitbucket. It contains all the build workflows, and holds dependencies, environment settings and permissions.

Before Oracle acquired it, it was called Wercker: a container-native, open source CI/CD automation platform for Kubernetes and microservice deployments, where every artifact can be a packaged Docker container.

At its base, it is a CI/CD tool designed specifically for container development. It's pretty much codeless, meant to work with containers, and can be used with any stack in any environment. Its central concept is pipelines, which operate on code and artifacts to generate a container. A pipeline generates an artifact, which can be packaged into a container. The aim is to work directly with containers through the entire process, from development to operations. This means the code is containerized from the beginning, with a development environment that's almost the same as the production one.



Oracle Container Pipelines works with the following concepts:

  • Organizations
    • This is the team, group or department grouped together to work on a certain project, as a unit in Wercker. It hosts applications, and users can be part of one or more Organizations.
  • Applications
    • This contains the build workflows and consist of dependencies, environment configuration and permissions. Applications are linked to a versioning system, usually to a project on Github, Gitlab, or Bitbucket, or your own SCN.
  • Steps
    • Stages in the pipeline with an Isolated script or compiled binary for accomplishing specific automation tasks.
    • Such as install, build, configure, test. You can add an npm-install, maven build or a python script to test your build.
    • Or a Docker action (Push, Pull etc.).
  •   Pipelines (pipeline consists of steps)
    • A series of steps that are triggered on a Git push, or the completion of another pipeline. This is more or less a GitOps approach
    •   Workflows is a set of chained branched pipelines to form multi-stage, multi-branch complex CI/CD flows. These are configured in the Web UI and depend on the wercker.yaml where the pipelines are configured, but they are not part of that yaml. Variations are based on branch.
      Workflows can be executed in parallel.
  • werker.yaml: the central file that must be on the root of your Git repository, and defines the build of your application using the steps and pipelines you configured in Wercker



Example wercker.yml
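For illustration, a minimal wercker.yml could look like this; the box, step names and deploy command are hypothetical:

box: maven:3-jdk-11
build:
  steps:
    - script:
        name: maven build
        code: mvn clean package
deploy:
  steps:
    - script:
        name: deploy to kubernetes
        code: kubectl apply -f k8s/deployment.yaml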


Configure Wercker for OKE

To be able to deploy applications to your managed Kubernetes engine, you need to set some configuration items on the configuration tab of your Wercker application, in order to access your OKE:





Blend in Helm and Terraform in your Wercker Workflow

Building an entire Oracle Kubernetes Engine and deploying your applications can be achieved by creating pipelines that consist of all the necessary steps, such as:

  • Use a lightweight image for the build
  • Performing all the steps needed: Terraform commands, Helm commands, specific kubectl commands or scripts
  • Configurations for API keys (in case of Terraform build of OKE), OCI and Kubernetes details





Terraform provision

Set up a temp Terraform box


Provision Kubernetes OKE cluster


Helm Steps

The Helm steps are basically the same as the other steps:

  • Push a built container image to your container registry
  • Set up a temp box for the Helm install
  • Fetch the Helm repo and generate charts to install your application container



Example of the running pipeline


These are more or less the steps to take in order to set up a container-based pipeline where you:

  • Provision Infrastructure with Terraform
  • Do Helm initialization and repository Fetch
  • Install application containers with Helm



Wercker (or Oracle Container Pipelines) is, in my opinion, a good option for your containerized pipelines, with lots of options for different methods and technologies. It requires some work to set up, but components can be integrated at different kinds of levels.

To me, it is currently unclear how this will evolve, especially next to other, more well-known platforms such as Jenkins-X and Tekton. I will closely follow the different solutions!




Besides all the exciting stuff about containers, I still sometimes work with my hands in the mud. Today I encountered some strange behaviour when starting WebLogic, which I want to share with you, because the cause is hard to find.

If you read the error message, you might think this is an easy one; lots of blog posts and solutions have been written about it. However, not everything is what it looks like.


Executing the start script creates a lock file for the AdminServer, located in <DOMAIN_HOME>/servers/AdminServer/tmp and usually called <WebLogic Server Name>.lok, so here AdminServer.lok.

This file is claimed by the Java process WebLogic starts with, and it prevents duplicate startups. If such a process is already running, you may get the above error. The solution is to stop the duplicate process, remove the file and start again.
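A sketch of that check-and-clean sequence, assuming DOMAIN_HOME is set:

# see which process (if any) still holds the lock file
fuser $DOMAIN_HOME/servers/AdminServer/tmp/AdminServer.lok

# stop the stale WebLogic process, then remove the lock and start again
rm $DOMAIN_HOME/servers/AdminServer/tmp/AdminServer.lok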

However: in this case there was no process running... The file was created, but WebLogic failed to start. Removing the file did not help, and every time I tried to start, the file was created again.

So I started to investigate other start scripts, such as the NodeManager's... same results.


Further investigation:


The domain home was located on an NFS share, with separate Admin and Managed Server homes. I suspected it had something to do with NFS.


So I created a test to see if this was the case: a file called TestLock.java:






import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.nio.channels.FileLock;

public class TestLock {

  public static void main(String[] args) {
    try {
      File f = new File(".homelock");
      FileOutputStream fos = new FileOutputStream(f);
      // Acquiring the lock is the step that fails on the broken NFS share
      FileLock lock = fos.getChannel().lock();
      System.out.println("Lock acquired: " + lock);
      lock.release();
      fos.close();
    } catch (IOException e) {
      e.printStackTrace();
    }
  }
}


I compiled it to a runnable class and ran it.

This resulted in an error. Testing on other NFS shares did not give me this error.

Diagnosing with the Linux command dmesg gave a lot of output, but this was the applicable part:
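To filter the relevant lines out of all that output, something like this helps (a sketch):

dmesg | grep -iE 'lockd|statd|nfs'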



After the storage admin resolved the issues with lockd and statd, file locking was available again and WebLogic could start up normally.

In the previous posts I wrote about how to transform a traditional application server such as WebLogic into a containerized platform, based on Docker containers managed by a Kubernetes cluster. The point is that there hasn't been any effort yet to look at how a huge and complex environment such as Oracle SOA Suite could fit into a container-based strategy; it's more or less a lift and shift of the current platform to run as Kubernetes-managed containers.


There are ways to run a product such as Oracle SOA Suite in Kubernetes; here's how I did it.


Oracle SOA Suite on OKE


Other than the standard PaaS service Oracle provides, the SOA Cloud Service, this implementation is based on IaaS, on the Oracle Cloud Infrastructure, where I configured the Kubernetes cluster as described in previous posts. However, this can also be done on an on-premises infrastructure.





The following parts are involved in setting up a SOA Suite domain based on this version:


  • Docker Engine
  • Kubernetes base setup
  • Oracle Database Image
  • Oracle SOA Suite Docker image (either self-built or from the Oracle Container Registry)
  • Github
  • A lot of patience


Set up the SOA Suite repository


A SOA Suite installation requires a repository, which can be Oracle or some other flavour, to dehydrate SOA instance data and to store metadata from composite deployments. I used a separate namespace to set up a database in Kubernetes.


The database I created uses the Oracle database image, with a database YAML I obtained; I had to add ephemeral storage to it, because after the first deployment Kubernetes complained about exhausted ephemeral storage. The steps:

  • Create a namespace for the database
  • Create a secret to be able to pull images from the container registry
kubectl create secret docker-registry ct-registry-pull-secret \
  --docker-server=<registry server> \
  --docker-username=********* \
  --docker-password=*********

  • Apply the database YAML to Kubernetes. You can look into the pod to see the progress of the database creation:
kubectl get pods -n database-namespace
NAME                        READY     STATUS    RESTARTS   AGE
database-7b45749f44-kjr97   1/1       Running   0          6d

kubectl exec -ti database-7b45749f44-kjr97 /bin/bash -n database-namespace


Or use

kubectl logs database-7b45749f44-kjr97 -n database-namespace


So far, so good. The only thing left is to create a service to expose the database:

kubectl expose deployment database --type=LoadBalancer --name=database-svc -n database-namespace





Repository Creation with RCU


To do this, running a temporary pod from the SOA Suite image was sufficient to run RCU from it:

kubectl run rcu --generator=run-pod/v1 --image=<SOA Suite image> --overrides='{ "apiVersion": "v1", "spec": { "imagePullSecrets": [{"name": "regsecret"}] } }' -- sleep infinity


And run rcu from it:
kubectl exec -ti rcu /bin/bash

/u01/oracle/oracle_common/bin/rcu \
  -silent \
  -createRepository \
  -databaseType ORACLE \
  -connectString <database-service>:1521/<service-name> \
  -dbUser sys \
  -dbRole sysdba \
  -useSamePasswordForAllSchemaUsers true \
  -selectDependentsForComponents true \
  -schemaPrefix FMW1 \
  -component SOAINFRA \
  -component UCSUMS \
  -component ESS \
  -component MDS \
  -component IAU \
  -component IAU_APPEND \
  -component IAU_VIEWER \
  -component OPSS  \
  -component WLS  \
  -component STB


Nevertheless, this is not completely silent, as you have to fill in your passwords manually.



Creation of the SOA domain


I used the WebLogic Kubernetes Operator Git repository to create my SOA domain, and changed it to what I needed.

General steps to take:

  • Install the WebLogic Kubernetes Operator
  • Create persistent volumes and claims (PV/PVC)
  • Create a domain:
    • namespace
      • secrets:
        • RCU secrets
        • WebLogic domain secrets

Use the scripts and tools provided by Oracle.

  • Roll out the domain


Install the WebLogic Operator

I used Helm to do this. In the GitHub repository there are charts available at kubernetes/charts/weblogic-operator. Specify in the values.yaml which namespaces need to be managed.

The SOA Domain needs to be managed:

  - "default"
  - "domain-namespace-soa"


Use the latest operator:

# image specifies the docker image containing the operator code.
image: "oracle/weblogic-kubernetes-operator:2.2.1"


And install:

helm install kubernetes/charts/weblogic-operator   --name weblogic-operator --namespace weblogic-operator-namespace   --set "javaLoggingLevel=FINE" --wait


Persistent volumes and claims (PV/PVC)

When running a WebLogic domain in Kubernetes pods, two different models can be chosen:

  • Domain in image, where all artifacts are stored in the container
  • Domain on a persistent volume, where domain artifacts can be stored statefully

In the Git repository there are some ready-to-go scripts for creating PVs:



Now provide your own specifics in the input file, such as:

# The version of this inputs file.  Do not modify.
version: create-weblogic-sample-domain-pv-pvc-inputs-v1
# The base name of the pv and pvc
baseName: soasuite
# Unique ID identifying a domain. 
# If left empty, the generated pv can be shared by multiple domains
# This ID must not contain an underscore ("_"), and must be lowercase and unique across all domains in a Kubernetes cluster.
domainUID: soa-domain1
# Name of the namespace for the persistent volume claim
namespace: domain-namespace-soa
# Persistent volume type for the persistent storage.
# The value must be 'HOST_PATH' or 'NFS'. 
# If using 'NFS', weblogicDomainStorageNFSServer must be specified.
weblogicDomainStorageType: HOST_PATH
# The server name or ip address of the NFS server to use for the persistent storage.
# The following line must be uncommented and customized if weblogicDomainStorageType is NFS:
#weblogicDomainStorageNFSServer: nfsServer
# Physical path of the persistent storage.
# When weblogicDomainStorageType is set to HOST_PATH, this value should be set to the path to the
# domain storage on the Kubernetes host.
# When weblogicDomainStorageType is set to NFS, then weblogicDomainStorageNFSServer should be set
# to the IP address or name of the DNS server, and this value should be set to the exported path
# on that server.
# Note that the path where the domain is mounted in the WebLogic containers is not affected by this
# setting, that is determined when you create your domain.
# The following line must be uncommented and customized:
weblogicDomainStoragePath: /u01/soapv
# Reclaim policy of the persistent storage
# The valid values are: 'Retain', 'Delete', and 'Recycle'
weblogicDomainStorageReclaimPolicy: Retain
# Total storage allocated to the persistent storage.
weblogicDomainStorageSize: 20Gi

and run it:

./create-pv-pvc.sh -i create-pv-pvc-inputs.yaml -o soapv -e


Where -o is just a local path where the generated YAMLs are put on your client, and -e also creates the objects in Kubernetes.


Create WebLogic domain access:

kubectl -n domain-namespace-soa \
        create secret generic domain1-soa-credentials \
        --from-literal=username=weblogic \
        --from-literal=password=<password>

Create SOA repository access:

./create-rcu-credentials.sh -u fmw1_opss -p qualogy123 -a sys -q qualogy123 -d soa-domain-3 -n domain-namespace-soa -s opss-secret
secret "opss-secret" created
secret "opss-secret" labeled


Do this for all the SOA Suite schemas.

Domain rollout

Because domain rollout is a complicated process, it is all enclosed in a pod which runs several jobs to configure the domain.

The only thing you have to do is fill in some inputs in a YAML file; a shell script then creates a job which applies all your values.

Some important ones:

# Port number for admin server
adminPort: 7001
# Name of the Admin Server
adminServerName: admin-server
# Unique ID identifying a domain.
# This ID must not contain an underscore ("_"), and must be lowercase and unique across all domains in a Kubernetes cluster.
domainUID: soa-domain-1   # no underscores!
# Home of the WebLogic domain
# If not specified, the value is derived from the domainUID as /shared/domains/<domainUID>
domainHome: /u01/domains/soa-domain-1
# Determines which WebLogic Servers the operator will start up
# Legal values are "NEVER", "IF_NEEDED", or "ADMIN_ONLY"
serverStartPolicy: IF_NEEDED
# Cluster name
clusterName: soa-cluster-1
# Number of managed servers to generate for the domain
configuredManagedServerCount: 3
# Number of managed servers to initially start for the domain
initialManagedServerReplicas: 2
# Base string used to generate managed server names
managedServerNameBase: soa-ms
# Port number for each managed server
managedServerPort: 8001
# WebLogic Server Docker image.
# The operator requires WebLogic Server with patch 29135930 applied.
# The existing WebLogic Docker image, `store/oracle/weblogic:`, was updated on January 17, 2019,
# and has all the necessary patches applied; a `docker pull` is required if you already have this image.
# Refer to [WebLogic Docker images](../../../../../site/ for details on how
# to obtain or create the image.
# Image pull policy
# Legal values are "IfNotPresent", "Always", or "Never"
imagePullPolicy: IfNotPresent
# Name of the Kubernetes secret to access the Docker Store to pull the WebLogic Server Docker image
# The presence of the secret will be validated when this parameter is enabled.
imagePullSecretName: ct-registry-pull-secret
# Boolean indicating if production mode is enabled for the domain
productionModeEnabled: true
# Name of the Kubernetes secret for the Admin Server's username and password
# The name must be lowercase.
# If not specified, the value is derived from the domainUID as <domainUID>-weblogic-credentials
weblogicCredentialsSecretName: domain1-soa-credentials
# Whether to include server .out to the pod's stdout.
# The default is true.
includeServerOutInPodLog: true
# The in-pod location for domain log, server logs, server out, and node manager log files
# If not specified, the value is derived from the domainUID as /shared/logs/<domainUID>
logHome: /u01/domains/logs/soa-domain-3
# Port for the T3Channel of the NetworkAccessPoint
t3ChannelPort: 30012
# Public address for T3Channel of the NetworkAccessPoint.  This value should be set to the
# kubernetes server address, which you can get by running "kubectl cluster-info".  If this
# value is not set to that address, WLST will not be able to connect from outside the
# Kubernetes cluster.
# Name of the domain namespace
namespace: domain-namespace-soa
#Java Option for WebLogic Server
javaOptions: -Dweblogic.StdoutDebugEnabled=false
# Name of the persistent volume claim
# If not specified, the value is derived from the domainUID as <domainUID>-weblogic-sample-pvc
persistentVolumeClaimName: soa-domain1-soasuite-pvc
# Mount path of the domain persistent volume.
domainPVMountPath: /u01/domains
# Mount path where the create domain scripts are located inside a pod
# The `` script creates a Kubernetes job to run the script (specified in the
# `createDomainScriptName` property) in a Kubernetes pod to create a WebLogic home. Files
# in the `createDomainFilesDir` directory are mounted to this location in the pod, so that
# a Kubernetes pod can use the scripts and supporting files to create a domain home.
createDomainScriptsMountPath: /u01/weblogic
# RCU configuration details

# The schema prefix to use in the database, for example `SOA1`.  You may wish to make this
# the same as the domainUID in order to simplify matching domains to their RCU schemas.
rcuSchemaPrefix: FMW1

# The database URL

# The kubernetes secret containing the database credentials
rcuCredentialsSecret: opss-secret
rcuCredentialsSecret: iau-secret
rcuCredentialsSecret: iauviewer-secret
rcuCredentialsSecret: iauappend-secret
rcuCredentialsSecret: wls-secret
rcuCredentialsSecret: soainfra-secret
rcuCredentialsSecret: mds-secret
rcuCredentialsSecret: wls-secret
rcuCredentialsSecret: wlsruntime-secret
rcuCredentialsSecret: stb-secret


Execute the script

./create-domain.sh -i create-domain-inputs-soa.yaml -o wlssoa -e -v



You can follow it using the logs:

kubectl logs -f soa-domain-2-create-fmw-infra-sample-domain-job-572r6 -n domain-namespace-soa


So this is basically what it takes; next time I will do a deeper dive into how to manage a SOA Suite domain in Kubernetes.


Unfortunately, at the moment the internal configuration does not complete entirely successfully...


This could be the case described in MOS Doc ID 2284797.1.


Update 7 August 2019

Indeed, as I expected, the above issue was due to the choice of the password; it had to follow the structure described in this MOS document. So, no actual Kubernetes issue.


To be continued!!!

Traditionally, when you install Oracle WebLogic, you simply download the necessary WebLogic and FMW Infrastructure jar files, spin up one or more servers, install the software and configure your domain, either automated or manually.

Depending on which cloud strategy you choose, you could either:


  • Use pure IaaS; in essence this means you obtain compute power, storage and network from a cloud provider of your choice: AWS, Azure, Google or maybe Oracle.
  • Use PaaS, where your application server platform and generic middleware are part of the cloud subscription. In this scenario you might choose Oracle's Java Cloud Service or some other PaaS such as the SOA Cloud Service, depending on your needs. Looking at Java-based applications, other vendors also offer Java in the cloud.

        But when you come from the WebLogic application server, the first obvious choice seems to be the Java Cloud Service. This is only one stage of a future roadmap for your application landscape, because the applications you develop can still be monoliths. I will come to that later.


Along with a strategy of "breaking up the monoliths", DevOps, and cloud, containerizing your infrastructure is inevitable for the future state of your application landscape. Oracle Product Development stated during Oracle OpenWorld that, regarding containerization, they will follow the strategy of the Cloud Native Computing Foundation, which means that products like WebLogic will be developed with respect to container technologies such as Docker, CoreOS and Kubernetes.



Install WebLogic on a Kubernetes platform


To install a WebLogic domain on a Kubernetes platform in the cloud, I used the Oracle Kubernetes Engine, which is very easy to set up through the OCI console.

  1. Log in to your overall Cloud Dashboard and select Compute in the left pane; this brings you into the Oracle Cloud Infrastructure Dashboard
  2. Click on Developer and create the OKE cluster. Be sure that Helm and Tiller are ticked


You have to create a compartment in OCI before creating the OKE cluster.

After that, in your root compartment, you have to create a policy to manage OKE resources, so select in the left pane:

Identity --> Policies, and create the following policy:


Now the base platform is ready, and we use a Linux client to access, manage and build further on our Kubernetes platform.

For that we need to obtain the kubeconfig file from OKE and place it on our client:


Locally, we create a hidden directory:


mkdir -p $HOME/.kube


Next, you need to install the OCI command line interface and configure it for use with your cloud tenancy:


oci setup config


This sets some basic config, generates an API key pair and creates the config file. You will have to upload the public key to your console:

Then create the kubeconfig file locally:
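A sketch of that command with the OCI CLI (cluster OCID and region are placeholders):

oci ce cluster create-kubeconfig --cluster-id <cluster OCID> --file $HOME/.kube/config --region <region>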

Then, see if the cluster is accessible


The Kubernetes Operator for WebLogic

Before we install our WebLogic domain into Kubernetes, a so-called Operator is required. An operator is an extended API on top of the basic K8S APIs you get when you set up a K8S cluster. A WebLogic platform has so many specifics that can't be managed by the standard K8S APIs, so the operator takes care of them. Operations such as WebLogic clustering, shared domain artifacts, T3 and RMI channel access, and lots more are handled by this operator.

To obtain the operator, clone it from GitHub to a directory of your choice:

git clone https://github.com/oracle/weblogic-kubernetes-operator.git


Go to the weblogic-kubernetes-operator directory, from where we will install the operator using Helm.


Install the operator using Helm

Before we install the operator, a role binding first needs to be set up for Helm in the K8S cluster:

cat << EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: helm-user-cluster-admin-role
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: default
  namespace: kube-system
EOF


Next, from the cloned Git repository, you can install the operator using Helm:
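A sketch of that Helm 2 style install, from the repository root:

helm install kubernetes/charts/weblogic-operator \
  --name weblogic-operator --namespace weblogic-operator-namespace --wait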


And after a while the operator pod comes online and is running.


Some operational actions can be done with Helm. From the directory where the Helm chart is located, you can inspect the chart's values, and see what is actually implemented in your release:
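In Helm 2 syntax, those two look like this:

helm inspect values kubernetes/charts/weblogic-operator
helm get values weblogic-operator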

The operator pod as seen from your K8S dashboard:




For more about the how and why of operators, I advise you to read the documentation, which is available on GitHub.



Preparing and creating a WebLogic domain

Before preparing, you should know which domain model you would like to choose: the domain in a Docker image, or the domain on a persistent volume.

If you really have to preserve state, or make log files accessible outside your domain, you should use the one on a persistent volume.


Create a Persistent volume

Using an input file, and changing the values for your to-be-created domain, namespace, etc.:



# Copyright 2018, Oracle Corporation and/or its affiliates.  All rights reserved.
# Licensed under the Universal Permissive License v 1.0 as shown at

# The version of this inputs file.  Do not modify.
version: create-weblogic-sample-domain-pv-pvc-inputs-v1

# The base name of the pv and pvc
baseName: weblogic-storage

# Unique ID identifying a domain. 
# If left empty, the generated pv can be shared by multiple domains
# This ID must not contain an underscore ("_"), and must be lowercase and unique across all domains in a Kubernetes cluster.

# Name of the namespace for the persistent volume claim
namespace: wls-domain-namespace-1

# Persistent volume type for the persistent storage.
# The value must be 'HOST_PATH' or 'NFS'. 
# If using 'NFS', weblogicDomainStorageNFSServer must be specified.
weblogicDomainStorageType: HOST_PATH

# The server name or ip address of the NFS server to use for the persistent storage.
# The following line must be uncommented and customized if weblogicDomainStorageType is NFS:
#weblogicDomainStorageNFSServer: nfsServer

# Physical path of the persistent storage.
# When weblogicDomainStorageType is set to HOST_PATH, this value should be set to the path to the
# domain storage on the Kubernetes host.
# When weblogicDomainStorageType is set to NFS, then weblogicDomainStorageNFSServer should be set
# to the IP address or name of the DNS server, and this value should be set to the exported path
# on that server.
# Note that the path where the domain is mounted in the WebLogic containers is not affected by this
# setting, that is determined when you create your domain.
# The following line must be uncommented and customized:
weblogicDomainStoragePath: /u01

# Reclaim policy of the persistent storage
# The valid values are: 'Retain', 'Delete', and 'Recycle'
weblogicDomainStorageReclaimPolicy: Retain

# Total storage allocated to the persistent storage.
weblogicDomainStorageSize: 10Gi

Now in that directory there is a create script to execute:
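A sketch of that invocation, using the create-pv-pvc.sh script from the operator samples:

./create-pv-pvc.sh -i create-pv-pvc-inputs.yaml -o pv-output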



Now both generated YAML files can be applied to the K8S WebLogic namespace:
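Applying them could look like this; the generated file names may differ in your output directory:

kubectl apply -f pv-output/pv-pvcs/weblogic-storage-pv.yaml
kubectl apply -f pv-output/pv-pvcs/weblogic-storage-pvc.yaml -n wls-domain-namespace-1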

And in our namespace the storage is created:



Generating yaml files and create the domain


Oracle provides several solutions on GitHub to create domains. The domain model I chose was the one with a persistent volume to store logs and other artifacts. The storage class I used was the OCI storage class.

The generation took care of the following:



  • Create a job for the entire domain creation
  • Create a ConfigMap in K8S for parameterizing the WebLogic domain (based on the YAML inputs)
  • Generate a domain YAML file out of the input and template YAMLs
  • Start up scheduled pods for creating the entire domain
  • Finally create the WebLogic Admin Server and Managed Server pods and start them using the included scripts

And the pod which creates the domain




When this job has finished, the final, empty WebLogic domain has been created.


The road to transformation


Now the question is whether this is already production worthy. My opinion is no, because these setups are based on what's on GitHub at the moment; so I'd recommend starting lightweight and trying some of these models out first. To set up an enterprise-ready WebLogic Kubernetes platform, aspects such as automation, load balancers, networking, and so on also need to be sorted out, in a way that lets WebLogic act in a containerized world.


In one of my next articles I will look at migrating an existing WebLogic domain to Kubernetes.

Companies are on the verge of making important decisions regarding containerization of their IT landscape, and whether, how and when they should move to the cloud.


This whitepaper helps companies make the right decisions regarding a container orchestration platform, and shows how it works together with the Oracle Cloud Infrastructure. It contains a lot of tips, takeaways and things to consider, so you can make the optimal choice for your company's strategy. See the link for downloading the whitepaper.

When you look at AWS, it offers the possibility to create an Elasticsearch cluster rather easily: with a few simple clicks you have a cluster up and running.

Unfortunately, the Oracle Cloud Infrastructure doesn't have this feature yet, but that doesn't mean you can't set up Elasticsearch on OCI. Using the magic Terraform wand, magic is about to happen...


Get the Terraform Elasticsearch OCI plugin


On GitHub there is an OCI ELK plugin available, so perform an easy

git clone <repository URL>

for a local repository download, which leaves you with a bunch of files:


In env-vars you will have to set your region, compartment and user OCID, and also your fingerprint and SSH key pair, which was generated with

ssh-keygen -b 2048 -t rsa

I left the key pair in the .ssh directory, so the env-vars file would look like this:


Some other files caught my attention:

  • the file where I can specify VCN, shapes and VM image location; I chose shape VM.Standard1.8

Other files specified the load balancer, compute nodes, storage, etc.


Next, source the env-vars script (another way is to put its contents into your .bash_profile):


. ./env-vars
terraform plan


Immediately it returned some errors like:

error: oci_core_instance.BastionHost: "image": required field is not set


To debug this error, I set the Terraform log level to trace.
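Terraform takes its log level from the TF_LOG environment variable:

export TF_LOG=TRACE
terraform plan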



Still, this did not bring me a lot of new information, until I ran terraform version, which showed that the current version of this provider was outdated.

To upgrade the provider, the binary had to be downloaded and replaced.

After the download:

sudo cp -p terraform /usr/local/bin/


and in the directory of the oci elk provider:

terraform init

which upgraded the provider.

Finally, the version had to be set explicitly in the provider configuration:
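In the Terraform versions of that era this could be done with a version constraint inside the provider block; a minimal sketch (the version number is just an example):

provider "oci" {
  version = ">= 2.1.0"
}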


When that was done, terraform plan and terraform apply went well and gave me a fully running ELK cluster on OCI.


In part 2, I will dive deeper into how to set up Elastic Search, Kibana and Logstash.

Kubernetes is becoming the de facto standard when it comes to managing and scaling your container platform. You might consider containers the next generation infrastructure platform, as a follow-up on virtual machines: every application process or infrastructure component can run in a docker container, autonomous, lightweight and independent, whether it is an application or a piece of runtime platform software (such as a Java JDK).

However, in the greater whole docker containers don't stand on their own; they need to be orchestrated and configured in a meaningful way. One of the platforms that does this is Kubernetes, originally developed by Google, and since its inception more and more technologies have embraced Kubernetes as the orchestration platform for containers.


Oracle these days is aiming to get customers into the cloud, and to this end they developed a managed Kubernetes cloud solution called OKE, which stands for Oracle Kubernetes Engine and is available from the Oracle Cloud Infrastructure.




In the OCI console you can click your way through to set up a Kubernetes engine.




The good news is that there is an automated way to do this, using Terraform.


Terraform is a solution which fits perfectly in a DevOps methodology, where Infrastructure as Code and automation are keywords to support the DevOps way of working. Every configuration aspect needed to set up networks, load balancers, VMs, containers, etc. can be rolled out using Terraform, especially on cloud infrastructures.

Oracle has supported Terraform for its cloud infrastructure since April 2017.


Schematically, the Terraform plugin works as below.

Now the steps to roll this out are pretty simple; however, there are some trivial aspects to consider:


  • You need an OCI user id and must obtain an API key by generating a public and private key pair

To set this up, use a local Linux server:

  • Generate the keypair and convert it to PEM format
  • Extract the fingerprint
  • Add an API key to your OCI user id and paste the public key contents in it


Generate and extract the fingerprint
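These are the usual OpenSSL commands for an OCI API signing key (the paths are just examples):

mkdir -p ~/.oci
openssl genrsa -out ~/.oci/oci_api_key.pem 2048
openssl rsa -pubout -in ~/.oci/oci_api_key.pem -out ~/.oci/oci_api_key_public.pem
openssl rsa -pubout -outform DER -in ~/.oci/oci_api_key.pem | openssl md5 -c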


Configure Terraform


Lucky for me, there is an OKE Terraform installer present on GitHub, so I just had to pull this repository to my local machine.

Next, run some pre commands:

cd terraform-kubernetes-installer
terraform init

To roll out OKE, the TFVars env file needed the OCI configuration:


tenancy_ocid = "<the cloud tenancy id>"
compartment_ocid = "<the compartment id>"   # this one you need to create in the OCI console
fingerprint = "<extracted from the public/private keypair>"
private_key_path = "/home/oracle/.oci/oci_api_key.pem"
user_ocid = "<the OCI user id>"
region = "<the region of your OCI> (like eu-frankfurt-1)"


Compartment creation is done in the OCI menu -> Identity -> Compartments.


Now, when executing the Terraform configuration, the TFVars need to be exported, so it's best to do this in the .bash_profile.

Before applying, evaluate the plan

terraform plan


Finally apply the configuration to OKE


terraform apply


It took some time, but after a while my Kubernetes master and worker nodes were created in OCI. This proved that setting up with Terraform is a fast and simple way, if you know the trivial parts (tenancy ocid, user ocid, etc.).


In this post I am exploring the sense of configuring and running an Oracle SOA Suite 12.2 domain on Docker, managed by Kubernetes, to discover whether SOA is a good candidate to run on Docker.

The server where I will install it is running Oracle Linux 6.8; unfortunately Kubernetes is only supported on Oracle Linux 7, so my next post will handle that subject.

First of all here are my install bits and experiences.



Setting up Docker



Before installing I had to add a YUM repo to get the right docker package:


export DOCKERURL=""
rm /etc/yum.repos.d/docker*.repo
yum-config-manager --add-repo "$DOCKERURL/oraclelinux/docker-ce.repo"
yum-config-manager --enable docker-ce-stable-17.05


The installation took place on a Linux VM running Oracle Linux 6.8. I used the YUM repository to install the appropriate version of docker:
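The install itself then boils down to something like this (package name assumed from the docker-ce channel enabled above):

yum -y install docker-ce
service docker start
chkconfig docker on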

I was lucky, it was already at the latest version.

Next, I wanted to pull the containers from the Oracle Container Registry, so I logged in with docker:

docker login

providing my OPN username and password.


Next, I considered a new place to store the docker containers, because /var/lib/docker is mounted under "/", which is not a good idea in my opinion:

  • Backup of the current directory:

tar -zcC /var/lib docker > /u02/pd0/var_lib_docker-backup-$(date +%s).tar.gz

  • Move it to a new filesystem with sufficient space (without a prior mkdir, mv renames the directory, so the symlink below points at the right place):

mv /var/lib/docker /u02/pd0/docker

  • Link it to the original location:

ln -s /u02/pd0/docker /var/lib/docker

Quick and slightly dirty, but sufficient.


First of all, a bridge network must be created to enable containers to connect with each other, like the SOA containers with their dehydration store:

docker network create -d bridge SOANet


Creating database container

Now, there are some instructions on how to set up a SOA Docker environment; however, these instructions are not totally correct.

When creating the SOA Suite database, the parameters in the db.env.list were not correct:


ORACLE_SID=<db sid>
ORACLE_PDB=<pdb id>


These weren't correct; the properties were ignored and a default dummy name like ORCL was used.

The correct prefix should be DB_ instead of ORACLE_.
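So a working db.env.list would contain entries along these lines (values are just examples):

DB_SID=soadb
DB_PDB=soapdb
DB_DOMAIN=example.com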



After that, start the docker database container, and the database came up with the right SID and service name.

docker start soadb
docker ps

Some verification:

- Logged in with SQL*Plus

- Showed the listener status


Create AdminServer Container

Before creating the AdminServer I first obtained the image from the registry:

docker pull


Also here, specific parameters had to be set, although I again encountered some flaws in the original instructions:

  • Although a password was set for the db admin, it didn't pick it up, and I had to set it manually in the database ("alter user sys ...")
  • The SID and service names were not correct; for the PDB I had to configure it including the domain name, so this was finally the correct setup:
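For reference, a sketch of what the admin server env file then contained (entry names are assumptions based on Oracle's SOA docker sample files; values are placeholders):

CONNECTION_STRING=soadb:1521/soapdb.example.com
RCUPREFIX=SOA1
DB_PASSWORD=<sys password>
DB_SCHEMA_PASSWORD=<rcu schema password>
ADMIN_PASSWORD=<weblogic admin password>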


Next, run the creation of the domain and start the AdminServer through WLST:

docker run -i -t  --name soaadminserver --network=SOANet -p 7001:7001 -v /u02/scratch/DockerVolume/SOAVolume/SOA:/u01/oracle/domains   --env-file ./adminserver.env.list oracle/soa:


When this was finished, I could log in to the WebLogic console. The listen address of the AdminServer was empty, so I guess the script did not do its work, and I changed it manually.


Starting the managed server

The image already configures a managed server in the domain, so the next step is to spin up a container for the SOA managed server:

docker run -i -t  --name soa_server1 --network=SOANet -p 8001:8001   --volumes-from soaadminserver   --env-file ./soaserver.env.list oracle/soa: "/u01/oracle/dockertools/"


The SOA managed server came up after a while; the status in the console was SHUTDOWN though, because the start script did not use the node manager. Checking the log, I could follow the startup sequence:


docker exec -it soaadminserver bash -c "tail -f  /u01/oracle/user_projects/domains/InfraDomain/logs/ms.log"

After that, I logged into the container and started the node manager using startNodeManager.sh, to be able to start managed servers through the console and get health info.
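In sketch form, assuming the domain layout from the log path above:

docker exec -it soa_server1 bash
cd /u01/oracle/user_projects/domains/InfraDomain/bin
nohup ./startNodeManager.sh > nm.log 2>&1 &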



Just some thoughts and doubts that came up; please correct me if I'm wrong.

Now the million dollar question would be: is SOA Suite fit for a container platform? It does run, although I haven't tested it thoroughly yet. In the end, setting it up is rather simple.

Apart from some flaws during setup, you might ask yourself: what are we doing differently here, compared to spinning up servers and/or VMs?


Well, first, all docker containers run on a server, but we skipped the entire server configuration, and based on a pre-baked image we could bring up an environment rather quickly.

But looking from an application perspective, we are still not doing anything containerized. Even in a docker container, monoliths can exist.

So, the complete story is that at platform level we have containers, at application level, not yet.

At this year's Developer Tour in Latin America I was selected to speak in Argentina, which would be a great adventure for me. As I have never been on the southern hemisphere, I was really excited and honoured to go. After a very long flight from Amsterdam to Buenos Aires, almost 14 hours, I landed early morning in Buenos Aires, where it was winter. For me a big switch, as in the Netherlands it was around 35 °C when I left. But who's complaining.


Packed with my suitcase and my Oracle Management Cloud bible I entered Buenos Aires... what a city and what contrasts. Beautiful art and architecture, but also a lot of poverty.

I like the urban lifestyle, so I found my way in Buenos Aires and visited some hotspots every tourist must see. If you ever plan to go, wear some good shoes, because the streets are sometimes hard to walk on.

Nevertheless, I could breathe the South American lifestyle while seeing the tango live on the streets:




The conference day


The conference took place on my birthday, the 9th of August 2018, at the UADE, one of the many universities of Buenos Aires.



A 20 minute walk from my hotel brought me there, and around 9:30 AM the conference was opened in the main auditorium. My session was planned at 11:05 AM, but due to some delay it began a bit later, so I followed some other sessions. Although the majority of the sessions were in Spanish, which isn't my strongest language, I could follow some of it, and was lucky that the slides were in English. As I was on the Analytics track I followed a session from Edelweiss Kammerman about Data Visualization with the Oracle Data Warehouse Cloud Service, and the session before mine from Diego Sanchez, also about the Management Cloud, regarding problem detection and analysis.



Security Analytics with the Oracle Management Cloud


As I was in the Analytics track, I focused on the analytics capabilities of the Oracle Management Cloud, where Machine Learning, Anomaly Detection and Data Visualization are important topics.

Machine learning capabilities are essential for this solution; in OMC the following are used:


  • Anomaly Detection
    • See the abnormal symptoms. We're not interested in what's going OK, but in the exceptions.
  • Clustering
    • Reduce tons of billions of data points to a manageable and understandable pattern. This requires high-end technology analysis.
  • Correlation
    • Correlate seemingly different events with each other into a commonly recognized pattern, such as linking by a common attribute: an Order ID, a Personal ID, and so on.



The battle against attacks always lags behind


Let's face it: SOCs are having a hard time defending against all kinds of hostile actions, which can come from the outside world, or from the inside through suspected fraud by employees. Often some of the damage has already been done by the time they come into action.

The Security Monitoring and Analytics of OMC can help make their life a bit easier by doing the following:

  • Intelligently monitor security events
  • Investigate using Log Analytics
  • Understand and interpret attack chains
  • Automatically remediate to reduce exposure
  • Continually harden systems in response to a threat or weakness

Now, a well known pattern of attack is the Cyber Kill Chain, where through certain steps hostile parties can infiltrate systems without anyone noticing. And don't think of the stereotype of young guys or girls in their attic, trying to hack. No, we are talking about highly sophisticated attacks, initiated by machines and well organized groups, possibly governments or criminal organizations.




A typical SMA Dashboard Identifying attacks



Also, when you move a part of your IT to the cloud, you can easily integrate your Access Broker or Identity Management systems into the Oracle Management Cloud.





The SMA engine works with machine learning models and rules in order to detect any security threat, as I already explained in an earlier blog. But the fact that SMA works closely together with the Log Analytics module makes it a strong and well integrated solution for any enterprise to use in its everlasting battle against attacks.


The closing Speakers, ACE and DevChamp dinner

As per tradition, the event was closed with a nice dinner at a restaurant to try out the Argentinian meat culture, where I met some of my colleagues whom I had not yet had the chance to meet.

By surprise, Jennifer Nicholson from the ACE program announced a new Java Developer Champion: Hillmer Chona. Congratulations and well done!



Big thanks


Finally, I would like to thank the Argentinian Oracle User Group for the organization, and I hope to see you maybe next year.

Monitoring your IT landscape is in many cases an underestimated topic, even though IT departments spend time doing it. There are hardly any standards on how to approach this, and in a typical IT operations department every team uses its own tools: some scripts, monitoring software from different vendors, or some freeware/open source.

Companies who do Oracle often use Oracle Enterprise Manager Cloud Control (EMC), which is an agent-based, centralized repository that gathers diagnostic information from all connected systems: database, middleware and applications. It's a very broad and complete solution to monitor and manage IT systems.

But often this platform is owned by one team, typically a DBA team, because EMC finds its origin in the Oracle Database, and they also use it to monitor their databases. For a company running Oracle SOA Suite or BPM or any other Fusion Middleware, it is not a common habit to use EMC to monitor their FMW applications. The reasons why are:

  • There are no management packs licensed
  • There is no or not enough knowledge of how to implement and use these management packs
  • The team who owns the platform does not allow other teams to use the EMC

Management Packs are layers for specific tasks or platforms to extend the management capability of the EMC.


To overcome ownership issues, the Oracle Management Cloud (OMC) can help. Teams can order their own subscriptions, or do it as a joint company effort to monitor their applications in the cloud. Although OMC is not a replacement for EMC, you might notice some similarities, especially in the Infrastructure Monitoring and Compliance Management.

But unlike EMC, OMC is a more coherent solution, where the different modules work closely together, more or less out of the box. Application Performance Management, for example, uses Log Analytics to drill down deep into application issues.


Now what if your company uses EMC but wants to make use of some of the features of OMC? Or better: why would a company want that? Well, a good use case is a company that wants to try out OMC, or wants to make use of one of the modules such as Log Analytics. But how do you get all the information from your on-premises EMC to OMC?

You can make use of the:



Data Collector

The OMC comes with a variety of agents:

  • The Gateway Agent - An in-between agent for when your systems are not supposed to be exposed to the outside world. A Gateway Agent can be placed in your DMZ; it gathers all information from your OMC-connected applications and databases and pushes it outside the datacenter to the OMC.
  • The Cloud Agent - An agent installed to collect server information and gather log files for Log Analytics.
  • The APM Agent - An agent specifically used for application performance diagnostics. It has to be implemented in the application server infrastructure and can be a Java, Node JS, Apple or Android, or .Net agent.
  • The Data Collector - This agent can be used to collect all the data from your EMC and have it shown in your OMC Infrastructure Monitoring module.



Data Collector Implementation


To implement the Data Collector, you need to locate your EMC system. This can be one host, or maybe separate hosts if the OMS application and OMS database run on separate servers. It is sufficient to install it only on the OMS application host (OMS = Oracle Management Service, the engine that runs EMC).


First step is to download the Data Collector Agent from your OMC platform:

Transfer the package to your EMC host and unzip it in a directory.

The next task is to modify the agent.rsp file. Modify the following:




AGENT_REGISTRATION_KEY=*********************** --> to be found in

Administration --> Agents --> Registration Keys.









OMR is the EMC repository (Oracle Management Repository). The Data Collector schema will be installed in this repository.

Next, it is just a matter of running the script, and when it's finished, after a while you can start the agent from your omcagent directory:

/u01/app/omc/agent_inst/bin/omcli start agent

/u01/app/omc/agent_inst/bin/omcli status agent




From here, your databases and systems monitored in EMC are shown in OMC, but only the basic stuff. If you want to see more, you will have to specify a json file and register the databases against OMC. Oracle provides various types of json scripts for various database flavours.
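Registering such a json file is then done through the agent's omcli; a sketch (the json file name and its contents depend on the database flavour):

/u01/app/omc/agent_inst/bin/omcli add_entity agent my_oracle_db.json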

For some years I have attended the annual partner forum, and though I had a very busy time at my customers, plus the week after I had to present at the Analytics and Data Summit, I still decided to attend this forum. After leaving my "Oracle Red Stack" car behind at the airport, I flew to Budapest on Monday morning.


Arriving at my final destination, the Boscolo Autograph hotel, I was amazed by this very nice hotel, which happened to be very comfortable and luxurious, with very nice architecture on the inside. My compliments to Jurgen and his team for booking this hotel for the conference.

Day 1: ACE Sessions

This afternoon is reserved for ACEs and ACE Directors from partners who would like to speak about a customer success story, or who made a great contribution to some Oracle product. One of my favorite sessions was the one from the team which is very active with their blogs about the Oracle PCS. They deserved their community award!

I was surprised by the many countries from which partners were attending this forum. Luis Weir, together with Phil Wilkins, spoke passionately about their favourite subject: Oracle API CS, integrated in a full solution with an Oracle JET application, an integration layer with OIC, and a SaaS layer consisting of some SaaS solutions such as Oracle Taleo.

The other sessions were also very interesting to hear; it is always a pleasure to listen to Simon Haslam. He spoke about provisioning the Oracle Cloud in a secure way, which was very useful to hear; I recognized what he told about using the Oracle Cloud GUI: provisioning one instance is not such an issue, but doing 20 ends up in a lot of filling in and typing.


Furthermore I was very pleased about the fact that the Oracle Management Cloud is adopted in the PaaS, and it was an important topic this year. The last ACE session was about a customer case, telling about using the analysis capabilities to search for root causes of badly performing applications.

Unfortunately I was running out of time, else I would have definitely spoken about my experiences with the Oracle Management Cloud.

Finally Jurgen handed out his Community Awards, congratulations to all the winners!



Day 2: General Sessions


This day was filled with keynotes and presentations from various members of Oracle's product management. The opening keynote was held by Jean-Marc Gottero, giving a high level overview of the position of the Oracle Cloud in EMEA and the role of partners.


The entire day was filled with presentations, so I'll give some highlights here which popped out of my memory.


Ed Zou made an important announcement in line with the Autonomous Database: Autonomous PaaS Services. Now, this was very fresh news, so there is not much to elaborate on yet, but again I was very pleased to hear that the Oracle Management Cloud will play a very important role in this solution. The exact release date of these services is not known, and it's also not known what will be in them. He also told about other new areas Oracle will cover, such as blockchain, Intelligent Bots, AI and Machine Learning.


Pyounguk Cho presented (in his enthusiastic style) the various cloud platforms from an AppDev perspective, such as JCS, ACCS and Stack Manager; in fact, about Enterprise Java these days and its position in the cloud. Highlights for development, infrastructure, data management, security and monitoring were important topics.

JCS now comes with a concept called quick-start instances, comparable with the quick-start package for WebLogic that developers can download to set up a WebLogic domain fast and easily.

Other new features like snapshot and cloning and the integration of the Identity Cloud in JCS passed the stage.

Another topic was the Application Container Cloud, a docker based polyglot platform, which showed compatibility with non-Java technologies such as Python, Node JS and PHP, and the ease of building and deploying these native applications using their corresponding docker images. These different applications can run within one single tenant, exposed to end users through Oracle's cloud load balancer. The elastic feature is also very nice, which customers can use to scale up or down according to their needs.

Oracle's messaging and streaming platform - Event Hub Cloud Service, based on Apache Kafka - showed us the need for replacement of traditional message brokers and data integrators.

Finally, the Stack Manager was discussed, which showed a platform for managing all these different cloud solutions in one place, where customers can group and manage their services into one atomic stack.


Robert Wunderlich and Jakub Nesetril spoke about API Management. A funny detail I found was that Robert mentioned the Rabobank and their current path to API management and Open Banking. Especially important for partners to know: their position and how they can fit in bringing these solutions to customers.

They announced some upcoming partnerships with companies in the API Management market.

Better authentication integration was one of the topics, explaining how OAuth can be better configured in the API platform, plus integration with other technology partners and their solutions.


Later that day I went off for a video interview about, as you might guess, my favourite topic: the Management Cloud. This will be published soon, along with the other interviews held at the forum.


The evening program was very well organized, with a nice dinner and networking event held at the "Kiosk" in Budapest.


Day 3:


Unfortunately I could not attend the entire day because of my flight schedule, though I could attend a few of my favourite topics.


It was nice seeing Maciej Gruszka again, as he spoke about developing microservices and serverless applications and new patterns of development, compared against the monolithic applications with significant footprint managed by application servers. Patterns such as microservices will become more common among architects as the viable architecture. More vendors these days are designing applications called functions or serverless applications, and they implement them on their clouds. Maciej showed containerized environments with Kubernetes as scheduling infrastructure for Docker based applications running in the Oracle Cloud, and the Managed Kubernetes service as the core engine for running microservices frameworks and serverless.





Pyounguk Cho continued where he left off the day before, showing the broad options of JCS. Another good session was the one from Jurgen Fleisher, from product development of the Oracle Management Cloud. Now, the presentation was not entirely new to me, but hearing it from another perspective still gave me new insights and inspiration. He gave a good explanation of what machine learning means nowadays, and the role the Oracle Management Cloud can play in any IT organization when it comes to monitoring, analysis, DevOps and autonomous platforms.


Key takeaways


Because I'm deep into Oracle technology, some of the content I already knew. However, some new topics passed by as well, and I think it's very important to put them into the right place among what I already heard. Furthermore, as an Oracle partner it's evident that you should really partner with Oracle and exchange and absorb knowledge and experience.

For me, attending this event is a must.


Finally, lots of thanks to Jurgen and everyone in his team responsible for this excellently organized forum.

I've become a huge fan of the Oracle Management Cloud. Why? Because Oracle has broadened its limits: the OMC doesn't just monitor Oracle based systems and applications, it has plugins for many non-Oracle technologies, which makes the OMC very flexible and enterprise-worthy as a complete solution for monitoring.


Security Monitoring and Analytics (SMA)

Oracle also realized customers have great concerns about security in general, but even more in the cloud, so they've put up a service in the cloud with really powerful capabilities.

One of these powerful modules inside the Management Cloud is Security Monitoring & Analytics, or SMA. With this module any SIEM or SOC can detect, identify and monitor the following:

  • Security threats from inside and outside the company
  • Fraud
  • Compliance violations

Inside SMA

When you log into OMC, you can click on the SMA module if you have the proper cloud subscriptions. When you are in SMA, it looks pretty much like the other OMC components, but it has its focus on security. Entering the first dashboard, you immediately see an overview of the activity of your users and their possible risky actions.


You will start on the main SMA landing page, showing the "Users" dashboard, but you can configure dashboards for yourself if you want. On this page you see:

1. Users – shows the total number of risky users

2. Threats – shows total, critical, high, medium and low risk threats

3. Assets – shows the total number of risky assets

Clicking on the threats, you can get more details on a person's actions, which come out of the analysis of the identity management logs or via user data upload. You can see the company, manager, and specific user details and status, such as lockouts, locations, email addresses and so on. To dig deeper you can identify a kill chain. A kill chain is a series of executions which might lead to some kind of destruction or illegal access/actions.

  • Threats by category – Threats are categorized by the SMA engine into different kill-chain steps, such as:
    • reconnaissance --> research, identification and selection of targets
    • infiltration --> infiltrate into these targets
    • lateral movement --> move through the system in search of keys/access points

It's obvious that this user has been the target of a hostile attack executing this kill chain.

  • Top Risky Assets by Threats – Detects if a certain asset, which can be any system, host or database, is being targeted more than usual.


Clicking on the threats, you can clearly see what is happening; the kill chain is clearly exposed. But how can we see this?


Based on the kill chain components, we identify:

  • An anomaly (WebAccessAnomaly) is detected by an analytics machine learning model, which saw the user going to a URL that was not expected based on the peer group baseline of the websites visited. This user visited a site and downloaded malware onto his machine, which could have triggered this attack.
  • An attack which was detected by the rule "MultipleFailedLogin", which gets triggered when five or more failed login attempts on different accounts are seen.
  • An infiltration attack which is detected by "TargetedAccountAttack".

Furthermore, some infiltration attacks are captured by the "BruteForceAttackSuccess" rule, which gets triggered by 5 failed login attempts on the same account, followed by a successful login within a one-minute period. The conclusion is that the attacker has gained the user's credentials. But it is still not the end...

Again an anomaly is captured, by the rule PrivSQLAnomaly on a database – this is a SQL anomaly detection that shows the attacker doing some unauthorized or anomalous transactions on the associated asset FINDB. SMA's SQLAnomaly detection detected this.

Looking at the kill chain, the last action is detected: the lateral movement, with the rule MultipleUserCreation – the attacker created 3 or more users in the Oracle database within a 5 minute period.

Immediately you can see that a kill chain (anomaly -> recon -> infiltration -> lateral movement) attack is in progress. The attacker attacked a critical asset (finance host and FINDB) via this user. With SMA you can see not only point threats but the entire kill chain view, which gives faster insight into what's happening.


(original source: OMC SMA and Configuration and Compliance Demo Script)



Machine Learning

Machine learning in SMA helps identify attacks and threats. If you look at the PrivSQLAnomaly, you see that based on an analysis of log data, a pattern is recognized which is within abnormal ranges. In this example you see an action of a certain user which is not within the normal range, looking at the function of this user. Further investigation shows that this user has visited a hostile website, from which malware was installed on the user's computer. Using the WebAccessAnomaly together with some Log Analytics query results, another user, separate from the user we already had an eye on, also shows up. In this case we can take some preventive actions to prevent another kill chain, such as:


  • Force a password reset on all the compromised accounts.
  • Cut off access of the two users from the rest of the network.
  • Trigger malware scans/removal on the user machines.
  • Blacklist the malicious website and add it to your web-filtering solution.

Rules and Models

The mechanisms described are based on rules and models. Potential security actions have to be detected and reported, and within SMA you can define rules for that purpose. These rules apply to the systems or applications which need to be alerted on in case of a security breach.

These rules are used to detect any suspicious action and can be configured at any desired level: for instance, within a certain time window an event must happen, how many times, and what action has to follow when it is detected.



To detect anomalies, machine learning models are used. These models are used along with the log analysis and can be:

  • Peer Models - based on an organization or group
  • SQL Models - based on analysis of database actions
  • User Models - based on analysis of individual users

In combination with the log and data analysis, which comes from log files or uploaded files, more and more suspicious patterns can be identified and recognized, in order to report, alert, and take the necessary actions.



Based on further analysis, the attacker created multiple users in a short period of time, so the security officer can identify what is going on and what kinds of attacks have been done on which systems.




The above is just an example of the broad capabilities Oracle SMA has. I haven't seen any other product yet with these powerful capabilities, and even better, it can be positioned enterprise-wide, and not only for Oracle systems.

I used the OMC demo site and collateral, plus some hands-on labs at Oracle OpenWorld, which really amazed me about this powerful solution!