In previous posts I wrote about how to transform a traditional application server such as WebLogic into a containerized platform, based on Docker containers managed by a Kubernetes cluster. So far there hasn't been much effort in examining how a large and complex environment such as Oracle SOA Suite fits into a container-based strategy; the approach is more or less to lift and shift the current platform to run as Kubernetes-managed containers.


There are several ways to run a product such as Oracle SOA Suite on Kubernetes; here is how I did it.


Oracle SOA Suite on OKE


Apart from the standard PaaS service Oracle provides, the SOA Cloud Service, this implementation is based on IaaS, on Oracle Cloud Infrastructure, where I configured the Kubernetes cluster as described in previous posts. However, this can also be done on on-premises infrastructure.





The following parts are involved in setting up a SOA Suite domain:


  • Docker Engine
  • Kubernetes base setup
  • Oracle Database Image
  • Oracle SOA Suite Docker image (either self-built or from the Oracle Container Registry)
  • GitHub
  • A lot of patience


Set up the SOA Suite repository


A SOA Suite installation requires a repository, which can be an Oracle database or another flavour, to dehydrate SOA instance data and store metadata from composite deployments. I used a separate namespace to set up a database in Kubernetes.


The database I created uses the image, so I started from the database yaml I obtained, to which I had to add an ephemeral-storage setting, because after the first deployment Kubernetes reported that ephemeral storage was exhausted. The steps:

  • Create a namespace for the database
kubectl create namespace database-namespace
  • Create a secret to be able to pull images from the container registry
kubectl create secret docker-registry ct-registry-pull-secret \
  --docker-username=********* \
  --docker-password=********* \
  -n database-namespace


  • Apply the database yaml to Kubernetes. To follow the progress of the database creation:
kubectl get pods -n database-namespace
NAME                        READY     STATUS    RESTARTS   AGE
database-7b45749f44-kjr97   1/1       Running   0          6d

kubectl exec -ti database-7b45749f44-kjr97 -n database-namespace -- /bin/bash


Or use

kubectl logs database-7b45749f44-kjr97 -n database-namespace


So far so good. The only thing left is to create a service to expose the database:

kubectl expose deployment database --type=LoadBalancer --name=database-svc -n database-namespace
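With the service in place, the database is reachable from other namespaces through its cluster DNS name. A sketch of building a connect string from it; note that the 1521 port and the ORCLCDB service name are assumptions here, so check what your database image actually exposes:

```shell
# Cluster-internal DNS name of the exposed service (default cluster.local domain)
DB_HOST="database-svc.database-namespace.svc.cluster.local"
DB_PORT=1521          # assumed listener port
DB_SERVICE="ORCLCDB"  # assumed service name; check your database image
CONNECT_STRING="${DB_HOST}:${DB_PORT}/${DB_SERVICE}"
echo "${CONNECT_STRING}"
```

This is the kind of value you would later pass to rcu -connectString.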





Repository Creation with RCU


To do this, running a temporary pod from the SOA Suite image was sufficient:

kubectl run rcu --generator=run-pod/v1 --image --overrides='{ "apiVersion": "v1", "spec": { "imagePullSecrets": [{"name": "regsecret"}] } }' -- sleep infinity


Then run RCU from it:
kubectl exec -ti rcu -- /bin/bash

/u01/oracle/oracle_common/bin/rcu \
  -silent \
  -createRepository \
  -databaseType ORACLE \
  -connectString \
  -dbUser sys \
  -dbRole sysdba \
  -useSamePasswordForAllSchemaUsers true \
  -selectDependentsForComponents true \
  -schemaPrefix FMW1 \
  -component SOAINFRA \
  -component UCSUMS \
  -component ESS \
  -component MDS \
  -component IAU \
  -component IAU_APPEND \
  -component IAU_VIEWER \
  -component OPSS  \
  -component WLS  \
  -component STB


Nevertheless, this is not completely silent, as you still have to type in the passwords manually.
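To make the run fully unattended, RCU in silent mode can read the passwords from stdin: first the dba password, then, because -useSamePasswordForAllSchemaUsers is true, the single schema password. A sketch with obviously made-up passwords:

```shell
# Write the dba password and the shared schema password, one per line, in order:
umask 077
printf '%s\n%s\n' 'MySysPwd#123' 'MySchemaPwd#123' > /tmp/rcu_passwords.txt
wc -l < /tmp/rcu_passwords.txt
# Then redirect the file into the same rcu command as above:
# /u01/oracle/oracle_common/bin/rcu -silent -createRepository ... < /tmp/rcu_passwords.txt
```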



Creation of the SOA domain


I used the WebLogic Kubernetes Operator Git repository to create my SOA domain and adapted it to my needs.

General steps to take:

  • Install the WebLogic Kubernetes Operator
  • Create persistent volumes and claims (PV/PVC)
  • Create a domain:
    • namespace
      • secrets:
        • RCU secrets
        • WebLogic domain secrets

Use the scripts and tools provided by Oracle

  • Roll out the domain


Install the WebLogic Operator

I used Helm to do this. In the GitHub repository, charts are available at kubernetes/charts/weblogic-operator. Specify in the values.yaml which namespaces need to be managed.

The SOA Domain needs to be managed:

  - "default"
  - "domain-namespace-soa"


Use the latest operator image:

# image specifies the docker image containing the operator code.
image: "oracle/weblogic-kubernetes-operator:2.2.1"


And install:

helm install kubernetes/charts/weblogic-operator   --name weblogic-operator --namespace weblogic-operator-namespace   --set "javaLoggingLevel=FINE" --wait
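After the Helm install, it is worth confirming that the operator pod is running and that its custom resource definition was registered before creating any domain. A small helper sketch (the namespace name matches the install command above):

```shell
# Verify the operator deployment and the Domain CRD it installs.
verify_operator() {
  ns="${1:-weblogic-operator-namespace}"
  kubectl get pods -n "$ns" \
    && kubectl get crd domains.weblogic.oracle
}
# Run it against your cluster:
# verify_operator
```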


Persistent volumes and claims (PV/PVC)

When running a WebLogic domain in Kubernetes pods, two different models can be chosen:

  • Domain in image, where all domain artifacts are stored inside the container image
  • Domain on a persistent volume, where domain artifacts are stored statefully outside the pods

In the Git repository there are some ready-to-go scripts for creating PVs:



Now provide your own specifics in the input file, such as:

# The version of this inputs file.  Do not modify.
version: create-weblogic-sample-domain-pv-pvc-inputs-v1
# The base name of the pv and pvc
baseName: soasuite
# Unique ID identifying a domain. 
# If left empty, the generated pv can be shared by multiple domains
# This ID must not contain an underscore ("_"), and must be lowercase and unique across all domains in a Kubernetes cluster.
domainUID: soa-domain1
# Name of the namespace for the persistent volume claim
namespace: domain-namespace-soa
# Persistent volume type for the persistent storage.
# The value must be 'HOST_PATH' or 'NFS'. 
# If using 'NFS', weblogicDomainStorageNFSServer must be specified.
weblogicDomainStorageType: HOST_PATH
# The server name or ip address of the NFS server to use for the persistent storage.
# The following line must be uncommented and customized if weblogicDomainStorageType is NFS:
#weblogicDomainStorageNFSServer: nfsServer
# Physical path of the persistent storage.
# When weblogicDomainStorageType is set to HOST_PATH, this value should be set to the path to the
# domain storage on the Kubernetes host.
# When weblogicDomainStorageType is set to NFS, then weblogicDomainStorageNFSServer should be set
# to the IP address or name of the DNS server, and this value should be set to the exported path
# on that server.
# Note that the path where the domain is mounted in the WebLogic containers is not affected by this
# setting, that is determined when you create your domain.
# The following line must be uncommented and customized:
weblogicDomainStoragePath: /u01/soapv
# Reclaim policy of the persistent storage
# The valid values are: 'Retain', 'Delete', and 'Recycle'
weblogicDomainStorageReclaimPolicy: Retain
# Total storage allocated to the persistent storage.
weblogicDomainStorageSize: 20Gi

and run it:

./ -i create-pv-pvc-inputs.yaml -o soapv -e


Here -o is just a local path on your client where the generated yamls are placed, and -e also creates the resources in the cluster.
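The generated resources follow a <domainUID>-<baseName>-pv/pvc naming convention, which is why the claim shows up later as soa-domain1-soasuite-pvc. A quick sketch to derive the names and check the result:

```shell
# Derive the generated resource names from the inputs above
domainUID=soa-domain1
baseName=soasuite
pv_name="${domainUID}-${baseName}-pv"
pvc_name="${domainUID}-${baseName}-pvc"
echo "${pv_name} ${pvc_name}"
# Check they exist and the claim is Bound:
# kubectl get pv "${pv_name}"
# kubectl get pvc "${pvc_name}" -n domain-namespace-soa
```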


Create the WebLogic domain credentials secret:

kubectl -n domain-namespace-soa \
        create secret generic domain1-soa-credentials \
        --from-literal=username=weblogic \
        --from-literal=password=*********


Create SOA repository access

./ -u fmw1_opss -p qualogy123 -a sys -q qualogy123 -d soa-domain-3 -n domain-namespace-soa -s opss-secret
secret "opss-secret" created
secret "opss-secret" labeled


Do this for all the SOA Suite schemas.
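Instead of repeating the command by hand, the per-schema secrets can be created in a loop. A sketch, where the script name placeholder and flags mirror the command above and are assumptions; the schema list follows the RCU -component list:

```shell
# Schemas created by RCU with prefix FMW1; secret names drop the underscore
for schema in opss iau iau_append iau_viewer mds soainfra ucsums ess wls stb; do
  secret_name="${schema//_/}-secret"   # e.g. iau_append -> iauappend-secret
  echo "would create ${secret_name} for schema fmw1_${schema}"
  # ./<create-rcu-credentials-script> -u "fmw1_${schema}" -p '<password>' \
  #   -a sys -q '<sys-password>' -d soa-domain-3 -n domain-namespace-soa \
  #   -s "${secret_name}"
done
```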

Domain rollout

Because domain rollout is a complicated process, it is all enclosed in a pod which runs several jobs to configure the domain.

The only thing you have to do is fill in some inputs in a yaml file; a shell script will then create a job which applies all your values.

Some important ones:

# Port number for admin server
adminPort: 7001
# Name of the Admin Server
adminServerName: admin-server
# Unique ID identifying a domain.
# This ID must not contain an underscore ("_"), and must be lowercase and unique across all domains in a Kubernetes cluster.
domainUID: soa-domain-1  # no underscores!
# Home of the WebLogic domain
# If not specified, the value is derived from the domainUID as /shared/domains/<domainUID>
domainHome: /u01/domains/soa-domain-1
# Determines which WebLogic Servers the operator will start up
# Legal values are "NEVER", "IF_NEEDED", or "ADMIN_ONLY"
serverStartPolicy: IF_NEEDED
# Cluster name
clusterName: soa-cluster-1
# Number of managed servers to generate for the domain
configuredManagedServerCount: 3
# Number of managed servers to initially start for the domain
initialManagedServerReplicas: 2
# Base string used to generate managed server names
managedServerNameBase: soa-ms
# Port number for each managed server
managedServerPort: 8001
# WebLogic Server Docker image.
# The operator requires WebLogic Server with patch 29135930 applied.
# The existing WebLogic Docker image, `store/oracle/weblogic:`, was updated on January 17, 2019,
# and has all the necessary patches applied; a `docker pull` is required if you already have this image.
# Refer to [WebLogic Docker images](../../../../../site/ for details on how
# to obtain or create the image.
# Image pull policy
# Legal values are "IfNotPresent", "Always", or "Never"
imagePullPolicy: IfNotPresent
# Name of the Kubernetes secret to access the Docker Store to pull the WebLogic Server Docker image
# The presence of the secret will be validated when this parameter is enabled.
imagePullSecretName: ct-registry-pull-secret
# Boolean indicating if production mode is enabled for the domain
productionModeEnabled: true
# Name of the Kubernetes secret for the Admin Server's username and password
# The name must be lowercase.
# If not specified, the value is derived from the domainUID as <domainUID>-weblogic-credentials
weblogicCredentialsSecretName: domain1-soa-credentials
# Whether to include server .out to the pod's stdout.
# The default is true.
includeServerOutInPodLog: true
# The in-pod location for domain log, server logs, server out, and node manager log files
# If not specified, the value is derived from the domainUID as /shared/logs/<domainUID>
logHome: /u01/domains/logs/soa-domain-3
# Port for the T3Channel of the NetworkAccessPoint
t3ChannelPort: 30012
# Public address for T3Channel of the NetworkAccessPoint.  This value should be set to the
# kubernetes server address, which you can get by running "kubectl cluster-info".  If this
# value is not set to that address, WLST will not be able to connect from outside the
# Kubernetes cluster.
# Name of the domain namespace
namespace: domain-namespace-soa
#Java Option for WebLogic Server
javaOptions: -Dweblogic.StdoutDebugEnabled=false
# Name of the persistent volume claim
# If not specified, the value is derived from the domainUID as <domainUID>-weblogic-sample-pvc
persistentVolumeClaimName: soa-domain1-soasuite-pvc
# Mount path of the domain persistent volume.
domainPVMountPath: /u01/domains
# Mount path where the create domain scripts are located inside a pod
# The `` script creates a Kubernetes job to run the script (specified in the
# `createDomainScriptName` property) in a Kubernetes pod to create a WebLogic home. Files
# in the `createDomainFilesDir` directory are mounted to this location in the pod, so that
# a Kubernetes pod can use the scripts and supporting files to create a domain home.
createDomainScriptsMountPath: /u01/weblogic
# RCU configuration details

# The schema prefix to use in the database, for example `SOA1`.  You may wish to make this
# the same as the domainUID in order to simplify matching domains to their RCU schemas.
rcuSchemaPrefix: FMW1

# The database URL

# The Kubernetes secrets containing the database credentials (one per schema; note
# that in plain YAML only the last occurrence of a repeated key takes effect)
rcuCredentialsSecret: opss-secret
rcuCredentialsSecret: iau-secret
rcuCredentialsSecret: iauviewer-secret
rcuCredentialsSecret: iauappend-secret
rcuCredentialsSecret: wls-secret
rcuCredentialsSecret: soainfra-secret
rcuCredentialsSecret: mds-secret
rcuCredentialsSecret: wls-secret
rcuCredentialsSecret: wlsruntime-secret
rcuCredentialsSecret: stb-secret


Execute the script

./ -i create-domain-inputs-soa.yaml -o wlssoa -e -v



You can follow it using the logs:

kubectl logs -f soa-domain-2-create-fmw-infra-sample-domain-job-572r6 -n domain-namespace-soa
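Once the job completes, the operator brings up the admin server first and then the managed servers. The pod names derive from the domainUID plus the server names configured above; a sketch, assuming the inputs shown earlier:

```shell
# Expected pod names for the domain configured above
domainUID=soa-domain-1
admin_pod="${domainUID}-admin-server"
ms_pods="${domainUID}-soa-ms1 ${domainUID}-soa-ms2"
echo "${admin_pod} ${ms_pods}"
# Watch them reach Running/Ready:
# kubectl get pods -n domain-namespace-soa -w
```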


So this is basically what it takes, and next time I will take a deeper dive into how to manage a SOA Suite domain in Kubernetes.


Unfortunately, at the moment the internal configuration does not complete entirely successfully...


This could be the issue described in MOS Doc ID 2284797.1.


Update 7 August 2019

Indeed, as I expected, the above issue was due to the choice of password; it had to follow the structure described in that MOS document. So it was not actually a Kubernetes issue.


To be continued!!!