
Traditionally, when you install Oracle WebLogic, you simply download the necessary WebLogic and FMW Infrastructure jar files, spin up one or more servers, install the software and configure your domain, either automatically or manually.

Depending on which cloud strategy you choose, you could either:

 

  • Use pure IaaS: in essence this means you obtain compute power, storage and network from a cloud provider of your choice: AWS, Azure, Google or maybe Oracle.
  • Use PaaS, where your application server platform and generic middleware are part of the cloud subscription. In this scenario you might choose Oracle's Java Cloud Service or some other PaaS such as SOA Cloud Service, depending on your needs. Looking at Java-based applications, other vendors also offer Java in the cloud.

        But when you come from the WebLogic application server, the first obvious choice seems to be the Java Cloud Service. This is only one stage of the future roadmap of your application landscape, because the applications you develop can still be monoliths. I will come back to that later.

 

Along with a strategy of "breaking up the monoliths", DevOps, and cloud, containerizing your infrastructure is also inevitable for the future state of your application landscape. Oracle Product Development stated during Oracle OpenWorld that, regarding containerization, they will follow the strategy of the Cloud Native Computing Foundation, which means that products like WebLogic will be developed with container technology such as Docker, CoreOS and Kubernetes in mind.

 

 

Install WebLogic on a Kubernetes platform

 

To install a WebLogic domain on a Kubernetes platform in the cloud, I used the Oracle Kubernetes Engine, which is very easy to set up through the OCI console.

  1. Log in to your overall Cloud Dashboard and select Compute in the left pane; this brings you into the Oracle Cloud Infrastructure Dashboard
  2. Click on Developer and create the OKE cluster. Be sure that Helm and Tiller are ticked

 

You have to create a compartment in OCI before creating the OKE cluster. You can find instructions here: https://docs.cloud.oracle.com/iaas/Content/Identity/Tasks/managingcompartments.htm

After that, in your root compartment, you have to create a policy to manage OKE resources. Select in the left pane:

Identity --> Policies and create the following policy:
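A sketch of what such policy statements can look like (the group and compartment names, such as OKEAdmins, are assumptions; at the time of writing, OKE also needed a service policy in the root compartment):

Allow service OKE to manage all-resources in tenancy
Allow group OKEAdmins to manage cluster-family in compartment OKECompartment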

 

Now the base platform is ready, and we use a Linux client to access, manage and build further on our Kubernetes platform.

For that we need to obtain the kubeconfig file from the OKE and place it on our client:

 

Locally, we create a hidden directory:

 

mkdir -p $HOME/.kube

 

Next, you need to install the OCI command line interface (https://docs.cloud.oracle.com/iaas/Content/API/SDKDocs/cliinstall.htm) and then configure it for use with your cloud tenant:

 

oci setup config

 

This sets some basic config, generates an API key pair and creates the config file. The public key you will have to upload to your console:

Then create the kubeconfig file locally:
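A sketch of the CLI call, assuming you have copied your cluster's OCID from the console (the OCID below is a placeholder):

oci ce cluster create-kubeconfig --cluster-id ocid1.cluster.oc1..<your_cluster_ocid> --file $HOME/.kube/config
export KUBECONFIG=$HOME/.kube/config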

Then, see if the cluster is accessible
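For example:

kubectl cluster-info
kubectl get nodes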

 

The Kubernetes Operator for WebLogic

Before we install our WebLogic domain into Kubernetes, a so-called Operator is required. An operator is an extension of the basic Kubernetes API you get when you set up a K8S cluster. A WebLogic platform has so many specifics that can't be managed by the standard K8S APIs, so the operator takes care of that. Operations such as WebLogic clustering, shared domain artifacts, T3 and RMI channel access, and lots more are handled by this operator.

To obtain the operator, clone it from github to a directory you prefer:

git clone https://github.com/oracle/weblogic-kubernetes-operator.git 

 

Go to the weblogic-kubernetes-operator directory, where we will install the operator using Helm.

 

Install the operator using Helm

Before we install the operator, a role binding needs to be set up for Helm in the K8S cluster:

cat << EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: helm-user-cluster-admin-role
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: default
  namespace: kube-system
EOF

 

Next, from the cloned Git repository you can install the operator using Helm:
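A minimal sketch of the Helm 2 (Tiller-based) install; the chart path follows the operator samples, and the release name and namespace are assumptions:

helm install kubernetes/charts/weblogic-operator \
  --name weblogic-operator \
  --namespace weblogic-operator-ns \
  --wait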

 

And after a while the operator pod comes online and is running.
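You can verify this with (same assumed namespace as above):

kubectl get pods -n weblogic-operator-ns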

 

Some operational actions can be done with Helm. You can inspect the values from your Helm chart and see how it's implemented.

So, from the directory where the Helm chart is located, to see what's implemented:
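For example (chart path and release name as assumed above):

helm inspect values kubernetes/charts/weblogic-operator
helm get values weblogic-operator
helm status weblogic-operator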

The operator pod can also be seen from your K8S Dashboard.

 

 

 

For more about the how and why of operators, I advise you to read https://www.qualogy.com/techblog/oracle/whitepaper-kubernetes-and-the-oracle-cloud and the Helm documentation available at https://docs.helm.sh/

 

 

Preparing and creating a WebLogic domain

Before preparing, you should know which domain model you would like to choose: the domain in a Docker image, or the domain on a persistent volume.

If you really have to preserve state, or make log files accessible outside your domain, you should use the one on a persistent volume.

 

Create a Persistent volume

Use an input file, changing the values to match the domain to be created, namespace, etc.:

 

 

# Copyright 2018, Oracle Corporation and/or its affiliates.  All rights reserved.
# Licensed under the Universal Permissive License v 1.0 as shown at http://oss.oracle.com/licenses/upl.

# The version of this inputs file.  Do not modify.
version: create-weblogic-sample-domain-pv-pvc-inputs-v1

# The base name of the pv and pvc
baseName: weblogic-storage

# Unique ID identifying a domain. 
# If left empty, the generated pv can be shared by multiple domains
# This ID must not contain an underscore ("_"), and must be lowercase and unique across all domains in a Kubernetes cluster.
domainUID:

# Name of the namespace for the persistent volume claim
namespace: wls-domain-namespace-1

# Persistent volume type for the persistent storage.
# The value must be 'HOST_PATH' or 'NFS'. 
# If using 'NFS', weblogicDomainStorageNFSServer must be specified.
weblogicDomainStorageType: HOST_PATH

# The server name or ip address of the NFS server to use for the persistent storage.
# The following line must be uncommented and customized if weblogicDomainStorageType is NFS:
#weblogicDomainStorageNFSServer: nfsServer

# Physical path of the persistent storage.
# When weblogicDomainStorageType is set to HOST_PATH, this value should be set to the path to the
# domain storage on the Kubernetes host.
# When weblogicDomainStorageType is set to NFS, then weblogicDomainStorageNFSServer should be set
# to the IP address or name of the NFS server, and this value should be set to the exported path
# on that server.
# Note that the path where the domain is mounted in the WebLogic containers is not affected by this
# setting, that is determined when you create your domain.
# The following line must be uncommented and customized:
weblogicDomainStoragePath: /u01

# Reclaim policy of the persistent storage
# The valid values are: 'Retain', 'Delete', and 'Recycle'
weblogicDomainStorageReclaimPolicy: Retain

# Total storage allocated to the persistent storage.
weblogicDomainStorageSize: 10Gi

Now in that directory there is a create script to execute:
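In the operator samples this is the create-pv-pvc.sh script; a sketch of the call (the output directory is an assumption, and script/file names may differ per operator version):

./create-pv-pvc.sh -i create-pv-pvc-inputs.yaml -o /u01/pv-pvc-output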

 

 

Now both generated YAML files can be applied to the K8S WebLogic namespace:
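For example (the generated file names follow the sample's naming convention and are assumptions):

kubectl apply -f weblogic-sample-pv.yaml
kubectl apply -f weblogic-sample-pvc.yaml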

And in our namespace the storage is created:
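Which you can check with:

kubectl get pv
kubectl get pvc -n wls-domain-namespace-1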

 

 

Generating yaml files and create the domain

 

Oracle provides several solutions on GitHub to create domains. The domain model I chose was the one with a persistent volume to store logs and other artifacts. The storage class I used was the OCI storage class.

The generation took care of the following:

 

 

  • Create a Job for the entire domain creation
  • Create a ConfigMap in K8S for parameterizing the WebLogic domain (based on the YAML inputs)
  • Generate a domain YAML file out of the input and template YAMLs
  • Start up scheduled pods for creating the entire domain
  • Finally create the WebLogic Admin Server and Managed Server pods and start them using the included scripts

And the pod which creates the domain

 

 

 

When this job has finished, the final empty WebLogic domain will have been created.

 

The road to transformation

 

Now the question is whether this is already production-worthy. My opinion is no, because these setups are based on what's on GitHub at the moment, so I'd recommend starting lightweight and setting up some of these models to try them out. To set up an enterprise-ready WebLogic Kubernetes platform, aspects such as automation, load balancers, networking, and so on also need to be sorted out, so that WebLogic can act in a containerized world.

 

In one of my next articles I will look at migrating an existing WebLogic domain to Kubernetes.

Companies are on the verge of making important decisions regarding containerization of their IT Landscape. And whether, how and when they should move to the cloud.

 

This whitepaper helps companies make the right decisions regarding a container orchestration platform, and how it works together with the Oracle Cloud Infrastructure. It contains a lot of tips, takeaways and things to consider, so you can make the optimal choice for your company's strategy.

 

Download the whitepaper at https://bit.ly/2BFvKUK

When you look at AWS, it offers the possibility to create an Elasticsearch cluster rather easily. With a few simple clicks you have a cluster up and running.

Unfortunately the Oracle Cloud Infrastructure doesn't have this feature yet, but that doesn't mean you can't set up Elasticsearch on OCI. Using the Terraform wand, magic is about to happen...

 

Get the Terraform Elasticsearch OCI Plugin

 

On GitHub there is an OCI ELK plugin available at https://github.com/cloud-partners/oci-elasticsearch, so perform an easy

 

git clone https://github.com/cloud-partners/oci-elasticsearch.git

 

for a local repository download, which leaves you with a bunch of files:

 

In env-vars you will have to set your region, compartment and user ID, and also your fingerprint and SSH key pair, which was generated with

ssh-keygen -b 2048 -t rsa

and left the key pair in the .ssh directory, so env-vars would look like this:
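A sketch of those exports, assuming the repository's TF_VAR_-style variable names (all OCIDs and the fingerprint are placeholders):

export TF_VAR_tenancy_ocid="ocid1.tenancy.oc1..<tenancy>"
export TF_VAR_compartment_ocid="ocid1.compartment.oc1..<compartment>"
export TF_VAR_user_ocid="ocid1.user.oc1..<user>"
export TF_VAR_fingerprint="12:34:56:78:..."
export TF_VAR_private_key_path="$HOME/.oci/oci_api_key.pem"
export TF_VAR_region="eu-frankfurt-1"
export TF_VAR_ssh_public_key="$(cat $HOME/.ssh/id_rsa.pub)"
export TF_VAR_ssh_private_key="$(cat $HOME/.ssh/id_rsa)"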

 

Some other files caught my attention:

  • variables.tf, where I can specify VCN, shapes and VM image location; I chose shape VM.Standard1.8

Other files specify the load balancer, compute nodes, storage, etc.

 

Next, source the env-vars script; another way is to put it into your .bash_profile:

 

. ./env-vars
terraform plan

 

Immediately it returned some errors like:

error: oci_core_instance.BastionHost: "image": required field is not set

 

To debug this error, I set the Terraform log level to trace:

export TF_LOG=TRACE

 

Still, this did not bring me a lot of new information, until I ran terraform version, which showed that the currently installed version for this provider was outdated.

To upgrade the provider, the binary had to be downloaded and replaced:

https://releases.hashicorp.com/terraform/0.11.10/terraform_0.11.10_linux_amd64.zip

After downloading:

sudo cp -p terraform /usr/local/bin/

 

and in the directory of the OCI ELK provider:

terraform init

which upgraded the provider.

Finally, the version had to be set explicitly in the provider.tf:
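A sketch of what this can look like in Terraform 0.11 syntax (the version constraint is an assumption; pin it to the provider version you downloaded):

provider "oci" {
  version          = ">= 3.5"
  tenancy_ocid     = "${var.tenancy_ocid}"
  user_ocid        = "${var.user_ocid}"
  fingerprint      = "${var.fingerprint}"
  private_key_path = "${var.private_key_path}"
  region           = "${var.region}"
}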

 

Once that was done, terraform plan and terraform apply went well and gave me a fully running ELK cluster on OCI.

 

In part 2, I will dive deeper into how to set up Elasticsearch, Kibana and Logstash.

Kubernetes is becoming the de facto standard when it comes to managing and scaling your container platform. You might consider containers the next-generation infrastructure platform, a follow-up to virtual machines, where every application or infrastructure component can run in a Docker container: autonomous, lightweight and independent, whether as an application or as a piece of runtime platform software (such as a Java JDK).

However, in the greater whole, Docker containers don't stand on their own; they need some management, and they need to be orchestrated and configured in a meaningful way. One of these orchestration platforms is Kubernetes, developed by Google; since its inception, more and more technologies have embraced Kubernetes as the orchestration platform for containers.

 

Oracle these days is aiming to get customers into the cloud, and to that end they developed a Kubernetes cloud solution called OKE, which stands for Oracle Kubernetes Engine and is available from the Oracle Cloud Infrastructure.

 


 

Here you can configure your way through setting up a Kubernetes engine.

 


 

The good news is that there is an automated way to do this, using Terraform.

 

Terraform is a solution which fits perfectly into a DevOps methodology, where infrastructure as code and automation are keywords. Every configurational aspect of setting up networks, load balancers, VMs, containers, etc. can be rolled out using Terraform, especially on cloud infrastructures.

Oracle has supported Terraform for its Cloud Infrastructure since April 2017.

 

Looking at the schematics, the Terraform plugin works as below.

Now the steps to roll this out are pretty simple; however, there are some trivial aspects to consider:

 

  • You need an OCI user ID and an API key, obtained by generating a public and private key pair

To set this up, use a local Linux server:

  • Generate the key pair and convert it to PEM format
  • Extract the fingerprint
  • Add an API key to your OCI user ID and paste the public key contents into it

 

Generate and extract the fingerprint
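A sketch of the OpenSSL commands, following the OCI documentation (file locations are assumptions):

mkdir -p $HOME/.oci
openssl genrsa -out $HOME/.oci/oci_api_key.pem 2048
openssl rsa -pubout -in $HOME/.oci/oci_api_key.pem -out $HOME/.oci/oci_api_key_public.pem
# the fingerprint to register in the OCI console:
openssl rsa -pubout -outform DER -in $HOME/.oci/oci_api_key.pem | openssl md5 -c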

 

Configure Terraform

 

Lucky for me, there is an OKE Terraform installer present on GitHub (see https://github.com/oracle/terraform-kubernetes-installer), so I just had to pull this repository to my local machine.

Next, run some pre commands:

cd terraform-kubernetes-installer
terraform init

To roll out OKE, the TFVars env file needed the OCI configuration:

 

tenancy_ocid = "<the cloud tenancy id>"
compartment_ocid = "<the compartment id>"   # this one you need to create in the OCI console
fingerprint = "<extracted from the public/private key pair>"
private_key_path = "/home/oracle/.oci/oci_api_key.pem"
user_ocid = "<the OCI user id>"
region = "<the region of your OCI>"   # like eu-frankfurt-1

 

Compartment creation is done in the OCI menu --> Identity --> Compartments.

 

Now, when executing the Terraform configurations, the TFVars need to be exported, so the best place to do this is your .bash_profile.

Before applying, evaluate the plan

terraform plan

...etc

Finally apply the configuration to OKE

 

terraform apply

 

It took some time, but after a while my Kubernetes master and worker nodes were created in OCI. This proved that setting up with Terraform is a fast and simple way, if you know the trivial parts (tenancy OCID, user OCID, etc.).

 

In this post I am exploring the sense of configuring and running an Oracle SOA Suite 12.2 domain on Docker, managed by Kubernetes, to discover whether SOA Suite is a good candidate to run on Docker.

The server where I will install it is running Oracle Linux 6.8; unfortunately, Kubernetes is only supported on Linux 7, so my next post will handle that subject.

First of all here are my install bits and experiences.

 

 

Setting up Docker

 

 

Before installing, I had to add a YUM repo to get the right Docker package:

 

export DOCKERURL="https://yum.dockerproject.org/repo/main/oraclelinux/6"
rm /etc/yum.repos.d/docker*.repo
yum-config-manager --add-repo "$DOCKERURL/oraclelinux/docker-ce.repo"
yum-config-manager --enable docker-ce-stable-17.05

 

The installation took place on a Linux VM running Oracle Linux 6.8. I used the YUM repository to install the appropriate version of Docker:
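Something along these lines (a sketch; the exact package name depends on the repo enabled above):

sudo yum -y install docker-ce
sudo service docker start
docker version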

I was lucky, it was already at the latest version.

Next, I wanted to pull the containers from the Oracle Container Registry, so I logged in with Docker:

docker login container-registry.oracle.com

providing my OPN username and password.

 

Next, I considered a new place to store the Docker containers, because /var/lib/docker is mounted under "/", which is not a good idea in my opinion:

  • Back up the current dir
tar -zcC /var/lib docker > /u02/pd0/var_lib_docker-backup-$(date +%s).tar.gz

  • Move it to a new filesystem with sufficient space
mkdir -p /u02/pd0
mv /var/lib/docker /u02/pd0/docker

  • Link it to the original location
ln -s /u02/pd0/docker /var/lib/docker

Quick and slightly dirty, but sufficient.

 

First of all, a bridge network must be created to enable containers to connect with each other, like the SOA containers with their dehydration store:

docker network create -d bridge SOANet

 

Creating database container

Now, on https://container-registry.oracle.com/ there are some instructions on how to set up a SOA Docker environment; however, these instructions are not totally correct.

When creating a SOA Suite database, the parameters in the db.env.list were not correct:

 

ORACLE_SID=<db_sid>
ORACLE_PDB=<pdb_id>
ORACLE_PWD=<password>

 

These properties were ignored, and a default dummy name like ORCL was used instead.

The correct prefix should be DB_ instead of ORACLE_:

DB_SID=soadb
DB_PDB=soapdb
DB_PWD=*******
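With the corrected env file, the database container can be created; a sketch of the run command (the image tag is an assumption, check the registry page for the exact one):

docker run -d --name soadb --network=SOANet -p 1521:1521 \
  --env-file ./db.env.list \
  container-registry.oracle.com/database/enterprise:12.2.0.1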

 

After that, start the Docker database container, and the database comes up with the right SID and service name:

docker start soadb
docker ps

Some verification:

- Logged in with SQL*Plus

- Showed listener status

 

Create AdminServer Container

Before creating the AdminServer, I first obtained the image from the registry:

docker pull container-registry.oracle.com/middleware/soasuite:12.2.1.3

 

Here too, specific parameters had to be set, although again I encountered some flaws in the original instructions:

  • Although a password was set for the DB admin, it wasn't picked up, and I had to set it manually in the database ("alter user sys...")
  • The SID and service names were not correct; for the PDB I had to configure it including the domain name, so this was finally the correct setup:
CONNECTION_STRING=soadb:1521/soapdb.localdomain
RCUPREFIX=SOA1
DB_PASSWORD=******
DB_SCHEMA_PASSWORD=*****
ADMIN_PASSWORD=******
MANAGED_SERVER=soa_server1
DOMAIN_TYPE=soa

 

Next, run the creation of the domain and start the AdminServer through WLST:

docker run -i -t  --name soaadminserver --network=SOANet -p 7001:7001 -v /u02/scratch/DockerVolume/SOAVolume/SOA:/u01/oracle/domains   --env-file ./adminserver.env.list oracle/soa:12.2.1.3

 

When this was finished, I could log in to the WebLogic console. The listen address of the AdminServer was empty, so I guess updListenAdress.py did not do its work, and I changed it manually.

 

Starting the managed server

The image already configures a managed server in the domain, so next is to spin up a container for the SOA managed server:

docker run -i -t  --name soa_server1 --network=SOANet -p 8001:8001   --volumes-from soaadminserver   --env-file ./soaserver.env.list oracle/soa:12.2.1.3 "/u01/oracle/dockertools/startMS.sh"

 

The SOA managed server came up after a while; its status in the console was SHUTDOWN, though, because the start script did not use the Node Manager. Checking the log, I could follow the startup sequence:

 

docker exec -it soaadminserver bash -c "tail -f  /u01/oracle/user_projects/domains/InfraDomain/logs/ms.log"

After that, I logged in to the container and started the Node Manager using startNodeManager.sh, to be able to start managed servers through the console and get health info.

 

Conclusion

Just some thoughts and doubts that came up; please correct me if I'm wrong.

Now the million dollar question would be: is SOA Suite fit for a container platform? It does run, although I haven't tested it thoroughly yet. In the end, setting it up is rather simple.

Apart from some flaws during setup, you might ask yourself: what are we doing differently here, compared to spinning up servers and/or VMs?

 

Well, first, all Docker containers run on a server, but we skipped the entire server configuration, and based on a pre-baked image we could bring up an environment rather quickly.

But looking from an application perspective, we are still not doing anything containerized. Even in a Docker container, monoliths can exist.

So the complete story is: at platform level we have containers; at application level, not yet.

At this year's Developer Tour in Latin America I was selected to speak in Argentina, which would be a great adventure for me. As I have never been to the southern hemisphere, I was really excited and honoured to go. After a very long flight from Amsterdam to Buenos Aires, almost 14 hours, I landed early in the morning in Buenos Aires, where it was winter. For me a big switch, as in the Netherlands it was around 35 °C when I left. But who's complaining.

 

Packed with my suitcase and my Oracle Management Cloud bible I entered Buenos Aires... what a city and what contrasts. Beautiful art and architecture, but also a lot of poverty.

I like the urban lifestyle, so I found my way in Buenos Aires and visited some hotspots every tourist must see. If you ever plan to go, wear good shoes, because the streets are sometimes hard to walk on.

Nevertheless, I could breathe the South American lifestyle while seeing the tango danced live on the streets.

 


 

The conference day

 

The conference took place on my birthday, the 9th of August 2018, in the UADE, one of the many universities of Buenos Aires.


 

A 20-minute walk from my hotel brought me there, and around 9:30 AM the conference was opened in the main auditorium. My session was planned at 11:05 AM, but due to some delay it began a bit later, so I followed some other sessions. Although the majority of the sessions were in Spanish, which isn't my strongest language, I could follow some of it, and was lucky that the slides were in English. As I was on the Analytics track I followed a session from Edelweiss Kammerman about data visualization with the Oracle Data Warehouse Cloud Service, and the session before mine from Diego Sanchez, also about the Management Cloud, regarding problem detection and analysis.

 

 

Security Analytics with the Oracle Management Cloud

 

As I was in the Analytics track, I had to emphasize the analytics capabilities of the Oracle Management Cloud, where machine learning, anomaly detection and data visualization are important topics.

Machine learning capabilities are essential for this solution; in OMC the following are used:

 

  • Anomaly Detection
    • See the abnormal symptoms. We're not interested in what's going OK, but in the exceptions
  • Clustering
    • Reduce billions of data points to a manageable and understandable pattern. This requires high-end technology analysis.
  • Correlation
    • Correlate seemingly different events into a common recognized pattern, such as linking by a common attribute: an order ID, a personal ID and so on

 

 

The battle against attacks always lags behind

 

Let's face it: SOCs are having a hard time defending against all kinds of hostile actions, which can come from the outside world, or from inside through suspected fraud by employees. Some of the harm has already been done by the time they come into action.

The Security Monitoring and Analytics of OMC can make their life a bit easier by doing the following:

  • Intelligently monitor security events
  • Investigate using Log Analytics
  • Understand and interpret attack chains
  • Automatically remediate to reduce exposure
  • Continually harden systems in response to a threat or weakness

Now, a well-known pattern of attack is the Cyber Kill Chain, where through certain steps hostile parties can infiltrate systems without anyone noticing. And don't think of the stereotype of young guys or girls in their attic, trying to hack. No, we are talking about highly sophisticated attacks, initiated by machines and well-organized groups, possibly governments or criminal organizations.

 


 

A typical SMA Dashboard Identifying attacks

 

 

Also, when you put part of your IT in the cloud, you can easily integrate your Access Broker or Identity Management systems into the Oracle Management Cloud.


 

 

 

The SMA engine works with machine learning models and rules in order to detect any security threat, as I already explained in an earlier blog. But the fact that SMA works closely together with the Log Analytics module makes it a strong and well-integrated solution for any enterprise to use in its everlasting battle against attacks.

 

The closing Speakers, ACE and DevChamp dinner

As per tradition, the event was closed with a nice dinner at a restaurant to try out the Argentinian meat culture, where I met some colleagues whom I had not yet had the chance to meet.

By surprise, Jennifer Nicholson from the ACE program announced a new Java Developer Champion: Hillmer Chona. Congratulations and well done!

 

 

Big thanks

 

Finally, I would like to thank the Argentinian Oracle User Group for the organization, and I hope to see you again next year.

Monitoring your IT landscape is in many cases an underestimated topic, even though IT departments spend time on it. There are hardly any standards on how to approach it, and in a typical IT operations organization, every team uses its own tools: some scripts, monitoring software from different vendors, or some freeware/open source.

Companies who do Oracle often use Oracle Enterprise Manager Cloud Control (EMC), which is an agent-based centralized repository gathering diagnostic information from all connected systems, databases, middleware and applications. It's a very broad and complete solution to monitor and manage IT systems.

But often this platform is owned by one team, typically a DBA team, because EMC finds its origin in the Oracle Database, and they also use it to monitor their databases. For a company running Oracle SOA Suite or BPM or any other Fusion Middleware product, it is not a common habit to use EMC to monitor their FMW applications. The reasons why:

  • There are no management packs licensed
  • There is no or not enough knowledge how to implement and use these management packs
  • The team who owns the platform does not allow other teams to use the EMC

Management packs are layers for specific tasks or platforms which extend the management capability of EMC.

 

To overcome ownership issues, the Oracle Management Cloud (OMC) can help. Teams can order their own subscriptions, or do it as a joint company effort to monitor their applications in the cloud. Although OMC is not a replacement for EMC, you might notice some similarities, especially in Infrastructure Monitoring and Compliance Management.

But unlike EMC, OMC is a more coherent solution where the different modules work closely together, more or less out of the box. Application Performance Management, for example, uses Log Analytics to drill down deep into application issues.

 

Now, what if your company uses EMC but wants to make use of some of the features of OMC? Or better, why would a company want that? Well, a good use case is a company that wants to try out OMC, or wants to use one of the modules such as Log Analytics. But how do you get all the information from your on-premises EMC to OMC?

You can make use of the:

 

 

Data Collector

The OMC comes with a variety of agents:

  • The Gateway Agent - An in-between agent for when your systems are not supposed to be exposed to the outside world. A Gateway Agent can be placed in your DMZ; it gathers all information from all your OMC-connected applications and DBs and pushes it outside the datacenter to OMC.
  • The Cloud Agent - An agent installed to collect server information and gather log files for Log Analytics
  • The APM Agent - An agent specifically used for application performance diagnostics. It has to be implemented in the application server infrastructure; it can be a Java, Node.js, Apple or Android, or .NET agent.
  • The Data Collector - This agent can be used to collect all the data from your EMC and have it shown in OMC in the Infrastructure Monitoring module.

 

 

Data Collector Implementation

 

To implement the Data Collector, you need to locate your EMC system. This can be one host, or maybe separate hosts if the OMS application and OMS database run on separate servers. It is sufficient to install it only on the OMS application host (OMS = Oracle Management System, the engine that runs EMC).

 

First step is to download the Data Collector Agent from your OMC platform:

Transfer the package to your EMC host and unzip it in a directory.

The next task is to modify agent.rsp. Modify the following:

 

TENANT_ID=<YOUR OMC TENANT>

UPLOAD_ROOT=https://<youromc.europe.oraclecloud.com/

AGENT_REGISTRATION_KEY=*********************** --> to be found in Administration --> Agents --> Registration Keys

AGENT_BASE_DIRECTORY=/u01/app/omcagent

DATA_COLLECTOR_USERNAME=omc_collector

DATA_COLLECTOR_USER_PASSWORD=**************

OMR_USERNAME=sys

OMR_USER_PASSWORD=***********

OMR_HOST_USERNAME=oracle

OMR_STAGE_DIR=/u01/app/omcstage

 

OMR is the EMC repository. The Data Collector schema will be installed in this repository.

Next, just run the AgentInstall.sh script; when it is finished, after a while you can start the agent from your omcagent directory:

/u01/app/omc/agent_inst/bin/omcli start agent

/u01/app/omc/agent_inst/bin/omcli status agent

 

 

 

From here, your databases and systems monitored in EMC are shown in OMC, but only the basics. If you want to see more, you will have to specify a JSON file and register databases against OMC. Oracle provides various types of JSON scripts for various database flavours.

For some years I have attended the annual partner forum, and though I had a very busy time at my customers, plus the week after I had to present at the Analytics and Data Summit, I still decided to attend this forum. After leaving my "Oracle Red Stack" car behind at the airport, I flew to Budapest on Monday morning.


Arriving at my final destination, the Boscolo Autograph hotel, I was amazed by this very nice hotel, which happened to be very comfortable and luxurious, with very nice architecture on the inside. My compliments to Jurgen and his team for booking this hotel for the conference.

Day 1: ACE Sessions

This afternoon is reserved for ACEs and ACE Directors from partners who like to speak about a customer success story, or who have made a great contribution to some Oracle product. One of my favorite sessions was the one from the https://twitter.com/JarvisPizzeria team, who are very active with their blogs about the Oracle PCS. They deserved their community award!

I was surprised by the many countries from which partners were attending this forum. Luis Weir, together with Phil Wilkins, spoke passionately about their favourite subject, Oracle API CS, integrated in a full solution with an Oracle JET application, an integration layer with OIC, and a SaaS layer consisting of some SaaS solutions such as Oracle Taleo.

The other sessions were also very interesting to hear; it is always a pleasure to listen to Simon Haslam. He spoke about provisioning the Oracle Cloud in a secure way, which was very useful to hear; I recognized what he told about using the Oracle Cloud GUI: provisioning one instance is not such an issue, but doing 20 ends up in a lot of filling in and typing.

 

Furthermore, I was very pleased about the fact that the Oracle Management Cloud has been adopted in the PaaS, and it was an important topic this year. The last ACE session was about a customer case describing the use of the analysis capabilities to search for root causes in badly performing applications.

Unfortunately I was running out of time, else I would definitely have spoken about my experiences with the Oracle Management Cloud.

Finally Jurgen handed out his Community Awards, congratulations to all the winners!

 

 

Day 2 : General Sessions

 

This day was filled with keynotes and presentations from various members of Oracle's product management. The opening keynote was held by Jean-Marc Gottero, who gave a high-level overview of the position of the Oracle Cloud in EMEA and the role of partners.

 

The entire day was filled with presentations, so I will give some highlights that popped out of my memory.

 

Ed Zou made an important announcement in line with the Autonomous Database: Autonomous PaaS Services. Now, this was very fresh news, so there is not much to elaborate on yet, but again I was very pleased to hear that the Oracle Management Cloud will play a very important role in this solution. The exact date of releasing these services is not known, nor exactly what will be in them. He also talked about other new areas Oracle will cover, such as blockchain, intelligent bots, AI and machine learning.

 

Pyounguk Cho presented (in his enthusiastic style) the various cloud platforms from an AppDev perspective, such as JCS, ACCS and Stack Manager; in fact, about Enterprise Java these days and its position in the cloud. Highlights for development, infrastructure, data management, security and monitoring were important topics.

JCS now comes with a concept called quick-start instances, comparable with the quick-start package WebLogic developers can download to set up a WebLogic domain quickly and easily.

Other new features like snapshots and cloning and the integration of the Identity Cloud in JCS also passed the stage.

Another topic was the Application Container Cloud, a Docker-based polyglot platform, which showed us compatibility with non-Java runtimes such as Python, Node.js and PHP, and the ease of building and deploying these native applications using their corresponding Docker images. These different applications can run within one single tenant and be exposed to end users through Oracle's cloud load balancer. The elastic feature is also very nice; customers can use it to scale up or down according to their needs.

Oracle's messaging and streaming platform, Event Hub Cloud Service, based on Apache Kafka, showed us the need for replacement of traditional message brokers and data integrators.

Finally, the Stack Manager was discussed, a platform for managing all these different cloud solutions in one place, where customers can group and manage their services as one atomic stack.

 

Robert Wunderlich and Jakub Nesetril spoke about API management. A funny detail I found was that Robert mentioned the Rabobank and their current path to API management and Open Banking. This is especially important for partners, to know their position and how they can fit in bringing these solutions to customers.

They announced some new upcoming partnerships with companies in the API management market.

Better authentication integration was one of the topics, explaining how OAuth can be better configured in the API platform, plus integration with other technology partners and their solutions.

 

Later that day I went off for a video interview about, as you might guess, my favourite topic: the Management Cloud. This will be published soon, along with the other interviews held at the forum.

 

The evening program was very well organized, with a nice dinner and networking event held at the "Kiosk" in Budapest.

 

Day 3:

 

Unfortunately I could not attend the entire day because of my flight schedule, though I could attend a few of my favourite topics.

 

It was nice to see Maciej Gruszka again, as he spoke about developing microservices and serverless applications and new patterns of development, compared against monolithic applications with a significant footprint managed by application servers. Patterns such as microservices will become more common among architects as the viable architecture. More vendors these days are designing applications called functions or serverless applications and implementing them on their cloud. Maciej showed containerized environments with Kubernetes as the scheduling infrastructure for Docker-based applications running in Oracle Cloud, and the managed Kubernetes service as the core engine for running microservices frameworks and serverless applications.

 

 

 

Pyounguk Cho continued where he left off the day before, showing the broad options of JCS. Another good session was the one from Jurgen Fleisher, from product development of the Oracle Management Cloud. The presentation was not entirely new for me, but hearing it from another perspective gave me new insights and inspiration. He gave a good explanation of what machine learning means nowadays, and the role the Oracle Management Cloud can play in any IT organization when it comes to monitoring, analysis, DevOps and autonomous platforms.

 

Key takeaways

 

Because I'm deep into Oracle technology, some of the content I already knew. However, some new topics passed by as well, and I think it's very important to put them in the right place relative to what I already heard. Furthermore, as an Oracle partner it's essential to really partner with Oracle and exchange and absorb knowledge and experience.

For me, attending this event is a must.

 

Finally, lots of thanks to Jurgen and everyone in his team responsible for this excellently organized forum.

I've become a huge fan of the Oracle Management Cloud. Why? Because Oracle has broadened its limits: the OMC doesn't just monitor Oracle-based systems and applications, it has plugins for many non-Oracle technologies, which makes the OMC very flexible and enterprise-worthy as a complete solution for monitoring.

 

Security Monitoring and Analytics ( SMA)

Oracle also realized customers have great concerns about security in general, but even more in the cloud, so they've put up a service in the cloud with really powerful capabilities.

One of these powerful modules inside the Management Cloud is Security Monitoring & Analytics, or SMA. With this module, any SIEM or SOC can detect, identify and monitor the following:

  • Security threats from inside and outside the company
  • Fraud
  • Compliance violations

Inside SMA

When you log in to OMC, you can click on the SMA module if you have the proper cloud subscription. SMA looks pretty much like the other OMC components, but its focus is on security. Entering the first dashboard, you immediately see an overview of your users' activity and their possibly risky actions.

 

You start on the main SMA landing page showing the "Users" dashboard, but you can configure dashboards for yourself if you want. On this page you see:

1. Users – shows the total number of risky users

2. Threats – shows total, critical, high, medium and low risk threats

3. Assets – shows the total number of risky assets

Clicking on the threats, you can get more details on a person's actions, coming out of the analysis of the identity management logs or via user data upload. You can see the company, manager, and specific user details and status such as lockouts, locations, email addresses and so on. To dig deeper, you can identify a kill chain. A kill chain is a series of executions which might lead to some kind of destruction or illegal access/actions.

  • Threats by category – Threats are categorized by the SMA engine into different kill-chain steps such as:
    • reconnaissance --> research, identification and selection of targets
    • infiltration --> infiltrate these targets
    • lateral movement --> move through the system in search of keys/access points

It's obvious that this user has been the target of a hostile attack executing this kill chain.

  • Top Risky Assets by Threats – Detects if a certain asset, which can be any system, host or database, is being targeted more than usual.

 

Clicking on the threats, you can clearly see what is happening; the kill chain is clearly exposed. But how can we see this?

 

Based on the kill chain components, we identify:

  • An anomaly (WebAccessAnomaly), detected by an analytics machine learning model which saw the user going to a URL that was not expected based on the peer group baseline of websites visited. This user visited a site and downloaded malware onto his machine, which could have triggered this attack.
  • An attack detected by the rule "MultipleFailedLogin", which gets triggered when five or more failed login attempts on different accounts are seen
  • An infiltration attack detected by "TargetedAccountAttack"

Furthermore, some infiltration attacks are captured by the "BruteForceAttackSuccess" rule, which gets triggered by 5 failed login attempts on the same account, followed by a successful login within a one-minute period. The conclusion is that the attacker has gained the user's credentials. But it's still not the end... Again an anomaly is captured, by the rule PrivSQLAnomaly on a database: this is a SQL anomaly detection showing that the attacker is doing unauthorized or anomalous transactions on the associated asset FINDB. SMA's SQLAnomaly detection picked this up. Looking at the kill chain, the last action is detected: the lateral movement, with the rule MultipleUserCreation, where the attacker created 3 or more users in the Oracle database within a 5-minute period. Immediately you can see that a kill chain (anomaly -> recon -> infiltration -> lateral movement) attack is in progress. The attacker attacked a critical asset (the finance host and FINDB) via this user. With SMA you see not only point threats but the entire kill chain view, which gives faster insight into what's happening.

 

(original source: OMC SMA and Configuration and Compliance Demo Script)

 

 

Machine Learning

Machine learning in SMA helps identify attacks and threats. If you look at the PrivSQLAnomaly, you see that, based on an analysis of log data, a pattern is recognized which is outside normal ranges. In this example you see an action of a certain user which is not within the normal range, considering the function of this user. Further investigation shows that this user visited a hostile website, from which malware was installed on the user's computer. Using the WebAccessAnomaly together with some Log Analytics query results, another user, separate from the user we already had an eye on, also shows up. In this case we can take some preventive actions to prevent another kill chain, such as:

 

  • Force password reset on all the compromised accounts.
  • Cut-off access of the two users from rest of the network.
  • Trigger malware scans/removal from the user machines.
  • Blacklist the malicious website and add it to your web-filtering solution

Rules and Models

The mechanisms described are based on rules and models. Potential security actions have to be detected and reported, and within SMA you can define rules for that purpose. These rules apply to the systems or applications for which alerts are needed in case of a security breach.

These rules are used to detect any suspicious action and can be configured at any desired level: for instance, within a certain time window an event must happen, how many times, and what action has to follow when it is detected.

 

Models

To detect anomalies, machine learning models are used. These models are used along with the log analysis and can be:

  • Peer Models - based on an organization or group
  • SQL Models - based on analysis of database actions
  • User Models - based on analysis of individual users

In combination with the log and data analysis, which comes from log files or uploaded files, more and more suspicious patterns can be identified and recognized, in order to report, alert, and take the necessary actions.

 

 

Based on further analysis, the attacker created multiple users in a short period of time, so the security officer can identify what is going on and what kind of attacks have been carried out on which systems.

 

 

Conclusions

The above is just an example of the broad capabilities Oracle SMA has. I haven't seen any other product yet with these powerful capabilities. Even better, it can be positioned enterprise-wide, and not only for Oracle systems.

I used the OMC demo site and collaterals, plus some hands-on labs at Oracle OpenWorld, which really amazed me with this powerful solution!

I have worked with WebLogic for 18 years now, and every year a new roadmap appears with the new and coming features of Oracle WebLogic, presented during Oracle OpenWorld. While everybody is at this moment already back to business as usual, I'd like to give an overview of the already existing and upcoming features discussed last year in San Francisco.

 

 

Everything is "Serverless", "NoSQL", "Low Code", "SOA is dead", "Micro-everything and death to the Monolith!"

Of course these terms do not fully represent what they appear to at first sight, but still, when you're from the "old school server/sysadmin" world, I can understand it is sometimes dazzling and hard to put them all together. But when you look deeper, you will discover the relationships between these terms, and more specifically what they mean.

 

WebLogic Server "Current" and "Next"

 

Nowadays, we don't only speak of WebLogic Server anymore but also about the Java Cloud Service, which is WebLogic as PaaS. In this post I will give my view of the new and coming features.

WebLogic Server will still exist as the key Java application server from Oracle; however, it will be the "next generation" application server, where old and new concepts go hand in hand. Especially the move to the cloud, which has already been happening for a few years, will be emphasized more and more by Oracle. However, whether speaking of WebLogic Server or Java Cloud Service, the features are pretty much the same, so when I speak of WebLogic Server, it also means Java Cloud Service.

 

Current WebLogic Server versions

Generally speaking, the most important and most used current versions are:

  • 10.3.6 (11gR1, including all patch levels), which came out in 2009
  • 12.1.3 (12cR1, including all patch levels), which came out in 2011
  • 12.2 (12cR2, including all patch levels), which came out in 2015

 

12.2 made an important step toward continuous availability and multitenancy:

 

  • Multidatacenter availability with Oracle Traffic Director and Coherence and automated failover with SiteGuard

 

 

  • Cross Domain Transaction Recovery

  • Federated caching with Coherence in Multidatacenters
  • Zero Downtime Deployments with automated rollout and error rollback

  • Auto-scaling features:
    • Automated elasticity for clusters with managed server lifecycle
    • Rules-based decisions based on capacity, demand or schedule
    • WLDF Watches and Notifications changed to Policies and Actions

And under the hood, more and improved features regarding JDBC, REST, JMS and deployment.

 

WebLogic and Java EE8 Certification

Java EE 8 came out in late 2017 and will be supported in WebLogic this year, 2018. Where in Java EE 7 the focus was on productivity, in EE 8 the focus is more on simplicity. Some of the most important changes:

  • Servlet 4.0: Servlet is one of the most used APIs, now with support for the newest HTTP/2 protocol for better web performance
  • JAX RS 2.1 for RESTful WebServices
  • Further "lightweight" web improvements
  • Still Java EE full transactional support ( JMS, JDBC, RMI)
  • Better integration with Microservices technology

 

What does this mean for WebLogic? The current latest version is still on Java EE 7 and JDK 8. The next major version is planned to come out late 2018; my expectation is that it will be around September. In line with some already existing 13c products, I expect it will be the same for WebLogic, but more important is that it will fully support Java EE 8 and JDK 9.

 

WebLogic Patchsets

Several patchsets were released in 2017:

  • PS1 – bug fixes and feature completion of Continuous Availability best practices
  • PS2 – bug fixes and feature completion of Docker image updates and  App2Cloud migration tooling
  • PS3 – bug fixes and feature completion of Secured production mode and Zero Downtime patching improvements

 

WebLogic Multitenancy

Although containerized platforms such as Docker support WebLogic, the strategy for WebLogic itself will also move toward containers instead of a platform.

 

WebLogic/JCS, Docker and Kubernetes

 

WebLogic is already certified with Docker, and sample Dockerfiles are available on GitHub. It supports multiple topologies and can be used both on premises and in the cloud.

Kubernetes orchestration is on the way to be certified.

Supported versions for Docker are WebLogic 12cR1 and 12cR2 with Docker 1.9, which runs on Linux 6 or 7.

 

Supported topologies:

  • Non-clustered domain in Docker on a single host
  • Clustered domain in Docker on a single host
  • Clustered domain in Docker on multiple hosts

 


 

An announcement was made about the orchestration technology for Docker, the Kubernetes platform, which will be supported somewhere during 2018. Samples are already available. Support includes the tools which come with Kubernetes: Prometheus and Grafana for graphical monitoring dashboards. The WebLogic team has developed a tool to export WLDF watches, smart rules and policies, so that these metrics can be picked up by Prometheus and represented in a Grafana dashboard. Auto scaling with WLS dynamic clustering will also be supported.

 

Coherence "Next"

Coherence, which became an integrated part of WebLogic, also got some new and improved features, such as:

  • Docker support
  • Coherence RX, an add-on open source API for Coherence
  • Dynamic Active Persistence Quorum Policy, a built-in policy to ensure an adequate number of cluster storage members are available for recovery
  • Federated cache improvements to support multi-datacenter topologies
  • Improved proxy metrics
  • Zero Downtime Patching, following WLS
  • Incremental snapshots
  • HotCache multi-threading, JMX monitoring and multitenancy support
  • Coherence *JS, JavaScript support
  • And Coherence is available in the Oracle Cloud, where it can be chosen as an extra container in the Java Cloud Service.

 

Conclusion

 

Is it because I'm getting older? The world of IT seems to go faster and faster, which makes it more and more interesting to explore new technologies and methods. I sometimes consider writing a new book, but with the frequency of innovations, what's HOT today is NOT tomorrow. This overview doesn't include all the new and improved features, but it gives you an idea of the direction we are going.

 

Have an interesting and very good 2018!!

One of the services delivered by the Oracle Management Cloud is infrastructure monitoring. In this case, infrastructure spans from host to software platform. In this blog post I will try to explain how you can effectively get behind issues in your operational Oracle SOA Suite environment. OMC can monitor parts specific to the SOA Suite, in fact all the engines enclosed in a running SOA Suite environment. These are:

  • The BPEL engine
  • The Mediator engine
  • The Decision Service engine, or Business Rules engine

 

I simulated a test which processed a lot of transactions through the payment process of my company. SOA Suite was handling this payment process through an OSB service in the frontend and an enrichment through a SOA composite which did a validation based on some rules.

The company deployed a new release and adapter to get more benefit out of it.

During the tests, I suddenly received an alert from OMC that the error rate on the BPEL engine had increased. When I looked into the Performance table of my soainfra entity, I found the following:

You can see the errors/min in this screenshot.

Clicking on the SOA Composite field, I could detect which composite was causing the error: the ValidatePayment.

 

Now I had to find out why this composite was causing the error, so I jumped into the other OMC feature, Log Analytics. The best logs to look at in this case are the FMW diagnostics logs from the Oracle Diagnostic Logging framework. So I chose to set the entity to the running SOA server:

And the logs were filtered for the SOA-specific operations. In the right field I chose FMW Diagnostics Log from the pie chart.

To group the log messages, I clustered them by choosing the cluster visualization option.

Then I could find out very easily that there was a JCA adapter issue; further investigation pointed out that while deploying the SOA composite, some EIS JCA adapters were changed.

 

After resolving this, the issues were gone. But it proved the power of the Oracle Management Cloud: in minimal time I discovered what was going on and could solve it!

The Oracle Management Cloud is a very broad platform for every developer or operations engineer to get all the information that is needed.

This is why this platform is an ideal one to use in a DevOps strategy. Why? It has so many features for doing analysis from both a developer's perspective and an operations perspective. Combine these two and your team will act faster in detecting and solving issues, or even be proactive about possible bottlenecks in applications.

One of the zillion features, although a small one, that I discovered is the ability to record and dump a Java snapshot, better known as a Java Flight Recorder dump.

Through OMC it is possible, with the help of the diagnostics framework of Oracle WebLogic (WLDF), to dump one or several recordings.

These recordings can be used to analyse the behaviour of the entire JVM, in this case the JVM of a particular WebLogic Server instance.

 

 

Oracle Management Cloud - Java Agent version

 

To be able to do a Java Flight Recorder dump from the OMC console, the Java APM agent must be at version 1.22; older versions do not have this feature implemented yet. You can see your version at:

So if you're not on this version, upgrade the APM agent to the latest version.

 

 

 

 

 

Performing a Java Flight Recorder Dump

 

If the version matches, you can do a dump of your WebLogic Server JVM. This is how you do it:

  • Navigate from the OMC home to the APM dashboard

  • In the left pane, select Diagnostic Snapshots

 

 

 

  • Select JFR Dumps. The JFR screen appears. Click on Take JFR Dump and fill in a name, or take the default name
  • Then click Select Appserver; a list with the application servers will be shown:

JFR Support can show Yes, No or Off. No can be caused by the agent version; otherwise, Yes or No depends on whether the WebLogic Server JVM has the commercial features enabled in its startup parameters. If not, add them and restart the WebLogic Server JVM:

 

-XX:+UnlockCommercialFeatures -XX:+FlightRecorder
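One way to add these flags is in the server start arguments in the Administration Console; a sketch of an alternative, assuming the domain uses the setUserOverrides.sh mechanism (available since WebLogic 12.1.2):

# In $DOMAIN_HOME/bin/setUserOverrides.sh (assumption: HotSpot JDK, where JFR is a commercial feature)
JAVA_OPTIONS="${JAVA_OPTIONS} -XX:+UnlockCommercialFeatures -XX:+FlightRecorder"
export JAVA_OPTIONS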

 

  • Click on


A dump will be created, for analysis with Java Flight Recorder.

 

Note - Beware that the content of your recording depends on the level at which the diagnostic volume is set on the particular WebLogic JVM, which can be Low, Medium or High. You can set this per WebLogic Server in the Administration Console on the General tab. By default it is set to Low.

 

Conclusion

 

If you are an experienced administrator, you see that it is rather easy to perform these steps; it's a nice addition for doing some deep-level diagnostics with the help of the Oracle Management Cloud.

As of a recent release, the Oracle Management Cloud supports monitoring of the Oracle SOA Suite, both on premises and in the cloud. I've set it up using my SOA Cloud instance, which I think is really awesome! I've built a lot of monitoring solutions at customers using Oracle Enterprise Manager Cloud Control and the SOA Management Pack, but doing it with the Management Cloud is definitely a go for me!

 

First of all you need to roll out the OMC Cloud and APM agents as I described in my blog https://community.oracle.com/blogs/mnemonic/2017/05/07/oracle-management-cloud-setup-a-simple-intrusion-alert. If you followed that, all kinds of entities will be uploaded to OMC. Entities are in this case all kinds of information about the server, the software, the runtime processes and many more.

For every entity you search for, you can switch on APM monitoring and analytics. In my case I wanted to switch on the entity SOA Infrastructure. Looking at this entity, a lot of attributes were in it, such as the SOA Composite with some familiar components:

and a lot more useful information.

However, I could not add the SOA Infrastructure because I missed one step: I had to add the soa-infra entity on the agent side, on the SOA Cloud Service environment, with the entity name omc_oracle_soainfra.

My JSON file then looked like this:

 

"entities": [{
  "name": "QSOACS01_domain",
  "type": "omc_oracle_soainfra",
  "displayName": "QSOACS01_domain",
  "credentialRefs": ["WLSCreds"],
  "timezoneRegion": "CET",
  "properties": {
  "port": {
  "displayName": "Port",
  "value": "9071"
  },
  "protocol": {
  "displayName": "Protocol",
  "value": "t3"
  },
  "admin_server_host": {
  "displayName": "Admin Server Host",
  "value": "qsoacs01-wls-1.compute-gse00010395.oraclecloud.internal"
  },
  "capability": {
  "displayName": "capability",
  "value": "monitoring"
  }
  }
  }]

After that I could add the entity using the command:

 /u01/app/oracle/tools/paas/state/homes/oracle/omc_cloud_agent/agent_inst/bin/omcli add_entity agent omc_oracle_soainfra.json -credential_file omc_oracle_soainfra_creds.json
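 

To verify that the agent picked up the new entity, you can list the agent's targets; I'm assuming the emctl-style listtargets verb here, which the cloud agent's omcli inherits:

# List all entities known to this cloud agent
/u01/app/oracle/tools/paas/state/homes/oracle/omc_cloud_agent/agent_inst/bin/omcli config agent listtargets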

 

And now I was able to select the entity:

After that, I needed to enable monitoring for it using the Administration menu in OMC.


In a follow-up post I will do a deeper dive into what and how to monitor the SOA Suite, on premise or in the cloud, with OMC, but I'm quite certain that this is a very good solution for customers to monitor their SOA runtime production systems.

Thanks to the OMC team I received my own OMC trial environment to set up some experiments. Looking through OMC I saw some familiar components, such as synthetic tests, and here and there components that are also used in Oracle RUEI.

The possibilities in OMC are huge, which I will discuss at a later stage, but something I wanted to try out was whether I could create a mechanism to detect a hacker collective trying to break into a web application using some sort of password attack.

 

My ingredients:

  • An Oracle Java Cloud Service containing a WebLogic 12c domain, hosting web applications
  • An Oracle Management Cloud subscription, with the following components:
    • Application Performance Management
    • Log Analytics
    • IT Analytics
    • Infrastructure Monitoring

 

Set up the basic needs

Before you can use OMC, some basic steps need to be performed. These are:

  • Install the APM agent
  • Install the Cloud agent
  • Enable and register the agents on my JCS environment to the OMC

 

Install the APM Agent

Of course, there is no agent software package present yet, so first of all the software needs to be downloaded. The basic script can be downloaded from your OMC environment:

You can place the script on the servers of your JCS instance; in my case: the database, WebLogic and Oracle Traffic Director servers.

 

After unzipping it, the agent download can begin:

Cloud agent:

Java APM Agent:

The registration keys can be obtained in OMC, on the Administration tab.

 

Then you enter the stage locations and install the agents:

./AgentInstall.sh AGENT_TYPE=apm_java_as_agent AGENT_REGISTRATION_KEY=***************************** AGENT_BASE_DIR=/u01/app/oracle/tools/paas/state/homes/oracle/omc_cloud_agent  -staged
./AgentInstall.sh AGENT_TYPE=cloud_agent AGENT_REGISTRATION_KEY=************************* AGENT_BASE_DIR=/u01/app/oracle/tools/paas/state/homes/oracle/omc_cloud_agent  -staged
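 

After installation it is worth checking that the cloud agent is up and uploading (same AGENT_BASE_DIR as above):

# Show the status of the freshly installed cloud agent
/u01/app/oracle/tools/paas/state/homes/oracle/omc_cloud_agent/agent_inst/bin/omcli status agent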

 

Adding the entities

Oracle provides JSON files for every type of environment, which you can use to add your environment's specifics to OMC. My example for JCS:

{
  "entities": [{
    "name": "QJCS01_server_1",
    "type": "omc_weblogic_j2eeserver",
    "displayName": "QJCS01 Managed Server 1",
    "timezoneRegion": "CET",
    "properties": {
      "host_name": {"displayName": "Weblogic Host", "value": "qjcs01-wls-1.compute-gse00003036.oraclecloud.internal"},
      "domain_home": {"displayName": "Domain Home", "value": "/u01/data/domains/QJCS01_domain"},
      "listen_port": {"displayName": "Listen Port", "value": "9073"},
      "listen_port_enabled": {"displayName": "Listen Port Enabled", "value": "true"},
      "ssl_listen_port": {"displayName": "SSL Listen Port", "value": "9074"},
      "server_names": {"displayName": "Server Names", "value": "QJCS01_server_1"}
    },
    "associations": [{
      "assocType": "omc_monitored_by",
      "sourceEntityName": "QJCS01_server_1",
      "sourceEntityType": "omc_weblogic_j2eeserver",
      "destEntityName": "QJCS01_domain",
      "destEntityType": "omc_weblogic_domain"
    }]
  }]
}

Together with a JSON credential file, you can add it all to OMC:

/u01/app/oracle/tools/paas/state/homes/oracle/omc_cloud_agent/agent_inst/bin/omcli add_entity agent /u01/app/oracle/tools/paas/state/homes/oracle/omc_cloud_agent/my_entities/qjcs01_domain.json -credential_file cred.json

 

I repeated these steps for my Database and Traffic Director, using their specific JSON files.

 

After adding the entities, you need to provision the APM agent using the script from your APM stage directory:

./ProvisionApmJavaAsAgent.sh -d /u01/data/domains/QJCS01_domain -no-wallet

 

And add the APM agent jar to the domain in startWebLogic.sh (and restart the WebLogic domain):

 

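# Attach the APM instrumentation agent to the server JVM. startWebLogic.sh later
# restores JAVA_OPTIONS from SAVE_JAVA_OPTIONS, so the flag is added before that line.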
JAVA_OPTIONS="${JAVA_OPTIONS} -javaagent:${DOMAIN_HOME}/apmagent/lib/system/ApmAgentInstrumentation.jar"
SAVE_JAVA_OPTIONS="${JAVA_OPTIONS}"

 

If all goes OK, you can see your agents being registered in OMC:

 

Now the basic steps are finished. As you click through OMC, you'll see that loads of information has already been generated from your JCS instance.

 

Log Analytics - detect a pattern

 

Now a simple use case: I wanted to detect users trying to access a web application either unauthenticated (HTTP 401) or unauthorized (HTTP 403). I deployed a simple web application, and some users with different roles, to be able to test with it.

Some users had more permissions than others, so I could test between them.

Second, I wanted to generate a large volume of these actions:

  • Access the web page, try to log in and perform some action (legal or illegal).
  • Or try to log in with a wrong password.

 

For this I created a simple JMeter script that accesses the web page, logs in and performs an action within the session: a task to close an office, which is only permitted for someone with the manager role.

 

I let this script run continuously to generate the data I needed; a minimal headless run looks like the sketch below.
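 

A sketch of such a run (the .jmx file name is my own; -n runs JMeter in non-GUI mode, -t points at the test plan, -l collects the results):

# Run the test plan headless; the thread group in the plan is set to loop forever
jmeter -n -t login_attack.jmx -l results.jtl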


Using Log Analytics

As a first step in Log Analytics, I analyzed the access logs, which gave a clear view of the large numbers of HTTP 401 and 403 errors.

Now these can happen on every website and need not be anything to worry about, but in this case such a large volume of these errors passed that it could not be a mistake or a human error.

I clicked on Log Analytics, selected the WebLogic domain running in the cloud, and selected the access logs in the pie chart.

Then, in the left pane, I selected the field Security Result.


Note that the denied count is very high. The next step was to save this search, and, very cool, I could create an alert out of it.

 

And I received a mail with this specific alert, and another one when the JMeter test had stopped, telling me the alert had been cleared.

 

Now this was a very first, basic step in using OMC to detect hostile actions; next time I will dive deeper into all the great features!

This year's community forum, organized by Oracle for its partners, took place in Split, Croatia, a very nice Mediterranean area, which we got to see during the city tour on Tuesday evening.

However, we did not come just for the fun, but to meet and greet other partners, share and absorb the knowledge that is essential for companies to serve their customers, explore new technologies and methods, and have a sneak peek into another partner's "kitchen", during a five-day program: days 1-3 general sessions, days 4-5 hands-on workshops, plus the partner awards and some networking events. Partners attended from all over the world: EMEA, the US, Latin America and Down Under.

 

The forum, formerly known as the Oracle Fusion Middleware Partner Forum, has been "lifted and shifted" to the Cloud (PaaS) over the last years, so the content this week focused on Oracle's Cloud products; but someone who pays close attention can also extract the deeper content that is personally useful.

For me, I had a double role: of course attending, sharing and absorbing knowledge for the company I work for, but also telling something about the successful Oracle Process Cloud implementation my company did in 2016. A lot of these success stories about developing cloud-based solutions were held on Monday during the so-called ACE sessions.

 

As the name of the forum already indicates, it was all about PaaS, and the content was presented by VPs, directors and product managers from Oracle, such as Vikas Anand, Robert Wunderlich and Jean-Marc Gottero. But the partners also had some interesting presentations about their solutions in all kinds of areas of the Oracle Cloud. The presentations covered the following topics:

  • Agile DevOps
    • About DevOps agility and methodology, and how the Oracle Developer Cloud plays a role in bringing DevOps up to speed in a software company.
  • APIs - APIPCS and API Management
    • A very interesting subject about APIs and the Cloud Services, covering the benefits of APIs and API management, such as better security and protection, monitoring, discovery and the new Apiary platform. Also an overview of existing and upcoming features; the API firewall caught my attention, as did the monitoring capabilities, where I see great potential in combination with OMC (author's side note).
  • ICS
    • A session about best practices for implementing integration patterns using ICS, with a clear view on when to use ICS, and showing that ICS overlaps with many other PaaS platforms, which is to be expected from an integration point of view. Interesting was the topic of exposing databases using REST.
  • IoT
    • Apart from the role IoT already plays nowadays, it is interesting to see how the combination and integration with the cloud fits in, such as the Asset Monitoring Cloud Service, where connected devices can be monitored on their location, performance, health and utilization. A very good use case was a company's production floor, where every asset is watched so that any disturbance in the production line can be detected at an early stage.
  • JCS
    • "WebLogic Server in the cloud" has become more mature, with all the features you also use when running on premise, but a lot of the work is already done for you. An overview of the tooling, and the DevOps integration and methodologies embraced, as expected, by JCS. Important to know: a lot of the on-premise multitenancy features are also available in JCS, as are tools and methods to transfer your on-premise WebLogic Server to the cloud using the DPCT tool.
  • OMC
    • The Oracle Management Cloud has a lot of cool features, such as Application Performance Monitoring and Log Analytics for troubleshooting and performance analysis, plus other great features like measuring the end-customer experience through synthetic tests that replay recorded user actions (features that stem from RUEI).
  • PCS
    • Qualogy's successful implementation in the Netherlands. In the months before, I had some discussions with Jurgen and the PCS team, and finally we decided I would be the spokesman to give a short overview of what we have been doing.

Our customer is a small-to-midsize municipality in the Netherlands that lacked insight into personnel capacity, and because of this the onboarding of new personnel got stuck. New personnel were registered in different systems using different methods, even in an Excel spreadsheet. Management information was very poor, there was no relationship between budgets and manpower, and there were differences between systems regarding financial budgets and accountability.

 

To overcome this we built a solution using the Oracle Process Cloud, working together with Qualogy's Forceview HRM solution in the cloud.

The PCS team constructed one process to:

  • Register all staff data in one system
  • Inform the organization in one uniform way when there are changes
  • Deliver good-quality and coherent management information

 

And all done by:

  • Having no delay in rolling it out
  • Having direct contact with the business about wishes and requirements
  • Using Oracle Process Cloud Service and Qualogy's Forceview

 

Screenshot of the PCS environment:


I also told this story in a video interview, which will soon be published on Oracle's YouTube channel; I'll keep you updated about that!