
In this blog post I will set up a Java Cloud Service, Oracle's PaaS service for delivering WebLogic Application Server. Setting up a Java Cloud Service infrastructure means the following:

  • Set up a Database Cloud Service
  • Set up a Java Cloud Service including the load balancer feature. This results in a basic Oracle Traffic Director domain and a WebLogic domain for your applications to be deployed on.

 

 

 

Setting up a Java Cloud Service

 

To set up a Java Cloud Service, you set up a complete basic WebLogic Application Server domain. For this you need to set up:

 

Database Cloud Service

 

In my cloud console I first followed the process of setting up a Database Cloud Service:

 

  • Click on the Database Cloud Service console and create a service

  • Next, fill in some requested details

  • The last screen asks for some advanced details. Don't forget to tick Create Storage Container:

    • Service Name: name of the database service
    • Key: you can generate a key pair in the console, or, if you have already done so, select your public key
    • Backup Destination: should be Both Cloud and Local for Java Cloud Service
    • Cloud Storage Container: must be in the right naming format: Storage-<your identity domain>/<storage container name> (the container name is free to choose)
    • PDB: leave it at the default

 

After a while, you have a database up and running.

Accessing the database can be done in a few ways:

- Through the DB/EM console; just the familiar way of doing it

To access these consoles, you need to enable the predefined access rules.

 

The Java Cloud Service

Second, the Java Cloud Service can be set up, which is a relatively easy part. From your Cloud Dashboard, click the Java Cloud Service icon, open the service console

and click Create Service.

 

 

 

This, too, is a very straightforward process.

Finally, when all details are filled in:

Think about access to the consoles, otherwise they will be inaccessible; later we will have to secure them further with OTD.

Also enable a local load balancer; it costs more but gives more control.

After creation, you get two domains:

  • Your application domain
  • The Oracle Traffic Director Domain

 

Here's where the basic setup ends. Afterwards you can access your environments as if they were part of your company's own infrastructure. I deployed an MDB test application which generates lots of JMS messages and CPU cycles.

 

Later on, in part 2, I will dive more into the multitenancy story: sharing resources and isolating them.

Preface and Scope

A lot of customers run their business applications on Oracle Fusion Middleware technology, for example a web portal consisting of different combined Oracle Fusion Middleware components. These environments produce lots of metadata content, diagnostic information and logging. To avoid poorly performing environments weighed down by all this data, customers need a clear vision and strategy of how they will handle housekeeping, and of how to implement this housekeeping in the different Oracle FMW components, each at its own level.

To ensure a stable and well-performing environment it is important to be in control and have a proper housekeeping plan.

 

So, what does housekeeping mean in relation to OFMW components?

  • Database related:
    • Purging/cleaning old OFMW instance, process and metadata from the OFMW repositories
  • Log related, in text or XML format:
    • Log file rotation, retention and cleaning
  • Diagnostic framework related:
    • Cleaning diagnostic data of all OFMW environments
  • Implementing housekeeping mechanisms on these various levels
  • Monitoring housekeeping in light of capacity planning and growth trends

A housekeeping strategy and guidelines must therefore be defined, because of the following:

  • OFMW products rely strongly on their metadata; they must retrieve information very quickly to steer certain processes, but they also generate all kinds of historic data which becomes ballast after a while.

To guarantee performance, this data must not be allowed to grow unnecessarily and has to be cleaned.

  • Logs and diagnostic data may be useful for some period for analysis and historic perspective, but need to be removed from the OFMW environment and stored somewhere else (Oracle Enterprise Manager Cloud Control, a log server).
  • How long metadata, process data and logging should be kept depends on the stage of the environment; a development or test environment might require a longer retention period to study the outcome of tests or to solve bugs in the code.

 

Housekeeping strategy Oracle Fusion Middleware components

Oracle Fusion Middleware generic strategies

This strategy is applicable to every OFMW-based environment; whether it is an Oracle Service Bus or a WebCenter Portal environment, they all rely on the same underlying frameworks, namely:

  • Java server output logging
  • WebLogic Logging Framework
  • WebLogic Diagnostics Framework (WLDF)
  • Oracle Diagnostics Logging Framework (ODL)
  • Dynamic Monitoring System (DMS)

Every framework needs a guideline and strategy which covers:

  • Level of logging/diagnostics
    • Depending on the type of environment, the appropriate level must be set to match how deep diagnosis must be able to go.
  • Retention and archiving guidelines
    • Every company or institute has its own internal guidelines for archiving data, sometimes regulated by law, so retention and archiving must be in line with company policy.
  • Cleaning methodologies and mechanisms.

To be more specific, housekeeping covers:

  • WebLogic Server and domain logging (produced by the Admin and managed servers).
  • Oracle Diagnostics Logging (extended logging for specific Fusion Middleware components).
  • WebLogic Diagnostics Framework.
  • Node Manager logging.
  • Oracle HTTP Server access and server logging.

 

WebLogic Server and Domain Logging

A typical WebLogic domain produces a vast amount of logging from everything that happens inside it. Before setting up proper housekeeping, the following needs to be determined:

  • Rotation frequencies for WebLogic Server and domain logs
  • Maximum size per log file
  • Log file retention
  • Levels of logging (INFO, NOTICE, etc.). Only relevant information is needed.

These values differ per environment stage. Non-production stages may need to keep log files for a longer period for analysis, or use a different level to produce more logging.
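To make this concrete, below is a minimal WLST sketch that applies size-based rotation, a retention limit and the INFO level to a single server. The admin URL, the credentials and the server name ms1 are assumptions; adapt them to your own domain and check the values against the table that follows.

# Minimal WLST sketch (assumed admin URL, credentials and server name 'ms1'):
# apply size-based rotation, a file-count limit and the INFO log level to one server.
connect('weblogic', 'welcome1', 't3://adminhost:7001')
edit(); startEdit()

cd('/Servers/ms1/Log/ms1')
cmo.setRotationType('bySize')        # rotate when the log file reaches the size limit
cmo.setFileMinSize(10240)            # rotation size in KB (about 10 MB)
cmo.setNumberOfFilesLimited(true)    # limit the number of retained log files
cmo.setFileCount(100)                # keep at most 100 rotated files
cmo.setLogFileSeverity('Info')       # keep the level at INFO; only relevant information

save(); activate()
disconnect()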

 

Table Logging Retention

Component | Platform component | Stage | Size per log file & retain limit | Retention in days | Remarks
WLS server and domain logging | WebLogic Server | ALL | 10 MB - 100 files | 30 | Log level: INFO
WLS access logging | WebLogic Server | ALL | - | - | Keep it off; will be done in OHS
GC logging | Java VM | DEV/TEST/ACC | 10 MB | 14 | Specific information about JVM garbage collection times
GC logging | Java VM | PRD | 10 MB | 7 | Specific information about JVM garbage collection times
STDERR and STDOUT | Java/OS output | ALL | 10 MB | 30 | Use logrotate from Linux
Oracle Diagnostics Logging | OFMW infrastructure | DEV/TEST/ACC | 10 MB | 30 | Rotation policy: size based; log level: INFO
Oracle Diagnostics Logging | OFMW infrastructure | PRD | 10 MB | 30 | Rotation policy: time and size based; log level: INFO
Oracle HTTP Server logs | HTTP | DEV/TEST/ACC | - | 31 | For OHS, use ODL format and odl_rotatelogs, rotation_type T
Oracle HTTP Server logs | HTTP | PRD | - | 7 | For OHS, use ODL format and odl_rotatelogs, rotation_type T
Node Manager | WebLogic Server | ALL | See remarks | See remarks | LogCount 1, LogLimit 100, LogLevel INFO, FileCount 30, FileMinSize 500, RotationType SIZE, NumberOfFilesLimited true

Explanations:

WebLogic

If you use Node Manager to start a managed server, messages that would otherwise be written to stdout or stderr are written to a single log file for that server instance, SERVER_NAME.out; for example, DOMAIN_NAME\servers\SERVER_NAME\logs\SERVER_NAME.out, where DOMAIN_NAME is the name of the directory in which you located the domain and SERVER_NAME is the name of the server. WebLogic does not rotate STDERR and STDOUT files; use the Linux logrotate utility (logrotate.conf) for these.

Every WebLogic Server writes access logs. However, in many scenarios there will be a web server in front acting as a reverse proxy, such as Apache or Oracle HTTP Server, or even Oracle's full load balancer solution, Oracle Traffic Director. In these cases, switch access logging off at the WebLogic level and leave it to the web server; this minimizes extra overhead. If you still want to turn on access logging for deeper diagnosis, switch it off at least for the AdminServer.
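As a sketch of the access-logging advice above (again with an assumed admin URL, credentials and server name 'ms1'), the HTTP access log of a managed server can be switched off like this in WLST:

# Minimal WLST sketch: switch off the HTTP access log of a managed server,
# leaving access logging to the web tier (OHS/Apache/OTD) in front of it.
connect('weblogic', 'welcome1', 't3://adminhost:7001')
edit(); startEdit()

cd('/Servers/ms1/WebServer/ms1/WebServerLog/ms1')
cmo.setLoggingEnabled(false)   # no access.log on the WebLogic server itself

save(); activate()
disconnect()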

 

 

Oracle Diagnostics Logging

Oracle Diagnostics Logging handles logging differently, using:

1. Runtime log levels for the more product-specific logging (SOA, OSB, WCP)

2. Log handlers, which control rotation frequency and log file size:

odl-handler

trace handlers
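A hedged sketch of both mechanisms, using the Fusion Middleware WLST logging commands (run WLST from oracle_common); the handler name, logger name and sizes below are examples, and the exact parameter set should be verified for your release.

# Sketch using the FMW WLST logging commands; the handler/logger names and sizes are examples.
connect('weblogic', 'welcome1', 't3://adminhost:7001')

# 2. log handler: size-based rotation, roughly 10 MB per file and 300 MB in total
configureLogHandler(name='odl-handler', maxFileSize='10M', maxLogSize='300M')

# 1. runtime log level: NOTIFICATION:1 is the ODL equivalent of INFO
setLogLevel(logger='oracle.soa', level='NOTIFICATION:1')   # add target='<server>' when connected to the admin server

disconnect()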

 

Nodemanager

For operational management, log file rotation needs to be enabled by setting NativeVersionEnabled=false in nodemanager.properties.

Oracle HTTP Logging

 

 

If your company uses Oracle HTTP Server or Apache, you may also need to include the web server log files in your housekeeping.

These files can be kept in check with one of two rotation mechanisms:

  • odl_rotatelogs, which comes by default with the installation of OHS
  • rotatelogs, the Apache version

Use rotation_type T (time based), and set the rotation_policy parameter to:

frequency (in sec) retentionTime (in sec) startTime (as YYYY-MM-DDThh:mm:ss)

Example

(F)43200:(RT)604800 2017-01-06T10:53:29: the error log is rotated every 43200 seconds (12 hours), and rotated log files are retained for a maximum of 604800 seconds (7 days).

For details see Table Logging Retention.

 

 

WebLogic Diagnostic framework

 

 

There is a WLDF module for Oracle Fusion Middleware, called Module-FMWDFW, installed with the following components:

  • Predefined watches and diagnostic dumps to detect, diagnose and resolve problems, using WLS and SOA MBeans and DMS metrics
  • Detection of critical failures and collection of diagnostic dumps containing relevant diagnostic information such as logs, metrics and server images
  • ADR (Automatic Diagnostic Repository) for creating incidents

WLDF for OFMW has a default record volume set to Low.

This module has a so-called record volume; based on the type of stage, this level needs to be increased or lowered.

 

Stage | Record volume | Harvester sample period | Capture diagnostic image | WLDF data archive | EventsDataRetirementPolicy | HarvestedDataRetirementPolicy
DEV | Medium | 5000 | Yes | File | retirement-time 10 | retirement-period 24, retirement-age 168
TEST | Medium | 5000 | Yes | File | retirement-time 10 | retirement-period 24, retirement-age 168
ACC | Medium | 5000 | Yes | File | retirement-time 10 | retirement-period 24, retirement-age 168
PRD | Low | 5000 | Yes | File | retirement-time 10 | retirement-period 24, retirement-age 72

Oracle Fusion Middleware specific housekeeping

Oracle Service Bus Alert data

Oracle Service Bus has to manage its own alert and diagnostics data. This data consists of:

  • Pipeline alerts (file based)
  • SLA alerts (file based)
  • Statistic reports (file based)
  • Metrics reports (JMS provider and database)

 

  • Pipeline alerts
    • OSB services generate pipeline data in every step of a routing or transformation in a service. Depending on how services are developed, exceptions can be logged as alerts.
  • SLA alerts
    • Additionally, individual business services can be instrumented with SLA alerts to monitor performance and availability from OSB business services to external components (SOA, or JCA to database, AQ or legacy).
  • OSB metric data
    • The OSB monitoring framework keeps statistics of service metrics.
  • OSB reporting data
    • Using the internal JMS reporter, developers can instrument services to write exceptions and other logging to a database.

None of these metrics are enabled by default, and the development team decides which of these tools to use. Administrators need to configure the operational handling and housekeeping.

 

Operational settings per service

 

Not every OSB service, whether proxy or business service, needs to be monitored; choose the most critical ones you would like to monitor, those which are key to all other operations. That can be 10 services or more; that is up to your company to decide.

 

Table OSB Service monitoring settings

 

Operational setting | Default value
State | Enabled
Monitoring | Disabled by default; determine per service whether it must be enabled
Aggregation Interval | 10 minutes
SLA Alerts | Enabled
Pipeline Alerts | Enabled at Normal level or higher
Reports | Enabled at Normal level or higher
Logs | Enabled at Debug level or higher
Execution Tracing | Disabled
Message Tracing | Disabled
Offline Endpoint URIs | Disabled
Throttling State | Disabled
Maximum Concurrency | 0
Throttling Queue | 0
Message Expiration | 0

 

 

 

For OSB a separate data retirement policy needs to be created, apart from the default one (see WebLogic Diagnostic Framework for more information).

This OSB-specific volume consists of pipeline and SLA alert data and needs regular housekeeping.

The values of this policy must be (a configuration sketch follows below):

Retirement Age: 120 hrs (PRD), 480 hrs (DEV/TEST/ACC)

Retirement Time: midnight

Retirement Period: every 24 hours

These parameter values are the same as for the default WLDF data collector.
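Such a policy is a regular WLDF data retirement policy. Below is a hedged WLST sketch of how one can be created; the server name, admin URL, policy name and archive name are assumptions, and the bean and attribute names should be verified against your WebLogic release before use.

# Hedged WLST sketch: create an age-based WLDF data retirement policy on server 'ms1'
# (all names and the URL are assumptions; values follow the PRD figures above).
connect('weblogic', 'welcome1', 't3://adminhost:7001')
edit(); startEdit()

cd('/Servers/ms1/ServerDiagnosticConfig/ms1')
cmo.setDataRetirementEnabled(true)                              # switch data retirement on for this server
policy = cmo.createWLDFDataRetirementByAge('OSBAlertRetire')    # new age-based retirement policy
policy.setArchiveName('EventsDataArchive')                      # assumed archive holding the alert/event data
policy.setRetirementAge(120)    # keep 120 hours of data (480 for DEV/TEST/ACC)
policy.setRetirementTime(0)     # start the retirement task at midnight
policy.setRetirementPeriod(24)  # run every 24 hours
policy.setEnabled(true)

save(); activate()
disconnect()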

 

OSB Report data

Developers can use the OSB Reporting Provider to generate extra logging and metrics for specific services. This OSB metric data is written to a database, specifically to the WLI_QS_REPORT_ATTRIBUTE and WLI_QS_REPORT_DATA tables.

Housekeeping can be done:

  • Manually via the OSB console (not preferable)
  • Via an EM Cloud Control job or a database job that deletes the data, which is the preferable approach (a sketch of such a job follows after the retention table below).

 

Purge all report data that is older than the retention period below.

Stage | Retention
DEV/TEST/ACC | 60 days
PRD | 30 days
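A hedged sketch of such a database purge job is shown below (Python with the cx_Oracle driver). The WLI_QS_REPORT_DATA and WLI_QS_REPORT_ATTRIBUTE tables come from the OSB reporting schema, but the DB_TIMESTAMP and MSG_GUID column names and the connection details are assumptions; verify them against your reporting schema before scheduling this.

# Hedged purge sketch for OSB report data (assumed column names and connection details).
import cx_Oracle

RETENTION_DAYS = 30  # PRD value from the table above; use 60 for DEV/TEST/ACC

conn = cx_Oracle.connect("osb_reporting", "password", "dbhost:1521/pdb1")
cur = conn.cursor()

# remove the detail rows first, then the parent attribute rows
cur.execute("""
    DELETE FROM wli_qs_report_data
     WHERE msg_guid IN (SELECT msg_guid
                          FROM wli_qs_report_attribute
                         WHERE db_timestamp < SYSDATE - :days)""",
    days=RETENTION_DAYS)
cur.execute("""
    DELETE FROM wli_qs_report_attribute
     WHERE db_timestamp < SYSDATE - :days""",
    days=RETENTION_DAYS)

conn.commit()
cur.close()
conn.close()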

 

 

SOA Suite Housekeeping

SOA Suite keeps track of all process instance flow and tracking data, flow states and deployed artifact data, and stores this in the so-called dehydration store, the error hospital and the MDS (SOA and OWSM).

Before speaking of housekeeping, the following prerequisites need to be verified and set:

 

  1. Audit levels per DEV/TEST/ACC/PRD
  2. Check that Capture Composite Instance State is switched on
  3. Check the data display options according to:
    1. Instance fetching
    2. Display restriction

 

Table Audit Level

 

Stage | Level
DEV | Development
TEST | Development
ACC | Production
PRD | Production
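The audit level can be set from Fusion Middleware Control, or scripted. Below is a hedged WLST sketch using the SoaInfraConfig MBean; the custom MBean path and the SOA server name soa_server1 are assumptions, so check them in the System MBean Browser of your installation first.

# Hedged WLST sketch: set the soa-infra audit level via the SoaInfraConfig MBean.
connect('weblogic', 'welcome1', 't3://adminhost:7001')
custom()   # switch to the custom MBean tree

# assumed MBean path; verify it in the System MBean Browser before use
cd('oracle.as.soainfra.config/oracle.as.soainfra.config:Location=soa_server1,name=soa-infra,type=SoaInfraConfig')
set('AuditLevel', 'Production')   # 'Development' for DEV/TEST, 'Production' for ACC/PRD (see table)

disconnect()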

 

 

 

 

 

 

 

One very important thing to keep in mind while discussing SOA Suite housekeeping is taking care of the SOA Suite repository. A solid database growth strategy must be developed, in cooperation with the database administrators. Keep in mind and discuss the following with your DBA:

  • Size/profile of the database
  • Space and resource usage and database performance
  • Growth management
  • Space management

A DBA has the knowledge and tools to do these things, but needs strong input from an Oracle Fusion Middleware administrator. See Database Profile / Size Suggestions for recommended sizes.

 

The value of the audit level determines how much audit data is generated, so it is very important to have a strategy around this. In a production environment, which can generate loads of messages per day, it is not advisable to set the audit level too high; keep it at Production.

 

SOA Suite purging

Purging a SOA Suite database means cleaning retired instance, process and metadata without influencing running processes. All this data is stored in tables in a number of database schemas, the most important one being the soainfra schema.

Auto Purge

The SOA Suite 12c installation comes with an auto purge option which can be enabled from the Fusion Middleware Control console. When enabled, the soa.delete_instances procedure is scheduled as a database job.

To set it up, review the default auto purge settings in Fusion Middleware Control (SOA Infrastructure > SOA Administration > Auto Purge): check the default schedule and the other options. Note the default retention period of 7 days; this must be changed to the actual retention.
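Once auto purge is enabled, the scheduled job can be verified from the database side. A small hedged check (Python with cx_Oracle; the connection details are assumptions, and the exact job name differs per installation) could look like this:

# Hedged check: list scheduler jobs in the SOA repository whose name suggests purging.
import cx_Oracle

conn = cx_Oracle.connect("dev_soainfra", "password", "dbhost:1521/pdb1")
cur = conn.cursor()
cur.execute("""
    SELECT job_name, enabled, repeat_interval, last_start_date
      FROM user_scheduler_jobs
     WHERE job_name LIKE '%PURGE%'""")
for job_name, enabled, repeat_interval, last_start in cur:
    print(job_name, enabled, repeat_interval, last_start)
cur.close()
conn.close()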

 

 

 

 


Ways of purging

Purge levels and retentions

 

Purging must be implemented on two levels:

  • SOA instance process data (cube, composite, mediator, audit, DLV_MESSAGE)
  • MDS deployment labels

Purging must be tuned to the growth of the database tables, so these need to be monitored.

Instances in the following states will be purged:

  • Completed
  • Faulted
  • Terminated by user
  • Stale
  • Unknown

Composite instances in the following states will not be purged:

  • Running (in-flight)
  • Suspended
  • Pending recovery

One golden rule is:

Retention period = longest running composite

A good starting point is the table below.

Component | Retention and keep time | Stage | Remarks
Instance tracking | 60 days | DEV, TEST | Composites, cubes, mediators, workflow
Metadata | 60 days | DEV, TEST | OWSM metadata
Metadata labels | 120 days | DEV, TEST | MDS deployment artifacts (WSDLs, XSDs)
Instance tracking | 60 days | ACC | Composites, cubes, mediators, workflow
Metadata | 60 days | ACC | OWSM metadata
Metadata labels | 120 days | ACC | MDS deployment artifacts (WSDLs, XSDs)
Instance tracking | 30 days | PRD | Composites, cubes, mediators, workflow
Metadata | 30 days | PRD | OWSM metadata
Metadata labels | 60 days | PRD | MDS deployment artifacts (WSDLs, XSDs)

 

 

 

 

 

So, monitoring how long composites run is essential to arrive at a final strategy, and that strategy must be re-evaluated every month. After a period of measuring and analysis a good view of what is needed can be developed. Based on a monthly analysis, the real growth trend can be calculated with the formula below, but also keep the overall company policy on data retention in mind:

 

Daily-inflow-composite-space = (SOA Schema Size / Period)

 

Inflow data is all data that is dehydrated to the FMW database, consisting of:

  • Instance flow tracking data
  • Adapter report data
  • Error hospital data
  • Fault alerts

The SOA schema size can be classified as SMALL, MEDIUM or LARGE, as displayed in the table below. A composite consists of different components such as BPEL, Mediator and Human Workflow; all these components are included in this inflow composite space.
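As a small worked example of the formula above (the growth figures are purely illustrative), measured soainfra growth over a measuring period can be converted into a daily inflow and mapped onto the database profiles in the next table:

# Worked example of the daily-inflow formula; the growth figures are illustrative.
def daily_inflow_gb(schema_growth_gb, period_days):
    # Daily-inflow-composite-space = SOA schema growth / measuring period
    return schema_growth_gb / period_days

def database_profile(inflow_gb):
    # thresholds follow the Database Profile / Size Suggestions table below
    if inflow_gb < 10:
        return "Small"
    if inflow_gb <= 30:
        return "Medium"
    return "Large"

inflow = daily_inflow_gb(schema_growth_gb=150.0, period_days=30)
print(inflow, database_profile(inflow))   # 5.0 Small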

 

 

Database Profile / Size Suggestions

Based on the daily composite inflow calculation, the following applies:

 

Database profile | Composite space persisted daily | Minimum retention of space
Small | < 10 GB | < 100 GB
Medium | 10-30 GB | 100-300 GB
Large | > 30 GB | > 300 GB

 

 

The above calculations are examples, based on the following reference documentation:

http://docs.oracle.com/middleware/12212/soasuite/administer/GUID-E285CE03-1BE9-4319-AB1D-E34441BF577C.htm#SOAAG97846

However, the database must be monitored for its real size, and this may influence the sizing and purging strategy. For example, a load/stress test environment will generate loads of data; based on this, a realistic strategy for production can be predicted.

 

 

Purge Executions

Purge execution can be done in two ways. The auto purge option uses the soa.delete_instances_auto procedure.

Purge execution | Initiating procedure | Procedure | When to use
Single | soa.delete_instances_auto | soa.delete_instances | Small amount of instance deletion
Parallel | soa.delete_instances_auto | soa.delete_instances | Large amount of instance deletion

 

 

 

To start with, Single is sufficient. However, if the purge takes too much time, Parallel is the one to use. Note the following about parallel purging (a minimal invocation sketch follows this list):

  • It must be done during non-business hours
  • For multiple executions, the database server must have enough CPU
  • Indexes must be dropped and re-added
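Below is a hedged sketch of a single (non-parallel) purge run that calls soa.delete_instances from a script (Python with cx_Oracle). The parameter names follow the 12c documentation, but the exact signature, the schema prefix and the connection details are assumptions to verify against your repository version.

# Hedged sketch: one single-mode purge run via soa.delete_instances.
import cx_Oracle
from datetime import datetime, timedelta

RETENTION_DAYS = 30   # see the purge settings table below

conn = cx_Oracle.connect("dev_soainfra", "password", "dbhost:1521/pdb1")
cur = conn.cursor()
cur.execute("""
    BEGIN
      soa.delete_instances(
        min_creation_date => :min_date,
        max_creation_date => :max_date,
        batch_size        => 20000,
        max_runtime       => 60,
        retention_period  => :max_date);
    END;""",
    min_date=datetime(2010, 1, 1),
    max_date=datetime.now() - timedelta(days=RETENTION_DAYS))
conn.commit()
cur.close()
conn.close()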

 

 

 

 

Purge settings

The table below lists the preferred purge settings. Note that this is a starting point and needs to be evaluated regularly, as an administrator should always keep managing the database growth of FMW.

Purge setting | Value | Remarks
Frequency | Daily | Purge type is Single/Parallel
Retain | 30 *) | Purge type is Single/Parallel
Maximum flows to purge | Use the default | Purge type is Single/Parallel
Batch size | Use the default | Purge type is Single/Parallel
Degree of parallel | Use the default | Purge type is Parallel
ignoreState | false | Mandatory false! **)
maxRuntime | Use the default | In case of many instances, increase

 

 

 

*) The retain value is a starting point

**) When set to true, all instances will be deleted, regardless of their state

 

The above table is applicable for PRD; DEV/TEST/ACC will have their own settings.

 

 

 

Some Hints and tips

 

 

For SOA Suite database maintenance, there are some hints to consider, such as:

  • For longer-term database management with regard to housekeeping, partitions should be created to accelerate data access and purging
  • Purging schedules are built into the SOA infra layer
    • Schedules can be created and managed from the FMW Console
  • Picking a SOA DB profile enables the performance features of the Oracle database for SOA-related storage
  • The global hash feature helps with faster querying and improves EM responsiveness and instance tracking performance
  • The global hash indexes avoid full table scans; this is useful after a purge and improves performance
  • Recreating indexes periodically helps performance