
We will look at

 

  • How to set up Apache Cassandra on Oracle Compute Cloud
  • Develop: some implementation details
  • Deploy: run it on Oracle Application Container Cloud using the CI/CD feature in Oracle Developer Cloud
  • Secure access: a secure channel between your application and Cassandra

 

 

 

Hello Cassandra

Apache Cassandra is an open source, NoSQL database. It's written in Java, was (originally) developed at Facebook, and its design is based on/inspired by Amazon's Dynamo and Google's Bigtable. Some of its salient characteristics are as follows

 

  • Belongs to the Row-oriented family (of NoSQL databases)
  • Distributed, decentralized & elastically scalable
  • Highly available & fault-tolerant
  • Supports Tunable consistency

 

You can read more about Cassandra here

About the sample application

  • The sample application exposes REST endpoints - implemented using Jersey (JAX-RS)
  • Employee - serves as the domain object. You would need to bootstrap the required table
  • DataStax Java driver is used to interact with Cassandra
    • Leverages the Cassandra object mapper for CRUD operations - a sketch of the mapper usage follows below

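To make the object mapper usage a bit more concrete, here is a minimal sketch (the entity class, the CASSANDRA_HOST environment variable and the values are illustrative assumptions, not the sample project's actual code). It maps to the test.employee table that is bootstrapped later in this post

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;
import com.datastax.driver.mapping.Mapper;
import com.datastax.driver.mapping.MappingManager;
import com.datastax.driver.mapping.annotations.Column;
import com.datastax.driver.mapping.annotations.PartitionKey;
import com.datastax.driver.mapping.annotations.Table;
import java.util.UUID;

@Table(keyspace = "test", name = "employee")
public class Employee {

    @PartitionKey
    @Column(name = "emp_id")
    private UUID empId;

    private String name;

    public UUID getEmpId() { return empId; }
    public void setEmpId(UUID empId) { this.empId = empId; }
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }

    public static void main(String[] args) {
        // CASSANDRA_HOST is an assumed environment variable holding the VM address
        Cluster cluster = Cluster.builder()
                .addContactPoint(System.getenv("CASSANDRA_HOST"))
                .build();
        Session session = cluster.connect();

        Mapper<Employee> mapper = new MappingManager(session).mapper(Employee.class);

        Employee emp = new Employee();
        emp.setEmpId(UUID.randomUUID());
        emp.setName("abhishek");

        mapper.save(emp);                               // create
        Employee fetched = mapper.get(emp.getEmpId());  // read by primary key
        System.out.println(fetched.getName());

        cluster.close();
    }
}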
 

It is available here

 

Locking down access to Cassandra

The goal is to allow exclusive access to Cassandra from our application without exposing its port (e.g. 9042) to the public internet. To enable this, Oracle Application Container Cloud ensures that when you deploy an application, a Security IP list is automatically generated; this list can be added to a Security rule for a virtual machine (VM) in Oracle Compute Cloud Service, which allows your application and the VM to communicate.

 

The setup details are discussed in an upcoming section

 

Setup Cassandra on Oracle Compute Cloud

 

Quick start using Bitnami

We will use a pre-configured Cassandra image from Bitnami via the Oracle Cloud Marketplace

 

  • Login to your Oracle Compute Cloud dashboard,
  • choose the Create Instance wizard, and then
  • select the required machine image from the Marketplace tab

 

More details here

 

 

 

 

Activate SSH access

We now need to allow SSH connections to our Cassandra virtual machine on Oracle Compute cloud

 

Create a Security Rule

 

 

You should see it in the list once it's done

 

 

SSH into the VM

 

 

Reset password

You will need to reset the Cassandra password as per this documentation: https://docs.bitnami.com/oracle/infrastructure/cassandra/#how-to-reset-the-cassandra-administrator-password. Once you're done, log in using the new credentials

 

 

Oracle Developer Cloud: setup & application deployment

 

You would need to configure Developer Cloud for the Continuous Build as well as Deployment process. You can refer to previous blogs for the same (some of the details specific to this example will be highlighted here)

 

References

 

Provide Oracle Application Container Cloud (configuration) descriptor

 

 

 

Check application details on Oracle Application Container Cloud

 

Deployed Application

 

 

Environment Variables

 

 

Check Security IP List

 

After successful deployment, you will be able to see the application as well as the Security IP List information in Oracle Application Container Cloud

 

 

Please note that you will not be able to access/test the application yet, since the secure communication channel between your application and Cassandra is not set up. The next section covers the details

 

Oracle Compute Cloud security configurations

 

Confirm Security IP List

 

You will see the Security IP list created when you deployed the application on Oracle Application Container Cloud (mentioned above). It ensures that the IP of the application deployed on Oracle Application Container Cloud is whitelisted for accessing our Cassandra VM on Oracle Compute Cloud

 

 

 

Create Security Application

 

This represents the component you are protecting along with its access type and port number - in this case, it's TCP and 9042 respectively

 

Create Security Rule

 

The default Security list is created by Oracle Compute Cloud (after the Bitnami image was provisioned)

 

We will create a Security Rule to make use of Security IP list, Security application and Security List

 

 

You should see it in the list of rules

 

 

Test the application

 

Bootstrap the keyspace and table

 

The sample application uses the test keyspace and a table named employee - you would need to create these entities in the Cassandra instance

 

CREATE KEYSPACE test
  WITH REPLICATION = { 
   'class' : 'SimpleStrategy', 
   'replication_factor' : 1 
  };
  
  
 CREATE TABLE test.employee (emp_id uuid PRIMARY KEY, name text);

 

Access REST endpoints

 

Create a few employees

 

curl -X POST <ACCS_APP_URL>/employees -d abhishek // 'abhishek' is the name
curl -X POST <ACCS_APP_URL>/employees -d john // 'john' is the name

 

  • You will receive HTTP 201 (Created) in response
  • The Location (response) header will have the (REST) co-ordinates (URI) for the newly created employee record - use this for search (next step)

 

Search for the new employee

 

curl -X GET <ACCS_APP_URL>/employees/<emp_id>

 

You will receive an XML payload with the employee ID and name

 

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<employee>
    <empId>18df5fd1-88d8-4820-984e-3cf0293c3051</empId>
    <name>test1</name>
</employee>

 

Search for all employees

 

curl -X GET <ACCS_APP_URL>/employees/

 

You will receive an XML payload with the employees' info

 

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<employees>
    <employee>
        <empId>8a841167-6aaf-428f-bc2b-02269f04ce93</empId>
        <name>abhirockzz</name>
    </employee>
    <employee>
        <empId>2e2cfb3c-1530-4099-b6e9-a550f11b25de</empId>
        <name>test2</name>
    </employee>
    <employee>
        <empId>18df5fd1-88d8-4820-984e-3cf0293c3051</empId>
        <name>test1</name>
    </employee>
    <employee>
        <empId>2513a12d-5fc7-4bc6-9f94-d13cea23fe7a</empId>
        <name>abhishek</name>
    </employee>
</employees>

 

Test the CI/CD flow

Make some code changes and push them to the Developer Cloud service Git repo. This should

  • Automatically trigger the build, which once successful will
  • Automatically trigger the deployment process, and
  • Redeploy the new application version to Application Container Cloud

 

 

**The views expressed in this post are my own and do not necessarily reflect the views of Oracle.

Glassfish 5 Build 11 is now available - with support for many of the Java EE 8 specifications e.g. JAX-RS 2.1, JPA 2.2, JSON-B 1.0 etc. For more details check out the Aquarium space. This blog covers

 

 

 

Application

 

It's a simple one

 

  • Has a REST endpoint (also a @Stateless bean) - see the sketch after this list
  • Interacts with an embedded (Derby) DB using JPA - we use the jdbc/__TimerPool present in Glassfish to make things easier
  • Test data is bootstrapped using standard JPA features in persistence.xml (drop + create DB along with a SQL source)

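Here is a minimal sketch of what such an endpoint could look like (the class, entity and JPQL names are illustrative, not the sample's actual code)

import java.util.List;
import javax.ejb.Stateless;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

@Stateless   // the JAX-RS resource doubles up as an EJB, giving it container-managed transactions
@Path("/")
public class EmployeeResource {

    @PersistenceContext   // persistence unit backed by the embedded Derby DB (jdbc/__TimerPool)
    private EntityManager em;

    @GET
    @Produces(MediaType.APPLICATION_JSON)   // serialized using JSON-B (see the next section)
    public List<Employee> all() {
        return em.createQuery("SELECT e FROM Employee e", Employee.class).getResultList();
    }
}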
 

JSON-B 1.0 in action

 

Primarily makes use of the JSON-B annotations to customize the behavior

 

  • @JsonbProperty to modify the name of the JSON attribute, i.e. make it different from the POJO field/variable name
  • @JsonbPropertyOrder to specify reverse lexicographical (Z to A) order for the JSON attributes - see the sketch below

 
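As a rough illustration (the field names mirror the JSON payloads shown later; the reverse ordering is expressed here via JsonbConfig, while the sample achieves the same effect with the class-level @JsonbPropertyOrder annotation)

import javax.json.bind.Jsonb;
import javax.json.bind.JsonbBuilder;
import javax.json.bind.JsonbConfig;
import javax.json.bind.annotation.JsonbProperty;
import javax.json.bind.config.PropertyOrderStrategy;

public class JsonbDemo {

    public static class Employee {
        @JsonbProperty("emp_email")   // JSON attribute name differs from the field name
        public String email;
        public String name;
        public int salary;
    }

    public static void main(String[] args) {
        Employee e = new Employee();
        e.email = "jane@example.com";
        e.name = "jane";
        e.salary = 100;

        // reverse (Z to A) lexicographical ordering of the JSON attributes
        Jsonb jsonb = JsonbBuilder.create(
                new JsonbConfig().withPropertyOrderStrategy(PropertyOrderStrategy.REVERSE));

        // expected shape: {"salary":100,"name":"jane","emp_email":"jane@example.com"}
        System.out.println(jsonb.toJson(e));
    }
}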

For more, check out Yasson which is the reference implementation

 

JPA 2.2 in action

 

The sample application uses the stream result feature added to the Query and TypedQuery interfaces, which makes it possible to use the JDK 8 Streams API to navigate the result set of a JPA query (JPQL, native etc.). For other additions in JPA 2.2, please check this
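
A minimal sketch of the stream feature (the entity and its attributes are assumed to be the Employee used by this sample)

import java.util.List;
import java.util.stream.Collectors;
import javax.persistence.EntityManager;

public class EmployeeQueries {

    // getResultStream() is the JPA 2.2 addition to Query/TypedQuery
    public List<String> wellPaidEmployeeNames(EntityManager em, int threshold) {
        return em.createQuery("SELECT e FROM Employee e", Employee.class)
                 .getResultStream()                        // returns a Stream<Employee>
                 .filter(e -> e.getSalary() > threshold)   // navigate the results with the Streams API
                 .map(Employee::getName)
                 .collect(Collectors.toList());
    }
}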

Build the Docker image

 

 

Shortcut

 

Use an existing image from Docker Hub - docker pull abhirockzz/javaee-jsonb-jpa

 

Run on Oracle Container Cloud

 

You can use this section of one of my previous blogs or the documentation (create a service, deploy) to get this up and running on Oracle Container Cloud. It's super simple

 

Create a service where you reference the Docker image

 

 

Post service creation

 

 

Initiate a deployment... and that's it! You'll see something similar to this

 

 

Test things out

 

Please make a note of the Host IP of your Oracle Container Cloud worker node (basically a compute VM)

 

Fetch all employees

http://<OCCS_HOST_IP>:8080/javaee8-jsonb-jpa/

 

You will get a JSON payload with all employees

 

[
    {
        "salary": 100,
        "name": "abhi",
        "emp_email": "abhirockzz@gmail.com"
    },
    {
        "salary": 99,
        "name": "rockzz",
        "emp_email": "kehsihba@hotmail.com"
    }
]

 

 

Fetch an employee

 

http://<OCCS_HOST_IP>:8080/javaee8-jsonb-jpa/abhirockzz@gmail.com

 

You will see a JSON payload as the response

 

{
    "salary": 100,
    "name": "abhi",
    "emp_email": "abhirockzz@gmail.com"
}

 

Enjoy Java EE 8 and Glassfish !

 

**The views expressed in this post are my own and do not necessarily reflect the views of Oracle

This blog will demonstrate how to get started with a Redis based Java application

 

  • Run it on Oracle Application Container Cloud, with CI/CD using Oracle Developer Cloud
  • Execute Integration tests using NoSQLUnit
  • Our Redis instance will run in a Docker container on Oracle Container cloud

 

 

 

 

Application

Here is a summary of the application

 

  • Exposes REST endpoints using Jersey
  • Uses Redis as the data store
  • Jedis is used as the Java client for Redis
  • NoSQLUnit is the framework used for integration testing

 

Here is the application

 

NoSQLunit

 

NoSQLUnit is an open source testing framework for applications which use NoSQL databases. It works on the concept of (JUnit) Rules and a couple of annotations. These rules are meant for both database lifecycle (start/stop) as well as state (seeding/deleting test data) management. In the sample application, we use it for state management for Redis instance i.e.

 

  • with the help of a JSON file, we define test data which will be seeded into Redis before our tests start, and then
  • use the @UsingDataSet annotation to specify our modus operandi (in this case, clean and insert) - see the sketch below

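A minimal sketch of how the annotation is typically used (the file name and the test body are placeholders; the NoSQLUnit RedisRule/connection setup is omitted here - refer to the sample project for the exact wiring)

import com.lordofthejars.nosqlunit.annotation.UsingDataSet;
import com.lordofthejars.nosqlunit.core.LoadStrategyEnum;
import org.junit.Test;

public class ApplicationIT {

    // a NoSQLUnit RedisRule pointing at the test Redis instance would be declared here as a @Rule

    @Test
    @UsingDataSet(locations = "test-data.json", loadStrategy = LoadStrategyEnum.CLEAN_INSERT)
    public void stockPriceIsServedFromSeededData() {
        // at this point the data from test-data.json has been (clean) inserted into Redis;
        // invoke the deployed REST endpoint and assert on the response
    }
}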
 

Our test dataset in json format

 

 

NoSQLUnit in action

 

 

 

 

Setup

Redis on Oracle Container Cloud

  • Use the existing service or create your own (make sure you expose the default Redis port to the host IP) - documentation here
  • Start the deployment - documentation here
  • Note down the host IP of the worker node on which the Redis container is running

 

Configure Oracle Developer Cloud

We'll start with bootstrapping the application in Oracle Developer Cloud. Check the Project & code repository creation section for reference. Once this is done, we can start configuring our Build, which is in the form of a pipeline consisting of the build, deployment, integration test and tear down phases

 

Build & deploy phase

The code is built and deployed on Oracle Application Container cloud. Please note that we are skipping the unit test part in order to keep things concise

 

Build step

 

 

 

 

Post-build (deploy)

 

 

 

Deployment

 

At the end of this phase, our application will be deployed to Application Container Cloud - it's time to configure the integration tests

 

Integration test phase

Our integration tests will run directly against the deployed application, using the Redis instance (on Oracle Container Cloud) which we set up earlier. For this

  • we define another build job and
  • make sure that it's triggered after the build + deployment phase completes

 

Integration build job

 

 

 

Define the dependency

 

 

Tear Down phase

Thanks to Oracle Developer Cloud's integration with Oracle PaaS Service Manager (PSM), it's easy to add a PSMcli build step that invokes an Oracle PaaS Service Manager command line interface (CLI) command to stop our ACCS application once the pipeline has been executed. More details in the documentation

 

 

 

Summary

We covered the following

 

  • Built a Java application on top of Redis
  • Orchestrated its build, deployment and integration test using Oracle Developer Cloud and Oracle Application Container Cloud
  • In the process, we also saw how it's possible to treat our infrastructure as code and utilize our cloud services efficiently

 

 

 

**The views expressed in this post are my own and do not necessarily reflect the views of Oracle.

This blog will demonstrate how to get started with a simple MongoDB based application

 

  • Run it on Oracle Application Container cloud
  • Unit test and CI/CD using Oracle Developer cloud
  • Our MongoDB instance will run in a Docker container on Oracle Container cloud

 

 

 

Application

The sample project is relatively simple

 

  • It uses JPA to define the data layer, along with Hibernate OGM
  • Fongo (in-memory Mongo DB) is used for unit testing
  • Jersey (the JAX-RS implementation) is used to provide a REST interface

 

You can check out the project here

 

MongoDB, Hibernate OGM

MongoDB is an open source, document-based, distributed database. More information here. Hibernate OGM is a framework which helps you use JPA (Java Persistence API) to work with NoSQL stores instead of an RDBMS (which JPA was designed for)

 

  • It has support for a variety of NoSQL stores (document, column, key-value, graph)
  • NoSQL databases it supports include MongoDB (as demonstrated in this blog), Neo4j, Redis, Cassandra etc.

 

More details here

 

In this application

 

  • We define our entities and data operations (create, read) using plain old JPA - a sketch follows after this list
  • Hibernate OGM is used to speak JPA with MongoDB, using the native MongoDB Java driver behind the scenes. We do not interact with/write code on top of the Java driver explicitly

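A rough sketch of what the 'plain old JPA' usage looks like (the entity, persistence-unit name and values are illustrative assumptions; Hibernate OGM translates these calls into MongoDB operations behind the scenes)

import javax.persistence.Entity;
import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
import javax.persistence.Id;
import javax.persistence.Persistence;

@Entity
class Employee {
    @Id
    String empId;
    String name;
}

public class JpaOverMongo {

    public static void main(String[] args) {
        // "mongo-pu" is a placeholder for a persistence unit configured with the Hibernate OGM provider
        EntityManagerFactory emf = Persistence.createEntityManagerFactory("mongo-pu");
        EntityManager em = emf.createEntityManager();

        em.getTransaction().begin();
        Employee e = new Employee();
        e.empId = "42";
        e.name = "abhirockzz";
        em.persist(e);                  // create -> document inserted into the EMPLOYEES collection
        em.getTransaction().commit();

        Employee found = em.find(Employee.class, "42");   // read by ID
        System.out.println(found.name);

        em.close();
        emf.close();
    }
}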
 

Here is a snippet from the persistence.xml which gives you an idea of the Hibernate OGM related configuration

 

Setup

Let's configure/setup our Cloud services and get the application up and running...

MongoDB on Oracle Container Cloud

 

 

 

 

Oracle Developer Cloud

You would need to configure Developer Cloud for the Continuous Build as well as Deployment process. You can refer to previous blogs for the same (some of the details specific to this example will be highlighted here)

 

References

 

Make sure you setup Oracle Developer Cloud to provide JUnit results

 

Provide Oracle Application Container Cloud (configuration) descriptor

 

As a part of the Deployment configuration, we will provide the deployment.json details to Oracle Developer Cloud - in this case, it's specifically for setting up the MongoDB co-ordinates in the form of environment variables. Oracle Developer cloud will deal with the intricacies of the deployment to Oracle Application Container Cloud

 

 

JUnit results in Oracle Developer Cloud

 

From the build logs

 

 

From the test reports

 

 

Deployment confirmation in Oracle Developer Cloud

 

 

Post-deployment status in Application Container Cloud

 

Note that the environment variables were seeded during deployment

 

 

Test the application

  • We use cURL to interact with our application REST endpoints, and
  • Robomongo as a (thick) client to verify data in Mongo DB

 

Check the URL for the ACCS application first

 

Add employee(s)

 

curl -X POST https://my-accs-app/employees -d 42:abhirockzz
curl -X POST https://my-accs-app/employees -d 43:john
curl -X POST https://my-accs-app/employees -d 44:jane

 

The request payload is a ':'-delimited string with the employee ID and name

 

Get employee(s)

 

You will get back an XML payload in response

 

curl -X GET https://my-accs-app/employees - all employees
curl -X GET https://my-accs-app/employees/44 - specific employee (by ID)

 

 

Let's peek into MongoDB as well

 

  • mongotest is the database
  • EMPLOYEES is the MongoDB collection (equivalent to @Table in JPA)

 

 

Test the CI/CD flow

Make some code changes and push them to the Developer Cloud service Git repo. This should

  • Automatically trigger the build, which once successful will
  • Automatically trigger the deployment process, and
  • Redeploy the new application version to Application Container Cloud

 

 

**The views expressed in this post are my own and do not necessarily reflect the views of Oracle.

It's time to take Java EE 8 for a spin and try out Glassfish 5 builds on Docker using Oracle Container Cloud. Java EE specifications covered

 

  • Server Sent Events in JAX-RS 2.1 (JSR 370) - new in Java EE 8
  • Asynchronous Events in CDI 2.0 (JSR 365) - new in Java EE 8
  • Websocket 1.1 (JSR 356) - part of the existing Java EE 7 specification

 

 

Application

 

Here is a quick summary of what's going on

 

  • A Java EE scheduler triggers asynchronous CDI events (fireAsync())
    • These CDI events are qualified (using a custom Qualifier)
    • It also uses a custom java.util.concurrent.Executor (based on the Java EE Concurrency Utility ManagedExecutorService) – thanks to the NotificationOptions supported by the CDI API
  • Two (asynchronous) CDI observers (@ObservesAsync) – a JAX-RS SSE broadcaster and a Websocket endpoint
  • SSE & Websocket endpoints cater to their respective clients - a sketch of this wiring follows below

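Here is a minimal sketch of the event wiring (the payload type is illustrative; the sample project fires its own domain object qualified with a custom qualifier)

import java.util.concurrent.CompletionStage;
import javax.annotation.Resource;
import javax.enterprise.concurrent.ManagedExecutorService;
import javax.enterprise.context.ApplicationScoped;
import javax.enterprise.event.Event;
import javax.enterprise.event.NotificationOptions;
import javax.enterprise.event.ObservesAsync;
import javax.inject.Inject;

@ApplicationScoped
public class EventWiring {

    @Inject
    private Event<String> event;   // the sample additionally qualifies the event with a custom qualifier

    @Resource
    private ManagedExecutorService executor;   // Java EE Concurrency Utilities executor

    // invoked periodically (the sample uses a Java EE scheduler)
    public void publish(String payload) {
        // fireAsync + NotificationOptions runs the async observers on the managed executor's threads
        CompletionStage<String> done =
                event.fireAsync(payload, NotificationOptions.ofExecutor(executor));
    }

    // an asynchronous observer - the sample has two of these: an SSE broadcaster and a Websocket endpoint
    void onEvent(@ObservesAsync String payload) {
        // push the payload out to the connected SSE / Websocket clients
    }
}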
 

Notice the asynchronous events running in the Managed Executor Service thread


You can choose to let things run in the default (container) chosen thread

 


 

Build the Docker images

 

Please note that I have used my personal Docker Hub account (abhirockzz) as the registry. Feel free to use any Docker registry of your choice

 

git clone https://github.com/abhirockzz/cdi-async-events.git
mvn clean install
docker build -t abhirockzz/gf5-nightly -f Dockerfile_gf5_nightly .
docker build -t abhirockzz/gf5-cdi-example -f Dockerfile_app .

 

Push it to a registry

 

docker push abhirockzz/gf5-cdi-example

 

Run in Oracle Container Cloud

 

Create a service

 

 

 

Deploy it

 

 

You will see this once the container (and the application) starts...

 

 

 

Drill down into the (Docker) container and check the IP for the host where it's running and note it down

Test it

 

Make use of the Host IP you just noted down

 

http://<occs_host_ip>:8080/cdi-async-events/events/subscribe - You should see a continuous stream of (SSE) events

 


 

Pick a Websocket client and use it to connect to the Websocket endpoint ws://<occs_host_ip>:8080/cdi-async-events/

 

You will see the same event stream... this time, delivered by a Websocket endpoint

 

 

 

You can try this with multiple clients - for both SSE and Websocket

 

Enjoy Java EE 8 and Glassfish !

 

**The views expressed in this post are my own and do not necessarily reflect the views of Oracle

This blog walks through an example of how to create a test pipeline which incorporates unit as well as integration testing - in the Cloud. What's critical to note is the fact that the cloud service instances (for testing) are started on-demand and then stopped/terminated after test execution

  • Treat infrastructure as code and control it within our pipeline
  • Pay for what you use = cost control

 

We will be leveraging the following Oracle Cloud services

  • Oracle Developer Cloud
  • Oracle Database Cloud
  • Oracle Application Container Cloud

 

 

 

 

 

Oracle Developer Cloud: key enablers

The following capabilities play a critical role

 

The below mentioned features are available within the Build module

 

  • Integration with Oracle PaaS Service Manager (PSM): It's possible to add a PSMcli build step that invokes Oracle PaaS Service Manager command line interface (CLI) commands when the build runs. More details in the documentation
  • Integration with SQLcl: This makes it possible to invoke SQL statements on an Oracle Database when the build runs. Details here

 

Application

The sample application uses JAX-RS (Jersey impementation) to expose data over REST and JPA as the ORM solution to interact with Oracle Database Cloud service (more on Github)

 

 

Here is the test setup

 

Tests

 

Unit

There are two different unit tests in the application, which use the Maven Surefire plugin

  • Using an in-memory/embedded (Derby) database: this is invoked using mvn test
  • Using a (remote) Oracle Database Cloud service instance: this test is activated by using a specific profile in the pom.xml and is executed using mvn test -Pdbcs-test

 

 

Extract from pom.xml

 

 

 

Integration

In addition to the unit test, we have an integration test layer which is handled using the Maven Failsafe plugin

 

 

It's invoked by mvn integration-test or mvn verify

 

 

Packaging

It's handled using Maven Shade plugin (fat JAR) and Maven assembly plugin (to create a zip file with the ACCS manifest.json)

 

Developer Cloud service configuration

 

Setup

Before we dive into the details, let’s get a high level overview of how you can set this up

 

Project & code repository creation

Please refer to the Project & code repository creation section in the Tracking JUnit test results in Developer Cloud service blog or check the product documentation for more details

 

Configure source code in Git repository

Push the project from your local system to the Developer Cloud Git repo you just created. We will do this via the command line, and all you need is a Git client installed on your local machine. You can use Git or any other tool of your choice

 

cd <project_folder> //where you unzipped the source code  
git init  
git remote add origin <developer_cloud_git_repo>  
//e.g. https://john.doe@developer.us.oraclecloud.com/developer007-foodomain/s/developer007-foodomain-project_2009/scm/sample.git
git add .  
git commit -m "first commit"  
git push -u origin master  //Please enter the password for your Oracle Developer Cloud account when prompted

 

Once this is done, we can now start configuring our Build

 

  • The pipeline is divided into multiple phases each of which corresponds to a Build
  • These individual phases/builds are then stitched together to create an end-to-end test flow. Let’s explore each phase and its corresponding build configuration

 

Phases

 

Unit test: part I

The JPA logic is tested using the embedded Derby database. It links to the Git repo where we pushed the code and also connects to the Oracle Maven repository

 

 

 

 

The build step invokes Maven

 

 

The post-build step

  • Invokes the next job in the pipeline
  • Archives the test results and enables JUnit test reports availability

 

 

 

 

 

Bootstrap Oracle Database Cloud service

 

  • This phase leverages the PSMcli to first start the Oracle Database Cloud  service and then,
  • SQLcl to create the table and load it up with test data. It is invoked by the previous job

 

 

Please note that the PSM command is asynchronous in nature and returns a Job ID which you can further use (within a shell script) in order to poll the status of the job

 

Here is an example of such a script

 

VALUE=`psm dbcs stop --service-name test`

echo $VALUE

#Split on ':' which contains the Job ID on the right side of :
OIFS=$IFS
IFS=':'
JSONDATA=${VALUE}

#trying to skip over the left side of : to get the JobID
COUNTER=0
for X in $JSONDATA
do
  if [ "$COUNTER" -eq 1 ]
  then
  #clean string, removing leading white space and tab
  X=$(echo $X |sed -e 's/^[ \t]*//')
  JOBID=$X
  else
  COUNTER=$(($COUNTER+1))
  fi
done

echo "Job ID is "$JOBID

#Repeat check until SUCCEED is in the status
PSMSTATUS=-1
while [ $PSMSTATUS -ne 0 ]; do 

CHECKSTATUS=`psm dbcs operation-status --job-id $JOBID`


  if [[ $CHECKSTATUS == *"SUCCEED"* ]]
  then
  PSMSTATUS=0
    echo "PSM operation Succeeded!"
  else 
  echo "Waiting for PSM operation to complete"
  fi
  sleep 60
done

 

 

Here is the SQLcl configuration which populates the Oracle Database Cloud service table

 

 

 

 

 

Unit test: part II

  • Runs test against the Oracle Database Cloud service instance which we just bootstrapped
  • Triggers application deployment (to Oracle Application Container Cloud)
  • and, like the previous job, this too links to the Git repo and connects to the Oracle Maven repository

 

Certain values for the test code are passed in as parameters

 

 

 

The build step involves invocation of a specific (Maven) profile defined in the pom.xml

 

 

 

The post build section does a bunch of things

  • Invokes the next job in the pipeline
  • Archives the deployment artifact (in this case, a ZIP file for ACCS)
  • Archives the test results and enables test reports availability
  • Invocation of the Deployment step to Application Container Cloud

 

 

 

 

Integration test

Now that we have executed the unit tests and our application is deployed, it is time to execute the integration test against the live application. In this case we test the REST API exposed by our application

 

 

 

Build step invokes Maven goal

 

 

We use the HTTPS proxy in order to access an external URL (the ACCS application in this case) from within the Oracle Developer Cloud build machines

 

The post build section invokes two subsequent jobs (both of them can run in parallel) as well as the test result archive

 

 

 

 

Tear Down

 

  • PSMcli is used to stop the ACCS application and runs in parallel with another job which uses SQLcl to clean up the data in Oracle Database Cloud (drop the table)
  • After that, the final tear down job is invoked, which shuts down the Oracle Database Cloud service instance (again, using PSMcli)

 

 

 

 

 

 

 

Finally, shut down the Oracle Database Cloud service instance

 

 

 

Total recall...

 

  • Split the pipeline into phases and implement them using Build jobs - the choice of granularity is up to you e.g. you can invoke the PSMcli and SQLcl steps in the same job
  • Treat infrastructure (cloud services) as code and manage it from within your pipeline - Developer Cloud makes this easy across the entire Oracle PaaS platform via its PSMcli integration

 

**The views expressed in this post are my own and do not necessarily reflect the views of Oracle

With Oracle Developer Cloud Service, you can integrate your existing Jenkins or Hudson setup - whether they are on-premise or cloud based. Currently, there are three different integration points which are enabled using Webhooks. Let’s look at each of these

 

Jenkins Build notifications

This is made possible by an inbound Webhook which accepts build notifications from a remote Jenkins server

 

Configuration summary

  • Create a Webhook in Developer Cloud (type: Jenkins - Notification Plugin)
  • Configure your external Jenkins to use the URL provided in the Developer Cloud Service Webhook configuration

 

Here is a snapshot of the configuration in Oracle Developer Cloud

 

 

This is what the resulting Activity Stream looks like in Oracle Developer Cloud. Clicking the hyperlinks available in the Activity Stream will redirect you to artifacts in the remote Jenkins instance, e.g. build, commit, Git repository etc.

 

 

 

You can refer to this documentation section for more details

 

Jenkins Build Trigger integration

You can configure an outbound Webhook which triggers a build on a remote Hudson or Jenkins build server when a Git push occurs in the selected repository in Developer Cloud

 

Configuration summary

  • Configure external Jenkins to allow remote invocation of builds
  • Create a Webhook of type Hudson/Jenkins - Build Trigger
    • Provide basic info, configure authentication and trigger

 

Here is a snapshot of the configuration in Oracle Developer Cloud

 

 

 

 

You can refer to this documentation section for more details.

 

Jenkins Git Plugin integration

This is another outbound Webhook which can notify a Hudson or Jenkins build job in response to a Git push in Developer Cloud service. The difference between this and the previous Webhook is that this one triggers builds of all the jobs configured for the same Git repository (in Developer Cloud service) as sent in the Webhook payload

 

Configuration summary

  • Create a Webhook of type Hudson/Jenkins Git Plugin
  • Provide the Git repository details as a part of the external Jenkins configuration and activate SCM polling

 

 

You can refer to this documentation section for more details.

 

**The views expressed in this post are my own and do not necessarily reflect the views of Oracle.

In this blog we will look at

 

 

 

Application

The application is a simple one which fetches the price of a stock from the cache. It demonstrates other features (in addition to basic caching) such as

  • Cache loader – if the key (stock name) does not exist in the cache (since it was never searched for or has expired), the cache loader logic kicks in and fetches the price using a REST call to an endpoint
  • Serializer – Allows us to work with our domain object (Ticker) and takes care of the transformation logic
  • Expiry – A cache-level expiry is enforced after which the entry is purged from the cache
  • Metrics – get common metrics such as cache size, hits, misses etc.

 

Code

Let’s look at some code snippets for our application and each of the features mentioned above

 

Project is available on Github

 

Cache operations

This example exposes the get cache operation over a REST endpoint implemented using Jersey (JAX-RS API)

 

 

Cache Loader

PriceLoader.java contains the logic to fetch the price from an external source

 

 

Serializer

TickerSerializer.java Converts to and from Ticker.java and its String representation

 

 

 

 

Expiry

Purges the cache entry when this threshold is hit; the cache loader is invoked if the expired entry is looked up (get) again

 

 

Metrics

Many cache metrics can be extracted – common ones are exposed over a REST endpoint

Some of the metrics are global and others are not. Please refer to the CacheMetrics javadoc for details

 

 

Setup

 

Oracle Application Container Cloud

The only setup required is to create the Cache. It’s very simple and can be done quickly using the documentation.

 

Please make sure that the name of the cache is the same as the one used in the code and configuration (Developer Cloud), i.e. test-cache. If not, please update the references

 

Oracle Developer Cloud

You would need to configure Developer Cloud for the build as well as Continuous Deployment process. You can refer to previous blogs for the same - some of the details specific to this example will be highlighted here

 

References

 

Provide Oracle App Container Cloud (configuration) descriptors

 

  • The manifest.json provided here will override the one in your zip file (if any) - it's not compulsory to provide it here
  • Providing the deployment.json details is compulsory (in this CI/CD scenario) since it cannot be included in the zip file

 

 

 

Deployment confirmation in Developer Cloud

 

 

 

Status in Application Container Cloud

 

Application URL has been highlighted

 

 

 

 

 

Test the application

Check price

Invoke an HTTP GET (use curl or a browser) against the REST endpoint (check the application URL) e.g. https://acc-cache-dcs-domain007.apaas.us6.oraclecloud.com/price/ORCL

 

 

If you try fetching the price of the stock after the expiry (default is 5 seconds), you should see a change in the time attribute (and the price as well - if it has actually changed)

 

Check cache metrics

 

Invoke an HTTP GET (use curl or a browser) against the REST endpoint (check the application URL) e.g. https://acc-cache-dcs-domain007.apaas.us6.oraclecloud.com/metrics

 

 

Test the CI/CD flow

Make some code changes and push them to the Developer Cloud service Git repo. This should

  • Automatically trigger the build, which once successful will
  • Automatically trigger the deployment process, and
  • Redeploy the new application version to Application Container Cloud

 

Additional reading/references

 

 

 

**The views expressed in this post are my own and do not necessarily reflect the views of Oracle.

It's pretty easy to get started with a Jenkins instance on Oracle Container Cloud

 

 

Setup Jenkins service

 

You can leverage the out-of-the-box Service (named Jenkins) provided in OCC or create your own. In this example, we will be creating a new service (named yet-another-jenkins)

 

 

Use the existing Docker image or another image (tag) of your choice (e.g. from Docker Hub)

 

Map volumes

 

If you want to keep your Jenkins data (e.g. plugins, configuration etc.) after a container restart, you would need to map the container path to a persistent volume on your host

 

The Docker Hub Jenkins image stores its data in /var/jenkins_home

This can be done easily since Oracle Container Cloud allows SSH access into the worker nodes as well (in addition to the Manager node)

 

All you need to do is the following

 

SSH into your worker node

 

More details here

 

Create the Jenkins data directory

 

This needs to be done on the worker node and permissions need to be assigned

 

 

cd /home/opc
mkdir jenkins
sudo chmod 777 jenkins

 

Configure the volume in the OCCS Jenkins service

 

More info here

 

 

Deploy the service in OCC

That's it.. Now just click Deploy to start your Jenkins container. You should see it in the Deployments list

 

 

Get started

 

Get the administrator password

 

Access the running Jenkins container in OCC and click on View Logs (scroll down to see the password)

 

 

Access Jenkins

 

The Jenkins container exposes port 9002 (by default). Just browse to http://<occs-host-ip>:9002/ and enter the password to get started

 

 

Configure Jenkins as per your requirements....

 

 

**The views expressed in this post are my own and do not necessarily reflect the views of Oracle.

I will be using one of my previous blogs as a background for this one. Although the concept remains the same, the execution is different. We will leverage Oracle Event Hub cloud service as the managed Kafka broker. Our producer & consumer microservices will run on Oracle Application Container Cloud and leverage its Service Binding feature for Oracle Event Hub cloud. CI/CD for both these applications will be handled by Oracle Developer Cloud service. Specific focus areas include

 

  • Overview of how to get started with Oracle Event Hub cloud including bootstrapping a cluster and topic
  • How to use the Oracle Event Hub Cloud service binding available in Application Container Cloud
  • Configuring Oracle Developer Cloud Service to achieve CI/CD to Application Container Cloud

 

 

Overview

Let's briefly look at the role the individual cloud services play

 

Oracle Event Hub cloud

This is a fully managed Platform-as-a-Service which makes it dead simple to setup and work with an Apache Kafka cluster

  • You can easily setup clusters and scale them elastically
  • Quickly create topics and add/remove partitions
  • It also provides REST interface as well command line clients to work with your Kafka cluster & topics

 

Oracle Application Container cloud

We continue to use it as the platform to run our producer and consumer microservices. The good thing is that services running on Application Container Cloud can easily connect with the Oracle Event Hub cloud service using the Service Binding feature. We will see this in action

 

Oracle Developer Cloud

This serves as a central hub for source code repository and DevOps pipeline. All we need to do is configure the build and deployment once and it will take care of seamless CI/CD to Application Container Cloud. Although Developer Cloud is capable of a lot more, this blog will focus on these features. Please refer to the documentation for more details

 

Code

The logic for our consumer and producer services is very much the same, and the details are available here. Let's focus on how to use the Oracle Event Hub service binding

 

Leveraging the Event Hub Service Binding in Application Container Cloud

The service bindings are utilized in the same way in both our services, i.e. consumer and producer. The logic uses the Application Container Cloud environment variables (created as a result of the Service Binding) to fetch the location of our Event Hub Kafka cluster as well as the topic we want to work with (in this case it’s just a single topic). You do not need to expose ports on the Kafka node(s) for the services on Application Container Cloud to access them. It's all taken care of by the Service Binding internally!

 

Here is a preview

 

Note the usage of OEHCS_TOPIC and OEHCS_EXTERNAL_CONNECT_STRING

public class Consumer implements Runnable {

    private static final Logger LOGGER = Logger.getLogger(Consumer.class.getName());
    private static final String CONSUMER_GROUP = "cpu-metrics-group";
    private final AtomicBoolean CONSUMER_STOPPED = new AtomicBoolean(false);
    private KafkaConsumer<String, String> consumer = null;
    private final String topicName;
    public Consumer() {
        Properties kafkaProps = new Properties();
        LOGGER.log(Level.INFO, "Kafka Consumer running in thread {0}", Thread.currentThread().getName());

        this.topicName = System.getenv().get("OEHCS_TOPIC");
        LOGGER.log(Level.INFO, "Kafka topic {0}", topicName);

        String kafkaCluster = System.getenv().get("OEHCS_EXTERNAL_CONNECT_STRING");
        LOGGER.log(Level.INFO, "Kafka cluster {0}", kafkaCluster);

        kafkaProps.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, kafkaCluster);
        kafkaProps.put(ConsumerConfig.GROUP_ID_CONFIG, CONSUMER_GROUP);
        kafkaProps.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringDeserializer");
        kafkaProps.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringDeserializer");

        this.consumer = new KafkaConsumer<>(kafkaProps);
    }

 

 

  • The variable OEHCS_EXTERNAL_CONNECT_STRING allows us to get the co-ordinates of the Kafka cluster. This is used in the Kafka configuration represented by a java.util.Properties object
  • OEHCS_TOPIC gives us the name of the topic, which is then passed on to the subscribe method of the KafkaConsumer. The producer uses the same two variables in an analogous way - see the sketch after this list

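For reference, a minimal sketch of how the producer side could use those variables (the record key/value below are placeholders; the actual producer in the sample may be structured differently)

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;

public class MetricsProducer {

    public static void main(String[] args) {
        Properties props = new Properties();
        // both values come from the Event Hub Service Binding environment variables
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, System.getenv("OEHCS_EXTERNAL_CONNECT_STRING"));
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer");
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            String topic = System.getenv("OEHCS_TOPIC");
            producer.send(new ProducerRecord<>(topic, "machine-1", "cpu=42"));   // placeholder metric
        }
    }
}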
 

 

 

Setting up Oracle Event Hub cloud service

Let’s see how to setup a managed Kafka cluster in Oracle Cloud

 

Bootstrap the cluster

We first create the Kafka cluster itself. Use the wizard in the Oracle Event Hub Cloud Service – Platform section to get started

 

 

Enter the required details

 

 

In this case, we are choosing the following configuration

  • Basic deployment where the Kafka cluster and Zookeeper are co-located on the same node
  • A single Kafka node

 

 

 

You can choose from different options such as

  • changing the deployment type to Recommended
  • opting for 3 nodes with the Basic mode
  • deploying a REST proxy alongside your cluster etc.

 

Refer to the official product documentation for more details

 

Click Create to start the provisioning process for your Kafka cluster

 

 

Wait for the process to complete

 

 

Setup the Kafka topic

Once the Kafka cluster is created, you can now create individual topics. To do so, choose Oracle Event Hub Cloud Service from the Platform Services menu

 

 

Click on Create Service and fill in the required details in the subsequent page

 

 

Here we create a topic named cpu-metrics in the kafka-cluster cluster (which we just created). The number of partitions is 10 and the retention period is one week (168 hours)

 

 

 

Click Create to conclude the process

 

 

 

Within a few seconds, you should see your newly created topic

 

 

 

Configuring Developer Cloud Service

 

Project & code repository creation

Please refer to the Project & code repository creation section in the Tracking JUnit test results in Developer Cloud service blog or check the product documentation for more details

 

Configure source code in Git repository

Push the project from your local system to the Developer Cloud Git repo you just created. We will do this via the command line, and all you need is a Git client installed on your local machine. You can use Git or any other tool of your choice

 

Repeat this process for both your application (producer and consumer)

 

cd <project_folder> //where you unzipped the source code  
git init  
git remote add origin <developer_cloud_git_repo>  
//e.g. https://john.doe@developer.us.oraclecloud.com/developer007-foodomain/s/developer007-foodomain-project_2009/scm/sample.git
git add .  
git commit -m "first commit"  
git push -u origin master  //Please enter the password for your Oracle Developer Cloud account when prompted

 

 

Configure build job

Repeat this process for both your application (producer and consumer)

 

Create a New Job

 

 

Basic Configuration

 

Select JDK

 

 

Source Control

 

Choose Git repository

 

 

Build Trigger (Continuous Integration)

 

Set the build trigger - this build job will be triggered in response to updates within the Git repository (e.g. via git push)

 

 

Build steps

 

A Maven Build step – to produce the ZIP file to be deployed to Application Container Cloud

 

 

Post-Build actions

 

Activate a post build action to archive the zip file

 

 

   

 

Execute Build

 

Before configuring deployment, we need to trigger the build in order to produce the artifacts which can be referenced by the deployment configuration

 

 

 

After the build is complete, you can check the archived artifacts

 

 

 

Continuous Deployment (CD) to Application Container Cloud

 

Repeat this process for both your application (producer and consumer)

 

Create a New Configuration for deployment

 

 

 

 

  • Enter the required details and configure the Deployment Target
  • Configure the Application Container Cloud instance
  • Configure Automatic deployment option on the final confirmation page
  • Provide content for manifest.json and deployment.json

 

You’ll end up with the below configuration (the view has been split into two parts)

 

 

 

Application Container Cloud defines two primary configuration descriptors – manifest.json and deployment.json - and each of them fulfills a specific purpose (more details here)

 

 

 

Click Save, initiate your deployment and wait for it to finish

 

Confirmation screen

 

 

 

Check your application(s) in Application Container Cloud

 

 

 

In the Deployment sub-section of the application details screen, notice that the required Service Bindings have been automatically wired and the environment variables have been populated as well (only a couple of variables have been highlighted below)

 

 

 

Test the application

 

The details to test the application are the same as described in this section of the previous blog. It’s really simple and here are the high level steps

  • Start your producer application using its REST URL, and
  • Access your consumer application

 

You should see the real time metrics being sent by the producer component to the Event Hub cloud service instance and consumed by the Server-Sent event (SSE) client via the consumer microservice

 

Test the CI/CD flow

Make some code changes and push them to the Developer Cloud service Git repo. This should

  • Automatically trigger the build, which once successful will
  • Automatically trigger the deployment process, and
  • Redeploy the new application version to Application Container Cloud

 

 

**The views expressed in this post are my own and do not necessarily reflect the views of Oracle.

This blog covers CI/CD for a Java application deployed on Oracle Application Container Cloud which uses Oracle Database Cloud via its declarative Service Binding feature

 

  • We will focus on setting up and configuring Oracle Developer Cloud Service to achieve end-to-end DevOps and specifically look at
    • Continuous Deployment to Application Container Cloud
    • Using Oracle Maven repository from Developer Cloud Service
  • The scenario depicted here will be used as a reference

 

 

Quick background

Here is an overview

  • APIs used: The application leverages the JPA (DB persistence) and JAX-RS (REST) APIs
  • Oracle Database Cloud Service: The client (web browser/curl etc.) invokes an HTTP(s) URL (GET request) which internally calls the JAX-RS resource, which in turn invokes the JPA (persistence) layer to communicate with the Oracle Database Cloud instance
  • Application Container Cloud Service bindings in action: Connectivity to the Oracle Database Cloud instance is achieved with the help of a service binding which exposes database connectivity details as environment variables that are then used within the code - see the sketch after this list

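A minimal sketch of how the code can read those environment variables (the variable names below are the typical Database Cloud service binding names; treat them as assumptions and verify them against your own Application Container Cloud deployment)

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public class DbConnection {

    public static Connection connect() throws SQLException {
        // populated by the Service Binding at deployment time (names assumed - check your instance)
        String descriptor = System.getenv("DBAAS_DEFAULT_CONNECT_DESCRIPTOR");   // host:port/service
        String user = System.getenv("DBAAS_USER_NAME");
        String password = System.getenv("DBAAS_USER_PASSWORD");

        return DriverManager.getConnection("jdbc:oracle:thin:@//" + descriptor, user, password);
    }
}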
 

For more details you can refer to the following sections from one of my previous blogs - About the sample and Service Bindings concept

 

Using Oracle Maven within Oracle Developer Cloud

The instructions in the previous blog included a manual step to seed the Oracle JDBC driver (ojdbc7.jar) into the local Maven repository. In this blog, however, we will leverage the Oracle Maven repository (one-time registration required for access) for the same. Developers generally need to go through a bunch of steps before starting to use the Oracle Maven repo (e.g. configuring Maven settings.xml etc.), but Oracle Developer Cloud service handles all this internally! All you need to do is provide your repository credentials along with any customizations if needed. More on this in an upcoming section

 

Here is snippet from the pom.xml which highlights the usage of the Oracle Maven repository

 

 

    <repositories>
        <repository>
            <id>maven.oracle.com</id>
            <releases>
                <enabled>true</enabled>
            </releases>
            <snapshots>
                <enabled>false</enabled>
            </snapshots>
            <url>https://maven.oracle.com</url>
            <layout>default</layout>
        </repository>
    </repositories>
    <pluginRepositories>
        <pluginRepository>
            <id>maven.oracle.com</id>
            <url>https://maven.oracle.com</url>
        </pluginRepository>
    </pluginRepositories>
    <dependencies>

 

Setting up Developer Cloud Service

 

Project & code repository creation

Please refer to the Project & code repository creation section in the Tracking JUnit test results in Developer Cloud service blog or check the product documentation for more details

 

Configure source code in Git repository

Push the project from your local system to the Developer Cloud Git repo you just created. We will do this via the command line, and all you need is a Git client installed on your local machine. You can use Git or any other tool of your choice

 

cd <project_folder> //where you unzipped the source code  
git init  
git remote add origin <developer_cloud_git_repo>  
//e.g. https://john.doe@developer.us.oraclecloud.com/developer007-foodomain/s/developer007-foodomain-project_2009/scm/sample.git
git add .  
git commit -m "first commit"  
git push -u origin master  //Please enter the password for your Oracle Developer Cloud account when prompted

 

 

Configure build

Create a New Job

 

 

Basic Configuration

Select JDK

 

 

 

Source Control

Choose Git repository

 

 

 

Build Trigger (Continuous Integration)

Set the build trigger - this build job will be triggered in response to updates within the Git repository (e.g. via git push)

 

 

 

Configure Oracle Maven repository

As mentioned above, we will configure Oracle Developer Cloud to use the Oracle Maven repository – the process is quite simple. For more details, refer product documentation

 

 

 

Build steps

A Maven Build step – to produce the ZIP file to be deployed to Application Container Cloud

 

 

Post-Build actions

 

Activate a post build action to archive deployable zip file

 

 

Execute Build

Before configuring deployment, we need to trigger the build in order to produce the artifacts which can be referenced by the deployment configuration

 

 

After the build is complete, you can check the archived artifacts

 

 

Continuous Deployment (CD) to Application Container Cloud

Create a New Configuration for deployment

 

 

 

  • Enter the required details and configure the Deployment Target
  • Configure the Application Container Cloud instance
  • Configure Automatic deployment option on the final confirmation page
  • Provide content for manifest.json and deployment.json

 

You’ll end up with the below configuration (the view has been split into two parts)

 

 

 

Application Container Cloud defines two primary configuration descriptors – manifest.json and deployment.json - and each of them fulfills a specific purpose (more details here). In this case, we have defined the configuration using Developer Cloud service, which in turn will override the ones in your application zip (if any) - refer to the documentation for more details

 

 

Confirmation screen

 

 

 

Check your application in Application Container Cloud

 

 

 

In the Deployment sub-section of the application details screen, notice that the required Service Bindings have been automatically wired

 

 

 

Test the application

The testing process remains the same – please refer to this section of the previous blog for details

 

Test the CI/CD flow

Make some code changes and push them to the Developer Cloud service Git repo. This should

  • Automatically trigger the build, which once successful will
  • Automatically trigger the deployment process, and
  • Redeploy the new application version to Application Container Cloud

 

 

**The views expressed in this post are my own and do not necessarily reflect the views of Oracle.

In this blog we will look at how to run a Docker based Java EE microservice in HA/load-balanced mode using HAProxy – all this on the Oracle Container Cloud. Here is a quick overview

 

  • Java EE microservice using Wildfly Swarm: a simple (JAX-RS based) REST application
  • HAProxy: we will use it for load balancing multiple instances of our application
  • Docker: our individual components i.e. our microservice and load balancer services will be packaged as Docker images
  • Oracle Container Cloud: we will stack up our services and run them in a scalable + load balanced manner on Oracle Container Cloud

 

 

Application

 

The application is a very simple REST API using JAX-RS. It just fetches the price for a stock

 

    @GET
    public String getQuote(@QueryParam("ticker") final String ticker) {


        Response response = ClientBuilder.newClient().
                target("https://www.google.com/finance/info?q=NASDAQ:" + ticker).
                request().get();


        if (response.getStatus() != 200) {
            //throw new WebApplicationException(Response.Status.NOT_FOUND);
            return String.format("Could not find price for ticker %s", ticker);
        }
        String tick = response.readEntity(String.class);
        tick = tick.replace("// [", "");
        tick = tick.replace("]", "");


        return StockDataParser.parse(tick)+ " from "+ System.getenv("OCCS_CONTAINER_NAME");
    }

 

 

Wildfly Swarm is used as the (just enough) Java EE runtime. We build a simple WAR based Java EE project and let the Swarm Maven plugin weave its magic – it auto-magically detects and configures required fractions and creates a fat JAR from your WAR.

 

 

<build>
        <finalName>occ-haproxy</finalName>
        <plugins>
            
            <plugin>
                <groupId>org.wildfly.swarm</groupId>
                <artifactId>wildfly-swarm-plugin</artifactId>
                <version>1.0.0.Final</version>
                <executions>
                    <execution>
                        <goals>
                            <goal>package</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>
    
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-compiler-plugin</artifactId>
                <version>3.1</version>
                <configuration>
                    <source>1.7</source>
                    <target>1.7</target>
                    <compilerArguments>
                        <endorseddirs>${endorsed.dir}</endorseddirs>
                    </compilerArguments>
                </configuration>
            </plugin>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-war-plugin</artifactId>
                <version>2.3</version>
                <configuration>
                    <failOnMissingWebXml>false</failOnMissingWebXml>
                </configuration>
            </plugin>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-dependency-plugin</artifactId>
                <version>2.6</version>
                <executions>
                    <execution>
                        <phase>validate</phase>
                        <goals>
                            <goal>copy</goal>
                        </goals>
                        <configuration>
                            <outputDirectory>${endorsed.dir}</outputDirectory>
                            <silent>true</silent>
                            <artifactItems>
                                <artifactItem>
                                    <groupId>javax</groupId>
                                    <artifactId>javaee-endorsed-api</artifactId>
                                    <version>7.0</version>
                                    <type>jar</type>
                                </artifactItem>
                            </artifactItems>
                        </configuration>
                    </execution>
                </executions>
            </plugin>
        </plugins>
    </build>

 

Alternatives: you can also look into other JavaEE based fat JAR style frameworks such as Payara Micro, KumuluzEE, Apache TomEE embedded etc.

Let’s dive into the nitty gritty….

 

Dynamic load balancing

Horizontal scalability with Oracle Container Cloud is extremely simple - all you need to do is spawn additional instances of your application. This is effective when we have a load balancer to ensure that the consumers of the application (users or other applications) do not have to deal with the details of the individual instances – they only need to be aware of the load balancer co-ordinates (host/port). The problem is that our load balancer will not be aware of the newly spawned application instances/containers. Oracle Container Cloud helps create a unified Stack where both the back end (REST API in our example) and the (HAProxy) load balancer components are configured as a single unit and can be managed and orchestrated easily, as well as provide a recipe for a dynamic HAProxy avatar

 

HAProxy on steroids

We will make use of the artifacts in the Oracle Container Cloud Github repository to build a specialized (Docker) HAProxy image on top of the customized Docker images for confd and runit. confd is a configuration management tool, and in this case it's used to dynamically discover our application instances on the fly. Think of it as a mini service discovery module in itself, which queries the native Service Discovery within Oracle Container Cloud to detect new application instances

 

Configuring our application to run on Oracle Container Cloud

 

Build Docker images

We will first build the required Docker images. For the demonstration, I will be using my public registry (abhirockzz) on Docker Hub. You can choose to use your own public or private registry

 

Please ensure that Docker engine is up and running

 

Build the application Docker image

 

Dockerfile below

 

FROM anapsix/alpine-java:latest
RUN mkdir app 
WORKDIR "/app"
COPY target/occ-haproxy-swarm.jar .
EXPOSE 8080
CMD ["java", "-jar", "occ-haproxy-swarm.jar"]

 

Run the following command

 

docker build -t <registry>/occ-wfly-haproxy:<tag> . e.g. docker build -t abhirockzz/occ-wfly-haproxy:latest .

 

Build Docker images for runit, confd, haproxy

We will build the images in sequence since they are dependent. To begin with,

 

  • clone the docker-images Github repository, and
  • edit the vars.mk (Makefile) in the ContainerCloud/images/build directory to enter your Docker Hub username

 

 

Now execute the below commands

 

cd ContainerCloud/images
cd runit
make image
cd ../confd
make image
cd ../haproxy
make image

Check your local Docker repository

Your local Docker repository should now have all the required images

 

 

Push Docker images

Now we will push the Docker images to a registry (in this case my public Docker Hub registry) so that they can be pulled from Oracle Container Cloud during deployment of our application stack. Execute the below commands

 

Adjust the names (registry and repository) as per your setup

 

docker login
docker push abhirockzz/occ-wfly-haproxy
docker push abhirockzz/haproxy
docker logout

Create the Stack

We will make use of a YAML configuration file to create the Stack. It is very similar to docker-compose. In this specific example, notice how the service name (rest-api) is referenced in the lb (HAProxy) service
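As a rough sketch (the exact YAML is not reproduced here), such a stack definition could look like the following, re-using the image names from earlier. The environment variable name under the lb service is a hypothetical placeholder (the actual key expected by the HAProxy/confd image may differ); it only needs to point at the backend service (rest-api) and its port (8080)

version: 2
services:
  rest-api:
    image: "abhirockzz/occ-wfly-haproxy:latest"
    ports:
      - 8080/tcp
  lb:
    image: "abhirockzz/haproxy:latest"
    ports:
      - "8886:8886/tcp"
    environment:
      #hypothetical variable name - see note above
      - "BACKEND=rest-api:8080"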

 

 

 

This in turn provides information to the HAProxy service about the key in the Oracle Container Cloud service registry which is actually used by the confd service (as explained before) for auto-discovery of new application instances. 8080 is nothing but the exposed port and it is hard coded since it’s also a part of the key within the service registry.

 

Start the process by choosing New Stack from the Stacks menu

 

 

Click on the Advanced Editor and enter the YAML content

 

 

 

 

You should now see the individual services. Enter the Stack Name and click Save

 

 

Initiate Deployment

Go back to the Stacks menu, look for the newly created stack and click Deploy

 

 

In order to test the load balancing capabilities, we will deploy 3 instances of our rest-api (back end) service and stick with one instance of the lb (HAProxy) service

 

 

After a few seconds, you should see all the containers in RUNNING state – in this case, it's three for our service and one for the HAProxy load balancer instance

 

 

Check the Service Discovery menu to verify that each instance has its entry here. As explained earlier, this is introspected by the confd service to auto-detect new instances of our application (they would automatically get added to this registry)

 

 

Test

We can access our application via HAProxy. All we need to know is the public IP of the host where our HAProxy container is running. We already mapped port 8886 for accessing the downstream applications (see below snapshot)

 

 

 

Test things out with the following curl command

 

for i in `seq 1 9`; do curl -w "\n" -X GET "http://<haproxy-container-public-IP>:8886/api/stocks?ticker=ORCL"; done

 

All we do is invoke it 9 times, just to see the load balancing in action (among the three instances). Here is a result. Notice that the highlighted text points to the instance from which the response is being served – requests are load balanced equally among the three instances

 

 

 

Scale up… and check again

You can simply scale up the stack and repeat the same. Navigate to your deployment and click Change Scaling

 

 

After some time, you'll see additional instances of your application (five in our case). Execute the command again to verify that the load balancing is working as expected

 

 

That’s all for this blog post.

 

Cheers!

 

The views expressed in this post are my own and do not necessarily reflect the views of Oracle.

The underlying theme of this blog continues to remain the same as one of my previous blogs i.e. scalable stream processing microservices on Oracle Cloud. But there are significant changes & additions

 

  • Docker: we will package the Kafka Streams based consumer application as a Docker image
  • Oracle Container Cloud: our containerized application will run and scale on Oracle Container Cloud
  • Service discovery: application is revamped to leverage the service discovery capabilities within Oracle Container Cloud

 

 

Technical Components

 

Here is a quick summary of the technologies used

 

  • Oracle Container Cloud: An enterprise grade platform to compose, deploy and orchestrate Docker containers
  • Docker needs no introduction
  • Apache Kafka: A scalable, distributed pub-sub message hub
  • Kafka Streams: A library for building stream processing applications on top of Apache Kafka
  • Jersey: Used to implement REST and SSE services. Uses Grizzly as a (pluggable) runtime/container
  • Maven: Used as the standard Java build tool (along with its assembly plugin)

 

Sample application

 

By and large, the sample application remains the same and its details can be found here. Here is a quick summary

 

  • The components: a Kafka broker, a producer application and a consumer (Kafka Streams based) stream processing application
  • Changes (as compared to the setup here): the consumer application will now run on Oracle Container Cloud and the application instance discovery semantics (which were earlier Oracle Application Container Cloud specific) have now been implemented on top of Oracle Container Cloud service discovery capability

 

Architecture

 

To get an idea of the key concepts, I would recommend going through the High level architecture section of one of the previous blogs. Here is a diagram representing the overall runtime view of the system

 

 

Its key takeaways are as follows

 

  • Oracle Container Cloud will host our containerized stream processing (Kafka consumer) applications
  • We will use its elastic scalability features to spin additional containers on-demand to distribute the processing load
  • The contents of the topic partitions in Kafka broker (marked as P1, P2, P3) will be distributed among the application instances

 

Please note that having more application instances than topic partitions will mean that some of your instances will be idle (no processing). It is generally recommended to set the number of topic partitions to a relatively high number (e.g. 50) in order to reap maximum benefit from Kafka
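For instance, when creating the topic yourself with the kafka-topics tool that ships with Kafka, the partition count is fixed up front (the topic name below is just an illustration)

bin/kafka-topics.sh --create --zookeeper <zookeeper-host>:2181 --replication-factor 1 --partitions 50 --topic cpu-metrics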

 

Code

 

You can refer to this section in the previous blog for code related details (since the bulk of the logic is the same). The major difference is the service discovery logic (covered in-depth below), since it now relies on the Oracle Container Cloud KV store for runtime information. Here is the relevant snippet

 

/**
     * find yourself in the cloud!
     *
     * @return my port
     */
    public static String getSelfPortForDiscovery() {
        String containerID = System.getProperty("CONTAINER_ID", "container_id_not_found");
        //String containerID = Optional.ofNullable(System.getenv("CONTAINER_ID")).orElse("container_id_not_found");
        LOGGER.log(Level.INFO, " containerID {0}", containerID);


        String sd_key_part = Optional.ofNullable(System.getenv("SELF_KEY")).orElse("sd_key_not_found");
        LOGGER.log(Level.INFO, " sd_key_part {0}", sd_key_part);


        String sd_key = sd_key_part + "/" + containerID;
        LOGGER.log(Level.INFO, " SD Key {0}", sd_key);


        String sd_base_url = "172.17.0.1:9109/api/kv";


        String fullSDUrl = "http://" + sd_base_url + "/" + sd_key + "?raw=true";
        LOGGER.log(Level.INFO, " fullSDUrl {0}", fullSDUrl);


        String hostPort = getRESTClient().target(fullSDUrl)
                .request()
                .get(String.class);


        LOGGER.log(Level.INFO, " hostPort {0}", hostPort);
        
        String port = hostPort.split(":")[1];
        LOGGER.log(Level.INFO, " Auto port {0}", port);


        return port;
    }

Kafka setup

 

On Oracle Compute Cloud

 

You can refer to part I of the blog for the Apache Kafka related setup on Oracle Compute. The only additional step which needs to be executed is opening the port on which your Zookeeper process is listening (it's 2181 by default) – as this is required by the Kafka Streams library configuration. While executing the steps from the Open Kafka listener port section, ensure that you include the Oracle Compute Cloud configuration for 2181 (in addition to the Kafka broker port 9092)

 

On Oracle Container Cloud!

 

You can run a Kafka cluster on Oracle Container Cloud – check out this cool blog post!

 

The Event Hub Cloud is a new offering which provides Apache Kafka as a managed service in Oracle Cloud

 

Configuring our application to run on Oracle Container Cloud

 

Build the application

 

Execute mvn clean package to build the application JAR

 

Push to Docker Hub

 

Create a Docker Hub account if you don't have one already. To build and push the Docker image, execute the below commands
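The Dockerfile for the consumer application is not reproduced in this post. A minimal sketch, assuming the same base image as earlier and the init.sh/JAR names referenced later in this post, might look like this

FROM anapsix/alpine-java:latest
RUN mkdir app
WORKDIR "/app"
#the application fat JAR and the startup script which derives CONTAINER_ID (shown later in this post)
COPY target/occ-kafka-streams-1.0.jar .
COPY init.sh .
RUN chmod +x init.sh
EXPOSE 8080
CMD ["./init.sh"]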

 

Please ensure that Docker engine is up and running

 

docker login
docker build -t <registry>/<image_name>:<tag> . e.g. docker build -t abhirockzz/kafka-streams:latest .
docker push <registry>/<image_name>:<tag> e.g. docker push abhirockzz/kafka-streams:latest

 

Check your Docker Hub account to confirm that the image exists there

 

 

Create the Service

 

To create a new Service, click on New Service in the Services menu

 

 

There are multiple ways in which you can configure your service – one of which is the traditional way of filling in each of the attributes in the Service Editor. You can also directly enter the Docker run command or a YAML configuration (similar to docker-compose) and Oracle Container Cloud will automatically populate the Service details. Let’s see the YAML based method in action

 

 

Populate the YAML editor (highlighted above) with the required configuration

 

version: 2
services:
  kstreams:
    image: "<docker hub image e.g. abhirockzz/kafka-streams>"
    environment:
      - "KAFKA_BROKER=<kafka broker host:port e.g. 149.007.42.007:9092>"
      - "ZOOKEEPER=<zookeeper host:port e.g. 149.007.42.007:2181>"
      - "SELF_KEY={{ sd_deployment_containers_path .ServiceID 8080 }}"
      - "OCCS_HOST={{hostip_for_interface .HostIPs \"public_ip\"}}"
      - "occs:scheduler=random"
    ports:
      - 8080/tcp

Please make sure that you substitute the host:port for your Kafka broker and Zookeeper server in the yaml configuration file

 

 

If you switch to the Builder view, notice that all the values have already been populated

 

 

All you need to do is fill out the Service Name and (optionally) choose the Scheduler and Availability properties and click Save to finish the Service creation

 

 

You should see your newly created service in the list of services in the Services menu

 

 

YAML configuration details

 

Here is an overview of the configuration parameters

 

  • Image: Name of the application image on Docker Hub
  • Environment variables
    • KAFKA_BROKER: the host and port information to connect to the Kafka broker

    • ZOOKEEPER: the host and port information to connect to the Zookeeper server (for the Kafka broker)
    • SELF_KEY & OCCS_HOST: these are defined as template functions (more details on this in a moment) and help with dynamic container discovery
  • Ports: Our application is configured to run on port 8080 i.e. this is specified within the code itself. This is not a problem since we have configured a random (auto generated) port on the host (worker node of Oracle Container Cloud) to map to 8080

 

This is equivalent to using the -P option in the docker run command
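For example, running the image directly with the Docker CLI would look something like this (the -P flag publishes the exposed port 8080 to a random port on the host)

docker run -d -P abhirockzz/kafka-streams:latest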

 

Template functions and Service discovery

 

We used the following template functions within the environment variables of our YAML file

 

  • SELF_KEY: {{ sd_deployment_containers_path .ServiceID 8080 }}
  • OCCS_HOST: {{hostip_for_interface .HostIPs \"public_ip\"}}

 

What are templates?

Template arguments provide access to deployment properties related to your services (or stacks) and template functions allow you to utilize them at runtime (in a programmatic fashion). More details in the documentation

 

Why do we need them?

Within our application, each Kafka Streams consumer application instance needs to register its co-ordinates in the Streams configuration (using the application.server parameter). This in turn allows Kafka Streams to store this as metadata which can then be used at runtime. Here are some excerpts from the code

 

Seeding discovery info

 

Map<String, Object> configurations = new HashMap<>();
String streamsAppServerConfig = GlobalAppState.getInstance().getHostPortInfo().host() + ":"
                + GlobalAppState.getInstance().getHostPortInfo().port();
 configurations.put(StreamsConfig.APPLICATION_SERVER_CONFIG, streamsAppServerConfig);

 

Using the info

 

Collection<StreamsMetadata> storeMetadata = ks.allMetadataForStore(storeName);
StreamsMetadata metadataForMachine = ks.metadataForKey(storeName, machine, new StringSerializer());

 

How is this achieved?

 

For the application.server parameter, we need the host and port of the container instance in Oracle Container Cloud. The OCCS_HOST environment variable is populated automatically by the evaluation of the template function {{hostip_for_interface .HostIPs \"public_ip\"}} – this is the public IP of the Oracle Container Cloud host and takes care of the ‘host’ part of the application.server configuration. The port determination needs more work since we have configured port 8080 to be mapped to a random port on the Oracle Container Cloud host/worker node. The inbuilt service discovery mechanism within Oracle Container Cloud makes it possible to implement this.

 

The internal service discovery database is exposed via a REST API for external clients. But it can be accessed internally (by applications) on 172.17.0.1:9109. It exposes the host and port (of a Docker container) information in a key-value format

 

 

Key points to be noted in the above image

  • The part highlighted in red is the value which is the host and port information
  • The part highlighted in green is a portion of the key, which is the (dynamic) Docker container ID
  • The remaining portion of the key is also dynamic, but can be evaluated with the help of a template function

 

The trick is to build the above key and then use that to query the discovery service to get the value (host and port details). This is where the SELF_KEY environment variable comes into play. It uses the {{ sd_deployment_containers_path .ServiceID 8080 }} (where 8080 is the exposed and mapped application port) template function which gets evaluated at runtime. This gives us a part of the key i.e. (as per above example) apps/kstreams-kstreams-20170315-080407-8080/containers

 

The SELF_KEY environment variable is concatenated with the Docker container ID (which is a random UUID) evaluated during container startup within the init.sh script i.e. (in the above example) 3a52….. This completes our key using which we can query the service discovery store.

 

#!/bin/sh

#extract this container's (Docker) ID from the cgroup info
export CONTAINER_ID=$(cat /proc/self/cgroup | grep 'cpu:/' | sed -r 's/[0-9]+:cpu:.docker.//g')
echo $CONTAINER_ID
#pass it on to the application as a system property
java -jar -DCONTAINER_ID=$CONTAINER_ID occ-kafka-streams-1.0.jar

 

 

Both SELF_KEY and OCCS_HOST environment variables are used within the internal logic of the Kafka consumer application. The Oracle Container Cloud service discovery store is invoked (using its REST API) at container startup using the complete URL – http://172.17.0.1:9109/api/kv/<SELF_KEY>/<CONTAINER_ID>

 

See it in action via this code snippet

 

String containerID = System.getProperty("CONTAINER_ID", "container_id_not_found");
String sd_key_part = Optional.ofNullable(System.getenv("SELF_KEY")).orElse("sd_key_not_found");
String sd_key = sd_key_part + "/" + containerID;
String sd_base_url = "172.17.0.1:9109/api/kv";
String fullSDUrl = "http://" + sd_base_url + "/" + sd_key + "?raw=true";
String hostPort = getRESTClient().target(fullSDUrl).request().get(String.class);        
String port = hostPort.split(":")[1];

 

Initiate Deployment

 

Start Kafka broker first

 

 

Click on the Deploy button to start the deployment. Accept the defaults (for this time) and click Deploy

 

 

 

You will be led to the Deployments screen. Wait for a few seconds for the process to finish

 

 

 

Dive into the container details

 

Click on the Container Name (highlighted). You will be led to the container specific details page

 

 

Make a note of the following

 

Auto bound port

 

 

Environment variables (important ones have been highlighted)

 

Test

 

Assuming your Kafka broker is up and running and you have deployed the application successfully, execute the below mentioned steps to test drive your application

 

Build & start the producer application

 

 

mvn clean package //Initiate the Maven build 
cd target //Browse to the build directory
java -jar -DKAFKA_CLUSTER=<kafka broker host:port> kafka-cpu-metrics-producer.jar //Start the application

 

The producer application will start sending data to the Kafka broker

 

Check the statistics

 

Cumulative moving average of all machines

 

Allow the producer to run for 30-40 seconds and then check the current statistics. Issue an HTTP GET request to your consumer application at http://OCCS_HOST:PORT/metrics e.g. http://120.33.42.007:37155/metrics. You'll see a response payload similar to what's depicted below

 

the output below has been truncated for the sake of brevity

 

 

The information in the payload is as following

  • cpu: the cumulative average of the CPU usage of a machine
  • machine: the machine ID
  • source: this has been purposefully added as diagnostic information to see which node (Docker container in Oracle Container Cloud) is handling the calculation for a specific machine (this is subject to change as your application scales up/down)
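Purely as an illustration (the values below are made up and the exact JSON shape depends on the JAXB/JSON binding of the Metrics POJO), a response might look something like this

{
  "metrics": [
    {"cpu": "21.5", "machine": "machine-1", "source": "3a52b4c0d1ee:8080"},
    {"cpu": "14.9", "machine": "machine-2", "source": "9f1e2a7b66aa:8080"}
  ]
}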

 

Cumulative moving average of a specific machine

 

Issue an HTTP GET request to your consumer application at http://OCCS_HOST:PORT/metrics/<machine-ID> e.g. http://120.33.42.007:37155/metrics/machine-1

 

 

 

Scale up… and down

 

Oracle Container Cloud enables your application to remain elastic i.e. scale out or scale in on-demand. The process is simple – let’s see how it works for this application. Choose your deployment from the Deployments menu and click Change Scaling. We are bumping up to 3 instances now

 

 

After some time, you'll have three containers running separate instances of your Kafka Streams application

 

 

 

The CPU metrics computation task will now be shared amongst three nodes. You can check the logs of the old and new containers to confirm this.

 

 

In the old container, Kafka streams will close the existing processing tasks in order to re-distribute them to the new nodes. On checking the logs, you will see something similar to the below output

 

 

 

In the new containers, you will see the Processor Initialized output, as a result of tasks being handed to these nodes. Now you can check the metrics using any of the three instances (check the auto bound port for the new containers). You can spot the exact node which has calculated the metric (notice the different port number). See the snippet below

 

 

 

Scale down: You can scale down the number of instances using the same set of steps and Kafka Streams will take care of re-balancing the tasks among the remaining nodes

 

Note on Dynamic load balancing

 

In a production setup, one would want to load balance the consumer microservices by using HAProxy, nginx etc. (in this example one had to inspect each application instance by using the auto bound port information). This might be covered in a future blog post. Oracle Container Cloud provides you the ability to easily build such a coordinated set of services using Stacks and ships with some example stacks for reference purposes

 

That’s all for this blog post.... Cheers!

 

The views expressed in this post are my own and do not necessarily reflect the views of Oracle.

This blog shows you how you can use Payara Micro to build a Java EE based microservice. It will leverage the following services from the Oracle Cloud (PaaS) stack

 

  • Developer Cloud service: to host the code (Git repo), provide Continuous Integration & Continuous Deployment capabilities (thanks to its integration with other Oracle PaaS services)
  • Application Container Cloud service: scalable aPaaS for running our Java EE microservice

 

 

Overview

 

Payara Micro?

Payara Micro is a Java EE based solution for building microservice style applications. Let’s expand on this a little bit

 

  • Java EE: Payara Micro supports the Java EE Web Profile standard along with additional support for other specifications which are not a part of the Web Profile (e.g. Batch, Concurrency Utilities etc.)
  • It’s a library: Available as a JAR file which encapsulates all these features

 

Development model

Payara Micro offers you the choice of multiple development styles…

 

  • WAR: package your Java EE application as a WAR file and launch it with Payara Micro using java -jar payara-micro-<version>.jar --deploy mystocks.war
  • Embedded mode: because it’s a library, it can be embedded within your Java applications using its APIs
  • Uber JAR: Use the Payara Micro Maven support along with the exec plugin to package your WAR along with the Payara Micro library as a fat JAR

 

We will use the fat JAR technique in the sample application presented in the blog

 

Benefits

 

Some of the potential benefits are as follows

 

  • Microservices friendly: gives you the power of Java EE as a library, which can be easily used within applications, packaged in flexible manner (WAR + JAR or just a fat JAR) and run in multiple environments such as PaaS , container based platforms
  • Leverage Java EE skill set: continue using your expertise on Java EE specifications like JAX-RS, JPA, EJB, CDI etc.

 

About the sample application

 

It is a vanilla Java EE application which uses the following APIs – JAX-RS, EJB, CDI and WebSocket. It helps keep track of stock prices of NASDAQ scrips.

 

  • Users can check the stock price of a scrip (listed on NASDAQ) using a simple REST interface
  • Real time price tracking is also available – but this is only available for Oracle (ORCL)

 

Here is a high level diagram and some background context

 

  • an EJB scheduler periodically fetches the (ORCL) stock price, fires CDI events which are received by the WebSocket component (marked as a CDI event observer) and connected clients are updated with the latest price
  • the JAX-RS REST endpoint is used to fetch price for any stock on demand - this is a typical request-response based HTTP interaction as opposed to the bi-directional, full-duplex WebSocket interaction

 

 

 

 

Code

 

Let's briefly look at the relevant portions of the code (import statements omitted for brevity)

 

RealTimeStockTicker.java

 

@ServerEndpoint("/rt/stocks")
public class RealTimeStockTicker {


    //stores Session (s) a.k.a connected clients
    private static final List<Session> CLIENTS = new ArrayList<>();


    /**
     * Connection callback method. Stores connected client info
     *
     * @param s WebSocket session
     */
    @OnOpen
    public void open(Session s) {
        CLIENTS.add(s);
        Logger.getLogger(RealTimeStockTicker.class.getName()).log(Level.INFO, "Client connected -- {0}", s.getId());
    }


    /**
     * pushes stock prices asynchronously to ALL connected clients
     *
     * @param tickTock the stock price
     */
    public void broadcast(@Observes @StockDataEventQualifier String tickTock) {
        Logger.getLogger(RealTimeStockTicker.class.getName()).log(Level.INFO, "Event for Price {0}", tickTock);
        for (final Session s : CLIENTS) {
            if (s != null && s.isOpen()) {
                /**
                 * Asynchronous push
                 */
                s.getAsyncRemote().sendText(tickTock, new SendHandler() {
                    @Override
                    public void onResult(SendResult result) {
                        if (result.isOK()) {
                            Logger.getLogger(RealTimeStockTicker.class.getName()).log(Level.INFO, "Price sent to client {0}", s.getId());
                        } else {
                            Logger.getLogger(RealTimeStockTicker.class.getName()).log(Level.SEVERE, "Could not send price update to client " + s.getId(),
                                    result.getException());
                        }
                    }
                });
            }


        }


    }


    /**
     * Disconnection callback. Removes client (Session object) from internal
     * data store
     *
     * @param s WebSocket session
     */
    @OnClose
    public void close(Session s) {
        CLIENTS.remove(s);
        Logger.getLogger(RealTimeStockTicker.class.getName()).log(Level.INFO, "Client disconnected -- {0}", s.getId());
    }


}

 

 

StockDataEventQualifier.java

 

/**
 * Custom CDI qualifier to stamp CDI stock price CDI events
 * 
 */
@Qualifier
@Retention(RUNTIME)
@Target({METHOD, FIELD, PARAMETER, TYPE})
public @interface StockDataEventQualifier {
}

 

 

StockPriceScheduler.java

 

/**
 * Periodically polls the Google Finance REST endpoint using the JAX-RS client
 * API to pull stock prices and pushes them to connected WebSocket clients using
 * CDI events
 *
 */
@Singleton
@Startup
public class StockPriceScheduler {


    @Resource
    private TimerService ts;
    private Timer timer;


    /**
     * Sets up the EJB timer (polling job)
     */
    @PostConstruct
    public void init() {
        /**
         * fires 5 secs after creation
         * interval = 5 secs
         * non-persistent
         * no-additional (custom) info
         */
        timer = ts.createIntervalTimer(5000, 5000, new TimerConfig(null, false)); //trigger every 5 seconds
        Logger.getLogger(StockPriceScheduler.class.getName()).log(Level.INFO, "Timer initiated");
    }


    @Inject
    @StockDataEventQualifier
    private Event<String> msgEvent;


    /**
     * Implements the logic. Invoked by the container as per scheduled
     *
     * @param timer the EJB Timer object
     */
    @Timeout
    public void timeout(Timer timer) {
        Logger.getLogger(StockPriceScheduler.class.getName()).log(Level.INFO, "Timer fired at {0}", new Date());
        /**
         * Invoked asynchronously
         */
        Future<String> tickFuture = ClientBuilder.newClient().
                target("https://www.google.com/finance/info?q=NASDAQ:ORCL").
                request().buildGet().submit(String.class);


        /**
         * Extracting result immediately with a timeout (3 seconds) limit. This
         * is a workaround since we cannot impose timeouts for synchronous
         * invocations
         */
        String tick = null;
        try {
            tick = tickFuture.get(3, TimeUnit.SECONDS);
        } catch (InterruptedException | ExecutionException | TimeoutException ex) {
            Logger.getLogger(StockPriceScheduler.class.getName()).log(Level.INFO, "GET timed out. Next iteration due on - {0}", timer.getNextTimeout());
            return;
        }
        
        if (tick != null) {
            /**
             * cleaning the JSON payload
             */
            tick = tick.replace("// [", "");
            tick = tick.replace("]", "");


            msgEvent.fire(StockDataParser.parse(tick));
        }


    }


    /**
     * purges the timer
     */
    @PreDestroy
    public void close() {
        timer.cancel();
        Logger.getLogger(StockPriceScheduler.class.getName()).log(Level.INFO, "Application shutting down. Timer will be purged");
    }
}

 

 

RESTConfig.java

 

/**
 * JAX-RS configuration class
 * 
 */
@ApplicationPath("api")
public class RESTConfig extends Application{
    
}

 

 

StockDataParser.java

 

/**
 * A simple utility class which leverages the JSON Processing (JSON-P) API to filter the JSON 
 * payload obtained from the Google Finance REST endpoint and returns useful data in a custom format
 * 
 */
public class StockDataParser {
    
    public static String parse(String data){
        
        JsonReader reader = Json.createReader(new StringReader(data));
                JsonObject priceJsonObj = reader.readObject();
                String name = priceJsonObj.getJsonString("t").getString();
                String price = priceJsonObj.getJsonString("l_cur").getString();
                String time = priceJsonObj.getJsonString("lt_dts").getString();
        


        return (String.format("Price for %s on %s = %s USD", name, time, price));
    }
}
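For context, the (cleaned up) payload handed to this parser is a JSON object along these lines. The values below are made up, but the field names (t, l_cur, lt_dts) are the ones the parser reads once the scheduler strips the leading "// [" and trailing "]"

{
  "t": "ORCL",
  "l_cur": "44.50",
  "lt_dts": "2017-02-23T11:58:00Z"
}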

 

A note on packaging

As mentioned earlier, from a development perspective, it is a typical WAR based Java EE application which is packaged as a fat JAR along with the Payara Micro container

 

Notice how the container is being packaged with the application rather than the application being deployed into a container

The Java EE APIs are only needed for compilation (scope = provided) since they are present in the Payara Micro library

 

<dependency>
 <groupId>javax</groupId>
 <artifactId>javaee-api</artifactId>
 <version>7.0</version>
 <scope>provided</scope>
</dependency>

 

 

Using the Maven plugin to produce a fat JAR

 

<plugin>
    <groupId>org.codehaus.mojo</groupId>
    <artifactId>exec-maven-plugin</artifactId>
    <version>1.5.0</version>
    <dependencies>
        <dependency>
            <groupId>fish.payara.extras</groupId>
            <artifactId>payara-micro</artifactId>
            <version>4.1.1.164</version>
        </dependency>
    </dependencies>
    <executions>
        <execution>
            <id>payara-uber-jar</id>
            <phase>package</phase>
            <goals>
                <goal>java</goal>
            </goals>
            <configuration>
                <mainClass>fish.payara.micro.PayaraMicro</mainClass>
                <arguments>
                    <argument>--deploy</argument>
                    <argument>${basedir}/target/${project.build.finalName}.war</argument>
                    <argument>--outputUberJar</argument>                                                  
                    <argument>${basedir}/target/${project.build.finalName}.jar</argument>
                </arguments>
                <includeProjectDependencies>false</includeProjectDependencies>
                <includePluginDependencies>true</includePluginDependencies>
                <executableDependency>
                    <groupId>fish.payara.extras</groupId>
                    <artifactId>payara-micro</artifactId>
                </executableDependency>
            </configuration>
        </execution>
    </executions>
</plugin>
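Once packaged, the resulting fat JAR can be run locally in much the same way Application Container Cloud runs it (see the manifest.json later in this post). The port value here is just an example; on the cloud, the $PORT environment variable is injected

java -jar target/mystocks.jar --port 8080 --noCluster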

 

 

Setting up Continuous Integration & Deployment

The below sections deal with the configurations to be made within the Oracle Developer Cloud service

 

Project & code repository creation

Please refer to the Project & code repository creation section in the Tracking JUnit test results in Developer Cloud service blog or check the product documentation for more details

 

Configure source code in Git repository

Push the project from your local system to the Developer Cloud Git repo you just created. We will do this via the command line and all you need is a Git client installed on your local machine. You can use the Git CLI or any other tool of your choice

 

cd <project_folder> 
git init  
git remote add origin <developer_cloud_git_repo>  
//e.g. https://john.doe@developer.us.oraclecloud.com/developer007-foodomain/s/developer007-foodomain-project_2009/scm/sample.git
git add .  
git commit -m "first commit"  
git push -u origin master  //Please enter the password for your Oracle Developer Cloud account when prompted

 

Configure build

 

Create a New Job

 

 

Select JDK

 

 

 

Continuous Integration (CI)

 

Choose Git repository

 

 

 

Set the build trigger - this build job will be triggered in response to updates within the Git repository (e.g. via git push)

 

 

Add build steps

 

  • A Maven Build step – to produce the WAR and the fat JAR
  • An Execute Shell step – package up the application JAR along with the required deployment descriptor (manifest.json required by Application Container cloud)

 

 

 

 

Here is the command for your reference

 

zip -j accs-payara-micro.zip target/mystocks.jar manifest.json

 

The manifest.json is as follows

 

{
    "runtime": {
        "majorVersion": "8"
    },
    "command": "java -jar mystocks.jar --port $PORT --noCluster",
    "release": {
        "build": "23022017.1202",
        "commit": "007",
        "version": "0.0.1"
    },
    "notes": "Java EE on ACC with Payara Micro"
}

 

Activate a post build action to archive deployable zip file

 

 

 

Execute Build

Before configuring deployment, we need to trigger the build in order to produce the artifacts which can be referenced by the deployment configuration

 

 

 

After the build is complete, you can

  • Check the build logs
  • Confirm archived artifacts

 

Logs

 

 

Artifacts

 

 

 

Continuous Deployment (CD) to Application Container Cloud

 

Create a New Configuration for deployment

 

 

 

  • Enter the required details and configure the Deployment Target
  • Configure the Application Container Cloud instance
  • Configure Automatic deployment option on the final confirmation page

 

You’ll end up with the below configuration

 

 

Confirmation screen

 

 

 

Check your application in Application Container Cloud

 

 

 

Test the CI/CD flow

 

Make some code changes and push them to the Developer Cloud service Git repo. This should

 

  • Automatically trigger the build, which once successful will
  • Automatically trigger the deployment process, and
  • Redeploy the new application version to Application Container Cloud

 

Test the application

 

 

I would recommend using the client which can be installed into Chrome browser as a plugin – Simple WebSocket Client

 

That's all for this blog post..

 

The views expressed in this post are my own and do not necessarily reflect the views of Oracle.

Part I of the blog demonstrated development, deployment of individual microservices (on Oracle Application Container Cloud) and how they are loosely coupled using the Apache Kafka message hub (setup on Oracle Compute Cloud). This (second) part will continue building on the previous one and with the help of an application, it will explore microservice based stream processing and dive into the following areas

 

  • Kafka Streams: A stream processing library
  • Scalability: enable your application to handle increased demands
  • Handling state: this is a hard problem to solve when the application needs to be horizontally scalable

 

 

Technical Components

 

Open source technologies

The following open source components were used to build the sample application

 

  • Apache Kafka: A scalable, distributed pub-sub message hub
  • Kafka Streams: A library for building stream processing applications on top of Apache Kafka
  • Jersey: Used to implement REST and SSE services. Uses Grizzly as a (pluggable) runtime/container
  • Maven: Used as the standard Java build tool (along with its assembly plugin)

 

Oracle Cloud

The following Oracle Cloud services have been leveraged

 

  • Application Container Cloud: Serves as a scalable platform for running our stream processing microservices
  • Compute Cloud: Hosts the Kafka cluster (broker)

 

Note: In addition to compute based (IaaS) Kafka hosting, Oracle Cloud now offers Event Hub Cloud. This is a compelling offering which provides Apache Kafka as a fully managed service along with other value added capabilities.

 

Hello Kafka Streams!

In simple words, Kafka Streams is a library which you can include in your Java based applications to build stream processing applications on top of Apache Kafka. Other distributed computing platforms like Apache Spark, Apache Storm etc. are widely used in the big data stream processing world, but Kafka Streams brings some unique propositions in this area

 

Kafka Streams: what & why

 

  • Built on top of Kafka, leveraging its scalable and fault tolerant capabilities: if you use Kafka in your ecosystem, it makes perfect sense to leverage Kafka Streams to churn streaming data to/from the Kafka topics
  • Microservices friendly: it's a lightweight library which you use within your Java application, which means you can use it to build microservices style stream processing applications
  • Flexible deployment & elastic in nature: you're not restricted to a specific deployment model (e.g. cluster-based); the application can be packaged and deployed in a flexible manner and scaled up and down easily
  • For fast data: harness the power of Kafka Streams to crunch high volume data in real time systems; it does not need to be at big data scale
  • Support for stateful processing: helps manage local application state in a fault tolerant & scalable manner

 

 

Sample application: what’s new

 

In part I, the setup was as follows

  • A Kafka broker serving as the messaging hub
  • Producer application (on Application Container Cloud) pushing CPU usage metrics to Kafka
  • Consumer application (on Application Container Cloud) consuming those metrics from Kafka and exposes them as real time feed (using Server Sent Events)

 

Some parts of the sample have been modified to demonstrate some of the key concepts. Here is the gist

 

  • Consumer API: the new consumer application leverages the Kafka Streams API on Application Container Cloud as compared to the traditional (polling based) Kafka Consumer client API (used in part I)
  • Consumer topology: we will deploy multiple instances of the Consumer application to scale our processing logic
  • Nature of metrics feed: the cumulative moving average of the CPU metrics per machine is calculated as opposed to the exact metric provided by the SSE feed in part I
  • Accessing the CPU metrics feed: the consumer application makes the CPU usage metrics available in the form of a REST API as compared to the SSE based implementation in part I

 

High level architecture

The basic architecture still remains the same i.e. microservices decoupled using a messaging layer

 

 

 

As mentioned above, the consumer application has undergone changes and is now based on the Kafka Streams API. We could have continued to use the traditional poll based Kafka Consumer client API as in part I, but the Kafka Streams API was chosen for a few reasons. Let’s go through them in detail and see how it fits in the context of the overall solution. At this point, ask yourself the following questions

 

  • How would you scale your consumer application?
  • How would you handle intermediate state (required for moving average calculation) spread across individual instances of your scaled out application?

 

Scalability

With Application Container Cloud you can spawn multiple instances of your stream processing application with ease (for more details, refer to the documentation)

 

But how does it help?

The sample application models CPU metrics being continuously sent by the producer application to a Kafka broker – for demonstration purposes, the number of machines (whose CPU metrics are being sent) has been limited to ten. But how would you handle large scale data

 

  • When the number of machines increases to scale of thousands?
  • Perhaps you want to factor in additional attributes (in addition to just the cpu usage)?
  • Maybe you want to execute all this at data-center scale?

 

The answer lies in distributing your computation across several processes and this is where horizontal scalability plays a key role.

When the CPU metrics are sent to a topic in Kafka, they are distributed to different partitions (using a default consistent hashing algorithm) – this is similar to sharding. This helps from multiple perspectives

  • When Kafka itself is scaled out (broker nodes are added) – individual partitions are replicated over these nodes for fault tolerance and high performance
  • From a consumer standpoint - multiple consumers (in the same group) automatically distribute the load among themselves

 

In the case of our example, each stream processing application instance is nothing but a (specialized) form of Kafka Consumer and takes up a non-overlapping set of partitions in Kafka for processing. For a setup where 2 instances are processing data for 10 machines spread over 4 partitions in Kafka (broker), here is a pictorial representation

 

 

 

Managing application state (at scale)

The processing logic in the sample application is not stateless i.e. it depends on previous state to calculate its current state. In the context of this application, state is

 

  • the cumulative moving average of a continuous stream of CPU metrics,
  • being calculated in parallel across a distributed set of instances, and
  • constantly changing i.e. the cumulative moving average of the machines handled by each application instance is getting updated with the latest results

 

If you confine the processing logic to a single node, the problem of localized state co-ordination would not exist i.e. local state = global state. But this luxury is not available in a distributed processing system. Here is how our application handles it (thanks to Kafka Streams)

 

  • The local state store (a KV store) containing the machine to (cumulative moving average) CPU usage metric is sent to a dedicated topic in Kafka e.g. the in-memory-avg-store in our application (named cpu-streamz) will have a corresponding topic cpu-streamz-in-memory-avg-store-changelog in Kafka
  • This topic is called a changelog since it is a compacted one i.e. only the latest key-value pair is retained by Kafka. This is meant to achieve the goal (distributed state management) in the cheapest possible manner
  • During scale up – Kafka assigns some partitions to the new instance (see above example) and the state for those partitions (which were previously stored in another instance) are replayed from the Kafka changelog topic to build the state store for this new instance
  • When an instance crashes or is stopped – the partitions being handled by that instance are handed off to some other node and the state of the partition (stored in the Kafka changelog topic) is written to the local state store of the existing node to which the work was allotted

 

All in all, this ensures scalable and fault tolerant state management
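To make this concrete, here is a minimal sketch (imports omitted, as elsewhere in this post) of how such a processor and its fault tolerant state stores might be wired up using the Kafka Streams 0.10.x Processor API. This is not the application's actual bootstrap code: the topic name, the second store name and the configuration values are assumptions, while the application ID (cpu-streamz), the store name (in-memory-avg-store) and CPUCumulativeAverageProcessor (excerpted in the Code section below) follow the naming used in this post

Properties props = new Properties();
props.put(StreamsConfig.APPLICATION_ID_CONFIG, "cpu-streamz"); //also used to derive the changelog topic name
props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, System.getenv("KAFKA_BROKER"));
props.put(StreamsConfig.ZOOKEEPER_CONNECT_CONFIG, System.getenv("ZOOKEEPER"));
//host:port of this instance, so that other instances can query its local state store
props.put(StreamsConfig.APPLICATION_SERVER_CONFIG, "<this-instance-host>:<port>");
props.put(StreamsConfig.KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass().getName());
props.put(StreamsConfig.VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass().getName());

TopologyBuilder builder = new TopologyBuilder();
builder.addSource("metrics-source", "cpu-metrics-topic") //topic name is an assumption
        .addProcessor("avg-processor", CPUCumulativeAverageProcessor::new, "metrics-source")
        .addStateStore(Stores.create("in-memory-avg-store") //CMA per machine, backed by a compacted changelog topic
                .withStringKeys().withDoubleValues().inMemory().build(), "avg-processor")
        .addStateStore(Stores.create("num-records-store") //per machine record count, name is an assumption
                .withStringKeys().withIntegerValues().inMemory().build(), "avg-processor");

KafkaStreams streams = new KafkaStreams(builder, props);
streams.start();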

 

Exposing application state

As mentioned above, the cumulative moving averages of CPU metrics of each machine is calculated across multiple nodes in parallel. In order to find out the global state of the system i.e. current average of all (or specific) machines, the local state stores need to be queried. The application provides a REST API for this

 

 

 

 

More details in the Testing section on how to see this in action

 

It's important to make note of these points with regards to the implementation of the REST API, which in turn lets us get what we want - real time insight into the moving averages of the CPU usage

 

  • Topology agnostic: Use a single access URL provided by Application Container Cloud (as highlighted in the diagram above). As a client, you do not have to be aware of individual application instances
  • Robust & flexible: Instances can be added or removed on the fly but the overall business logic (in this case it is calculation of the cumulative moving average of a stream of CPU metrics) will remain fault tolerant and adjust to the elastic topology changes

 

This is made possible by a combination of the following

 

  • Automatic load balancing: Application Container cloud load balances requests among multiple instances of your applications
  • Clustered setup: from an internal implementation perspective, your application instances can detect each other. For this to work, the isClustered attribute in the manifest.json is set to true and custom logic is implemented within the solution in order for the instance specific information to be discovered and used appropriately. However, this is an internal implementation detail and the user is not affected by it

Please look at the Code snippets section for some more details

  • Interactive queries: this capability in Kafka Streams enables external clients to introspect the state store of a stream processing application instance via a host-port configuration enabled within the app configuration

 

An in-depth discussion of Kafka Streams is not possible in a single blog. The above sections are meant to provide just enough background which is (hopefully) sufficient from the point of view of this blog post. Readers are encouraged to spend some time going through the official documentation and come back to this blog to continue hacking on the sample

 

Setup

You can refer to part I of the blog for the Apache Kafka related setup. The only additional step which needs to be executed is exposing the port on which your Zookeeper process is listening (it's 2181 by default) – as this is required by the Kafka Streams library configuration. While executing the steps from the Open Kafka listener port section, ensure that you include the Oracle Compute Cloud configuration for 2181 (in addition to the Kafka broker port 9092)

 

Code

Maven dependencies

As mentioned earlier, from an application development standpoint, Kafka Streams is just a library. This is evident in the pom.xml

 

<dependency>
     <groupId>org.apache.kafka</groupId>
     <artifactId>kafka-streams</artifactId>
     <version>0.10.1.1</version>
</dependency>

 

The project also uses the appropriate Jersey libraries along with the Maven shade and assembly plugins to package the application  

Overview

The producer microservice remains the same and you can refer part I for the details. Let’s look at the revamped Consumer stream processing microservice

 

  • KafkaStreamsAppBootstrap: Entry point for the application. Kicks off the Grizzly container and the Kafka Streams processing pipeline
  • CPUMetricStreamHandler: Implements the processing pipeline logic and handles the Kafka Streams configuration and topology creation
  • MetricsResource: Exposes multiple REST endpoints for fetching CPU moving average metrics
  • Metric, Metrics: POJOs (JAXB decorated) to represent metric data. They are exchanged as JSON/XML payloads
  • GlobalAppState, Utils: Common utility classes

 

Now that you have a fair idea of what's going on within the application and an overview of the classes involved, it makes sense to peek at some of the relevant sections of the code

 

State store

 

    public static class CPUCumulativeAverageProcessor implements Processor<String, String> {
     ...................
        @Override
        public void init(ProcessorContext pc) {
            this.pc = pc;
            this.pc.schedule(12000); //invoke punctuate every 12 seconds
            this.machineToAvgCPUUsageStore = (KeyValueStore<String, Double>) pc.getStateStore(AVG_STORE_NAME);
            this.machineToNumOfRecordsReadStore = (KeyValueStore<String, Integer>) pc.getStateStore(NUM_RECORDS_STORE_NAME);
        }
     ...............

 

Cumulative Moving Average (CMA) calculation

 

..........
@Override
public void process(String machineID, String currentCPUUsage) {

            //turn each String value (cpu usage) to Double
            Double currentCPUUsageD = Double.parseDouble(currentCPUUsage);
            Integer recordsReadSoFar = machineToNumOfRecordsReadStore.get(machineID);
            Double latestCumulativeAvg = null;

            if (recordsReadSoFar == null) {
                PROC_LOGGER.log(Level.INFO, "First record for machine {0}", machineID);
                machineToNumOfRecordsReadStore.put(machineID, 1);
                latestCumulativeAvg = currentCPUUsageD;
            } else {
                Double cumulativeAvgSoFar = machineToAvgCPUUsageStore.get(machineID);
                PROC_LOGGER.log(Level.INFO, "CMA so far {0}", cumulativeAvgSoFar);

                //refer https://en.wikipedia.org/wiki/Moving_average#Cumulative_moving_average for details
                latestCumulativeAvg = (currentCPUUsageD + (recordsReadSoFar * cumulativeAvgSoFar)) / (recordsReadSoFar + 1);
                recordsReadSoFar = recordsReadSoFar + 1;
                machineToNumOfRecordsReadStore.put(machineID, recordsReadSoFar);
            }

            machineToAvgCPUUsageStore.put(machineID, latestCumulativeAvg); //store latest CMA in local state store
..........
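As a quick sanity check of the formula: if the cumulative average of the first three readings for a machine is 50.0 and the fourth reading is 70.0, the new cumulative average is (70.0 + 3 * 50.0) / 4 = 55.0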

 

 

Metrics POJO

 

@XmlRootElement
@XmlAccessorType(XmlAccessType.FIELD)
public class Metrics {
    private final List<Metric> metrics;

    public Metrics() {
        metrics = new ArrayList<>();
    }

    public Metrics add(String source, String machine, String cpu) {
        metrics.add(new Metric(source, machine, cpu));
        return this;
    }

    public Metrics add(Metrics anotherMetrics) {
        anotherMetrics.metrics.forEach((metric) -> {
            metrics.add(metric);
        });
        return this;
    }

    @Override
    public String toString() {
        return "Metrics{" + "metrics=" + metrics + '}';
    }
    
    public static Metrics EMPTY(){
        return new Metrics();
    }
    
}

 

 

Exposing REST API for state

 

@GET
@Produces({MediaType.APPLICATION_JSON, MediaType.APPLICATION_XML})
public Response all_metrics() throws Exception {
        Response response = null;
        try {
            KafkaStreams ks = GlobalAppState.getInstance().getKafkaStreams();
            HostInfo thisInstance = GlobalAppState.getInstance().getHostPortInfo();
            
          Metrics metrics = getLocalMetrics();

            ks.allMetadataForStore(storeName)
                    .stream()
                    .filter(sm -> !(sm.host().equals(thisInstance.host()) && sm.port() == thisInstance.port())) //only query remote node stores
                    .forEach(new Consumer<StreamsMetadata>() {
                        @Override
                        public void accept(StreamsMetadata t) {
                            String url = "http://" + t.host() + ":" + t.port() + "/metrics/remote";
                            //LOGGER.log(Level.INFO, "Fetching remote store at {0}", url);
                            Metrics remoteMetrics = Utils.getRemoteStoreState(url, 2, TimeUnit.SECONDS);
                            metrics.add(remoteMetrics);
                            LOGGER.log(Level.INFO, "Metric from remote store at {0} == {1}", new Object[]{url, remoteMetrics});
                        }
                    });

            response = Response.ok(metrics).build();
        } catch (Exception e) {
            LOGGER.log(Level.SEVERE, "Error - {0}", e.getMessage());
        }
        return response;
}

 

Host discovery

 

    public static String getHostIPForDiscovery() {
    String host = null;
        try {

            String hostname = Optional.ofNullable(System.getenv("APP_NAME")).orElse("streams");

            InetAddress inetAddress = Address.getByName(hostname);
            host = inetAddress.getHostAddress();

        } catch (UnknownHostException ex) {
            host = "localhost";
        }
        return host;
    }

Deployment to Application Container Cloud

 

Now that you have a fair idea of the application, it’s time to look at the build, packaging & deployment

 

Update deployment descriptors

 

The metadata files for the producer application are the same. Please refer to part I for details on how to update them. The steps below are relevant to the (new) stream processing consumer microservice.

manifest.json: You can use this file in its original state

 

{
    "runtime": {
        "majorVersion": "8"
    },
    "command": "java -jar acc-kafka-streams-1.0.jar",
  "isClustered": "true"
}

 

deployment.json

 

It contains the environment variables required by the application at runtime. The values are left as placeholders for you to fill in prior to deployment.

 

{
"instances": "2",
  "environment": {
  "APP_NAME":"kstreams",
  "KAFKA_BROKER":"<as-configured-in-kafka-server-properties>",
  "ZOOKEEPER":"<zookeeper-host:port>"
  }
}

 

Here is an example

 

{
"instances": "2",
  "environment": {
  "APP_NAME":"kstreams",
  "KAFKA_BROKER":"oc-140-44-88-200.compute.oraclecloud.com:9092",
  "ZOOKEEPER":"10.190.210.199:2181"
  }
}

 

You need to be careful about the following

 

  • The value of the KAFKA_BROKER attribute should be the same as (Oracle Compute Cloud instance public DNS) the one you configured in the advertised.listeners attribute of the Kafka server.properties file
  • The APP_NAME attribute should be the same as the one you use while deploying your application using the Application Container Cloud REST API

Please refer to the following documentation for more details on metadata files

 

 

Build

 

Initiate the build process to produce the deployable artifact (a ZIP file)

 

//Producer application

cd <code_dir>/producer //maven project location
mvn clean package

//Consumer application

cd <code_dir>/consumer //maven project location
mvn clean package

 

The output of the build process is the respective ZIP files for producer (accs-kafka-producer-1.0-dist.zip) and consumer (acc-kafka-streams-1.0-dist.zip) microservices respectively

 

Upload & deploy

You would need to upload the ZIP file to Oracle Storage Cloud and then reference it in the subsequent steps. Here are the required cURL commands

 

Create a container in Oracle Storage cloud (if it doesn't already exist)  
  
curl -i -X PUT -u <USER_ID>:<USER_PASSWORD> <STORAGE_CLOUD_CONTAINER_URL>  
e.g. curl -X PUT -u jdoe:foobar "https://domain007.storage.oraclecloud.com/v1/Storage-domain007/accs-kstreams-consumer/"  
  
Upload your zip file into the container (zip file is nothing but a Storage Cloud object)  
  
curl -X PUT -u <USER_ID>:<USER_PASSWORD> <STORAGE_CLOUD_CONTAINER_URL> -T <zip_file> "<storage_cloud_object_URL>" //template  
e.g. curl -X PUT -u jdoe:foobar -T acc-kafka-streams-1.0-dist.zip "https://domain007.storage.oraclecloud.com/v1/Storage-domain007/accs-kstreams-consumer/accs-kafka-consumer.zip"

 

 

Repeat the same for the producer microservice

 

You can now deploy your application to Application Container Cloud using its REST API. The Oracle Storage cloud path (used above) will be referenced while using the Application Container Cloud REST API (used for deployment). Here is a sample cURL command which makes use of the REST API

 

curl -X POST -u joe@example.com:password \    
-H "X-ID-TENANT-NAME:domain007" \    
-H "Content-Type: multipart/form-data" -F "name=kstreams" \    
-F "runtime=java" -F "subscription=Monthly" \    
-F "deployment=@deployment.json" \    
-F "archiveURL=accs-kstreams-consumer/accs-kafka-consumer.zip" \    
-F "notes=notes for deployment" \    
https://apaas.oraclecloud.com/paas/service/apaas/api/v1.1/apps/domain007  

 

Note

  • the name attribute used in the curl command should be the same as the APP_NAME attribute used in the deployment.json
  • Repeat the same for the producer microservice

 

Post deployment

(the consumer application has been highlighted below)

 

The Applications console

 

 

The Overview sub-section

 

 

 

The Deployments sub-section

 

 

 

Testing

Assuming your Kafka broker is up and running and you have deployed the application successfully, execute the below mentioned steps to test drive your application

 

Start the producer

Trigger your producer application by issuing an HTTP GET to https://my-producer-app-url/producer e.g. https://accs-kafka-producer-domain007.apaas.us.oraclecloud.com/producer. This will start producing (random) CPU metrics for a bunch of (10) machines

 

 

You can stop the producer by issuing an HTTP DELETE on the same URL
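For example, using cURL with the producer URL shown above

//start producing CPU metrics
curl -X GET https://accs-kafka-producer-domain007.apaas.us.oraclecloud.com/producer

//stop the producer
curl -X DELETE https://accs-kafka-producer-domain007.apaas.us.oraclecloud.com/producer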

 

 

Check the statistics

 

Cumulative moving average of all machines

Allow the producer to run for 30-40 seconds and then check the current statistics. Issue an HTTP GET request to your consumer application e.g. https://acc-kafka-streams-domain007.apaas.us.oraclecloud.com/metrics. You'll see a response payload similar to what's depicted below

 

 

 

The information in the payload is as following

  • cpu: the cumulative average of the CPU usage of a machine
  • machine: the machine ID
  • source: this has been purposefully added as diagnostic information to see which node (instance in the Application Container Cloud) is handling the calculation for a specific machine (this is subject to change as your application scales up/down)

 

Cumulative moving average of a specific machine

 

 

 

Scale your application

Increase the number of instances of your application (from 2 to 3)

 

 

 

Check the stats again and you’ll notice that the computation task is being shared among three nodes now..

 

That’s all for this blog series.. !

 

The views expressed in this post are my own and do not necessarily reflect the views of Oracle.