
Developer Solutions


Introduction

 

Application architecture has matured from the traditional monolith to SOA, to microservices, and now to Serverless Architecture, which breaks an application into small functions, each performing a specific task, running on server infrastructure that is owned and managed by the service provider, allowing us to focus on core business logic. In this new cloud computing model, we as application developers are responsible for writing our business logic as functions and submitting them to the cloud provider, which executes them in a scalable and highly available manner. More about Serverless Architecture can be read in the wonderful article here.

 

In this article, we will see how to manage access to functions implemented as AWS Lambda using Oracle Identity Cloud Service, and how to use them in a single-page serverless application. Oracle Identity Cloud Service provides a fully integrated cloud service which delivers all the core identity and access management capabilities, including user onboarding and access management, integration with on-premises AD/OAM, identity federation and single sign-on, OpenID Connect based authentication, OAuth2-based security, multi-factor authentication, and social logins. More about IDCS can be found here.

 

Single Page Serverless Application Pattern

 

To emphasize the concept and the integration pattern, the use case considered in this blog is kept very simple, like a traditional Hello World application. Here is what we will accomplish with the help of this use case:

  1. We will create two Lambda functions: Lambda-Manager and Lambda-Employee.
  2. These Lambdas will be front-ended by AWS API Gateway.
  3. A single-page web application served from AWS S3 will call the API Gateway APIs to trigger the Lambdas and perform the specific action implemented.

 

The following diagram depicts a general serverless application pattern.


 

Secure Your Lambda


We want to enable role-based access to our Lambda functions, so that:

  1. The function Lambda-Manager is accessible only to users with the manager role.
  2. The function Lambda-Employee is accessible only to users with the employee role.

 

Let's see how to do this with Oracle Identity Cloud Service. The general pattern uses the Oracle Identity Cloud Service (IDCS) OpenID Connect authentication feature to authenticate the user and obtain an access token, which is then used when calling the AWS API Gateway API. API Gateway uses a custom authorizer to implement the authorization logic: it verifies the access token and calls the IDCS REST API to get more information about the user, in order to decide whether the user is allowed to access the Lambda function. The following diagram presents an overview of this implementation.


 

The request and response flow is:

  1. The user accesses the web page in the browser, served from AWS S3.
  2. The user is redirected to the IDCS login page and asked for login credentials.
  3. The user provides username/password, is authenticated at IDCS, and returns with an access token.
  4. The browser JavaScript makes a call to the API with the access token.
  5. API Gateway, using a custom authorizer as the authorization mechanism, calls the authorizer function (implemented as another Lambda function), passing the access token.
  6. The authorizer function validates the access token by verifying its signature as first-level validation.
  7. The authorizer function calls the IDCS REST API to get more information about the user represented by the access token.
  8. If the user belongs to the Manager group and the requested function is Lambda-Manager (identified by the API Gateway API), the authorizer creates an IAM Allow policy granting access to Lambda-Manager; otherwise it creates and returns an IAM Deny policy.
  9. API Gateway evaluates the returned policy: if it is an Allow policy, it calls the Lambda; otherwise it returns HTTP 403.
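Steps 8 and 9 hinge on the IAM policy document the authorizer returns. Here is a minimal sketch of that decision logic in Java; the group names, the method ARN, and the "/manager" path convention are hypothetical placeholders, and a real custom authorizer would additionally wrap this policy document in a response object with a principalId:

```java
import java.util.List;

public class AuthorizerSketch {

    // Build a minimal API Gateway authorizer policy document. The group names
    // ("SPA-Manager"/"SPA-Employee") and the "/manager" convention are illustrative.
    public static String buildPolicy(List<String> userGroups, String methodArn) {
        boolean managerApi = methodArn.contains("/manager");
        boolean allowed = managerApi
                ? userGroups.contains("SPA-Manager")
                : userGroups.contains("SPA-Employee");
        String effect = allowed ? "Allow" : "Deny";
        return "{"
                + "\"Version\":\"2012-10-17\","
                + "\"Statement\":[{"
                + "\"Action\":\"execute-api:Invoke\","
                + "\"Effect\":\"" + effect + "\","
                + "\"Resource\":\"" + methodArn + "\""
                + "}]}";
    }

    public static void main(String[] args) {
        System.out.println(buildPolicy(List.of("SPA-Manager"),
                "arn:aws:execute-api:us-east-1:123456789012:abcdef/prod/GET/manager"));
    }
}
```

A user in the SPA-Manager group gets an Allow policy for the manager API; anyone else gets Deny, which API Gateway turns into an HTTP 403.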

 

Set-up and Configuration

 

Here are the set-up and configuration steps required to implement the above flow (focusing only on the security aspects).

  • After deploying your Lambda functions, define an API in AWS API Gateway to call them.
  • Deploy the authorizer Lambda, which implements the authorization logic. The authorizer Lambda can implement custom authorization logic as required by your application. We will implement authorization based on the user's group membership in Identity Cloud Service, with the following validation:
    1. First, validate the access token by verifying its signature. There are various JWT libraries available; you can use any of them. You will need the signing certificate of your IDCS tenancy, which you can acquire using the Signing Certificate JWK REST endpoint.
    2. If the access token is valid, retrieve the group membership using the IDCS UserInfo REST endpoint.
    3. Depending on the group membership, create and return an IAM Allow or IAM Deny policy.
  • Create a "Custom Authorizer" for your APIs.


  • Configure the custom authorizer to use the authorizer Lambda created in step #2, with the identity token source set to "method.request.header.Authorization". This allows us to pass the access token in the Authorization header when calling the API.


  • On the IDCS side, create groups named "SPA-Manager" and "SPA-Employee".
  • Create users and assign each of them to one of the above groups.
  • Create an application in IDCS to define a resource for your API Gateway APIs.
  • Create a public client in IDCS and add the scope for the API resources defined above.
  • In the single-page app, check whether the user is already logged in by looking for an access token. If no access token is present, redirect the user to the IDCS authorize URL at
    • https://[IDCS Tenant URL]/oauth2/v1/authorize?client_id=[Client-ID]&response_type=token&redirect_uri=[URL where you want to redirect after login]&scope=[API Gateway API URI configured as scope in client] openid groups


  • After successful login, the user will be redirected to the redirect_uri with the access token, which can be used to call your API. The token is set in the HTTP Authorization header.

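The authorize URL described above can be assembled programmatically. Here is a sketch in Java (in the SPA itself this would be browser JavaScript; the tenant URL, client ID, redirect URI, and scope values below are placeholders):

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class AuthorizeUrlBuilder {

    // Build the IDCS implicit-flow authorize URL with URL-encoded parameters.
    public static String buildAuthorizeUrl(String tenantUrl, String clientId,
                                           String redirectUri, String apiScope) {
        // "openid groups" is appended so the token carries group membership claims
        String scope = apiScope + " openid groups";
        return tenantUrl + "/oauth2/v1/authorize"
                + "?client_id=" + URLEncoder.encode(clientId, StandardCharsets.UTF_8)
                + "&response_type=token"
                + "&redirect_uri=" + URLEncoder.encode(redirectUri, StandardCharsets.UTF_8)
                + "&scope=" + URLEncoder.encode(scope, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        System.out.println(buildAuthorizeUrl(
                "https://idcs-tenant.example.com", "my-client-id",
                "https://app.example.com/cb", "https://api.example.com/"));
    }
}
```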

That is all you have to do to enable user authentication and access control in your serverless application implemented with AWS Lambda, using Oracle Identity Cloud Service. You can extend this to enable social login, multi-factor authentication, and federated login using your on-premises enterprise directory.

 

Reference

1. Oracle Identity Cloud Service

2. Oracle Identity Cloud Service - Rest API

3. AWS Lambda

4. AWS API Gateway Custom Authorizer

We will look at

 

  • How to set up Apache Cassandra on Oracle Compute Cloud
  • Develop: some implementation details
  • Deploy: run it on Oracle Application Container Cloud using the CI/CD feature in Oracle Developer Cloud
  • Secure access: a secure channel between your application and Cassandra

 

 

 

Hello Cassandra

Apache Cassandra is an open source NoSQL database. It's written in Java, was (originally) developed at Facebook, and its design is based on/inspired by Amazon's Dynamo and Google's Bigtable. Some of its salient characteristics are as follows

 

  • Belongs to the Row-oriented family (of NoSQL databases)
  • Distributed, decentralized & elastically scalable
  • Highly available & fault-tolerant
  • Supports Tunable consistency

 

You can read more about Cassandra here

About the sample application

  • The sample application exposes REST endpoints, implemented using Jersey (JAX-RS)
  • Employee serves as the domain object; you need to bootstrap the required table
  • The DataStax Java driver is used to interact with Cassandra
    • It leverages the Cassandra object mapper for CRUD operations

 

It is available here

 

Locking down access to Cassandra

The goal is to allow exclusive access to Cassandra from our application without exposing its port (e.g. 9042) to the public internet. To enable this, Oracle Application Container Cloud automatically generates a Security IP List when you deploy an application, which can be added to a security rule for a virtual machine (VM) in Oracle Compute Cloud Service. This allows your application and the VM to communicate.

 

The setup details are discussed in an upcoming section

 

Setup Cassandra on Oracle Compute Cloud

 

Quick start using Bitnami

We will use a pre-configured Cassandra image from Bitnami via the Oracle Cloud Marketplace

 

  • Login to your Oracle Compute Cloud dashboard,
  • choose the Create Instance wizard, and then
  • select the required machine image from the Marketplace tab

 

More details here

 

 

 

 

Activate SSH access

We now need to allow SSH connections to our Cassandra virtual machine on Oracle Compute Cloud.

 

Create a Security Rule

 

 

You should see it in the list once it's done

 

 

SSH into the VM

 

 

Reset password

You will need to reset the Cassandra password as described in this documentation: https://docs.bitnami.com/oracle/infrastructure/cassandra/#how-to-reset-the-cassandra-administrator-password. Once you're done, log in using the new credentials.

 

 

Oracle Developer Cloud: setup & application deployment

 

You will need to configure Developer Cloud for the continuous build as well as the deployment process. You can refer to previous blogs for the same (some of the details specific to this example are highlighted here).

 

References

 

Provide Oracle Application Container Cloud (configuration) descriptor

 

 

 

Check application details on Oracle Application Container Cloud

 

Deployed Application

 

 

Environment Variables

 

 

Check Security IP List

 

After successful deployment, you will be able to see the application as well as the Security IP List information in Oracle Application Container Cloud.

 

 

Please note that you will not be able to access/test the application yet, since the secure communication channel between your application and Cassandra is not set up. The next section covers the details.

 

Oracle Compute Cloud security configurations

 

Confirm Security IP List

 

You will see the Security IP List that was created when you deployed the application on Oracle Application Container Cloud (mentioned above). It ensures that the IP of the application deployed on Oracle Application Container Cloud is whitelisted for access to our Cassandra VM on Oracle Compute Cloud.

 

 

 

Create Security Application

 

This represents the component you are protecting, along with its access type and port number: in this case, TCP and 9042 respectively.

 

Create Security Rule

 

The default Security list is created by Oracle Compute Cloud (after the Bitnami image was provisioned)

 

We will create a Security Rule that ties together the Security IP List, Security Application, and Security List

 

 

You should see it in the list of rules

 

 

Test the application

 

Bootstrap the keyspace and table

 

The sample application uses the test keyspace and a table named employee; you need to create these entities in the Cassandra instance.

 

CREATE KEYSPACE test
  WITH REPLICATION = {
    'class' : 'SimpleStrategy',
    'replication_factor' : 1
  };

CREATE TABLE test.employee (emp_id uuid PRIMARY KEY, name text);

 

Access REST endpoints

 

Create a few employees

 

curl -X POST <ACCS_APP_URL>/employees -d abhishek // 'abhishek' is the name
curl -X POST <ACCS_APP_URL>/employees -d john // 'john' is the name

 

  • You will receive HTTP 201 (Created) in response
  • The Location (response) header will have the (REST) coordinates (URI) of the newly created employee record - use this for the search (next step)

 

Search for the new employee

 

curl -X GET <ACCS_APP_URL>/employees/<emp_id>

 

You will receive an XML payload with the employee ID and name

 

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<employee>
    <empId>18df5fd1-88d8-4820-984e-3cf0293c3051</empId>
    <name>test1</name>
</employee>

 

Search for all employees

 

curl -X GET <ACCS_APP_URL>/employees/

 

You will receive an XML payload with the employees' info

 

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<employees>
    <employee>
        <empId>8a841167-6aaf-428f-bc2b-02269f04ce93</empId>
        <name>abhirockzz</name>
    </employee>
    <employee>
        <empId>2e2cfb3c-1530-4099-b6e9-a550f11b25de</empId>
        <name>test2</name>
    </employee>
    <employee>
        <empId>18df5fd1-88d8-4820-984e-3cf0293c3051</empId>
        <name>test1</name>
    </employee>
    <employee>
        <empId>2513a12d-5fc7-4bc6-9f94-d13cea23fe7a</empId>
        <name>abhishek</name>
    </employee>
</employees>

 

Test the CI/CD flow

Make some code changes and push them to the Developer Cloud service Git repo. This should

  • Automatically trigger the build, which once successful will
  • Automatically trigger the deployment process, and
  • Redeploy the new application version to Application Container Cloud

 

 

**The views expressed in this post are my own and do not necessarily reflect the views of Oracle.

Glassfish 5 Build 11 is now available, with support for many of the Java EE 8 specifications, e.g. JAX-RS 2.1, JPA 2.2, JSON-B 1.0, etc. For more details, check out the Aquarium space. This blog covers

 

 

 

Application

 

It's a simple one

 

  • Has a REST endpoint (also a @Stateless bean)
  • Interacts with an embedded (Derby) DB using JPA - we use the jdbc/__TimerPool present in Glassfish to make things easier
  • Test data is bootstrapped using standard JPA features in persistence.xml (drop + create DB along with a SQL source)

 

JSON-B 1.0 in action

 

It primarily makes use of the JSON-B annotations to customize the behavior

 

  • @JsonbProperty to modify the name of the JSON attribute, i.e. make it different from the POJO field/variable name
  • @JsonbPropertyOrder to specify reverse lexicographical (Z to A) order for the JSON attributes

 

For more, check out Yasson, which is the reference implementation.
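To make the effect of those two annotations concrete, here is a plain-JDK sketch (no JSON-B runtime needed) of the payload shape they produce for this application, assuming, from the sample output later in the post, that the email field is renamed to emp_email while attributes come out Z to A:

```java
import java.util.Comparator;
import java.util.Map;
import java.util.TreeMap;
import java.util.stream.Collectors;

public class JsonbBehaviorSketch {

    // Emulates what @JsonbProperty (field rename: email -> emp_email) and
    // @JsonbPropertyOrder (reverse lexicographic attribute order) arrange
    // declaratively in the real application.
    public static String toJson(String name, String email, int salary) {
        Map<String, String> attrs = new TreeMap<>(Comparator.reverseOrder());
        attrs.put("name", "\"" + name + "\"");
        attrs.put("emp_email", "\"" + email + "\"");  // renamed attribute
        attrs.put("salary", String.valueOf(salary));
        return attrs.entrySet().stream()
                .map(e -> "\"" + e.getKey() + "\":" + e.getValue())
                .collect(Collectors.joining(",", "{", "}"));
    }

    public static void main(String[] args) {
        // prints {"salary":100,"name":"abhi","emp_email":"abhirockzz@gmail.com"}
        System.out.println(toJson("abhi", "abhirockzz@gmail.com", 100));
    }
}
```

In the real application the POJO just carries the annotations and JSON-B does the rest; this sketch only shows what the serialized result looks like.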

 

JPA 2.2 in action

 

The sample application uses the stream result feature added to the Query and TypedQuery interfaces, which makes it possible to use the JDK 8 Streams API to navigate the result set of a JPA (JPQL, native, etc.) query. For other additions in JPA 2.2, please check this.
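In JPA 2.2 the feature itself is just query.getResultStream(). As a runnable stand-in that needs no JPA provider, here is the same Streams-style navigation over an already-fetched result list (the names and the filtering step are invented for illustration):

```java
import java.util.List;
import java.util.stream.Collectors;

public class ResultStreamSketch {

    // Stand-in for JPA 2.2's query.getResultStream(): here the "result set"
    // is a pre-fetched list, so the example runs standalone.
    public static String navigate(List<String> resultSet) {
        return resultSet.stream()
                .filter(name -> name.length() > 4)  // process rows with Stream operations
                .map(String::toUpperCase)
                .collect(Collectors.joining(","));
    }

    public static void main(String[] args) {
        // prints ROCKZZ
        System.out.println(navigate(List.of("abhi", "rockzz", "john")));
    }
}
```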

Build the Docker image

 

 

Shortcut

 

Use an existing image from Docker Hub - docker pull abhirockzz/javaee-jsonb-jpa

 

Run on Oracle Container Cloud

 

You can use this section of one of my previous blogs, or the documentation (create a service, deploy), to get this up and running on Oracle Container Cloud. It's super simple.

 

Create a service where you reference the Docker image

 

 

Post service creation

 

 

Initiate a deployment... and that's it! You'll see something similar to this

 

 

Test things out

 

Please make a note of the Host IP of your Oracle Container Cloud worker node (basically a compute VM)

 

Fetch all employees

http://<OCCS_HOST_IP>:8080/javaee8-jsonb-jpa/

 

You will get a JSON payload with all employees

 

[
    {
        "salary": 100,
        "name": "abhi",
        "emp_email": "abhirockzz@gmail.com"
    },
    {
        "salary": 99,
        "name": "rockzz",
        "emp_email": "kehsihba@hotmail.com"
    }
]

 

 

Fetch an employee

 

http://<OCCS_HOST_IP>:8080/javaee8-jsonb-jpa/abhirockzz@gmail.com

 

You will see a JSON payload as a response

 

{
    "salary": 100,
    "name": "abhi",
    "emp_email": "abhirockzz@gmail.com"
}

 

Enjoy Java EE 8 and Glassfish!

 


This blog will demonstrate how to get started with a Redis based Java application

 

  • Run it on Oracle Application Container cloud and CI/CD using Oracle Developer cloud
  • Execute Integration tests using NoSQLUnit
  • Our Redis instance will run in a Docker container on Oracle Container cloud

 

 

 

 

Application

Here is a summary of the application

 

  • Exposes REST endpoints using Jersey
  • Uses Redis as the data store
  • Jedis is used as the Java client for Redis
  • NoSQLUnit is the framework used for integration testing

 

Here is the application

 

NoSQLunit

 

NoSQLUnit is an open source testing framework for applications that use NoSQL databases. It works on the concept of (JUnit) Rules and a couple of annotations. The rules handle both database lifecycle (start/stop) and state (seeding/deleting test data) management. In the sample application, we use it for state management of the Redis instance, i.e.

 

  • with the help of a JSON file, we define test data which will be seeded into Redis before our tests start, and then
  • use the @UsingDataSet annotation to specify our modus operandi (in this case, clean and insert)

 

Our test dataset in json format

 

 

NoSQLUnit in action

 

 

 

 

Setup

Redis on Oracle Container Cloud

  • Use the existing service or create your own (make sure you expose the default Redis port to the host IP) - documentation here
  • Start the deployment - documentation here
  • Note down the host IP of the worker node on which the Redis container is running

 

Configure Oracle Developer Cloud

We'll start by bootstrapping the application in Oracle Developer Cloud. Check this section for reference: Project & code repository creation. Once this is done, we can start configuring our build, which takes the form of a pipeline consisting of build, deployment, integration test, and tear-down phases.

 

Build & deploy phase

The code is built and deployed on Oracle Application Container cloud. Please note that we are skipping the unit test part in order to keep things concise

 

Build step

 

 

 

 

Post-build (deploy)

 

 

 

Deployment

 

At the end of this phase, our application will be deployed to Application Container Cloud; it's time to configure the integration tests.

 

Integration test phase

Our integration tests will run directly against the deployed application, using the Redis instance (on Oracle Container Cloud) which we set up earlier. For this:

  • we define another build job and
  • make sure that it's triggered after the build + deployment phase completes

 

Integration build job

 

 

 

Define the dependency

 

 

Tear Down phase

Thanks to Oracle Developer Cloud's integration with Oracle PaaS Service Manager (PSM), it's easy to add a PSMcli build step that invokes an Oracle PaaS Service Manager command line interface (CLI) command to stop our ACCS application once the pipeline has been executed. More details in the documentation.

 

 

 

Summary

We covered the following

 

  • Built a Java application on top of Redis
  • Orchestrated its build, deployment and integration test using Oracle Developer Cloud and Oracle Application Container Cloud
  • In the process, we also saw how it's possible to treat our infrastructure as code and utilize our cloud services efficiently

 

 

 


This blog will demonstrate how to get started with a simple MongoDB based application

 

  • Run it on Oracle Application Container cloud
  • Unit test and CI/CD using Oracle Developer cloud
  • Our MongoDB instance will run in a Docker container on Oracle Container cloud

 

 

 

Application

The sample project is relatively simple

 

  • It uses JPA to define the data layer, along with Hibernate OGM
  • Fongo (in-memory Mongo DB) is used for unit testing
  • Jersey (the JAX-RS implementation) is used to provide a REST interface

 

You can check out the project here

 

MongoDB, Hibernate OGM

MongoDB is an open source, document-based, distributed database. More information here. Hibernate OGM is a framework which helps you use JPA (Java Persistence API) to work with NoSQL stores instead of the RDBMSs that JPA was designed for.

 

  • It has support for a variety of NoSQL stores (document, column, key-value, graph)
  • NoSQL databases it supports include MongoDB (as demonstrated in this blog), Neo4j, Redis, Cassandra etc.

 

More details here

 

In this application

 

  • We define our entities and data operations (create, read) using plain old JPA
  • Hibernate OGM is used to speak JPA with MongoDB using the native Mongo DB Java driver behind the scenes. We do not interact with/write code on top of the Java driver explicitly

 

Here is a snippet from the persistence.xml which gives you an idea of the Hibernate OGM related configuration
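A representative persistence.xml fragment for Hibernate OGM with MongoDB is sketched below; the unit name and connection values are illustrative placeholders, and the property names should be checked against the Hibernate OGM documentation for your version:

```xml
<persistence xmlns="http://xmlns.jcp.org/xml/ns/persistence" version="2.1">
  <persistence-unit name="mongo-pu" transaction-type="JTA">
    <!-- Hibernate OGM provider, not the regular Hibernate ORM provider -->
    <provider>org.hibernate.ogm.jpa.HibernateOgmPersistence</provider>
    <properties>
      <property name="hibernate.ogm.datastore.provider" value="mongodb"/>
      <property name="hibernate.ogm.datastore.database" value="mongotest"/>
      <!-- in this blog's setup, host/port would come from ACCS environment variables -->
      <property name="hibernate.ogm.datastore.host" value="localhost"/>
      <property name="hibernate.ogm.datastore.port" value="27017"/>
    </properties>
  </persistence-unit>
</persistence>
```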

 

Setup

Let's configure/setup our Cloud services and get the application up and running...

MongoDB on Oracle Container Cloud

 

 

 

 

Oracle Developer Cloud

You will need to configure Developer Cloud for the continuous build as well as the deployment process. You can refer to previous blogs for the same (some of the details specific to this example are highlighted here).

 

References

 

Make sure you setup Oracle Developer Cloud to provide JUnit results

 

Provide Oracle Application Container Cloud (configuration) descriptor

 

As part of the deployment configuration, we will provide the deployment.json details to Oracle Developer Cloud; in this case, specifically for setting up the MongoDB coordinates in the form of environment variables. Oracle Developer Cloud will deal with the intricacies of the deployment to Oracle Application Container Cloud.

 

 

JUnit results in Oracle Developer Cloud

 

From the build logs

 

 

From the test reports

 

 

Deployment confirmation in Oracle Developer Cloud

 

 

Post-deployment status in Application Container Cloud

 

Note that the environment variables were seeded during deployment

 

 

Test the application

  • We use cURL to interact with our application REST endpoints, and
  • Robomongo as a (thick) client to verify data in Mongo DB

 

Check the URL for the ACCS application first

 

Add employee(s)

 

curl -X POST https://my-accs-app/employees -d 42:abhirockzz
curl -X POST https://my-accs-app/employees -d 43:john
curl -X POST https://my-accs-app/employees -d 44:jane

 

The request payload is a ':' delimited string with the employee ID and name.
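Server-side, parsing such a payload is a single delimited split; a sketch (the method name is hypothetical, not the sample project's actual code):

```java
public class PayloadParser {

    // Parse an "id:name" request body, e.g. "42:abhirockzz".
    // Splitting with limit 2 keeps any ':' inside the name intact.
    public static String[] parseEmployee(String payload) {
        String[] parts = payload.split(":", 2);
        if (parts.length != 2 || parts[0].isEmpty() || parts[1].isEmpty()) {
            throw new IllegalArgumentException("expected id:name, got: " + payload);
        }
        return parts;
    }

    public static void main(String[] args) {
        String[] emp = parseEmployee("42:abhirockzz");
        // prints id=42, name=abhirockzz
        System.out.println("id=" + emp[0] + ", name=" + emp[1]);
    }
}
```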

 

Get employee(s)

 

You will get back an XML payload in response

 

curl -X GET https://my-accs-app/employees - all employees
curl -X GET https://my-accs-app/employees/44 - specific employee (by ID)

 

 

Let's peek into MongoDB as well

 

  • mongotest is the database
  • EMPLOYEES is the MongoDB collection (equivalent to @Table in JPA)

 

 

Test the CI/CD flow

Make some code changes and push them to the Developer Cloud service Git repo. This should

  • Automatically trigger the build, which once successful will
  • Automatically trigger the deployment process, and
  • Redeploy the new application version to Application Container Cloud

 

 


It's time to take Java EE 8 for a spin and try out the Glassfish 5 builds on Docker, using Oracle Container Cloud. The Java EE specifications covered:

 

  • Server Sent Events in JAX-RS 2.1 (JSR 370) - new in Java EE 8
  • Asynchronous Events in CDI 2.0 (JSR 365) - new in Java EE 8
  • Websocket 1.1 (JSR 356) - part of the existing Java EE 7 specification

 

 

Application

 

Here is a quick summary of what's going on

 

  • A Java EE scheduler triggers asynchronous CDI events (fireAsync())
    • These CDI events are qualified (using a custom Qualifier)
    • It also uses a custom java.util.concurrent.Executor (based on the Java EE Concurrency Utilities ManagedExecutorService), thanks to the NotificationOptions supported by the CDI API
  • Two (asynchronous) CDI observers (@ObservesAsync): a JAX-RS SSE broadcaster and a WebSocket endpoint
  • The SSE and WebSocket endpoints cater to their respective clients
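The CDI 2.0 calls themselves (fireAsync(), NotificationOptions, @ObservesAsync) need a CDI container to run. As a plain-JDK analogy of the dispatch model just described — not the CDI API itself — each observer is simply scheduled on the supplied executor:

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.function.Consumer;

public class AsyncEventSketch {

    // Deliver an event to every observer on the given executor: roughly what
    // Event.fireAsync(payload, NotificationOptions.ofExecutor(executor))
    // arranges for @ObservesAsync methods in CDI 2.0.
    public static CompletableFuture<Void> fireAsync(String event,
                                                    List<Consumer<String>> observers,
                                                    ExecutorService executor) {
        CompletableFuture<?>[] deliveries = observers.stream()
                .map(obs -> CompletableFuture.runAsync(() -> obs.accept(event), executor))
                .toArray(CompletableFuture[]::new);
        return CompletableFuture.allOf(deliveries);  // completes when all observers ran
    }

    public static void main(String[] args) {
        ExecutorService executor = Executors.newFixedThreadPool(2);
        List<Consumer<String>> observers = List.of(
                e -> System.out.println("SSE broadcaster got: " + e),
                e -> System.out.println("WebSocket endpoint got: " + e));
        fireAsync("tick", observers, executor).join();
        executor.shutdown();
    }
}
```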

 

Notice the asynchronous events running in Managed Executor service thread


You can choose to let things run in the default (container) chosen thread

 


 

Build the Docker images

 

Please note that I have used my personal Docker Hub account (abhirockzz) as the registry. Feel free to use any Docker registry of your choice

 

git clone https://github.com/abhirockzz/cdi-async-events.git
mvn clean install
docker build -t abhirockzz/gf5-nightly -f Dockerfile_gf5_nightly .
docker build -t abhirockzz/gf5-cdi-example -f Dockerfile_app .

 

Push it to a registry

 

docker push abhirockzz/gf5-cdi-example

 

Run in Oracle Container Cloud

 

Create a service

 

 

 

Deploy it

 

 

You will see this once the container (and the application) start...

 

 

 

Drill down into the (Docker) container, check the IP of the host where it's running, and note it down

Test it

 

Make use of the Host IP you just noted down

 

http://<occs_host_ip>:8080/cdi-async-events/events/subscribe - You should see a continuous stream of (SSE) events

 


 

Pick a WebSocket client and use it to connect to the WebSocket endpoint ws://<occs_host_ip>:8080/cdi-async-events/

 

You will see the same event stream... this time, delivered by a Websocket endpoint

 

 

 

You can try this with multiple clients - for both SSE and Websocket

 

Enjoy Java EE 8 and Glassfish!

 


This blog walks through an example of how to create a test pipeline which incorporates unit as well as integration testing, in the cloud. What's critical to note is that the cloud service instances (for testing) are started on demand and then stopped/terminated after test execution:

  • Treat infrastructure as code and control it within our pipeline
  • Pay for what you use = cost control

 

We will be leveraging the following Oracle Cloud services

  • Oracle Developer Cloud
  • Oracle Database Cloud
  • Oracle Application Container Cloud

 

 

 

 

 

Oracle Developer Cloud: key enablers

The following capabilities play a critical role

 

The below mentioned features are available within the Build module

 

  • Integration with Oracle PaaS Service Manager (PSM): It's possible to add a PSMcli build step that invokes Oracle PaaS Service Manager command line interface (CLI) commands when the build runs. More details in the documentation
  • Integration with SQLcl: This makes it possible to invoke SQL statements on an Oracle Database when the build runs. Details here

 

Application

The sample application uses JAX-RS (the Jersey implementation) to expose data over REST, and JPA as the ORM solution to interact with the Oracle Database Cloud service (more on GitHub)

 

 

Here is the test setup

 

Tests

 

Unit

There are two different unit tests in the application, which use the Maven Surefire plugin

  • Using an in-memory/embedded (Derby) database: this is invoked using mvn test
  • Using a (remote) Oracle Database Cloud service instance: this test is activated using a specific profile in the pom.xml and is executed using mvn test -Pdbcs-test

 

 

Extract from pom.xml
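A representative profile for such a setup is sketched below; the profile ID matches the -Pdbcs-test flag above, but the test-class include pattern is an assumption, not the project's actual configuration:

```xml
<profiles>
  <profile>
    <!-- activated with: mvn test -Pdbcs-test -->
    <id>dbcs-test</id>
    <build>
      <plugins>
        <plugin>
          <groupId>org.apache.maven.plugins</groupId>
          <artifactId>maven-surefire-plugin</artifactId>
          <configuration>
            <!-- run only the tests targeting the remote DB instance (hypothetical pattern) -->
            <includes>
              <include>**/*DBCSTest.java</include>
            </includes>
          </configuration>
        </plugin>
      </plugins>
    </build>
  </profile>
</profiles>
```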

 

 

 

Integration

In addition to the unit tests, we have an integration test layer, which is handled using the Maven Failsafe plugin

 

 

It's invoked by mvn integration-test or mvn verify

 

 

Packaging

It's handled using Maven Shade plugin (fat JAR) and Maven assembly plugin (to create a zip file with the ACCS manifest.json)

 

Developer Cloud service configuration

 

Setup

Before we dive into the details, let’s get a high level overview of how you can set this up

 

Project & code repository creation

Please refer to the Project & code repository creation section in the Tracking JUnit test results in Developer Cloud service blog or check the product documentation for more details

 

Configure source code in Git repository

Push the project from your local system to the Developer Cloud Git repo you just created. We will do this via the command line, and all you need is a Git client installed on your local machine. You can use Git or any other tool of your choice.

 

cd <project_folder> //where you unzipped the source code
git init
git remote add origin <developer_cloud_git_repo>
//e.g. https://john.doe@developer.us.oraclecloud.com/developer007-foodomain/s/developer007-foodomain-project_2009/scm/sample.git
git add .
git commit -m "first commit"
git push -u origin master //Please enter the password for your Oracle Developer Cloud account when prompted

 

Once this is done, we can now start configuring our Build

 

  • The pipeline is divided into multiple phases each of which corresponds to a Build
  • These individual phases/builds are then stitched together to create an end-to-end test flow. Let’s explore each phase and its corresponding build configuration

 

Phases

 

Unit test: part I

The JPA logic is tested using the embedded Derby database. This job links to the Git repo where we pushed the code and also connects to the Oracle Maven repository

 

 

 

 

The build step invokes Maven

 

 

The post-build step

  • Invokes the next job in the pipeline
  • Archives the test results and enables JUnit test reports availability

 

 

 

 

 

Bootstrap Oracle Database Cloud service

 

  • This phase leverages the PSMcli to first start the Oracle Database Cloud  service and then,
  • SQLcl to create the table and load it up with test data. It is invoked by the previous job

 

 

Please note that the PSM command is asynchronous in nature and returns a Job ID, which you can use (within a shell script) to poll the status of the job.

 

Here is an example of such a script

 

VALUE=`psm dbcs stop --service-name test`

echo $VALUE

#Split on ':' - the Job ID is on the right side of the ':'
OIFS=$IFS
IFS=':'
JSONDATA=${VALUE}

#skip over the left side of the ':' to get the Job ID
COUNTER=0
for X in $JSONDATA
do
  if [ "$COUNTER" -eq 1 ]
  then
    #clean the string, removing leading whitespace and tabs
    X=$(echo $X | sed -e 's/^[ \t]*//')
    JOBID=$X
  else
    COUNTER=$(($COUNTER+1))
  fi
done
IFS=$OIFS

echo "Job ID is "$JOBID

#Repeat the check until the status contains SUCCEED
PSMSTATUS=-1
while [ $PSMSTATUS -ne 0 ]; do

  CHECKSTATUS=`psm dbcs operation-status --job-id $JOBID`

  if [[ $CHECKSTATUS == *"SUCCEED"* ]]
  then
    PSMSTATUS=0
    echo "PSM operation Succeeded!"
  else
    echo "Waiting for PSM operation to complete"
    sleep 60
  fi
done

 

 

Here is the SQLcl configuration which populates the Oracle Database Cloud service table

 

 

 

 

 

Unit test: part II

  • Runs tests against the Oracle Database Cloud service instance which we just bootstrapped
  • Triggers application deployment (to Oracle Application Container Cloud)
  • and, like the previous job, this one also links to the Git repo and connects to the Oracle Maven repository

 

Certain values for the test code are passed in as parameters

 

 

 

The build step involves invocation of a specific (Maven) profile defined in the pom.xml

 

 

 

The post build section does a bunch of things

  • Invokes the next job in the pipeline
  • Archives the deployment artifact (in this case, a ZIP file for ACCS)
  • Archives the test results and enables test reports availability
  • Invocation of the Deployment step to Application Container Cloud

 

 

 

 

Integration test

Now that we have executed the unit tests and our application is deployed, it's time to execute the integration test against the live application. In this case, we test the REST API exposed by our application.

 

 

 

Build step invokes Maven goal

 

 

We use the HTTPS proxy in order to access external URLs (the ACCS application in this case) from within the Oracle Developer Cloud build machines

 

The post-build section invokes two subsequent jobs (both of which can run in parallel), as well as the test result archiving

 

 

 

 

Tear Down

 

  • PSMcli is used to stop the ACCS application and runs in parallel with another job which uses SQLcl to clean up the data in Oracle Database Cloud (drop the table)
  • After that, the final tear down job is invoked, which shuts down the Oracle Database Cloud service instance (again, using PSMcli)

Finally, shut down the Oracle Database Cloud service instance

Total recall...

 

  • Split the pipeline into phases and implement them using Build jobs - the choice of granularity is up to you, e.g. you can invoke the PSMcli and SQLcl steps in the same job
  • Treat infrastructure (cloud services) as code and manage it from within your pipeline - Developer Cloud makes this easy across the entire Oracle PaaS platform through its PSMcli integration

 

**The views expressed in this post are my own and do not necessarily reflect the views of Oracle

With Oracle Developer Cloud Service, you can integrate your existing Jenkins or Hudson setup - whether it is on-premises or cloud based. Currently, there are three different integration points, which are enabled using Webhooks. Let's look at each of these.

 

Jenkins Build notifications

This is made possible by an inbound Webhook which accepts build notifications from a remote Jenkins server

 

Configuration summary

  • Create a Webhook in Developer Cloud (type: Jenkins - Notification Plugin)
  • Configure your external Jenkins to use the URL provided in the Developer Cloud Service Webhook configuration

 

Here is a snapshot of the configuration in Oracle Developer Cloud

 

 

This is how the resulting Activity Stream looks in Oracle Developer Cloud. Clicking the hyperlinks in the Activity Stream will redirect you to artifacts in the remote Jenkins instance, e.g. build, commit, Git repository etc.

 

 

 

You can refer to this documentation section for more details.

 

Jenkins Build Trigger integration

You can configure an outbound Webhook which triggers a build on a remote Hudson or Jenkins build server when a Git push occurs in the selected repository in Developer Cloud

 

Configuration summary

  • Configure external Jenkins to allow remote invocation of builds
  • Create a Webhook of type Hudson/Jenkins - Build Trigger
    • Provide basic info, configure authentication and trigger

 

Here is a snapshot of the configuration in Oracle Developer Cloud

 

 

 

 

You can refer to this documentation section for more details.

 

Jenkins Git Plugin integration

This is another outbound Webhook which can notify a Hudson or Jenkins build job in response to a Git push in Developer Cloud service. The difference between this and the previous Webhook is that this one triggers builds of all the jobs configured for the same Git repository (in Developer Cloud service) as the one sent in the Webhook payload

 

Configuration summary

  • Create a Webhook of type Hudson/Jenkins Git Plugin
  • Provide the Git repository details as a part of the external Jenkins configuration and activate SCM polling

 

 

You can refer to this documentation section for more details.

 

**The views expressed in this post are my own and do not necessarily reflect the views of Oracle.

In this blog we will look at

 

 

 

Application

The application is a simple one which fetches the price of a stock from the cache. It demonstrates other features (in addition to basic caching) such as

  • Cache loader – if the key (stock name) does not exist in the cache (since it was never searched for or has expired), the cache loader logic kicks in and fetches the price using a REST call to an endpoint
  • Serializer – Allows us to work with our domain object (Ticker) and takes care of the transformation logic
  • Expiry – A cache-level expiry is enforced after which the entry is purged from the cache
  • Metrics – get common metrics such as cache size, hits, misses etc.
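The interplay of these features can be illustrated with a plain-Java sketch of the read-through/expiry pattern. This is not the Application Container Cloud caching API - the class name, counters and the hard-coded load() stub are hypothetical, standing in for PriceLoader's REST call:

```java
import java.util.concurrent.ConcurrentHashMap;

// Minimal read-through cache with per-entry expiry and hit/miss counters
// (illustrative only)
class ReadThroughCache {

    private static class Entry {
        final String value;
        final long loadedAt;
        Entry(String value, long loadedAt) { this.value = value; this.loadedAt = loadedAt; }
    }

    private final ConcurrentHashMap<String, Entry> store = new ConcurrentHashMap<>();
    private final long expiryMillis;
    private long hits;
    private long misses;

    ReadThroughCache(long expiryMillis) { this.expiryMillis = expiryMillis; }

    // Hypothetical loader stub: the real application (PriceLoader) would
    // fetch the price via a REST call here
    protected String load(String ticker) {
        return ticker + ":100.0";
    }

    public String get(String ticker) {
        Entry e = store.get(ticker);
        if (e == null || System.currentTimeMillis() - e.loadedAt > expiryMillis) {
            misses++; // never loaded or expired: the loader kicks in
            e = new Entry(load(ticker), System.currentTimeMillis());
            store.put(ticker, e);
        } else {
            hits++;
        }
        return e.value;
    }

    public long hits() { return hits; }
    public long misses() { return misses; }
}
```

The first lookup of a key is a miss that triggers the loader; lookups within the expiry window are hits served from the cache.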

 

Code

Let’s look at some code snippets for our application and each of the features mentioned above

 

Project is available on Github

 

Cache operations

This example exposes the get cache operation over a REST endpoint implemented using Jersey (JAX-RS API)

 

 

Cache Loader

PriceLoader.java contains the logic to fetch the price from an external source

 

 

Serializer

TickerSerializer.java converts between Ticker.java and its String representation
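The actual TickerSerializer is in the Github project; the general idea can be sketched with a hypothetical Ticker carrying a symbol and a price, serialized to a delimited String and parsed back:

```java
// Illustrative serializer sketch: Ticker <-> "SYMBOL|PRICE".
// The project's actual TickerSerializer and Ticker classes may differ.
class TickerCodec {

    static class Ticker {
        final String symbol;
        final double price;
        Ticker(String symbol, double price) { this.symbol = symbol; this.price = price; }
    }

    static String serialize(Ticker t) {
        return t.symbol + "|" + t.price;
    }

    static Ticker deserialize(String s) {
        String[] parts = s.split("\\|"); // split on the literal pipe delimiter
        return new Ticker(parts[0], Double.parseDouble(parts[1]));
    }
}
```

This kind of codec is what lets the rest of the application work with the Ticker domain object while the cache stores a String.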

 

 

 

 

Expiry

Purges the cache entry when this threshold is hit, and causes the cache loader to be invoked if the expired entry is looked up (get) again

 

 

Metrics

Many cache metrics can be extracted – common ones are exposed over a REST endpoint

Some of the metrics are global and others are not. Please refer to the CacheMetrics javadoc for details.

 

 

Setup

 

Oracle Application Container Cloud

The only setup required is to create the Cache. It’s very simple and can be done quickly using the documentation.

 

Please make sure that the name of the cache is the same as the one used in the code and configuration (Developer Cloud), i.e. test-cache. If not, please update the references

 

Oracle Developer Cloud

You will need to configure Developer Cloud for the build as well as the continuous deployment process. You can refer to previous blogs for the same - some of the details specific to this example are highlighted here

 

References

 

Provide Oracle App Container Cloud (configuration) descriptors

 

  • The manifest.json provided here will override the one in your zip file (if any) - it's not compulsory to provide it here
  • Providing the deployment.json details is compulsory (in this CI/CD scenario) since it cannot be included in the zip file

 

 

 

Deployment confirmation in Developer Cloud

 

 

 

Status in Application Container Cloud

 

Application URL has been highlighted

 

 

 

 

 

Test the application

Check price

Invoke a HTTP GET (use curl or browser) to the REST endpoint (check the application URL) e.g. https://acc-cache-dcs-domain007.apaas.us6.oraclecloud.com/price/ORCL

 

 

If you try fetching the price of the stock after the expiry (default is 5 seconds), you should see a change in the time attribute (and the price as well - if it has actually changed)

 

Check cache metrics

 

Invoke a HTTP GET (use curl or browser) to the REST endpoint (check the application URL) e.g. https://acc-cache-dcs-domain007.apaas.us6.oraclecloud.com/metrics

 

 

Test the CI/CD flow

Make some code changes and push them to the Developer Cloud service Git repo. This should

  • Automatically trigger the build, which once successful will
  • Automatically trigger the deployment process, and
  • Redeploy the new application version to Application Container Cloud

 

Additional reading/references

 

 

 

**The views expressed in this post are my own and do not necessarily reflect the views of Oracle.

Introduction

An application programming interface (API) is an interface to a service at an endpoint that provides controlled access to a business process or data. Businesses today are treating APIs as a primary product and are adopting an "API First" strategy for increased efficiency, revenue, partner contribution and customer engagement. These companies want to expose their core business capabilities as APIs to partners to bring in more revenue, while at the same time securing their core data and processes, which enables faster service delivery at lower cost.

 

APIs, Microservices and Cloud

Microservices is an architectural style that is increasingly being used for building cloud native applications, where each application is built as a set of services. These services communicate with one another through contractually agreed upon interfaces – APIs. It is an alternative architecture for building applications which provides a better way of decoupling components within an application boundary.

 

The number and granularity of APIs that a microservice-based application exposes demand robust API management that involves creating and publishing APIs, monitoring their life cycle and enforcing usage policies in a secure and scalable environment.

 

Also, with more and more enterprises of all sizes leveraging cloud platforms to build innovative applications, effective API management in the cloud and/or on-premises is pivotal to meet their business and customer needs.

 

Oracle API Platform Cloud Service

Oracle's API Platform Cloud Service (API Platform CS) provides an integrated platform to build, expose and monitor APIs to backend services, including capabilities like security enforcement, message routing and usage tracking. This blog will introduce you to the various capabilities offered by Oracle API Platform Cloud Service with the help of a simple use case. The following diagram illustrates the architecture overview of Oracle's API Platform CS.

 

apics_diagram.png

 

API Platform Cloud Service offers a centralized API design with distributed API runtime which makes it easy to manage, secure, and publicize services for application developers by offering innovative solutions.

There are several components in the Oracle API Platform Cloud Service - Cloud Service Console, the Gateway, the Management Portal, and the Developer Portal. Following is a short description of each of these components:

  • API Platform Cloud Service Console: Provision new service instances, start and stop service instances, initiate backups, and perform other life cycle management tasks.
  • Gateway: This is the security and access control runtime layer for APIs. Each API is deployed to a gateway node from the Management Portal or via the REST API.
  • Management Portal: This is used to create and manage APIs, deploy APIs to gateways, manage gateways, and create and manage applications. You can also manage and deploy APIs and manage gateways with the REST API.
  • Developer Portal: Application developers subscribe to APIs and get the necessary information to invoke them from this portal.

 

Personas of API life cycle

  • API Designer/API Product Manager - Collects customer requirements, documents the API and gets agreement with the consumer on the design of the API
  • API Manager/Implementer - Creates, tests, deploys, monitors and manages APIs, apply policies supporting the design and ensuring security
  • Gateway Manager - Deploys and configures gateway nodes, reviews and approves API deployment requests, monitors and manages the gateways
  • API Consumer - Application developer who consumes APIs to meet the requirements of an application. Searches the API catalog to identify existing APIs, registers desired APIs with application.

 

Sample Use case

In this blog we will build a simple API called Book Store that exposes the functions of an online book store. This blog emphasizes the API-implementation-specific features of API Platform CS. To keep things simple, we will mock one simple function - list books - which returns a list of books along with their titles and authors, and expose it as an API. This blog does not cover all the features of Oracle API Platform Cloud Service; please refer here for comprehensive documentation.

 

Note: The steps defined in the subsequent sections of this blog assume that you have access to an instance of Oracle API platform Cloud service with Gateway configurations and appropriate user grants to be able to implement and deploy APIs.

 
Create an API

In this section you create an entry for an API you want to manage in the API Platform CS Management Portal.

1) Sign in to the Management Portal as a user with the API Manager role

Management portal login page.png

2) Click on the “Create API” button to create a new API by providing name, version and description

Create API.png

3) The newly created Book Store API should be listed under the APIs page as shown below:

BookStore_API.png

Register Application to an API

API consumers register their applications to APIs they want to use. Application developers use the Developer Portal to register to APIs while API managers use Developer Portal and the Management Portal. In this section you register an application to the BookStore API.

 

1) From the APIs page, click the API you want to register an application to and Click on Registrations tab. Click on Register Application to register an application to this API.

Register_Application.png

2) The following Register Application page comes up listing all the existing applications from which you can choose or you can create a new application.

Register_Application_1.png

3) In this case, click on the Create an application link to create a new application and provide the details as in the below screenshot. Click Register button to register the Books App application with Book Store API

Register_Application_2.png

4) Once the application is registered with the API, it should be displayed under the “Registrations” tab as follows. Also notice that you can suspend this registration or de-register this application by clicking the respective buttons that appear when you hover on the application name. You can also approve or reject a developer’s request to register their application to an API from this page.

Register_Application_3.png

5) Each application that is registered is issued an App key which can be sent along the request to ensure that access to the API is granted only to the registered applications. Click on the Applications tab and click on the Books App application to view the application details along with the App key. You can also re-issue the App key by clicking on the Reissue key button.

Application_Key.png

Implement the API

Now that we have created an API and registered an application that can access the API, in this section we implement the API by applying policies to configure the request and response flows.

 

Click on the BookStore API to start implementing the API. The following page comes up with API Implementation activity highlighted.

API_Implementation.png

As a first step of API implementation, we configure the API endpoints. Endpoints are locations at which an API sends or receives requests or responses. APIs in API Platform CS have two endpoints in the request flow:

  1. API Request
  2. Service Request

 

Configure API Request URL

The API Request URL is the endpoint at which the gateway will receive requests from users or applications for your API

1) When you hover on to the API request section, you will see an “Edit” button using which you can configure the API request URL.

API_Request_URL_1.png

2) Click Next to configure the URL as follows. In the API Endpoint URL field, provide the endpoint URL for the Book Store API, apply and save the changes.

API_Request_URL_2.png

In this case we have specified /bookstore/books to be the relative URI. You can also choose the protocol to be HTTP or HTTPS or both.

 

Create a backend service

We need to create a backend service that would process the requests forwarded by the bookstore/books API. As mentioned earlier, we create a mock implementation of this service using Apiary. Oracle Apiary provides you with the ability to design APIs using either API Blueprint or Swagger 2.0. Please refer to http://apiary.io to learn more about Oracle Apiary, its features and to register for a free account.

 

Note: This task assumes that you have already registered with Apiary and have valid access credentials to login into Apiary.

 

1) Navigate to http://apiary.io  and Sign In using your account

apiary_login.png

2) Create a new API by clicking on Create New API project as follows, you can choose to design your API using API Blueprint or with Swagger. In this case we use API Blueprint, click on Create API button to create the Book Store API.

apiary_new_api.png

3) The API editor opens with a sample API definition which can be edited to define the API for /books as follows:

For the sake of simplicity, just replace the content in the left window with the following text. We have mocked the implementation of /bookstore/books service by providing two book entries.

 

FORMAT: 1A

HOST: http://bookstore.apiblueprint.org/

# BookStoreAPI

Bookstore is a simple API allowing consumers to view all the books along with their title and author.

## Books Collection [/bookstore/books]

### List All Books [GET]

+ Response 200 (application/json)

        [
            {
                "Title": "Thus Spoke Zarathustra",
                "Author": "Friedrich Nietzsche"
            },
            {
                "Title": "The Fountainhead",
                "Author": "Ayn Rand"
            }
        ]

 

4) When you click on Save button, you should see the right side window updated accordingly, based on the content you just provided.

apiary_bookstore_api.png

5) When you click on the List All Books link, you will see the following page with a mock server URL (https://private-2dd84-bookstoreapi.apiary-mock.com/bookstore/books , note that this URL would be different when you try to execute this example) for the /bookstore/books API service implementation

apiary_bookstore_api_1.png

6) Click on the Try button to invoke the mock service URL and validate the output. You will see a HTTP 200 response with the following output on the right side window. This confirms that the mock service URL is returning the book entries in response.

apiary_bookstore_api_2.png

 

Configure Service Request URL

The service request is the URL at which your backend service receives requests. The gateway routes the request to this URL when a request meets all policy conditions to invoke your service.

 

1) Click on the “Edit” button you see when you hover on the Service Request section on the API Implementation page

service_request_url.png

2) Enter the policy name and provide description and click on Next as shown below

service_request_url_2.png

3) In the Backend service URL input field, provide the mock server URL that was noted in step #5 in the above section as follows. Apply and Save the changes.

service_request_url_3.png

Apply Policies

You can apply policies to an API to secure, throttle, route, or log requests sent to it. Requests can be rejected depending on the policies applied, if they do not meet criteria specified for each policy.

 

1) The API Implementation page lists all the policies currently supported as follows, you can apply any policy by hovering onto the policy name and clicking on the Apply button.

Policy_list.png

 

Configuring all policies is beyond the scope of this blog; we apply a couple of security and traffic management policies to the BookStore API to illustrate how to manage APIs by applying policies. Please refer to the Applying Policies section of the API Platform CS documentation for more details.

 

Note: Policies in the request flow can be used to secure, throttle, route, manipulate, or log requests before they reach the backend service, while policies in the response flow manipulate and log responses before they reach the requester.

 

2) Let us say we want to restrict the BookStore API to be consumed only by a specific application. A key validation policy can be applied on the request flow to ensure that requests from unregistered (anonymous) applications are rejected.

  • As discussed in Register Application to an API section above applications can be registered to an API and a unique App key is generated and assigned for each application.
  • These keys can be distributed to clients when they register to use an API on the Developer Portal.
  • At runtime, if this key is not present in the given header or query parameter, or if the application is not registered, the request is rejected.

To apply this policy, hover on the key validation policy under Security and click on the Apply button. In the policy configuration page, give the policy a name; you can also specify the order in which this policy is triggered by selecting a policy from the "Place after the following policy" drop down. Currently, only the API Request policy is configured.

Key_validation_1.png

When you click the Next button you can specify the key delivery approach. The application key can be passed in a header or as a query parameter. In this case we choose Header and specify the key name as "api-key"; click on Apply and save the changes. At runtime, the request is parsed for this key name and, if found, its value is validated against the registered application's App key value. The request is processed only if the values match; otherwise it is rejected.

Key_validation_2.png
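Conceptually, the gateway-side check amounts to something like the following sketch. The class and method names here are hypothetical - the real validation is performed by the policy itself, not by your code:

```java
import java.util.Map;
import java.util.Set;

// Conceptual sketch of key validation: reject a request unless the
// "api-key" header matches the App key of a registered application.
class KeyValidator {

    private final Set<String> registeredAppKeys;

    KeyValidator(Set<String> registeredAppKeys) {
        this.registeredAppKeys = registeredAppKeys;
    }

    // Returns 200 if the key is present and registered, 401 otherwise
    int validate(Map<String, String> requestHeaders) {
        String key = requestHeaders.get("api-key");
        if (key == null || !registeredAppKeys.contains(key)) {
            return 401; // missing or unregistered key
        }
        return 200;
    }
}
```

This mirrors the behavior you will observe in the invocation scenarios later: a missing or wrong key yields 401, a registered key yields 200.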

3) Let us apply another policy to restrict the number of requests our BookStore API can accept within a specific time period. An API Rate Limiting policy can be used to limit the total number of requests an API allows over a time period that you specify; this period can be defined in seconds, minutes, hours, days, weeks, or months.

 

To configure this policy, hover on the API Rate Limiting policy under Traffic Management and click the Apply button. In the resulting page, provide a policy name and specify the order in which this policy should be triggered; in this case we want it triggered after the key validation policy.

API_Rate_Limiting_1.png

Click Next to configure the time period and the number of requests. We want the Gateway to reject requests for this API if they exceed 5 per minute.

API_Rate_Limiting_2.png

Note: Other traffic management related policies like API throttling can be implemented to delay request processing if they exceed the set threshold. Please refer to the API platform CS documentation for more details.
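To clarify what the gateway enforces, here is a minimal fixed-window limiter sketch matching the 5-requests-per-minute configuration above. It is illustrative of the policy's behavior only, not the gateway's actual implementation:

```java
// Fixed-window rate limiter sketch: allow at most maxRequests per windowMillis
class FixedWindowRateLimiter {

    private final int maxRequests;
    private final long windowMillis;
    private long windowStart;
    private int count;

    FixedWindowRateLimiter(int maxRequests, long windowMillis) {
        this.maxRequests = maxRequests;
        this.windowMillis = windowMillis;
    }

    // nowMillis is passed in (rather than read from the clock) to keep
    // the sketch deterministic
    synchronized boolean allow(long nowMillis) {
        if (nowMillis - windowStart >= windowMillis) {
            windowStart = nowMillis; // new window: reset the counter
            count = 0;
        }
        return ++count <= maxRequests;
    }
}
```

With `new FixedWindowRateLimiter(5, 60_000)`, the 6th request inside a minute is rejected, and requests are accepted again once a new window starts - exactly the behavior we verify in Scenario # 3 below.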

The API implementation should look like below after configuring the above policies:

API_Implementation_1.png

 

Deploy the API

In this section you will deploy the BookStore API to a gateway and activate the API. To deploy an endpoint, API Managers must have the Manage API or Deploy API grant for the API in addition to the Deploy to Gateway or Request Deployment to Gateway grant for a gateway.

 

Note:  This task assumes that the gateway nodes are configured and the user has the required grants to be able to deploy the API to the gateway. Please refer to Managing Gateways section for more details on configuring Gateways and their topology and Grants section for more details on granting users access to resources.

 

1) To deploy the API to the gateway, click on the Deployments icon just below the API Implementation.

deploy_api_1.png

2) Click the Deploy API button; the resulting page lists all the gateways configured and allows you to choose the gateway to which you want to deploy this API. You can also choose the initial deployment state of this API

deploy_api_2.png

Please note that gateways can be configured anywhere - on Oracle Cloud, a third-party cloud, or on-premises.

 

3) When you click the Deploy button, a request for API deployment is submitted. Once the deployment is successful, the Deployments page looks as follows, showing the Gateway Load Balancer URL, which is your endpoint for sending API requests.

deploy_api_3.png

 

Invoke the API

Now that you have successfully implemented your API and deployed the API to the gateway, you can send requests to the API and validate if the policies work as intended. 

You can use Postman or any other REST client to send requests to the API. In this case we use Postman to invoke the API.

 

Scenario # 1

Open Postman and initiate a GET request to the Load Balancer URL that has been shown on the Deployments page.

The request to the API fails with error 401 (Unauthorized access); this is because the key validation policy was triggered and looked for a header called "api-key", which we did not set while submitting the request.

invoke_api_1.png

Scenario # 2

Add a request header with key as “api-key” and provide the App key of the registered BooksApp as value and submit the request. This will return a couple of book entries which we have mocked as part of the API service implementation in Apiary.

invoke_api_2.png
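The same request can be issued from plain Java as well as from Postman or curl. The sketch below only builds the request; the endpoint and key values are placeholders to be replaced with your gateway Load Balancer URL and the App key issued for your registered application:

```java
import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URL;

// Builds (but does not send) a GET request carrying the App key in the
// "api-key" header, as the key validation policy expects
class ApiClient {

    static HttpURLConnection buildRequest(String endpoint, String appKey) throws IOException {
        HttpURLConnection conn = (HttpURLConnection) new URL(endpoint).openConnection();
        conn.setRequestMethod("GET");
        conn.setRequestProperty("api-key", appKey);
        // callers then invoke conn.getResponseCode(): 200 with a valid key,
        // 401 when the key is missing or unregistered
        return conn;
    }
}
```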

Scenario # 3

Submit 5 requests to this API within a one-minute period; you will see the response from the API retrieving the book entries. When the request is made for the 6th time within 1 minute, the API Rate Limiting policy that we configured gets triggered and rejects the request as shown below:

invoke_api_3.png

Requests submitted after some time (when the invocation rate returns to the acceptable limit) are accepted and processed as usual, until any policy execution fails again.

 

Once the API has been tested, you can publish it to the Developer Portal, from where developers can discover the API and register apps for consuming it. The API Platform Cloud Service Management Portal also provides analytics on who is using your API, how APIs are being used, and how many requests are rejected, along with other metrics like request volumes. Discussion of these aspects is beyond the scope of this blog.

 

Conclusion

This blog discussed the concepts of API management and its importance in the context of microservices and cloud native applications. It provided an overview of Oracle API Platform Cloud Service and briefly described its key components. Using a simple use case, we illustrated how APIs can be created, configured, deployed, consumed and monitored using Oracle API Platform Cloud Service. This blog covered only specific aspects of the service; please refer to the Oracle API Platform CS documentation for further details.

 

The views expressed in this post are my own and do not necessarily reflect the views of Oracle.

It's pretty easy to get started with a Jenkins instance on Oracle Container Cloud

 

 

Setup Jenkins service

 

You can leverage the out-of-the-box Service (name Jenkins) provided in OCC or create your own. In this example, we will be creating a new service (name yet-another-jenkins)

 

 

Use the existing Docker image or another image (tag) of your choice (e.g. from Docker Hub)

 

Map volumes

 

If you want to save your Jenkins data (e.g. plugins, configuration etc.) across container restarts, you need to map the container path to a persistent volume on your host

 

The Docker Hub Jenkins image stores its data in /var/jenkins_home

This can be done easily since Oracle Container Cloud allows SSH access into the worker nodes as well (in addition to the Manager node)

 

All you need to do is the following

 

SSH into your worker node

 

More details here

 

Create the Jenkins data directory

 

This needs to be done on the worker node and permissions need to be assigned

 

 

cd /home/opc
mkdir jenkins
sudo chmod 777 jenkins

 

Configure the volume in the OCCS Jenkins service

 

More info here

 

 

Deploy the service in OCC

That's it.. Now just click Deploy to start your Jenkins container. You should see it in the Deployments list

 

 

Get started

 

Get the administrator password

 

Access the running Jenkins container in OCC and click on View Logs (scroll down to see the password)

 

 

Access Jenkins

 

The Jenkins container exposes port 9002 (by default). Just browse to http://<occs-host-ip>:9002/ and enter the password to get started

 

 

Configure Jenkins as per your requirements....

 

 

**The views expressed in this post are my own and do not necessarily reflect the views of Oracle.

I will be using one of my previous blogs as a background for this one. Although the concept remains the same, the execution is different. We will leverage Oracle Event Hub Cloud service as the managed Kafka broker. Our producer and consumer microservices will run on Oracle Application Container Cloud and leverage its Service Binding feature for Oracle Event Hub Cloud. CI/CD for both these applications will be handled by Oracle Developer Cloud service. Specific focus areas include

 

  • Overview of how to get started with Oracle Event Hub cloud including bootstrapping a cluster and topic
  • How to use the Oracle Event Hub Cloud service binding available in Application Container Cloud
  • Configuring Oracle Developer Cloud Service to achieve CI/CD to Application Container Cloud

 

 

Overview

Let's briefly look at the role the individual cloud services play

 

Oracle Event Hub cloud

This is a fully managed Platform-as-a-Service which makes it dead simple to setup and work with an Apache Kafka cluster

  • You can easily setup clusters and scale them elastically
  • Quickly create topics and add/remove partitions
  • It also provides a REST interface as well as command line clients to work with your Kafka cluster & topics

 

Oracle Application Container cloud

We continue to use it as the platform to run our producer and consumer microservices. The good thing is that services running on Application Container Cloud can easily connect with the Oracle Event Hub Cloud service using the Service Binding feature. We will see this in action.

 

Oracle Developer Cloud

This serves as a central hub for source code repository and DevOps pipeline. All we need to do is configure the build and deployment once and it will take care of seamless CI/CD to Application Container Cloud. Although Developer Cloud is capable of a lot more, this blog will focus on these features. Please refer to the documentation for more details

 

Code

The logic for our consumer and producer services is very much the same, and the details are available here. Let's focus on how to use the Oracle Event Hub service binding.

 

Leveraging the Event Hub Service Binding in Application Container Cloud

The service bindings are utilized in the same way in both our services, i.e. consumer and producer. The logic uses the Application Container Cloud environment variables (created as a result of the Service Binding) to fetch the location of our Event Hub Kafka cluster as well as the topic we want to work with (in this case it's just a single topic). You do not need to expose ports on the Kafka node(s) for the services on Application Container Cloud to access them. It's all taken care of by the Service Binding internally!

 

Here is a preview

 

Note the usage of OEHCS_TOPIC and OEHCS_EXTERNAL_CONNECT_STRING

import java.util.Properties;
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.logging.Level;
import java.util.logging.Logger;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class Consumer implements Runnable {

    private static final Logger LOGGER = Logger.getLogger(Consumer.class.getName());
    private static final String CONSUMER_GROUP = "cpu-metrics-group";
    private final AtomicBoolean CONSUMER_STOPPED = new AtomicBoolean(false);
    private KafkaConsumer<String, String> consumer = null;
    private final String topicName;

    public Consumer() {
        Properties kafkaProps = new Properties();
        LOGGER.log(Level.INFO, "Kafka Consumer running in thread {0}", Thread.currentThread().getName());

        this.topicName = System.getenv().get("OEHCS_TOPIC");
        LOGGER.log(Level.INFO, "Kafka topic {0}", topicName);

        String kafkaCluster = System.getenv().get("OEHCS_EXTERNAL_CONNECT_STRING");
        LOGGER.log(Level.INFO, "Kafka cluster {0}", kafkaCluster);

        kafkaProps.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, kafkaCluster);
        kafkaProps.put(ConsumerConfig.GROUP_ID_CONFIG, CONSUMER_GROUP);
        kafkaProps.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringDeserializer");
        kafkaProps.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringDeserializer");

        this.consumer = new KafkaConsumer<>(kafkaProps);
    }

 

 

  • The variable OEHCS_EXTERNAL_CONNECT_STRING allows us to get the coordinates of the Kafka cluster. This is used in the Kafka configuration represented by a java.util.Properties object
  • OEHCS_TOPIC gives us the name of the topic which is then passed on to the subscribe method of the KafkaConsumer
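Since these are ordinary environment variables, a small helper can centralize the lookup. This helper is hypothetical (not part of the project), and the fallback values are illustrative defaults for local development:

```java
import java.util.Map;

// Hypothetical helper: resolve the Event Hub Kafka coordinates from the
// Service Binding environment variables, with fallbacks for local development
class EventHubConfig {

    static String resolve(Map<String, String> env, String name, String fallback) {
        String value = env.get(name);
        return (value == null || value.isEmpty()) ? fallback : value;
    }

    static String topic(Map<String, String> env) {
        return resolve(env, "OEHCS_TOPIC", "cpu-metrics");
    }

    static String bootstrapServers(Map<String, String> env) {
        return resolve(env, "OEHCS_EXTERNAL_CONNECT_STRING", "localhost:9092");
    }
}
```

In the services themselves, `env` would simply be `System.getenv()`.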

 

 

 

Setting up Oracle Event Hub cloud service

Let’s see how to setup a managed Kafka cluster in Oracle Cloud

 

Bootstrap the cluster

We first create the Kafka cluster itself. Use the wizard in the Oracle Event Hub Cloud Service – Platform section to get started

 

 

Enter the required details

 

 

In this case, we are choosing the following configuration

  • Basic deployment where the Kafka cluster and Zookeeper are co-located on the same node
  • A single Kafka node

 

 

 

You can choose from different options such as

  • changing the deployment type to Recommended
  • opting for 3 nodes with the Basic mode
  • deploying a REST proxy alongside your cluster etc.

 

Refer to the official product documentation for more details

 

Click Create to start the provisioning process for your Kafka cluster

 

 

Wait for the process to complete

 

 

Setup the Kafka topic

Once the Kafka cluster is created, you can now create individual topics. To do so, choose Oracle Event Hub Cloud Service from the Platform Services menu

 

 

Click on Create Service and fill in the required details in the subsequent page

 

 

Here we create a topic named cpu-metrics in the cluster kafka-cluster (which we just created). The number of partitions is 10 and the retention period is one week (168 hours)

 

 

 

Click Create to conclude the process

 

 

 

Within a few seconds, you should see your newly created topic

 

 

 

Configuring Developer Cloud Service

 

Project & code repository creation

Please refer to the Project & code repository creation section in the Tracking JUnit test results in Developer Cloud service blog or check the product documentation for more details

 

Configure source code in Git repository

Push the project from your local system to the Developer Cloud Git repo you just created. We will do this via the command line; all you need is a Git client installed on your local machine. You can use the command-line client or any other Git tool of your choice

 

Repeat this process for both your application (producer and consumer)

 

cd <project_folder> //where you unzipped the source code  
git init  
git remote add origin <developer_cloud_git_repo>  
//e.g. https://john.doe@developer.us.oraclecloud.com/developer007-foodomain/s/developer007-foodomain-project_2009/scm/sample.git   
git add .  
git commit -m "first commit"  
git push -u origin master  //Please enter the password for your Oracle Developer Cloud account when prompted

 

 

Configure build job

Repeat this process for both your application (producer and consumer)

 

Create a New Job

 

 

Basic Configuration

 

Select JDK

 

 

Source Control

 

Choose Git repository

 

 

Build Trigger (Continuous Integration)

 

Set the build trigger - this build job will be triggered in response to updates in the Git repository (e.g. via git push)

 

 

Build steps

 

A Maven Build step – to produce the ZIP file to be deployed to Application Container Cloud

 

 

Post-Build actions

 

Activate a post build action to archive the zip file

 

 

   

 

Execute Build

 

Before configuring deployment, we need to trigger the build in order to produce the artifacts which can be referenced by the deployment configuration

 

 

 

After the build is complete, you can check the archived artifacts

 

 

 

Continuous Deployment (CD) to Application Container Cloud

 

Repeat this process for both your application (producer and consumer)

 

Create a New Configuration for deployment

 

 

 

 

  • Enter the required details and configure the Deployment Target
  • Configure the Application Container Cloud instance
  • Configure Automatic deployment option on the final confirmation page
  • Provide content for manifest.json and deployment.json

 

You’ll end up with the below configuration (the view has been split into two parts)

 

 

 

Application Container Cloud defines two primary configuration descriptors – manifest.json and deployment.json – and each of them fulfills a specific purpose (more details here)
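For reference, a minimal manifest.json for a Java application typically looks like the sketch below. This is an illustrative example only (the JAR name is a placeholder); consult the descriptor documentation linked above for the full schema. The deployment.json counterpart declares runtime settings such as memory, instance count and the service binding itself:

```json
{
  "runtime": { "majorVersion": "8" },
  "command": "java -jar consumer-service.jar",
  "notes": "Kafka consumer microservice"
}
```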

 

 

 

Click Save, initiate your deployment and wait for it to finish

 

Confirmation screen

 

 

 

Check your application(s) in Application Container Cloud

 

 

 

In the Deployment sub-section of the application details screen, notice that the required Service Bindings have been automatically wired and the environment variables have been populated as well (only a couple of variables have been highlighted below)

 

 

 

Test the application

 

The details to test the application are the same as described in this section of the previous blog. It’s really simple and here are the high level steps

  • Start your producer application using its REST URL, and
  • Access your consumer application

 

You should see the real-time metrics being sent by the producer component to the Event Hub cloud service instance and consumed by the Server-Sent Events (SSE) client via the consumer microservice

 

Test the CI/CD flow

Make some code changes and push them to the Developer Cloud service Git repo. This should

  • Automatically trigger the build, which once successful will
  • Automatically trigger the deployment process, and
  • Redeploy the new application version to Application Container Cloud

 

 

**The views expressed in this post are my own and do not necessarily reflect the views of Oracle.

This blog covers CI/CD for a Java application deployed on Oracle Application Container Cloud which uses Oracle Database Cloud via its declarative Service Binding feature

 

  • We will focus on setting up and configuring Oracle Developer Cloud Service to achieve end-to-end DevOps and specifically look at
    • Continuous Deployment to Application Container Cloud
    • Using Oracle Maven repository from Developer Cloud Service
  • The scenario depicted here will be used as a reference

 

 

Quick background

Here is an overview

  • APIs used: The application leverages JPA (DB persistence) and JAX-RS (for REST) APIs
  • Oracle Database Cloud Service: The client (web browser/curl etc) invokes a HTTP(s) URL (GET request) which internally calls the JAX-RS resource, which in turn invokes the JPA (persistence) layer to communicate with Oracle Database Cloud instance
  • Application Container Cloud Service bindings in action: Connectivity to the Oracle Database Cloud instance is achieved with the help of a service binding, which exposes database connectivity details as environment variables that are then used within the code
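The last bullet can be sketched in a few lines of plain Java. Note that the environment variable name used here (DBAAS_DEFAULT_CONNECT_DESCRIPTOR) is an illustrative assumption; check the Deployment section of your application in Application Container Cloud for the actual variable names exposed by your service binding:

```java
import java.util.Map;

public class DbBinding {

    // Builds a JDBC URL from a service-binding environment variable.
    // The variable name is an assumption for illustration purposes only.
    public static String jdbcUrl(Map<String, String> env) {
        String descriptor = env.get("DBAAS_DEFAULT_CONNECT_DESCRIPTOR"); // host:port/service
        if (descriptor == null) {
            throw new IllegalStateException("Database service binding not found");
        }
        return "jdbc:oracle:thin:@//" + descriptor;
    }

    public static void main(String[] args) {
        // In the deployed application, pass System.getenv() instead
        Map<String, String> fake = Map.of(
                "DBAAS_DEFAULT_CONNECT_DESCRIPTOR", "10.0.0.5:1521/PDB1");
        System.out.println(jdbcUrl(fake));
    }
}
```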

 

For more details you can refer to the following sections from one of my previous blogs - About the sample and Service Bindings concept

 

Using Oracle Maven within Oracle Developer Cloud

The instructions in the previous blog included a manual step to seed the Oracle JDBC driver (ojdbc7.jar) into the local Maven repository. In this blog, however, we will leverage the Oracle Maven repository (a one-time registration is required for access) for the same purpose. Developers generally need to go through a number of steps before they can start using the Oracle Maven repository (e.g. configuring the Maven settings.xml), but Oracle Developer Cloud Service handles all this internally! All you need to do is provide your repository credentials along with any customizations, if needed. More on this in an upcoming section

 

Here is a snippet from the pom.xml which highlights the usage of the Oracle Maven repository

 

 

    <repositories>
        <repository>
            <id>maven.oracle.com</id>
            <releases>
                <enabled>true</enabled>
            </releases>
            <snapshots>
                <enabled>false</enabled>
            </snapshots>
            <url>https://maven.oracle.com</url>
            <layout>default</layout>
        </repository>
    </repositories>
    <pluginRepositories>
        <pluginRepository>
            <id>maven.oracle.com</id>
            <url>https://maven.oracle.com</url>
        </pluginRepository>
    </pluginRepositories>
    <dependencies>

 

Setting up Developer Cloud Service

 

Project & code repository creation

Please refer to the Project & code repository creation section in the Tracking JUnit test results in Developer Cloud service blog or check the product documentation for more details

 

Configure source code in Git repository

Push the project from your local system to the Developer Cloud Git repo you just created. We will do this via the command line; all you need is a Git client installed on your local machine. You can use the command-line client or any other Git tool of your choice

 

cd <project_folder> //where you unzipped the source code  
git init  
git remote add origin <developer_cloud_git_repo>  
//e.g. https://john.doe@developer.us.oraclecloud.com/developer007-foodomain/s/developer007-foodomain-project_2009/scm/sample.git   
git add .  
git commit -m "first commit"  
git push -u origin master  //Please enter the password for your Oracle Developer Cloud account when prompted

 

 

Configure build

Create a New Job

 

 

Basic Configuration

Select JDK

 

 

 

Source Control

Choose Git repository

 

 

 

Build Trigger (Continuous Integration)

Set the build trigger - this build job will be triggered in response to updates in the Git repository (e.g. via git push)

 

 

 

Configure Oracle Maven repository

As mentioned above, we will configure Oracle Developer Cloud to use the Oracle Maven repository – the process is quite simple. For more details, refer to the product documentation

 

 

 

Build steps

A Maven Build step – to produce the ZIP file to be deployed to Application Container Cloud

 

 

Post-Build actions

 

Activate a post-build action to archive the deployable zip file

 

 

Execute Build

Before configuring deployment, we need to trigger the build in order to produce the artifacts which can be referenced by the deployment configuration

 

 

After the build is complete, you can check the archived artifacts

 

 

Continuous Deployment (CD) to Application Container Cloud

Create a New Configuration for deployment

 

 

 

  • Enter the required details and configure the Deployment Target
  • Configure the Application Container Cloud instance
  • Configure Automatic deployment option on the final confirmation page
  • Provide content for manifest.json and deployment.json

 

You’ll end up with the below configuration (the view has been split into two parts)

 

 

 

Application Container Cloud defines two primary configuration descriptors – manifest.json and deployment.json – and each of them fulfills a specific purpose (more details here). In this case, we have defined the configuration using Developer Cloud Service, which will override the descriptors in your application zip (if any) - refer to the documentation for more details

 

 

Confirmation screen

 

 

 

Check your application in Application Container Cloud

 

 

 

In the Deployment sub-section of the application details screen, notice that the required Service Bindings have been automatically wired

 

 

 

Test the application

The testing process remains the same – please refer to this section of the previous blog for details

 

Test the CI/CD flow

Make some code changes and push them to the Developer Cloud service Git repo. This should

  • Automatically trigger the build, which once successful will
  • Automatically trigger the deployment process, and
  • Redeploy the new application version to Application Container Cloud

 

 

**The views expressed in this post are my own and do not necessarily reflect the views of Oracle.

In this blog we will look at how to run a Docker based Java EE microservice in HA/load-balanced mode using HAProxy – all this on the Oracle Container Cloud. Here is a quick overview

 

  • Java EE microservice using Wildfly Swarm: a simple (JAX-RS based) REST application
  • HAProxy: we will use it for load balancing multiple instances of our application
  • Docker: our individual components i.e. our microservice and load balancer services will be packaged as Docker images
  • Oracle Container Cloud: we will stack up our services and run them in a scalable + load balanced manner on Oracle Container Cloud

 

 

Application

 

The application is a very simple REST API using JAX-RS. It just fetches the price for a stock

 

    @GET
    public String getQuote(@QueryParam("ticker") final String ticker) {


        Response response = ClientBuilder.newClient().
                target("https://www.google.com/finance/info?q=NASDAQ:" + ticker).
                request().get();


        if (response.getStatus() != 200) {
            //throw new WebApplicationException(Response.Status.NOT_FOUND);
            return String.format("Could not find price for ticker %s", ticker);
        }
        String tick = response.readEntity(String.class);
        tick = tick.replace("// [", "");
        tick = tick.replace("]", "");


        return StockDataParser.parse(tick)+ " from "+ System.getenv("OCCS_CONTAINER_NAME");
    }

 

 

Wildfly Swarm is used as the (just enough) Java EE runtime. We build a simple WAR based Java EE project and let the Swarm Maven plugin weave its magic – it auto-magically detects and configures required fractions and creates a fat JAR from your WAR.

 

 

<build>
        <finalName>occ-haproxy</finalName>
        <plugins>
            
            <plugin>
                <groupId>org.wildfly.swarm</groupId>
                <artifactId>wildfly-swarm-plugin</artifactId>
                <version>1.0.0.Final</version>
                <executions>
                    <execution>
                        <goals>
                            <goal>package</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>
    
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-compiler-plugin</artifactId>
                <version>3.1</version>
                <configuration>
                    <source>1.7</source>
                    <target>1.7</target>
                    <compilerArguments>
                        <endorseddirs>${endorsed.dir}</endorseddirs>
                    </compilerArguments>
                </configuration>
            </plugin>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-war-plugin</artifactId>
                <version>2.3</version>
                <configuration>
                    <failOnMissingWebXml>false</failOnMissingWebXml>
                </configuration>
            </plugin>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-dependency-plugin</artifactId>
                <version>2.6</version>
                <executions>
                    <execution>
                        <phase>validate</phase>
                        <goals>
                            <goal>copy</goal>
                        </goals>
                        <configuration>
                            <outputDirectory>${endorsed.dir}</outputDirectory>
                            <silent>true</silent>
                            <artifactItems>
                                <artifactItem>
                                    <groupId>javax</groupId>
                                    <artifactId>javaee-endorsed-api</artifactId>
                                    <version>7.0</version>
                                    <type>jar</type>
                                </artifactItem>
                            </artifactItems>
                        </configuration>
                    </execution>
                </executions>
            </plugin>
        </plugins>
    </build>

 

Alternatives: you can also look into other JavaEE based fat JAR style frameworks such as Payara Micro, KumuluzEE, Apache TomEE embedded etc.

Let’s dive into the nitty gritty….

 

Dynamic load balancing

Horizontal scalability with Oracle Container Cloud is extremely simple - all you need to do is spawn additional instances of your application. This is effective when we have a load balancer, which ensures that the consumers of the application (users or other applications) do not have to deal with the details of individual instances - they only need to be aware of the load balancer coordinates (host/port). The problem is that our load balancer will not be aware of newly spawned application instances/containers. Oracle Container Cloud helps create a unified Stack where both the back end (the REST API in our example) and the (HAProxy) load balancer components are configured as a single unit that can be managed and orchestrated easily, and it also provides a recipe for a dynamic HAProxy

 

HAProxy on steroids

We will make use of the artifacts in the Oracle Container Cloud GitHub repository to build a specialized (Docker) HAProxy image on top of the customized Docker images for confd and runit. confd is a configuration management tool, and in this case it's used to dynamically discover our application instances on the fly. Think of it as a mini service discovery module in itself, which queries the native Service Discovery within Oracle Container Cloud to detect new application instances
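To make the idea concrete, confd works by re-rendering a template whenever the watched keys change and then reloading HAProxy. The sketch below is illustrative only - the registry key path and file name are assumptions, not taken from the repository; see the Oracle docker-images GitHub repository for the real template:

```
# haproxy.cfg.tmpl (illustrative sketch): confd re-renders this file and
# reloads HAProxy whenever application instances appear or disappear
backend rest_api
    balance roundrobin
    {{range getvs "/apps/rest-api/*"}}
    server {{.}} {{.}} check
    {{end}}
```

Each value under the watched key is a host:port entry published by the service registry, so scaling the application up or down rewrites the backend server list automatically.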

 

Configuring our application to run on Oracle Container Cloud

 

Build Docker images

We will first build the required Docker images. For the demonstration, I will be using my public registry (abhirockzz) on Docker Hub. You can choose to use your own public or private registry

 

Please ensure that Docker engine is up and running

 

Build the application Docker image

 

Dockerfile below

 

FROM anapsix/alpine-java:latest
RUN mkdir app 
WORKDIR "/app"
COPY target/occ-haproxy-swarm.jar .
EXPOSE 8080
CMD ["java", "-jar", "occ-haproxy-swarm.jar"]

 

Run the following command

 

docker build -t <registry>/occ-wfly-haproxy:<tag> . e.g. docker build -t abhirockzz/occ-wfly-haproxy:latest .

 

Build Docker images for runit, confd, haproxy

We will build the images in sequence since they are dependent. To begin with,

 

  • clone the docker-images Github repository, and
  • edit the vars.mk (Makefile) in the ContainerCloud/images/build directory to enter your Docker Hub username

 

 

Now execute the below commands

 

cd ContainerCloud/images
cd runit
make image
cd ../confd
make image
cd ../haproxy
make image

Check your local Docker repository

Your local Docker repository should now have all the required images

 

 

Push Docker images

Now we will push the Docker images to a registry (in this case my public Docker Hub registry) so that they can be pulled from Oracle Container Cloud during deployment of our application stack. Execute the below commands

 

Adjust the names (registry and repository) as per your setup

 

docker login
docker push abhirockzz/occ-wfly-haproxy
docker push abhirockzz/haproxy
docker logout

Create the Stack

We will make use of a YAML configuration file to create the Stack. It is very similar to docker-compose. In this specific example, notice how the service name (rest-api) is referenced in the lb (HAProxy) service

 

 

 

This in turn tells the HAProxy service which key in the Oracle Container Cloud service registry is used by the confd service (as explained before) for auto-discovery of new application instances. 8080 is simply the exposed port; it is hard coded since it's also part of the key within the service registry.
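The stack YAML itself is shown in the screenshots, so here is an illustrative sketch of its shape. The image names follow the examples in this post, but the exact schema and the environment variable consumed by the confd-based HAProxy image are assumptions - check the Oracle Container Cloud documentation and the docker-images repository for the authoritative format:

```yaml
version: 2
services:
  rest-api:
    image: abhirockzz/occ-wfly-haproxy:latest
    ports:
      - 8080/tcp                # exposed port, also part of the service-registry key
  lb:
    image: abhirockzz/haproxy:latest
    ports:
      - "8886:8886/tcp"         # public port used to reach the load balancer
    environment:
      # placeholder variable name: points HAProxy/confd at the rest-api service
      - BACKEND_SERVICE=rest-api:8080
```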

 

Start the process by choosing New Stack from the Stacks menu

 

 

Click on the Advanced Editor and enter the YAML content

 

 

 

 

You should now see the individual services. Enter the Stack Name and click Save

 

 

Initiate Deployment

Go back to the Stacks menu, look for the newly created stack and click Deploy

 

 

In order to test the load balancing capabilities, we will deploy 3 instances of our rest-api (back end) service and stick with one instance of the lb (HAProxy) service

 

 

After a few seconds, you should see all the containers in RUNNING state - in this case, three for our service and one for the HAProxy load balancer instance

 

 

Check the Service Discovery menu to verify that each instance has an entry here. As explained earlier, this registry is introspected by the confd service to auto-detect new instances of our application (they get added to the registry automatically)

 

 

Test

We can access our application via HAProxy. All we need to know is the public IP of the host where our HAProxy container is running. We already mapped port 8886 for accessing the downstream applications (see below snapshot)

 

 

 

Test things out with the following curl command

 

for i in `seq 1 9`; do curl -w "\n" -X GET "http://<haproxy-container-public-IP>:8886/api/stocks?ticker=ORCL"; done

 

All we do is invoke it 9 times, just to see the load balancing in action (among three instances). Here is the result. Notice that the highlighted text points to the instance from which the response is being served - the load is balanced equally among the three instances

 

 

 

Scale up… and check again

You can simply scale up the stack and repeat the same. Navigate to your deployment and click Change Scaling

 

 

After some time, you'll see additional instances of your application (five in our case). Execute the command again to verify that the load balancing is working as expected

 

 

That’s all for this blog post.

 

Cheers!

 

The views expressed in this post are my own and do not necessarily reflect the views of Oracle.

This blog introduces you to building event-driven microservices application using CQRS and Event sourcing patterns. Following is a brief definition of the concepts that would be discussed during the course of this blog, more details about these can be obtained from the resources provided at the end of this blog.

 

What is a Microservice?

While there is no single definition for this architectural style, Adrian Cockcroft defines microservices architecture as a service-oriented architecture composed of loosely coupled elements that have bounded contexts.

 

What is a Bounded Context?

A Bounded Context is a concept that encapsulates the details of a single domain, such as domain model, data model, application services, etc., and defines the integration points with other bounded contexts/domains.

 

What is CQRS?

Command Query Responsibility Segregation (CQRS) is an architectural pattern that segregates the domain operations into two categories – Queries and Commands. While queries just return some results without making any state changes, commands are the operations which change the state of the domain model.

 

Why CQRS?

During the lifecycle of an application, it is common for the logical model to become more complicated and structured, which can impact the user experience; the user experience, however, must remain independent of the core system.
In order to have a scalable and easy-to-maintain application, we need to reduce the constraints between the read model and the write model. Some reasons for keeping reads and writes apart are:

  • Scalability (reads typically exceed writes, so the scaling requirements for each differ and can be addressed better separately)
  • Flexibility (separate read / write models)
  • Reduced Complexity (shifting complexity into separate concerns)

 

What is Event sourcing?

Event sourcing achieves atomicity by using a different, event-centric approach to persisting business entities. Instead of storing the current state of an entity, the application stores a sequence of ‘events’ that changed the entity’s state. The application can reconstruct an entity’s current state by replaying the events. Since saving an event is a single operation, it is inherently atomic and does not require 2PC (2-phase commit) which is typically associated with distributed transactions.
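The replay idea can be shown in a few lines of plain Java. This is a toy illustration, not Axon's API: the cart's current state is simply a fold over its event history.

```java
import java.util.List;

public class CartReplay {

    // A minimal event hierarchy: each event records a state change that
    // already happened, rather than the entity's current state
    interface CartEvent {}
    record ItemAdded(int item) implements CartEvent {}
    record ItemRemoved(int item) implements CartEvent {}

    // Reconstruct the current item count by replaying the event stream in order
    static int replay(List<CartEvent> history) {
        int items = 0;
        for (CartEvent e : history) {
            if (e instanceof ItemAdded a) items += a.item();
            else if (e instanceof ItemRemoved r) items -= r.item();
        }
        return items;
    }

    public static void main(String[] args) {
        List<CartEvent> history = List.of(
                new ItemAdded(2), new ItemAdded(3), new ItemRemoved(1));
        System.out.println(replay(history)); // 4
    }
}
```

Appending one event per operation is what makes each write atomic; the event store only ever inserts, never updates.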

 
Overview

This blog explains how CQRS and Event Sourcing patterns can be applied to develop a simple microservice application that consists of a single bounded context called “Cart” with add, remove and read operations. The sample does not have any functional significance but should be good enough to understand the underlying patterns and their implementations.  The following diagram depicts a high level flow of activities when using CQRS and Event sourcing patterns to build applications:

 

cqrs-es_1.jpg

Figure 1 CQRS and Event sourcing

 

The sample referred in this blog uses the following technology stack:

  • Spring Boot for building and packaging the application
  • Axon framework with Spring for CQRS and Event Sourcing. Axon is an open source CQRS framework for Java which provides implementations of the most important building blocks, such as aggregates, repositories and event buses that help us build applications using CQRS and Event sourcing architectural patterns. It also allows you to provide your own implementation of the above mentioned building blocks.
  • Oracle Application Container cloud for application deployment

With this background, let us start building the sample.

 

Identify Aggregate Root

The first step is to identify the bounded context and the domain entities within it. This will help us define the Aggregate Root (for example, an ‘account’, an ‘order’, etc.). An aggregate is an entity or group of entities that is always kept in a consistent state. The aggregate root is the object at the top of the aggregate tree that is responsible for maintaining this consistent state.

To keep things simple for this blog, we consider ‘Cart’ as the only Aggregate Root in the domain model. Just like the usual shopping cart, the items in the cart are adjusted based on the additions or removals happening on that cart.

 

Define Commands

This aggregate root has 2 commands associated with it:

  • Add to Cart Command – Modeled by AddToCartCommand class
  • Remove from Cart Command – Modeled by RemoveFromCartCommand class

public class AddToCartCommand {

    private final String cartId;
    private final int item;

    public AddToCartCommand(String cartId, int item) {
        this.cartId = cartId;
        this.item = item;
    }

    public String getCartId() {
        return cartId;
    }

    public int getItem() {
        return item;
    }
}

public class RemoveFromCartCommand {

 private final String cartId;
 private final int item;

 public RemoveFromCartCommand(String cartId, int item) {
      this.cartId = cartId;
      this.item = item;
 }

 public String getCartId() {
      return cartId;
 }

 public int getItem() {
      return item;
 }
  }

 

As you notice, these commands are just POJOs used to capture the intent of what needs to happen within a system along with the necessary information that is required. Axon Framework does not require commands to implement any interface nor extend any class.

 

Define Command Handlers

A command is intended to have only one handler, the following classes represent the handlers for Add to Cart and Remove from Cart commands:

 

@Component
public class AddToCartCommandHandler {

 private Repository repository;

 @Autowired
 public AddToCartCommandHandler(Repository repository) {
      this.repository = repository;
 }

 @CommandHandler
 public void handle(AddToCartCommand addToCartCommand){
      Cart cartToBeAdded = (Cart) repository.load(addToCartCommand.getCartId());
      cartToBeAdded.addCart(addToCartCommand.getItem());
 }

}

@Component
public class RemoveFromCartHandler {

 private Repository repository;

 @Autowired
 public RemoveFromCartHandler(Repository repository) {
      this.repository = repository;
    }

 @CommandHandler
 public void handle(RemoveFromCartCommand removeFromCartCommand){
      Cart cartToBeRemoved = (Cart) repository.load(removeFromCartCommand.getCartId());
      cartToBeRemoved.removeCart(removeFromCartCommand.getItem());

 }
}

 

We use Axon with the Spring framework, so the Spring beans defined above have methods annotated with @CommandHandler, which marks them as command handlers. The @Component annotation ensures that these beans are scanned during application startup and that any auto-wired resources are injected. Instead of accessing the aggregates directly, the Repository (a domain object in the Axon framework) abstracts retrieving and persisting aggregates.
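Conceptually, the command bus that wires these handlers together is just a registry mapping each command type to its single handler. This stdlib-only toy (deliberately not Axon's implementation) shows the dispatch step that Axon's @CommandHandler annotation automates:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Consumer;

public class ToyCommandBus {

    private final Map<Class<?>, Consumer<Object>> handlers = new HashMap<>();

    // Register exactly one handler per command type
    public <C> void subscribe(Class<C> type, Consumer<C> handler) {
        handlers.put(type, c -> handler.accept(type.cast(c)));
    }

    // Route a command to its handler, mirroring what @CommandHandler wiring does
    public void dispatch(Object command) {
        Consumer<Object> handler = handlers.get(command.getClass());
        if (handler == null) {
            throw new IllegalStateException("No handler for " + command.getClass());
        }
        handler.accept(command);
    }

    record AddToCart(String cartId, int item) {}

    public static void main(String[] args) {
        ToyCommandBus bus = new ToyCommandBus();
        bus.subscribe(AddToCart.class,
                cmd -> System.out.println("adding " + cmd.item() + " to " + cmd.cartId()));
        bus.dispatch(new AddToCart("cart-1", 2)); // prints: adding 2 to cart-1
    }
}
```

The one-handler-per-command rule is what distinguishes this from the event bus discussed later, where an event may have zero or more handlers.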

 

Application Startup

Following is the AppConfiguration class which is a Spring configuration class that gets initialized upon application deployment and creates the components required for implementing the patterns.

 

@Configuration
@AnnotationDriven
public class AppConfiguration {

 @Bean
 public DataSource dataSource() {
      return DataSourceBuilder
                .create()
                .username("sa")
                .password("")
                .url("jdbc:h2:mem:axonappdb")
                .driverClassName("org.h2.Driver")
                .build();
 }

 /**
 * Event store to store events
 */
 @Bean
 public EventStore jdbcEventStore() {
      return new JdbcEventStore(dataSource());
 }

 @Bean
 public SimpleCommandBus commandBus() {
      SimpleCommandBus simpleCommandBus = new SimpleCommandBus();
      return simpleCommandBus;
 }

 /**
 *  Cluster event handlers that listens to events thrown in the application.
 */
 @Bean
 public Cluster normalCluster() {
      SimpleCluster simpleCluster = new SimpleCluster("simpleCluster");
      return simpleCluster;
 }


 /**
 * This configuration registers event handlers with defined clusters
 */
 @Bean
 public ClusterSelector clusterSelector() {
      Map<String, Cluster> clusterMap = new HashMap<>();
      clusterMap.put("msacqrses.eventhandler", normalCluster());
      return new ClassNamePrefixClusterSelector(clusterMap);
 }

 /**
*The clustering event bus is needed to route events to event handlers in the clusters. 
 */
 @Bean
 public EventBus clusteringEventBus() {
     ClusteringEventBus clusteringEventBus = new ClusteringEventBus(clusterSelector(), terminal());

     return clusteringEventBus;
 }

 /**
 * Event Bus Terminal publishes domain events to the cluster
 *
 */
 @Bean
 public EventBusTerminal terminal() {
      return new EventBusTerminal() {
            @Override
            public void publish(EventMessage... events) {
                normalCluster().publish(events);
 }
            @Override
            public void onClusterCreated(Cluster cluster) {

            }
 };
 }

 /**
 * Command gateway through which all commands in the application are submitted
 *
 */

 @Bean
 public DefaultCommandGateway commandGateway() {
      return new DefaultCommandGateway(commandBus());
 }

 /**
* Event Repository that handles retrieving of entity from the stream of events.
 */
 @Bean
 public Repository<Cart> eventSourcingRepository() {
EventSourcingRepository eventSourcingRepository = new EventSourcingRepository(Cart.class, jdbcEventStore());
      eventSourcingRepository.setEventBus(clusteringEventBus());

     return eventSourcingRepository;
 }
}

 

Let us take a look at the key Axon provided infrastructure components that are initialized in this class:

 

Command bus

As represented in “Figure 1” above, command bus is the component that routes commands to their respective command handlers.  Axon Framework comes with different types of Command Bus out of the box that can be used to dispatch commands to command handlers. Please refer here for more details on Axon’s Command Bus implementations. In our example, we use SimpleCommandBus which is configured as a bean in Spring's application context.

 

Command Gateway

Commands can be sent directly on the command bus, but it is usually recommended to use a command gateway. A command gateway allows developers to add functionality such as intercepting commands or setting up retries in failure scenarios. In our example, we use the Axon-provided default, DefaultCommandGateway, configured as a Spring bean, to send commands instead of using the command bus directly.

 

Event Bus

As depicted in “Figure 1”, the commands executed on an Aggregate root result in events that are sent to the Event store, where they are persisted. The Event Bus is the infrastructure that routes events to event handlers. The Event Bus may look similar to the Command Bus from a message dispatching perspective, but the two differ fundamentally.

The Command Bus works with commands, which define what should happen in the near future, and each command is interpreted by exactly one command handler. The Event Bus, in contrast, routes events, which describe actions that happened in the past and may have zero or more event handlers.

Axon provides multiple implementations of the Event Bus; in our example we use ClusteringEventBus, which is again wired up as a Spring bean. Please refer here for more details on Axon’s Event Bus implementations.
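To make the routing distinction concrete, here is a framework-free sketch (plain Java, not the Axon API) of an event bus: an event published on it is fanned out to every subscriber, whereas a command would be routed to exactly one handler.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Framework-free sketch: an event may have zero or more subscribers,
// and publishing fans the event out to all of them.
class TinyEventBus<T> {
    private final List<Consumer<T>> handlers = new ArrayList<>();

    void subscribe(Consumer<T> handler) {
        handlers.add(handler);
    }

    // Every subscriber receives the event; contrast with a command bus,
    // where exactly one handler would interpret each command.
    void publish(T event) {
        for (Consumer<T> h : handlers) {
            h.accept(event);
        }
    }
}
```

This is only an illustration of the dispatching semantics; Axon's ClusteringEventBus additionally groups listeners into clusters for distributed scenarios.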

 

Event Store

We need to configure an event store because our repository stores domain events instead of the current state of our domain objects. The Axon framework allows storing events using multiple persistence mechanisms such as JDBC, JPA, and the file system. In this example we use a JDBC event store.
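Conceptually, an event store is an append-only log keyed by aggregate identifier. The following is a framework-free sketch of that idea (plain Java, not the Axon JdbcEventStore API, which persists to the database tables mentioned later):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Framework-free sketch of an append-only event store:
// events are only ever appended, never updated or deleted.
class InMemoryEventStore {
    private final Map<String, List<Object>> streams = new HashMap<>();

    // Append an event to the stream of a given aggregate.
    void append(String aggregateId, Object event) {
        streams.computeIfAbsent(aggregateId, id -> new ArrayList<>()).add(event);
    }

    // "Loading" an aggregate means reading back its full event stream.
    List<Object> readStream(String aggregateId) {
        return streams.getOrDefault(aggregateId, List.of());
    }
}
```

The JDBC event store used in this example follows the same append-and-replay contract, only backed by database tables instead of an in-memory map.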

 

Event Sourcing Repository

In our example, the aggregate root is not created from a representation in a persistence mechanism; instead it is reconstructed from a stream of events, which is achieved through an Event sourcing repository. We configure the repository with the event bus defined earlier, since the repository will be publishing the domain events.
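To illustrate what the event sourcing repository does conceptually, here is a simplified, framework-free replay (plain Java; the positive and negative deltas stand in for the AddToCartEvent and RemoveFromCartEvent types defined later):

```java
import java.util.List;

// Simplified replay: the current state is derived purely from past events,
// not read from a row holding the current state.
class CartReplay {
    // Positive deltas model items added, negative deltas items removed.
    static int replayItems(List<Integer> deltas) {
        int items = 0;
        for (int delta : deltas) {
            items += delta;
        }
        return items;
    }
}
```

The real repository does the same in spirit: it reads the aggregate's event stream from the event store and applies each event in order until the Cart reaches its most recent state.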

 

Database

We use an in-memory database (H2) as the data store in our example. Spring Boot’s application.properties contains the data source configuration settings:

# Datasource configuration
spring.datasource.url=jdbc:h2:mem:axonappdb
spring.datasource.driverClassName=org.h2.Driver
spring.datasource.username=sa
spring.datasource.password=
spring.datasource.validation-query=SELECT 1;
spring.datasource.initial-size=2
spring.datasource.sql-script-encoding=UTF-8

spring.jpa.database=h2
spring.jpa.show-sql=true
spring.jpa.hibernate.ddl-auto=create

 

As mentioned above, this example uses a JDBC event store to persist the domain events generated in the system. These events are stored in default tables that are part of the Axon framework's event infrastructure. We use the following startup class to create the database tables and seed the data required by this example:

 

@Component
public class Datastore {

 @Autowired
 @Qualifier("transactionManager")
 protected PlatformTransactionManager txManager;

 @Autowired
 private Repository<Cart> repository;

 @Autowired
 private javax.sql.DataSource dataSource;

 // create two cart entries in the repository used for command processing
 @PostConstruct
 private void init() {

      TransactionTemplate transactionTmp = new TransactionTemplate(txManager);
      transactionTmp.execute(new TransactionCallbackWithoutResult() {
           @Override
           protected void doInTransactionWithoutResult(TransactionStatus status) {
                UnitOfWork uow = DefaultUnitOfWork.startAndGet();
                repository.add(new Cart("cart1"));
                repository.add(new Cart("cart2"));
                uow.commit();
           }
      });

      // create a database table for querying and add two cart entries
      JdbcTemplate jdbcTemplate = new JdbcTemplate(dataSource);
      jdbcTemplate.execute("create table cartview (cartid VARCHAR , items NUMBER )");
      jdbcTemplate.update("insert into cartview (cartid, items) values (?, ?)", new Object[]{"cart1", 0});
      jdbcTemplate.update("insert into cartview (cartid, items) values (?, ?)", new Object[]{"cart2", 0});
 }
}

 

This startup class creates two cart entries in the repository used for command processing, and creates a database table called “cartview” which is used for processing queries.

 

A quick recap on what we did so far:

  • We have identified “Cart” as our Aggregate root and have defined commands and command handlers for adding and removing items from the Cart.
  • We have defined a startup class which initializes the infrastructure components required for CQRS and Event sourcing.
  • A startup class has also been defined to create the database tables and setup the data required by this sample.

 

Let us now look at our AggregateRoot - “Cart” which is defined as below:

 

Aggregate Root

 

public class Cart extends AbstractAnnotatedAggregateRoot {
 @AggregateIdentifier
 private String cartid;

 private int items;

 public Cart() {
 }

 public Cart(String cartId) {
      apply(new CartCreatedEvent(cartId));
 }

 @EventSourcingHandler
 public void applyCartCreation(CartCreatedEvent event) {
      this.cartid = event.getCartId();
      this.items = 0;
 }

 public void removeCart(int removeitem) {

      /**
       * State is not changed directly; instead we apply an event that
       * specifies what happened. Applied events are stored.
       */
 if(this.items > removeitem && removeitem > 0)
      apply(new RemoveFromCartEvent(this.cartid, removeitem, this.items));

 }

  
   
 @EventSourcingHandler
 private void applyCartRemove(RemoveFromCartEvent event) {
      /**
       * Called when events stored in the event store are applied on the entity.
       * Once all events in the event store have been applied, the Cart is at
       * its most recent state.
       */

      this.items -= event.getItemsRemoved();
 }

 public void addCart(int item) {
      /**
       * State is not changed directly; instead we apply an event that
       * specifies what happened. Applied events are stored.
       */
 if(item > 0)    
      apply(new AddToCartEvent(this.cartid, item, this.items));
 }

 @EventSourcingHandler
 private void applyCartAdd(AddToCartEvent event) {
      /**
       * Called when events stored in the event store are applied on the entity.
       * Once all events in the event store have been applied, the Cart is at
       * its most recent state.
       */

      this.items += event.getItemAdded();
 }

 public int getItems() {
      return items;
 }

 public void setIdentifier(String id) {
      this.cartid = id;
 }

 @Override
 public Object getIdentifier() {
      return cartid;
 }
}

 

Following are some key aspects of the above Aggregate Root definition:

  1. The @AggregateIdentifier is similar to @Id in JPA which marks the field that represents the entity’s identity.
  2. Domain driven design recommends domain entities to contain relevant business logic, hence the business methods in the above definition. Please refer to the “References” section for more details.
  3. When a command gets triggered, the domain object is retrieved from the repository and the respective method (say addCart) is invoked on that domain object (in this case “Cart”).
    1. The domain object instead of changing the state directly, applies the appropriate event.
    2. The event is stored in the event store and the respective handler gets triggered which makes the change to the domain object.
  4. Note that the “Cart” aggregate root is only used for updates (i.e. state change via commands). All the query requests are handled by a different database entity (will be discussed in coming sections).
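The command classes themselves (AddToCartCommand and RemoveFromCartCommand, used by the controller further below) are not listed in this article. A minimal sketch of what AddToCartCommand might look like follows; note that in Axon the cartId field would additionally be annotated with @TargetAggregateIdentifier so the framework can locate the target aggregate, which is omitted here to keep the sketch framework-free:

```java
// Hypothetical sketch of the AddToCartCommand referenced by the controller.
// In Axon, the cartId field would carry @TargetAggregateIdentifier.
class AddToCartCommand {
    private final String cartId;
    private final int item;

    AddToCartCommand(String cartId, int item) {
        this.cartId = cartId;
        this.item = item;
    }

    String getCartId() {
        return cartId;
    }

    int getItem() {
        return item;
    }
}
```

RemoveFromCartCommand would look the same with an item count to remove. Commands are plain immutable message objects; all behavior lives in the aggregate and its handlers.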

 

Let us also look at the events and the event handlers that manage the domain events triggered from “Cart” entity:

 

Events

 

As mentioned in the previous sections, two commands are triggered on the “Cart” entity: Add to Cart and Remove from Cart. When executed on the Aggregate root, these commands generate two events, AddToCartEvent and RemoveFromCartEvent, which are listed below:

 

public class AddToCartEvent {

 private final String cartId;
 private final int itemAdded;
 private final int items;
 private final long timeStamp;

 public AddToCartEvent(String cartId, int itemAdded, int items) {
      this.cartId = cartId;
      this.itemAdded = itemAdded;
      this.items = items;
      ZoneId zoneId = ZoneId.systemDefault();
      this.timeStamp = LocalDateTime.now().atZone(zoneId).toEpochSecond();
 }

 public String getCartId() {
      return cartId;
 }

 public int getItemAdded() {
      return itemAdded;
 }

 public int getItems() {
      return items;
 }

 public long getTimeStamp() {
      return timeStamp;
 }
}

public class RemoveFromCartEvent {
 private final String cartId;
 private final int itemsRemoved;
 private final int items;
 private final long timeStamp;

 public RemoveFromCartEvent(String cartId, int itemsRemoved, int items) {
      this.cartId = cartId;
      this.itemsRemoved = itemsRemoved;
      this.items = items;
      ZoneId zoneId = ZoneId.systemDefault();
      this.timeStamp = LocalDateTime.now().atZone(zoneId).toEpochSecond();

 }

 public String getCartId() {
      return cartId;
 }

 public int getItemsRemoved() {
      return itemsRemoved;
 }

 public int getItems() {
      return items;
 }

 public long getTimeStamp() {
      return timeStamp;
 }
}

 

Event Handlers

 

The events described above would be handled by the following event handlers:

 

@Component
public class AddToCartEventHandler {

 @Autowired
 DataSource dataSource;

 @EventHandler
 public void handleAddToCartEvent(AddToCartEvent event, Message msg) {
      JdbcTemplate jdbcTemplate = new JdbcTemplate(dataSource);

      // Get current state from event
      String cartId = event.getCartId();
      int items = event.getItems();
      int itemToBeAdded = event.getItemAdded();
      int newItems = items + itemToBeAdded;


      //  Update cartview
      String updateQuery = "UPDATE cartview SET items = ? WHERE cartid = ?";
      jdbcTemplate.update(updateQuery, new Object[]{newItems, cartId});

 }
}

@Component
public class RemoveFromCartEventHandler {

 @Autowired
 DataSource dataSource;

 @EventHandler
 public void handleRemoveFromCartEvent(RemoveFromCartEvent event) {

      JdbcTemplate jdbcTemplate = new JdbcTemplate(dataSource);

      // Get current state from event
      String cartId = event.getCartId();
      int items = event.getItems();
      int itemsToBeRemoved = event.getItemsRemoved();
      int newItems = items - itemsToBeRemoved;

      // Update cartview
      String update = "UPDATE cartview SET items = ? WHERE cartid = ?";
      jdbcTemplate.update(update, new Object[]{newItems, cartId});

 }
}

 

As you can see, the event handlers update the “cartview” database table, which is used for querying the “Cart” entity. While the commands are executed on one model, the query requests are serviced by a separate model, thereby achieving CQRS with Event sourcing.
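The projection these handlers maintain can be pictured with a Map standing in for the “cartview” table (a framework-free sketch; the real handlers use JdbcTemplate against H2 as shown above):

```java
import java.util.HashMap;
import java.util.Map;

// Framework-free sketch: the read model ("cartview") is a projection kept
// up to date by applying each event's delta, mirroring the JDBC handlers.
class CartViewProjection {
    private final Map<String, Integer> cartview = new HashMap<>();

    void onCartCreated(String cartId) {
        cartview.put(cartId, 0);
    }

    void onAddToCart(String cartId, int itemsAdded) {
        cartview.merge(cartId, itemsAdded, Integer::sum);
    }

    void onRemoveFromCart(String cartId, int itemsRemoved) {
        cartview.merge(cartId, -itemsRemoved, Integer::sum);
    }

    int items(String cartId) {
        return cartview.getOrDefault(cartId, 0);
    }
}
```

Because the projection is rebuilt from events, the query side can be reshaped or re-created at any time without touching the command side, which is one of the main attractions of CQRS with Event sourcing.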

 

Controllers

 

This example defines two Spring controller classes, one each for updating and querying the Cart domain. These are REST endpoints that can be invoked from a browser as below:

 

http://<host>:<port>/add/cart/<noOfItems>

http://<host>:<port>/remove/cart/<noOfItems>

http://<host>:<port>/view

 

@RestController
public class CommandController {

 @Autowired
 private CommandGateway commandGateway;

 @RequestMapping("/remove/{cartId}/{item}")
 @Transactional
 public ResponseEntity doRemove(@PathVariable String cartId, @PathVariable int item) {
       RemoveFromCartCommand removeCartCommand = new RemoveFromCartCommand(cartId, item);
      commandGateway.send(removeCartCommand);

      return new ResponseEntity<>("Remove event generated. Status: "+ HttpStatus.OK, HttpStatus.OK);
 }

 @RequestMapping("/add/{cartId}/{item}")
 @Transactional
 public ResponseEntity doAdd(@PathVariable String cartId, @PathVariable int item) {

      AddToCartCommand addCartCommand = new AddToCartCommand(cartId, item);
      commandGateway.send(addCartCommand);

     return new ResponseEntity<>("Add event generated. Status: "+ HttpStatus.OK, HttpStatus.OK);
 }


}

@RestController
public class ViewController {

 @Autowired
 private DataSource dataSource;

@RequestMapping(value = "/view", method = RequestMethod.GET, produces = MediaType.APPLICATION_JSON_VALUE)
 public ResponseEntity getItems() {

      JdbcTemplate jdbcTemplate = new JdbcTemplate(dataSource);
      List<Map<String, Integer>> queryResult = jdbcTemplate.query("SELECT * from cartview ORDER BY cartid", (rs, rowNum) -> {

      return new HashMap<String, Integer>() {{
                put(rs.getString("CARTID"), rs.getInt("ITEMS"));
            }};
 });

      if (queryResult.size() > 0) {
        return new ResponseEntity<>(queryResult, HttpStatus.OK);
      } else {
        return new ResponseEntity<>(null, HttpStatus.NOT_FOUND);
      }

 }

}

 
Deployment

 

We use Spring Boot to package and deploy the application as a runnable jar to Oracle Application Container Cloud. The following Spring Boot class initializes the application:

 

@SpringBootApplication

public class AxonApp {

 // Get PORT and HOST from Environment or set default
 public static final Optional<String> host;
 public static final Optional<String> port;
 public static final Properties myProps = new Properties();

 static {
      host = Optional.ofNullable(System.getenv("HOSTNAME"));
      port = Optional.ofNullable(System.getenv("PORT"));
 }

 public static void main(String[] args) {
      // Set properties
      myProps.setProperty("server.address", host.orElse("localhost"));
      myProps.setProperty("server.port", port.orElse("8128"));

      SpringApplication app = new SpringApplication(AxonApp.class);
      app.setDefaultProperties(myProps);
      app.run(args);

 }
}

Create an XML assembly descriptor with the following content and place it in the same directory as the pom.xml. It specifies the deployment assembly of the application being deployed to Oracle Application Container Cloud.

 

<assembly xmlns="http://maven.apache.org/plugins/maven-assembly-plugin/assembly/1.1.3"
          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
          xsi:schemaLocation="http://maven.apache.org/plugins/maven-assembly-plugin/assembly/1.1.3 http://maven.apache.org/xsd/assembly-1.1.3.xsd">
 <id>dist</id>
 <formats>
 <format>zip</format>
 </formats>
 <includeBaseDirectory>false</includeBaseDirectory>
 <files>
 <file>
            <source>manifest.json</source>
            <outputDirectory></outputDirectory>
 </file>
 </files>
 <fileSets>
 <fileSet>
            <directory>${project.build.directory}</directory>
            <outputDirectory></outputDirectory>
            <includes>
                <include>${project.artifactId}-${project.version}.jar</include>
            </includes>
 </fileSet>
 </fileSets>
</assembly>

 

To let Application Container Cloud know which jar to run once the application is deployed, you need to create a “manifest.json” file specifying the jar name as shown below:

{
 "runtime": {
 "majorVersion": "8"
    },
 "command": "java -jar AxonApp-0.0.1-SNAPSHOT.jar",
 "release": {},
 "notes": "Axon Spring Boot App"
}

 

The following diagram depicts the project structure of this sample:


Figure 2 Project Structure

 

The application jar file along with the above manifest file should be archived into a zip and uploaded to Application Container Cloud for deployment. Please refer here for more details on deploying a Spring Boot application in Application Container Cloud.

 

Once the application is successfully deployed, you would be able to access the following URLs to trigger the services on the Cart:

http://<host>:<port>/view

http://<host>:<port>/add/cart/<noOfItems>

http://<host>:<port>/remove/cart/<noOfItems>

 

When you first hit the “view” REST endpoint, you will see the two carts created by our startup class along with their item counts. You can add or remove items using the other two REST calls and retrieve the updated item count with the “view” call. The result of these REST invocations is a simple JSON structure listing the carts and the number of items in each at a given point in time.

 

Conclusion

 

This blog is limited to introducing the development of microservice applications using the CQRS and Event sourcing patterns. You can refer to the following resources to learn more about other advanced concepts and recent updates in this space.

 

References

 

The views expressed in this post are my own and do not necessarily reflect the views of Oracle.