
The underlying theme of this blog remains the same as one of my previous blogs, i.e. scalable stream processing microservices on Oracle Cloud, but there are significant changes & additions

 

  • Docker: we will package the Kafka Streams based consumer application as a Docker image
  • Oracle Container Cloud: our containerized application will run and scale on Oracle Container Cloud
  • Service discovery: application is revamped to leverage the service discovery capabilities within Oracle Container Cloud

 

 

Technical Components

 

Here is a quick summary of the technologies used

 

  • Oracle Container Cloud: An enterprise grade platform to compose, deploy and orchestrate Docker containers
  • Docker needs no introduction
  • Apache Kafka: A scalable, distributed pub-sub message hub
  • Kafka Streams: A library for building stream processing applications on top of Apache Kafka
  • Jersey: Used to implement REST and SSE services. Uses Grizzly as a (pluggable) runtime/container
  • Maven: Used as the standard Java build tool (along with its assembly plugin)

 

Sample application

 

By and large, the sample application remains the same and its details can be found here. Here is a quick summary

 

  • The components: a Kafka broker, a producer application and a consumer (Kafka Streams based) stream processing application
  • Changes (as compared to the setup here): the consumer application will now run on Oracle Container Cloud and the application instance discovery semantics (which were earlier Oracle Application Container Cloud specific) have now been implemented on top of Oracle Container Cloud service discovery capability

 

Architecture

 

To get an idea of the key concepts, I would recommend going through the High level architecture section of one of the previous blogs. Here is a diagram representing the overall runtime view of the system

 

 

Its key takeaways are as follows

 

  • Oracle Container Cloud will host our containerized stream processing (Kafka consumer) applications
  • We will use its elastic scalability features to spin additional containers on-demand to distribute the processing load
  • The contents of the topic partitions in Kafka broker (marked as P1, P2, P3) will be distributed among the application instances

 

Please note that having more application instances than topic partitions will mean that some of your instances will be idle (no processing). It is generally recommended to set the number of topic partitions to a relatively high number (e.g. 50) in order to reap maximum benefit from Kafka

 

Code

 

You can refer to this section in the previous blog for code related details (since the bulk of the logic is the same). The logic for the service discovery part (which is covered in depth below) is the major difference, since it relies on the Oracle Container Cloud KV store for runtime information. Here is the relevant snippet

 

/**
 * find yourself in the cloud!
 *
 * @return my port
 */
public static String getSelfPortForDiscovery() {
    String containerID = System.getProperty("CONTAINER_ID", "container_id_not_found");
    //String containerID = Optional.ofNullable(System.getenv("CONTAINER_ID")).orElse("container_id_not_found");
    LOGGER.log(Level.INFO, " containerID {0}", containerID);

    String sd_key_part = Optional.ofNullable(System.getenv("SELF_KEY")).orElse("sd_key_not_found");
    LOGGER.log(Level.INFO, " sd_key_part {0}", sd_key_part);

    String sd_key = sd_key_part + "/" + containerID;
    LOGGER.log(Level.INFO, " SD Key {0}", sd_key);

    String sd_base_url = "172.17.0.1:9109/api/kv";

    String fullSDUrl = "http://" + sd_base_url + "/" + sd_key + "?raw=true";
    LOGGER.log(Level.INFO, " fullSDUrl {0}", fullSDUrl);

    String hostPort = getRESTClient().target(fullSDUrl)
            .request()
            .get(String.class);

    LOGGER.log(Level.INFO, " hostPort {0}", hostPort);

    String port = hostPort.split(":")[1];
    LOGGER.log(Level.INFO, " Auto port {0}", port);

    return port;
}

Kafka setup

 

On Oracle Compute Cloud

 

You can refer to part I of the blog for the Apache Kafka related setup on Oracle Compute. The only additional step which needs to be executed is opening the port on which your Zookeeper process is listening (it's 2181 by default), as this is required by the Kafka Streams library configuration. While executing the steps from the Open Kafka listener port section, ensure that you include the Oracle Compute Cloud configuration for port 2181 (in addition to the Kafka broker port 9092)
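
To see where both endpoints end up, here is a minimal sketch of a Kafka Streams configuration that consumes them, assuming the KAFKA_BROKER and ZOOKEEPER environment variables introduced later in the service YAML; the actual configuration class in the application may differ.

import java.util.Properties;

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsConfig;

public class StreamsConfigSketch {

    public static Properties buildConfig() {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "cpu-streamz");
        //Kafka broker listener port (9092) opened above
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, System.getenv("KAFKA_BROKER"));
        //Zookeeper port (2181) opened above - required by the Kafka Streams 0.10.x configuration
        props.put(StreamsConfig.ZOOKEEPER_CONNECT_CONFIG, System.getenv("ZOOKEEPER"));
        props.put(StreamsConfig.KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass().getName());
        props.put(StreamsConfig.VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass().getName());
        return props;
    }
}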

 

On Oracle Container Cloud!

 

You can run a Kafka cluster on Oracle Container Cloud – check out this cool blog post !

 

The Event Hub Cloud is a new offering which provides Apache Kafka as a managed service in Oracle Cloud

 

Configuring our application to run on Oracle Container Cloud

 

Build the application

 

Execute mvn clean package to build the application JAR

 

Push to Docker Hub

 

Create a Docker Hub account if you don't have one already. To build and push the Docker image, execute the below commands

 

Please ensure that Docker engine is up and running

 

docker login
docker build -t <registry>/<image_name>:<tag> . e.g. docker build -t abhirockzz/kafka-streams:latest .
docker push <registry>/<image_name>:<tag> e.g. docker push abhirockzz/kafka-streams:latest

 

Check your Docker Hub account to confirm that the image exists there

 

 

Create the Service

 

To create a new Service, click on New Service in the Services menu

 

 

There are multiple ways in which you can configure your service – one of which is the traditional way of filling in each of the attributes in the Service Editor. You can also directly enter the Docker run command or a YAML configuration (similar to docker-compose) and Oracle Container Cloud will automatically populate the Service details. Let’s see the YAML based method in action

 

 

Populate the YAML editor (highlighted above) with the required configuration

 

version: 2
services:
  kstreams:
    image: "<docker hub image e.g. abhirockzz/kafka-streams>"
    environment:
      - "KAFKA_BROKER=<kafka broker host:port e.g. 149.007.42.007:9092>"
      - "ZOOKEEPER=<zookeeper host:port e.g. 149.007.42.007:2181>"
      - "SELF_KEY={{ sd_deployment_containers_path .ServiceID 8080 }}"
      - "OCCS_HOST={{hostip_for_interface .HostIPs \"public_ip\"}}"
      - "occs:scheduler=random"
    ports:
      - 8080/tcp

Please make sure that you substitute the host:port for your Kafka broker and Zookeeper server in the yaml configuration file

 

 

If you switch to the Builder view, notice that all the values have already been populated

 

 

All you need to do is fill out the Service Name and (optionally) choose the Scheduler and Availability properties and click Save to finish the Service creation

 

 

You should see your newly created service in the list of services in the Services menu

 

 

YAML configuration details

 

Here is an overview of the configuration parameters

 

  • Image: Name of the application image on Docker Hub
  • Environment variables
    • KAFKA_BROKER: the host and port information to connect to the Kafka broker

    • ZOOKEEPER: the host and port information to connect to the Zookeeper server (for the Kafka broker)
    • SELF_KEY & OCCS_HOST: these are defined as templates functions (more details on this in a moment) and help with dynamic container discovery
  • Ports: Our application is configured to run on port 8080 i.e. this is specified within the code itself. This is not a problem since we have configured a random (auto generated) port on the host (worker node of Oracle Container Cloud) to map to 8080

 

This is equivalent to using the -P option in the docker run command
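
Since the Ports bullet above notes that 8080 is specified within the code itself, here is a minimal, hypothetical sketch of what that in-code binding typically looks like for a Jersey/Grizzly application (the bootstrap class of the actual application is not shown here and the package name is a placeholder):

import java.net.URI;

import org.glassfish.grizzly.http.server.HttpServer;
import org.glassfish.jersey.grizzly2.httpserver.GrizzlyHttpServerFactory;
import org.glassfish.jersey.server.ResourceConfig;

public class Bootstrap {

    public static void main(String[] args) throws Exception {
        //the application listens on 8080 inside the container;
        //Oracle Container Cloud maps a random host port to it
        URI baseUri = URI.create("http://0.0.0.0:8080/");
        ResourceConfig config = new ResourceConfig().packages("com.example.metrics"); //placeholder package
        HttpServer server = GrizzlyHttpServerFactory.createHttpServer(baseUri, config);
        System.out.println("Jersey/Grizzly server started at " + baseUri);
        Thread.currentThread().join(); //keep the JVM alive
    }
}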

 

Template functions and Service discovery

 

We used the following template functions within the environment variables of our YAML file

 

  • SELF_KEY: {{ sd_deployment_containers_path .ServiceID 8080 }}
  • OCCS_HOST: {{hostip_for_interface .HostIPs \"public_ip\"}}

 

What are templates?

Template arguments provide access to deployment properties related to your services (or stacks) and template functions allow you to utilize them at runtime (in a programmatic fashion). More details in the documentation

 

Why do we need them?

Within our application, each Kafka Streams consumer application instance needs to register its co-ordinates in the Streams configuration (using the application.server parameter). This in turn allows Kafka Streams to store this as metadata which can then be used at runtime. Here are some excerpts from the code

 

Seeding discovery info

 

Map<String, Object> configurations = new HashMap<>();
String streamsAppServerConfig = GlobalAppState.getInstance().getHostPortInfo().host() + ":"
        + GlobalAppState.getInstance().getHostPortInfo().port();
configurations.put(StreamsConfig.APPLICATION_SERVER_CONFIG, streamsAppServerConfig);

 

Using the info

 

Collection<StreamsMetadata> storeMetadata = ks.allMetadataForStore(storeName);
StreamsMetadata metadataForMachine = ks.metadataForKey(storeName, machine, new StringSerializer());

 

How is this achieved?

 

For the application.server parameter, we need the host and port of the container instance in Oracle Container Cloud. The OCCS_HOST environment variable is populated automatically by the evaluation of the template function {{hostip_for_interface .HostIPs \"public_ip\"}} – this is the public IP of the Oracle Container Cloud host and takes care of the ‘host’ part of the application.server configuration. The port determination needs more work since we have configured port 8080 to be mapped to a random port on the Oracle Container Cloud host/worker node. The inbuilt service discovery mechanism within Oracle Container Cloud makes it possible to implement this (a sketch of how the two pieces come together follows).
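
As an illustration only (the exact helper names in the application may differ), the two pieces could be stitched together into the application.server value roughly as follows:

//assembles the application.server value for Kafka Streams
//OCCS_HOST is injected by the template function shown above; the port is
//resolved from the Oracle Container Cloud service discovery (KV) store
public static String buildApplicationServerConfig() {
    String host = Optional.ofNullable(System.getenv("OCCS_HOST")).orElse("localhost");
    String port = getSelfPortForDiscovery(); //auto bound host port, looked up via the KV store (see snippet above)
    return host + ":" + port;
}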

 

The internal service discovery database is exposed via a REST API for external clients, but it can also be accessed internally (by applications) at 172.17.0.1:9109. It exposes the host and port information (of a Docker container) in a key-value format

 

 

Key points to be noted in the above image

  • The part highlighted in red is the value which is the host and port information
  • The part highlighted in green is a portion of the key, which is the (dynamic) Docker container ID
  • The remaining portion of the key is also dynamic, but can be evaluated with the help of a template function

 

The trick is to build the above key and then use that to query the discovery service to get the value (host and port details). This is where the SELF_KEY environment variable comes into play. It uses the {{ sd_deployment_containers_path .ServiceID 8080 }} (where 8080 is the exposed and mapped application port) template function which gets evaluated at runtime. This gives us a part of the key i.e. (as per above example) apps/kstreams-kstreams-20170315-080407-8080/containers

 

The SELF_KEY environment variable is concatenated with the Docker container ID (which is a random UUID) evaluated during container startup within the init.sh script i.e. (in the above example) 3a52….. This completes our key using which we can query the service discovery store.

 

#!/bin/sh

# derive the Docker container ID from the cgroup information of this process
export CONTAINER_ID=$(cat /proc/self/cgroup | grep 'cpu:/' | sed -r 's/[0-9]+:cpu:.docker.//g')
echo $CONTAINER_ID
# pass the container ID to the application as a system property
java -jar -DCONTAINER_ID=$CONTAINER_ID occ-kafka-streams-1.0.jar

 

 

Both SELF_KEY and OCCS_HOST environment variables are used within the internal logic of the Kafka consumer application. The Oracle Container Cloud service discovery store is invoked (using its REST API) at container startup using the complete URL – http://172.17.0.1:9109/api/kv/<SELF_KEY>/<CONTAINER_ID>

 

See it in action via this code snippet

 

String containerID = System.getProperty("CONTAINER_ID", "container_id_not_found");
String sd_key_part = Optional.ofNullable(System.getenv("SELF_KEY")).orElse("sd_key_not_found");
String sd_key = sd_key_part + "/" + containerID;
String sd_base_url = "172.17.0.1:9109/api/kv";
String fullSDUrl = "http://" + sd_base_url + "/" + sd_key + "?raw=true";
String hostPort = getRESTClient().target(fullSDUrl).request().get(String.class);        
String port = hostPort.split(":")[1];

 

Initiate Deployment

 

Start Kafka broker first

 

 

Click on the Deploy button to start the deployment. Accept the defaults (for this time) and click Deploy

 

 

 

You will be led to the Deployments screen. Wait for a few seconds for the process to finish

 

 

 

Dive into the container details

 

Click on the Container Name (highlighted). You will be led to the container specific details page

 

 

Make a note of the following

 

Auto bound port

 

 

Environment variables (important ones have been highlighted)

 

Test

 

Assuming your Kafka broker is up and running and you have deployed the application successfully, execute the below mentioned steps to test drive your application

 

Build & start the producer application

 

 

mvn clean package //Initiate the Maven build
cd target //Browse to the build directory
java -jar -DKAFKA_CLUSTER=<kafka broker host:port> kafka-cpu-metrics-producer.jar //Start the application

 

The producer application will start sending data to the Kafka broker

 

Check the statistics

 

Cumulative moving average of all machines

 

Allow the producer to run for 30-40 seconds and then check the current statistics. Issue an HTTP GET request to your consumer application at http://OCCS_HOST:PORT/metrics, e.g. http://120.33.42.007:37155/metrics. You’ll see a response payload similar to what’s depicted below

 

the output below has been truncated for the sake of brevity

 

 

The information in the payload is as follows

  • cpu: the cumulative average of the CPU usage of a machine
  • machine: the machine ID
  • source: this has been purposefully added as a diagnostic information to see which node (Docker container in Oracle Container Cloud) is handling the calculation for a specific machine (this is subject to change as your application scales up/down)

 

Cumulative moving average of a specific machine

 

Issue an HTTP GET request to your consumer application at http://OCCS_HOST:PORT/metrics/<machine-ID>, e.g. http://120.33.42.007:37155/metrics/machine-1
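
If you prefer to test this from Java rather than a browser or curl, here is a minimal sketch using the JAX-RS client API (the host, auto bound port and machine ID below are placeholders):

import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;

public class MetricsClient {

    public static void main(String[] args) {
        //replace with your Oracle Container Cloud host, auto bound port and machine ID
        String url = "http://120.33.42.007:37155/metrics/machine-1";
        Client client = ClientBuilder.newClient();
        String payload = client.target(url).request("application/json").get(String.class);
        System.out.println(payload);
        client.close();
    }
}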

 

 

 

Scale up… and down

 

Oracle Container Cloud enables your application to remain elastic i.e. scale out or scale in on-demand. The process is simple – let’s see how it works for this application. Choose your deployment from the Deployments menu and click Change Scaling. We are bumping up to 3 instances now

 

 

After some time, you’ll have three containers running separate instances of your Kafka Streams application

 

 

 

The CPU metrics computation task will now be shared amongst three nodes. You can check the logs of the old and new containers to confirm this.

 

 

In the old container, Kafka streams will close the existing processing tasks in order to re-distribute them to the new nodes. On checking the logs, you will see something similar to the below output

 

 

 

In the new containers, you will see Processor Initialized output, as a result of tasks being handed to these nodes. Now you can check the metrics using any of the three instances (check the auto bound port for the new containers). You can spot the exact node which has calculated the metric (notice the different port number). See snippet below

 

 

 

Scale down: You can scale down the number of instances using the same set of steps and Kafka Streams will take care of re-balancing the tasks among the remaining nodes

 

Note on Dynamic load balancing

 

In a production setup, one would want to load balance the consumer microservices by using haproxy, nginx etc. (in this example one had to inspect each application instance by using the auto bound port information). This might be covered in a future blog post. Oracle Container Cloud provides you the ability to easily build such a coordinated set of services using Stacks and ships with some example stacks for reference purposes

 

That’s all for this blog post.... Cheers!

 

The views expressed in this post are my own and do not necessarily reflect the views of Oracle.

This blog shows you how you can use Payara Micro to build a Java EE based microservice. It will leverage the following services from the Oracle Cloud (PaaS) stack

 

  • Developer Cloud service: to host the code (Git repo), provide Continuous Integration & Continuous Deployment capabilities (thanks to its integration with other Oracle PaaS services)
  • Application Container Cloud service: scalable aPaaS for running our Java EE microservice

 

 

Overview

 

Payara Micro?

Payara Micro is a Java EE based solution for building microservice style applications. Let’s expand on this a little bit

 

  • Java EE: Payara Micro supports the Java EE Web Profile standard along with additional support for other specifications which are not a part of the Web Profile (e.g. Batch, Concurrency Utilities etc.)
  • It’s a library: Available as a JAR file which encapsulates all these features

 

Development model

Payara Micro offers you the choice of multiple development styles…

 

  • WAR: package your Java EE application as a WAR file and launch it with Payara Micro using java -jar payara-micro-<version>.jar --deploy mystocks.war
  • Embedded mode: because it’s a library, it can be embedded within your Java applications using its APIs
  • Uber JAR: Use the Payara Micro Maven support along with the exec plugin to package your WAR along with the Payara Micro library as a fat JAR

 

We will use the fat JAR technique in the sample application presented in the blog

 

Benefits

 

Some of the potential benefits are as follows

 

  • Microservices friendly: gives you the power of Java EE as a library, which can be easily used within applications, packaged in flexible manner (WAR + JAR or just a fat JAR) and run in multiple environments such as PaaS , container based platforms
  • Leverage Java EE skill set: continue using your expertise on Java EE specifications like JAX-RS, JPA, EJB, CDI etc.

 

About the sample application

 

It is a vanilla Java EE application which uses the following APIs – JAX-RS, EJB, CDI and WebSocket. It helps keep track of stock prices of NASDAQ scrips.

 

  • Users can check the stock price of a scrip (listed on NASDAQ) using a simple REST interface
  • Real time price tracking is also available – but this is only available for Oracle (ORCL)

 

Here is a high level diagram and some background context

 

  • an EJB scheduler periodically fetches the (ORCL) stock price, fires CDI events which are received by the WebSocket component (marked as a CDI event observer), and connected clients are updated with the latest price
  • the JAX-RS REST endpoint is used to fetch price for any stock on demand - this is a typical request-response based HTTP interaction as opposed to the bi-directional, full-duplex WebSocket interaction

 

 

 

 

Code

 

Let's briefly look at the relevant portions of the code (import statements omitted for brevity)

 

RealTimeStockTicker.java

 

@ServerEndpoint("/rt/stocks")
public class RealTimeStockTicker {


    //stores Session (s) a.k.a connected clients
    private static final List<Session> CLIENTS = new ArrayList<>();


    /**
     * Connection callback method. Stores connected client info
     *
     * @param s WebSocket session
     */
    @OnOpen
    public void open(Session s) {
        CLIENTS.add(s);
        Logger.getLogger(RealTimeStockTicker.class.getName()).log(Level.INFO, "Client connected -- {0}", s.getId());
    }


    /**
     * pushes stock prices asynchronously to ALL connected clients
     *
     * @param tickTock the stock price
     */
    public void broadcast(@Observes @StockDataEventQualifier String tickTock) {
        Logger.getLogger(RealTimeStockTicker.class.getName()).log(Level.INFO, "Event for Price {0}", tickTock);
        for (final Session s : CLIENTS) {
            if (s != null && s.isOpen()) {
                /**
                 * Asynchronous push
                 */
                s.getAsyncRemote().sendText(tickTock, new SendHandler() {
                    @Override
                    public void onResult(SendResult result) {
                        if (result.isOK()) {
                            Logger.getLogger(RealTimeStockTicker.class.getName()).log(Level.INFO, "Price sent to client {0}", s.getId());
                        } else {
                            Logger.getLogger(RealTimeStockTicker.class.getName()).log(Level.SEVERE, "Could not send price update to client " + s.getId(),
                                    result.getException());
                        }
                    }
                });
            }


        }


    }


    /**
     * Disconnection callback. Removes client (Session object) from internal
     * data store
     *
     * @param s WebSocket session
     */
    @OnClose
    public void close(Session s) {
        CLIENTS.remove(s);
        Logger.getLogger(RealTimeStockTicker.class.getName()).log(Level.INFO, "Client disconnected -- {0}", s.getId());
    }


}

 

 

StockDataEventQualifier.java

 

/**
 * Custom CDI qualifier to stamp stock price CDI events
 * 
 */
@Qualifier
@Retention(RUNTIME)
@Target({METHOD, FIELD, PARAMETER, TYPE})
public @interface StockDataEventQualifier {
}

 

 

StockPriceScheduler.java

 

/**
 * Periodically polls the Google Finance REST endpoint using the JAX-RS client
 * API to pull stock prices and pushes them to connected WebSocket clients using
 * CDI events
 *
 */
@Singleton
@Startup
public class StockPriceScheduler {


    @Resource
    private TimerService ts;
    private Timer timer;


    /**
     * Sets up the EJB timer (polling job)
     */
    @PostConstruct
    public void init() {
        /**
         * fires 5 secs after creation
         * interval = 5 secs
         * non-persistent
         * no-additional (custom) info
         */
        timer = ts.createIntervalTimer(5000, 5000, new TimerConfig(null, false)); //trigger every 5 seconds
        Logger.getLogger(StockPriceScheduler.class.getName()).log(Level.INFO, "Timer initiated");
    }


    @Inject
    @StockDataEventQualifier
    private Event<String> msgEvent;


    /**
     * Implements the logic. Invoked by the container as scheduled
     *
     * @param timer the EJB Timer object
     */
    @Timeout
    public void timeout(Timer timer) {
        Logger.getLogger(StockPriceScheduler.class.getName()).log(Level.INFO, "Timer fired at {0}", new Date());
        /**
         * Invoked asynchronously
         */
        Future<String> tickFuture = ClientBuilder.newClient().
                target("https://www.google.com/finance/info?q=NASDAQ:ORCL").
                request().buildGet().submit(String.class);


        /**
         * Extracting result immediately with a timeout (3 seconds) limit. This
         * is a workaround since we cannot impose timeouts for synchronous
         * invocations
         */
        String tick = null;
        try {
            tick = tickFuture.get(3, TimeUnit.SECONDS);
        } catch (InterruptedException | ExecutionException | TimeoutException ex) {
            Logger.getLogger(StockPriceScheduler.class.getName()).log(Level.INFO, "GET timed out. Next iteration due on - {0}", timer.getNextTimeout());
            return;
        }
        
        if (tick != null) {
            /**
             * cleaning the JSON payload
             */
            tick = tick.replace("// [", "");
            tick = tick.replace("]", "");


            msgEvent.fire(StockDataParser.parse(tick));
        }


    }


    /**
     * purges the timer
     */
    @PreDestroy
    public void close() {
        timer.cancel();
        Logger.getLogger(StockPriceScheduler.class.getName()).log(Level.INFO, "Application shutting down. Timer will be purged");
    }
}

 

 

RESTConfig.java

 

/**
 * JAX-RS configuration class
 * 
 */
@ApplicationPath("api")
public class RESTConfig extends Application{
    
}
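
The JAX-RS resource that serves the on-demand price lookup is not reproduced in this excerpt. As a rough, hypothetical sketch (class name and path are placeholders, imports omitted as with the other snippets), it could look something like this, reusing the StockDataParser shown next:

@Path("stocks")
public class StockResource {

    /**
     * Fetches the current price of the given ticker on demand
     */
    @GET
    @Path("{ticker}")
    @Produces(MediaType.TEXT_PLAIN)
    public String getPrice(@PathParam("ticker") String ticker) {
        //query the same Google Finance endpoint used by the scheduler
        String raw = ClientBuilder.newClient()
                .target("https://www.google.com/finance/info?q=NASDAQ:" + ticker)
                .request()
                .get(String.class);
        //strip the "// [" prefix and "]" suffix before parsing (same cleanup as the scheduler)
        raw = raw.replace("// [", "").replace("]", "");
        return StockDataParser.parse(raw);
    }
}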

 

 

StockDataParser.java

 

/**
 * A simple utility class which leverages the JSON Processing (JSON-P) API to filter the JSON 
 * payload obtained from the Google Finance REST endpoint and returns useful data in a custom format
 * 
 */
public class StockDataParser {
    
    public static String parse(String data){
        
        JsonReader reader = Json.createReader(new StringReader(data));
        JsonObject priceJsonObj = reader.readObject();
        String name = priceJsonObj.getJsonString("t").getString();
        String price = priceJsonObj.getJsonString("l_cur").getString();
        String time = priceJsonObj.getJsonString("lt_dts").getString();
        


        return (String.format("Price for %s on %s = %s USD", name, time, price));
    }
}
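
For illustration, assuming a (hypothetical, simplified) cleaned-up payload containing just the three fields the parser reads, a call would look like this:

//hypothetical input - the real Google Finance payload contains more fields
String json = "{\"t\":\"ORCL\",\"l_cur\":\"43.25\",\"lt_dts\":\"2017-03-15T16:00:02Z\"}";
String summary = StockDataParser.parse(json);
//summary -> "Price for ORCL on 2017-03-15T16:00:02Z = 43.25 USD"
System.out.println(summary);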

 

A note on packaging

As mentioned earlier, from a development perspective, it is a typical WAR based Java EE application which is packaged as a fat JAR along with the Payara Micro container

 

Notice how the container is being packaged with the application rather than the application being deployed into a container

The Java EE APIs are only needed for compilation (scope = provided) since they are present in the Payara Micro library

 

<dependency>
 <groupId>javax</groupId>
 <artifactId>javaee-api</artifactId>
 <version>7.0</version>
 <scope>provided</scope>
</dependency>

 

 

Using the Maven plugin to produce a fat JAR

 

<plugin>
    <groupId>org.codehaus.mojo</groupId>
    <artifactId>exec-maven-plugin</artifactId>
    <version>1.5.0</version>
    <dependencies>
        <dependency>
            <groupId>fish.payara.extras</groupId>
            <artifactId>payara-micro</artifactId>
            <version>4.1.1.164</version>
        </dependency>
    </dependencies>
    <executions>
        <execution>
            <id>payara-uber-jar</id>
            <phase>package</phase>
            <goals>
                <goal>java</goal>
            </goals>
            <configuration>
                <mainClass>fish.payara.micro.PayaraMicro</mainClass>
                <arguments>
                    <argument>--deploy</argument>
                    <argument>${basedir}/target/${project.build.finalName}.war</argument>
                    <argument>--outputUberJar</argument>                                                  
                    <argument>${basedir}/target/${project.build.finalName}.jar</argument>
                </arguments>
                <includeProjectDependencies>false</includeProjectDependencies>
                <includePluginDependencies>true</includePluginDependencies>
                <executableDependency>
                    <groupId>fish.payara.extras</groupId>
                    <artifactId>payara-micro</artifactId>
                </executableDependency>
            </configuration>
        </execution>
    </executions>
</plugin>

 

 

Setting up Continuous Integration & Deployment

The below sections deal with the configurations to be made within Oracle Developer Cloud service

 

Project & code repository creation

Please refer to the Project & code repository creation section in the Tracking JUnit test results in Developer Cloud service blog or check the product documentation for more details

 

Configure source code in Git repository

Push the project from your local system to the Developer Cloud Git repo you just created. We will do this via the command line and all you need is the Git client installed on your local machine. You can also use any other Git tool of your choice

 

cd <project_folder> 
git init  
git remote add origin <developer_cloud_git_repo>  
//e.g. https://john.doe@developer.us.oraclecloud.com/developer007-foodomain/s/developer007-foodomain-project_2009/scm/sample.git
git add .  
git commit -m "first commit"  
git push -u origin master  //Please enter the password for your Oracle Developer Cloud account when prompted

 

Configure build

 

Create a New Job

 

 

Select JDK

 

 

 

Continuous Integration (CI)

 

Choose Git repository

 

 

 

Set the build trigger - this build job will be triggered in response to updates within the Git repository (e.g. via git push)

 

 

Add build steps

 

  • A Maven Build step – to produce the WAR and the fat JAR
  • An Execute Shell step – package up the application JAR along with the required deployment descriptor (manifest.json required by Application Container cloud)

 

 

 

 

Here is the command for your reference

 

zip -j accs-payara-micro.zip target/mystocks.jar manifest.json

 

The manifest.json is as follows

 

{
    "runtime": {
        "majorVersion": "8"
    },
    "command": "java -jar mystocks.jar --port $PORT --noCluster",
    "release": {
        "build": "23022017.1202",
        "commit": "007",
        "version": "0.0.1"
    },
    "notes": "Java EE on ACC with Payara Micro"
}

 

Activate a post build action to archive deployable zip file

 

 

 

Execute Build

Before configuring deployment, we need to trigger the build in order to produce the artifacts which can be referenced by the deployment configuration

 

 

 

After the build is complete, you can

  • Check the build logs
  • Confirm archived artifacts

 

Logs

 

 

Artifacts

 

 

 

Continuous Deployment (CD) to Application Container Cloud

 

Create a New Configuration for deployment

 

 

 

  • Enter the required details and configure the Deployment Target
  • Configure the Application Container Cloud instance
  • Configure Automatic deployment option on the final confirmation page

 

You’ll end up with the below configuration

 

 

Confirmation screen

 

 

 

Check your application in Application Container Cloud

 

 

 

Test the CI/CD flow

 

Make some code changes and push them to the Developer Cloud service Git repo. This should

 

  • Automatically trigger the build, which once successful will
  • Automatically trigger the deployment process, and
  • Redeploy the new application version to Application Container Cloud

 

Test the application

 

 

I would recommend using the client which can be installed into Chrome browser as a plugin – Simple WebSocket Client
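
If you would rather test from Java, here is a minimal sketch of a standalone client using the standard javax.websocket client API (it assumes a WebSocket client implementation such as Tyrus is on the classpath, and the application URL is a placeholder):

import java.net.URI;

import javax.websocket.ClientEndpoint;
import javax.websocket.ContainerProvider;
import javax.websocket.OnMessage;
import javax.websocket.Session;
import javax.websocket.WebSocketContainer;

@ClientEndpoint
public class StockTickerClient {

    @OnMessage
    public void onMessage(String price) {
        //prints every price pushed by the /rt/stocks endpoint
        System.out.println("Received: " + price);
    }

    public static void main(String[] args) throws Exception {
        WebSocketContainer container = ContainerProvider.getWebSocketContainer();
        //placeholder - replace with your Application Container Cloud application URL
        Session session = container.connectToServer(StockTickerClient.class,
                URI.create("wss://my-payara-app-url/rt/stocks"));
        Thread.sleep(60000); //listen for a minute
        session.close();
    }
}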

 

That's all for this blog post..

 

The views expressed in this post are my own and do not necessarily reflect the views of Oracle.

Part I of the blog demonstrated the development and deployment of individual microservices (on Oracle Application Container Cloud) and how they are loosely coupled using the Apache Kafka message hub (set up on Oracle Compute Cloud). This (second) part will continue building on the previous one and, with the help of an application, it will explore microservice based stream processing and dive into the following areas

 

  • Kafka Streams: A stream processing library
  • Scalability: enable your application to handle increased demands
  • Handling state: this is a hard problem to solve when the application needs to be horizontally scalable

 

 

Technical Components

 

Open source technologies

The following open source components were used to build the sample application

 

  • Apache Kafka: A scalable, distributed pub-sub message hub
  • Kafka Streams: A library for building stream processing applications on top of Apache Kafka
  • Jersey: Used to implement REST and SSE services. Uses Grizzly as a (pluggable) runtime/container
  • Maven: Used as the standard Java build tool (along with its assembly plugin)

 

Oracle Cloud

The following Oracle Cloud services have been leveraged

 

  • Application Container Cloud: Serves as a scalable platform for running our stream processing microservices
  • Compute Cloud: Hosts the Kafka cluster (broker)

 

Note: In addition to compute based (IaaS) Kafka hosting, Oracle Cloud now offers Event Hub Cloud. This is a compelling offering which provides Apache Kafka as a fully managed service along with other value added capabilities.

 

Hello Kafka Streams!

In simple words, Kafka Streams is a library which you can include in your Java based applications to build stream processing applications on top of Apache Kafka. Other distributed computing platforms like Apache Spark, Apache Storm etc. are widely used in the big data stream processing world, but Kafka Streams brings some unique propositions in this area

 

Kafka Streams: what & why

 

  • Built on top of Kafka – leverages its scalable and fault tolerant capabilities: If you use Kafka in your ecosystem, it makes perfect sense to leverage Kafka Streams to churn streaming data to/from the Kafka topics
  • Microservices friendly: It’s a lightweight library which you use within your Java application. This means that you can use it to build microservices style stream processing applications
  • Flexible deployment & elastic in nature: You’re not restricted to a specific deployment model (e.g. cluster-based). The application can be packaged and deployed in a flexible manner and scaled up and down easily
  • For fast data: Harness the power of Kafka Streams to crunch high volume data in real time systems – it does not need to be at big data scale
  • Support for stateful processing: Helps manage local application state in a fault tolerant & scalable manner

 

 

Sample application: what’s new

 

In part I, the setup was as follows

  • A Kafka broker serving as the messaging hub
  • Producer application (on Application Container Cloud) pushing CPU usage metrics to Kafka
  • Consumer application (on Application Container Cloud) consuming those metrics from Kafka and exposing them as a real time feed (using Server Sent Events)

 

Some parts of the sample have been modified to demonstrate some of the key concepts. Here is the gist

 

  • Consumer API: The new consumer application leverages the Kafka Streams API on Application Container Cloud as compared to the traditional (polling based) Kafka Consumer client API (used in part I)
  • Consumer topology: We will deploy multiple instances of the Consumer application to scale our processing logic
  • Nature of metrics feed: The cumulative moving average of the CPU metrics per machine is calculated as opposed to the exact metric provided by the SSE feed in part I
  • Accessing the CPU metrics feed: The consumer application makes the CPU usage metrics available in the form of a REST API as compared to the SSE based implementation in part I

 

High level architecture

The basic architecture still remains the same i.e. microservices decoupled using a messaging layer

 

 

 

As mentioned above, the consumer application has undergone changes and is now based on the Kafka Streams API. We could have continued to use the traditional poll based Kafka Consumer client API as in part I, but the Kafka Streams API was chosen for a few reasons. Let’s go through them in detail and see how it fits in the context of the overall solution. At this point, ask yourself the following questions

 

  • How would you scale your consumer application?
  • How would you handle intermediate state (required for moving average calculation) spread across individual instances of your scaled out application?

 

Scalability

With Application Container Cloud you can spawn multiple instances of your stream processing application with ease (for more details, refer to the documentation)

 

But how does it help?

The sample application models CPU metrics being continuously sent by the producer application to a Kafka broker – for demonstration purposes, the number of machines (whose CPU metrics are being sent) has been limited to ten. But how would you handle large scale data

 

  • When the number of machines increases to scale of thousands?
  • Perhaps you want to factor in additional attributes (in addition to just the cpu usage)?
  • Maybe you want to execute all this at data-center scale?

 

The answer lies in distributing your computation across several processes and this is where horizontal scalability plays a key role.

When the CPU metrics are sent to a topic in Kafka, they are distributed to different partitions (using a default consistent hashing algorithm) – this is similar to sharding. This helps from multiple perspectives

  • When Kafka itself is scaled out (broker nodes are added) – individual partitions are replicated over these nodes for fault tolerance and high performance
  • From a consumer standpoint - multiple consumers (in the same group) automatically distribute the load among themselves

 

In the case of our example, each stream processing application instance is nothing but a (specialized) form of Kafka consumer and takes up a non-overlapping set of partitions in Kafka for processing. Consider a setup where 2 instances are processing data for 10 machines spread over 4 partitions in the Kafka broker. Here is a pictorial representation

 

 

 

Managing application state (at scale)

The processing logic in the sample application is not stateless i.e. it depends on previous state to calculate its current state. In the context of this application, state is

 

  • the cumulative moving average of a continuous stream of CPU metrics,
  • being calculated in parallel across a distributed set of instances, and
  • constantly changing i.e. the cumulative moving average of the machines handled by each application instance is getting updated with the latest results

 

If you confine the processing logic to a single node, the problem of localized state co-ordination would not have existed i.e. local state = global state. But this luxury is not available in a distributed processing system. Here is how our application handles it (thanks to Kafka Streams)

 

  • The local state store (a KV store) containing the machine to (cumulative moving average) CPU usage metric is sent to a dedicated topic in Kafka e.g. the in-memory-avg-store in our application (named cpu-streamz) will have a corresponding topic cpu-streamz-in-memory-avg-store-changelog in Kafka
  • This topic is called a changelog since it is a compacted one i.e. only the latest key-value pair is retained by Kafka. This is meant to achieve the goal (distributed state management) in the cheapest possible manner
  • During scale up – Kafka assigns some partitions to the new instance (see above example) and the state for those partitions (which was previously stored in another instance) is replayed from the Kafka changelog topic to build the state store for this new instance
  • When an instance crashes or is stopped – the partitions being handled by that instance are handed off to some other node and the state of those partitions (stored in the Kafka changelog topic) is written to the local state store of the existing node to which the work was allotted

 

All in all, this ensures scalable and fault tolerant state management
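
For context, a local state store like the in-memory-avg-store mentioned above could be declared with the 0.10.x Kafka Streams API roughly as follows (a sketch only; the actual store definitions in the application may differ, and change-logging to the backing Kafka topic is enabled by default):

//org.apache.kafka.streams.state.Stores and StateStoreSupplier (Kafka Streams 0.10.x)
//machine ID -> cumulative moving average of its CPU usage
StateStoreSupplier avgStore = Stores.create("in-memory-avg-store")
        .withStringKeys()
        .withDoubleValues()
        .inMemory()
        .build();
//the supplier is then attached to the processor via TopologyBuilder#addStateStore(...)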

 

Exposing application state

As mentioned above, the cumulative moving averages of the CPU metrics of each machine are calculated across multiple nodes in parallel. In order to find out the global state of the system i.e. the current average of all (or specific) machines, the local state stores need to be queried. The application provides a REST API for this

 

 

 

 

More details in the Testing section on how to see this in action

 

It's important to make note of these points with regards to the implementation of the REST API, which in turn lets us get what we want - real time insight into the moving averages of the CPU usage

 

  • Topology agnostic: Use a single access URL provided by Application Container Cloud (as highlighted in the diagram above). As a client, you do not have to be aware of individual application instances
  • Robust & flexible: Instances can be added or removed on the fly but the overall business logic (in this case it is calculation of the cumulative moving average of a stream of CPU metrics) will remain fault tolerant and adjust to the elastic topology changes

 

This is made possible by a combination of the following

 

  • Automatic load balancing: Application Container cloud load balances requests among multiple instances of your applications
  • Clustered setup: from an internal implementation perspective, your application instances can detect each other. For this to work, the isClustered attribute in the manifest.json is set to true and custom logic is implemented within the solution in order for the instance specific information to be discovered and used appropriately. However, this is an internal implementation detail and the user is not affected by it

Please look at the Code snippets section for some more details

  • Interactive queries: this capability in Kafka Streams enables external clients to introspect the state store of a stream processing application instance via a host-port configuration enabled within the app configuration

 

An in-depth discussion of Kafka Streams is not possible in a single blog. The above sections are meant to provide just enough background which is (hopefully) sufficient from the point of view of this blog post. Readers are encouraged to spend some time going through the official documentation and come back to this blog to continue hacking on the sample

 

Setup

You can refer to part I of the blog for the Apache Kafka related setup. The only additional step which needs to be executed is exposing the port on which your Zookeeper process is listening (it's 2181 by default), as this is required by the Kafka Streams library configuration. While executing the steps from the Open Kafka listener port section, ensure that you include the Oracle Compute Cloud configuration for port 2181 (in addition to the Kafka broker port 9092)

 

Code

Maven dependencies

As mentioned earlier, from an application development standpoint, Kafka Streams is just a library. This is evident in the pom.xml

 

<dependency>
     <groupId>org.apache.kafka</groupId>
     <artifactId>kafka-streams</artifactId>
     <version>0.10.1.1</version>
</dependency>

 

The project also uses the appropriate Jersey libraries along with the Maven shade and assembly plugins to package the application  

Overview

The producer microservice remains the same and you can refer part I for the details. Let’s look at the revamped Consumer stream processing microservice

 

  • KafkaStreamsAppBootstrap: Entry point for the application. Kicks off the Grizzly container and the Kafka Streams processing pipeline
  • CPUMetricStreamHandler: Implements the processing pipeline logic and handles K-Stream configuration and the topology creation as well
  • MetricsResource: Exposes multiple REST endpoints for fetching CPU moving average metrics
  • Metric, Metrics: POJOs (JAXB decorated) to represent metric data. They are exchanged as JSON/XML payloads
  • GlobalAppState, Utils: Common utility classes

 

Now that you have a fair idea of what's going on within the application and an overview of the classes involved, it makes sense to peek at some of the relevant sections of the code

 

State store

 

    public static class CPUCumulativeAverageProcessor implements Processor<String, String> {
     ...................
        @Override
        public void init(ProcessorContext pc) {
            this.pc = pc;
            this.pc.schedule(12000); //invoke punctuate every 12 seconds
            this.machineToAvgCPUUsageStore = (KeyValueStore<String, Double>) pc.getStateStore(AVG_STORE_NAME);
            this.machineToNumOfRecordsReadStore = (KeyValueStore<String, Integer>) pc.getStateStore(NUM_RECORDS_STORE_NAME);
        }
     ...............

 

Cumulative Moving Average (CMA) calculation

 

..........
@Override
public void process(String machineID, String currentCPUUsage) {

            //turn each String value (cpu usage) to Double
            Double currentCPUUsageD = Double.parseDouble(currentCPUUsage);
            Integer recordsReadSoFar = machineToNumOfRecordsReadStore.get(machineID);
            Double latestCumulativeAvg = null;

            if (recordsReadSoFar == null) {
                PROC_LOGGER.log(Level.INFO, "First record for machine {0}", machineID);
                machineToNumOfRecordsReadStore.put(machineID, 1);
                latestCumulativeAvg = currentCPUUsageD;
            } else {
                Double cumulativeAvgSoFar = machineToAvgCPUUsageStore.get(machineID);
                PROC_LOGGER.log(Level.INFO, "CMA so far {0}", cumulativeAvgSoFar);

                //refer https://en.wikipedia.org/wiki/Moving_average#Cumulative_moving_average for details
                latestCumulativeAvg = (currentCPUUsageD + (recordsReadSoFar * cumulativeAvgSoFar)) / (recordsReadSoFar + 1);
                recordsReadSoFar = recordsReadSoFar + 1;
                machineToNumOfRecordsReadStore.put(machineID, recordsReadSoFar);
            }

            machineToAvgCPUUsageStore.put(machineID, latestCumulativeAvg); //store latest CMA in local state store
..........
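
To make the arithmetic concrete: if a machine has two readings recorded so far with a cumulative average of 15.0 (say 10.0 and 20.0) and a new reading of 30.0 arrives, the new cumulative moving average is (30.0 + 2 * 15.0) / (2 + 1) = 20.0, which is what the code above writes back to the state store along with the incremented record count.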

 

 

Metrics POJO

 

@XmlRootElement
@XmlAccessorType(XmlAccessType.FIELD)
public class Metrics {
    private final List<Metric> metrics;

    public Metrics() {
        metrics = new ArrayList<>();
    }

    public Metrics add(String source, String machine, String cpu) {
        metrics.add(new Metric(source, machine, cpu));
        return this;
    }

    public Metrics add(Metrics anotherMetrics) {
        anotherMetrics.metrics.forEach((metric) -> {
            metrics.add(metric);
        });
        return this;
    }

    @Override
    public String toString() {
        return "Metrics{" + "metrics=" + metrics + '}';
    }
    
    public static Metrics EMPTY(){
        return new Metrics();
    }
    
}

 

 

Exposing REST API for state

 

@GET
@Produces({MediaType.APPLICATION_JSON, MediaType.APPLICATION_XML})
public Response all_metrics() throws Exception {
        Response response = null;
        try {
            KafkaStreams ks = GlobalAppState.getInstance().getKafkaStreams();
            HostInfo thisInstance = GlobalAppState.getInstance().getHostPortInfo();
            
          Metrics metrics = getLocalMetrics();

            ks.allMetadataForStore(storeName)
                    .stream()
                    .filter(sm -> !(sm.host().equals(thisInstance.host()) && sm.port() == thisInstance.port())) //only query remote node stores
                    .forEach(new Consumer<StreamsMetadata>() {
                        @Override
                        public void accept(StreamsMetadata t) {
                            String url = "http://" + t.host() + ":" + t.port() + "/metrics/remote";
                            //LOGGER.log(Level.INFO, "Fetching remote store at {0}", url);
                            Metrics remoteMetrics = Utils.getRemoteStoreState(url, 2, TimeUnit.SECONDS);
                            metrics.add(remoteMetrics);
                            LOGGER.log(Level.INFO, "Metric from remote store at {0} == {1}", new Object[]{url, remoteMetrics});
                        }
                    });

            response = Response.ok(metrics).build();
        } catch (Exception e) {
            LOGGER.log(Level.SEVERE, "Error - {0}", e.getMessage());
        }
        return response;
}

 

Host discovery

 

    public static String getHostIPForDiscovery() {
    String host = null;
        try {

            String hostname = Optional.ofNullable(System.getenv("APP_NAME")).orElse("streams");

            InetAddress inetAddress = Address.getByName(hostname);
            host = inetAddress.getHostAddress();

        } catch (UnknownHostException ex) {
            host = "localhost";
        }
        return host;
    }

Deployment to Application Container Cloud

 

Now that you have a fair idea of the application, it’s time to look at the build, packaging & deployment

 

Update deployment descriptors

 

The metadata files for the producer application are the same. Please refer to part I for details on how to update them. The steps below are relevant to the (new) stream processing consumer microservice.

manifest.json: You can use this file in its original state

 

{
    "runtime": {
        "majorVersion": "8"
    },
    "command": "java -jar acc-kafka-streams-1.0.jar",
  "isClustered": "true"
}

 

deployment.json

 

It contains the environment variables required by the application at runtime. The values are left as placeholders for you to fill in prior to deployment.

 

{
"instances": "2",
  "environment": {
  "APP_NAME":"kstreams",
  "KAFKA_BROKER":"<as-configured-in-kafka-server-properties>",
  "ZOOKEEPER":"<zookeeper-host:port>"
  }
}

 

Here is an example

 

{
"instances": "2",
  "environment": {
  "APP_NAME":"kstreams",
  "KAFKA_BROKER":"oc-140-44-88-200.compute.oraclecloud.com:9092",
  "ZOOKEEPER":"10.190.210.199:2181"
  }
}

 

You need to be careful about the following

 

  • The value of the KAFKA_BROKER attribute should be the same as (Oracle Compute Cloud instance public DNS) the one you configured in the advertised.listeners attribute of the Kafka server.properties file
  • The APP_NAME attribute should be the same as the one you use while deploying your application using the Application Container Cloud REST API

Please refer to the following documentation for more details on metadata files

 

 

Build

 

Initiate the build process to produce the deployable artifact (a ZIP file)

 

//Producer application

cd <code_dir>/producer //maven project location
mvn clean package

//Consumer application

cd <code_dir>/consumer //maven project location
mvn clean package

 

The output of the build process is a ZIP file for each microservice: producer (accs-kafka-producer-1.0-dist.zip) and consumer (acc-kafka-streams-1.0-dist.zip)

 

Upload & deploy

You would need to upload the ZIP file to Oracle Storage Cloud and then reference it in the subsequent steps. Here are the required cURL commands

 

Create a container in Oracle Storage cloud (if it doesn't already exist)  
  
curl -i -X PUT -u <USER_ID>:<USER_PASSWORD> <STORAGE_CLOUD_CONTAINER_URL>  
e.g. curl -X PUT -u jdoe:foobar "https://domain007.storage.oraclecloud.com/v1/Storage-domain007/accs-kstreams-consumer/"  
  
Upload your zip file into the container (zip file is nothing but a Storage Cloud object)  
  
curl -X PUT -u <USER_ID>:<USER_PASSWORD> <STORAGE_CLOUD_CONTAINER_URL> -T <zip_file> "<storage_cloud_object_URL>" //template  
e.g. curl -X PUT -u jdoe:foobar -T acc-kafka-streams-1.0-dist.zip "https://domain007.storage.oraclecloud.com/v1/Storage-domain007/accs-kstreams-consumer/accs-kafka-consumer.zip"

 

 

Repeat the same for the producer microservice

 

You can now deploy your application to Application Container Cloud using its REST API. The Oracle Storage cloud path (used above) will be referenced while using the Application Container Cloud REST API (used for deployment). Here is a sample cURL command which makes use of the REST API

 

curl -X POST -u joe@example.com:password \    
-H "X-ID-TENANT-NAME:domain007" \    
-H "Content-Type: multipart/form-data" -F "name=kstreams" \    
-F "runtime=java" -F "subscription=Monthly" \    
-F "deployment=@deployment.json" \    
-F "archiveURL=accs-kstreams-consumer/accs-kafka-consumer.zip" \    
-F "notes=notes for deployment" \    
https://apaas.oraclecloud.com/paas/service/apaas/api/v1.1/apps/domain007  

 

Note

  • the name attribute used in the curl command should be the same as the APP_NAME attribute used in deployment.json
  • Repeat the same for the producer microservice

 

Post deployment

(the consumer application has been highlighted below)

 

The Applications console

 

 

The Overview sub-section

 

 

 

The Deployments sub-section

 

 

 

Testing

Assuming your Kafka broker is up and running and you have deployed the application successfully, execute the below mentioned steps to test drive your application

 

Start the producer

Trigger your producer application by issuing an HTTP GET to https://my-producer-app-url/producer, e.g. https://accs-kafka-producer-domain007.apaas.us.oraclecloud.com/producer. This will start producing (random) CPU metrics for a bunch of (10) machines

 

 

You can stop the producer by issuing an HTTP DELETE on the same URL
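
Here is a minimal sketch of how these two calls could be made from Java with the JAX-RS client API (the producer URL is a placeholder):

import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;

public class ProducerControl {

    public static void main(String[] args) throws Exception {
        String producerUrl = "https://my-producer-app-url/producer"; //placeholder
        Client client = ClientBuilder.newClient();

        //start producing CPU metrics
        System.out.println(client.target(producerUrl).request().get(String.class));

        Thread.sleep(40000); //let it run for a while

        //stop the producer
        System.out.println(client.target(producerUrl).request().delete(String.class));
        client.close();
    }
}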

 

 

Check the statistics

 

Cumulative moving average of all machines

Allow the producer to run for 30-40 seconds and then check the current statistics. Issue an HTTP GET request to your consumer application, e.g. https://acc-kafka-streams-domain007.apaas.us.oraclecloud.com/metrics. You’ll see a response payload similar to what’s depicted below

 

 

 

The information in the payload is as follows

  • cpu: the cumulative average of the CPU usage of a machine
  • machine: the machine ID
  • source: this has been purposefully added as a diagnostic information to see which node (instance in the Application Container Cloud) is handling the calculation for a specific machine (this is subject to change as your application scales up/down)

 

Cumulative moving average of a specific machine

 

 

 

Scale your application

Increase the number of instances of your application (from 2 to 3)

 

 

 

Check the stats again and you’ll notice that the computation task is being shared among three nodes now..

 

That’s all for this blog series.. !

 

The views expressed in this post are my own and do not necessarily reflect the views of Oracle.

Oracle Process Cloud Service (PCS) provides a REST API enabling other applications to integrate with PCS. More details on the REST API for Oracle Process Cloud Service can be found here. Oracle Process Cloud Service accepts OAuth tokens as an alternative to basic authentication for the REST APIs. In this post we discuss how the PCS REST API can be accessed using an OAuth token to create a new business process instance.

 

The scenario discussed in this blog involves a web application deployed in JCS-SX that triggers a business process (“Funds Transfer Process”) deployed in PCS. The “Funds Transfer” process considered in this blog is a simple business process which validates certain attributes of the incoming request and forwards the request for manual approval if needed. The web application obtains the OAuth token from the OAuth server and passes the token to PCS REST API for authentication.

 

The following diagram depicts the high level interactions between JCS-SX, PCS and the OAuth server:

pcs_oauth_blog_image.png

This use case considers that both JCS-SX and PCS instances have been provisioned in the same identity domain. When provisioned in the same identity domain, the resources and clients needed for communicating using OAuth are automatically configured along with an OAuth server which is used for obtaining the tokens. You can navigate to “OAuth Administration” tab from “My Services” and see the following OAuth resources and clients registered by default. Please refer to Managing OAuth Resources and Clients for more details.

 

Note: You need to have an Identity Administrator role to be able to access the "OAuth Administration" page.

pcs_oauth_blog_image_1.png

Note: Please note the client identifier (Id) (highlighted in red box above) and its respective “secret” (secret) that can be viewed by clicking on the “Show Secret” button of the JCS SX OAuth client. This information will be used by the web application to obtain an access token for the client and access PCS REST API.

 

The JCS-SX OAuth client is used to invoke the PCS REST API from the web application, hence ensure that the PCS resource is accessible for this client. You can manage the accessibility of the resources by clicking on the “Modify” menu option under the action palette in “Register Client” section against the JCS-SX OAuth client as shown below:

pcs_oauth_blog_image_2.png

Note: This blog assumes that the business process (“Funds Transfer Process”) is deployed in PCS, the export of the business process is provided in the appendix section for your reference.

 

With the prerequisites in place we can now proceed to obtain the client access token that would be used by the web application to invoke the PCS REST API. This sample uses the OAuth grant types – client_credentials and password to retrieve the client access token involving the following steps:

  1. Obtain client assertion using the client credentials.
  2. Obtain an access token using the client assertion obtained in the above step

 

Note: When it comes to making web service calls within Oracle Platform Services, one can use OWSM policies for identity propagation and does not necessarily need to deal with OAuth tokens explicitly. This is just an illustration of how you could use an OAuth token for authenticating with PCS without using OWSM policies.

 

These steps are detailed further using specific code snippets.

 

Store the details of the business process to be invoked and the details required to access the OAuth token server in a HashMap.

 

Note: To keep things simple for the purpose of this blog, details like the client secret, user name and password are stored in a Java HashMap; however, it is highly recommended to use the Oracle Credential Store Framework (CSF) to ensure secure management of credentials. Please look at the "References" section at the end of this blog for more information.

 

    public static HashMap<String, String> populateMap() {
        HashMap<String, String> map = new HashMap<>();
        // PCS
        map.put("PCS_URL", "https://<PCS_HOST>:443/bpm/api/3.0/processes");
        map.put("PCS_PROCESS_DEF_ID", "default~MyApplication!1.0~FundsTransferProcess");
        map.put("PCS_FTS_SVC_NAME", "FundsTransferProcess.service");
        // OAuth
        map.put("TOKEN_URL", "https://<ID_DOMAIN_NAME>.identity.<DATA_CENTER>.oraclecloud.com/oam/oauth2/tokens");
        map.put("CLIENT_ID", "<CLIENT_ID>");
        map.put("SECRET", "<SECRET>");
        map.put("DOMAIN_NAME", "<ID_DOMAIN_NAME>");
        map.put("USER_NAME", "<PCS_USER_NAME>");
        map.put("PASSWORD", "<PCS_USER_PWD>");
        return map;
    }
   
    public String getOAuthToken() throws Exception {
        // client_id:client_secret - used as the basic authorization header for both token requests
        String authString = entryMap.get("CLIENT_ID") + ":" + entryMap.get("SECRET");

        // step 1: obtain the client assertion, step 2: exchange it for an access token
        Map<String, String> clientAssertionMap = getClientAssertion(authString);
        return getAccessToken(authString, clientAssertionMap);
    }

 

Note: The values specified in the above code for the keys PCS_PROCESS_DEF_ID and PCS_FTS_SVC_NAME are only for reference. After you deploy the funds transfer business process in PCS, you can retrieve the details of the business process by executing the following curl command (or from Java, as sketched below) and replacing the above values appropriately:

 

curl -u <PCS_USER_NAME>:<PCS_USER_PWD> -H "Content-Type:application/json" -H "Accept:application/json" -X GET https://<PCS_HOST>:443/bpm/api/3.0/process-definitions
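
If you prefer to do this lookup from Java instead, a minimal sketch using the same Jersey 1.x client style as the rest of this sample could look like this (listProcessDefinitions is a hypothetical helper; host and credentials are placeholders):

    private String listProcessDefinitions() throws Exception {
        // same Jersey 1.x client style used elsewhere in this post
        Client client = Client.create();
        WebResource resource = client.resource("https://<PCS_HOST>:443/bpm/api/3.0/process-definitions");

        ClientResponse res = resource
                .header("Authorization", "Basic " + DatatypeConverter.printBase64Binary("<PCS_USER_NAME>:<PCS_USER_PWD>".getBytes("UTF-8")))
                .accept(MediaType.APPLICATION_JSON_TYPE)
                .get(ClientResponse.class);

        // the JSON response carries the processDefId and serviceName values to plug into populateMap()
        return res.getEntity(String.class);
    }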

 

The “getOAuthToken” method first retrieves the client assertion (via the “getClientAssertion” method shown below) by accessing the OAuth server (token endpoint) and passing client_id:client_secret as a basic authorization header. These details can be obtained from the OAuth Administration tab as mentioned in the note above. The following code snippet shows how this can be implemented:

 

    private Map<String, String> getClientAssertion(String authString) throws Exception {

        resource = client.resource(entryMap.get("TOKEN_URL") + "");

        // grant_type=client_credentials is sent as form data to obtain the client assertion
        MultivaluedMap formData = new MultivaluedMapImpl();
        formData.add("grant_type", "client_credentials");

        ClientResponse res = null;
        try {
            res = resource.header("X-USER-IDENTITY-DOMAIN-NAME", entryMap.get("DOMAIN_NAME"))
                    .header("Authorization", "Basic " + DatatypeConverter.printBase64Binary(authString.getBytes("UTF-8")))
                    .header("Content-Type", "application/x-www-form-urlencoded;charset=UTF-8")
                    .type(MediaType.APPLICATION_FORM_URLENCODED_TYPE)
                    .accept(MediaType.APPLICATION_JSON_TYPE)
                    .post(ClientResponse.class, formData);
        } catch (Exception e) {
            e.printStackTrace();
            throw e;
        }

        // fail fast on a non-200 response before attempting to parse the body
        if (res.getStatus() != 200) {
            System.out.println("Server Problem (getClientAssertion): " + res.getStatusInfo());
            throw new Exception(res.getStatusInfo().getReasonPhrase());
        }

        String output = res.getEntity(String.class);
        JSONObject newJObject = null;
        org.json.simple.parser.JSONParser parser = new org.json.simple.parser.JSONParser();
        try {
            newJObject = (JSONObject) parser.parse(output);
        } catch (org.json.simple.parser.ParseException e) {
            e.printStackTrace();
        }

        // the assertion token and its type are needed for the subsequent password grant request
        Map<String, String> assertionMap = new HashMap<String, String>();
        assertionMap.put("assertion_token", newJObject.get("access_token") + "");
        assertionMap.put("assertion_type", newJObject.get("oracle_client_assertion_type") + "");

        return assertionMap;
    }
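
For reference, the token endpoint returns a JSON document from which the snippet above reads two fields; with placeholder values it looks roughly like this (actual responses contain additional fields, e.g. expiry information):

{
  "access_token": "<client assertion (a JWT)>",
  "oracle_client_assertion_type": "<client assertion type>",
  ...
}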

 

The above implementation uses a Jersey client to access the token server and obtain a client assertion and client assertion type. Also note the grant_type=client_credentials form parameter being passed in the request. The following code snippet uses the password grant_type to obtain the client access token from the token server by passing in the client assertion obtained earlier along with the user name and password.

 

    private String getAccessToken(String authString, Map<String, String> clientAssertionMap) throws Exception {
        resource = client.resource(entryMap.get("TOKEN_URL") + "");

        String clientAssertionType = clientAssertionMap.get("assertion_type");
        String clientAssertion = clientAssertionMap.get("assertion_token");

        // password grant: user credentials plus the client assertion obtained earlier
        MultivaluedMap formData = new MultivaluedMapImpl();
        formData.add("grant_type", "password");
        formData.add("username", entryMap.get("USER_NAME"));
        formData.add("password", entryMap.get("PASSWORD"));
        formData.add("client_assertion_type", clientAssertionType);
        formData.add("client_assertion", clientAssertion);

        ClientResponse res = null;
        try {
            res = resource.header("X-USER-IDENTITY-DOMAIN-NAME", entryMap.get("DOMAIN_NAME"))
                    .header("Authorization", "Basic " + DatatypeConverter.printBase64Binary(authString.getBytes("UTF-8")))
                    .header("Content-Type", "application/x-www-form-urlencoded;charset=UTF-8")
                    .type(MediaType.APPLICATION_FORM_URLENCODED_TYPE)
                    .accept(MediaType.APPLICATION_JSON_TYPE)
                    .post(ClientResponse.class, formData);
        } catch (Exception e) {
            e.printStackTrace();
            throw e;
        }

        // fail fast on a non-200 response before attempting to parse the body
        if (res.getStatus() != 200) {
            System.out.println("Server Problem (getAccessToken): " + res.getStatusInfo());
            throw new Exception(res.getStatusInfo().getReasonPhrase());
        }

        String output = res.getEntity(String.class);
        JSONObject newJObject = null;
        org.json.simple.parser.JSONParser parser = new org.json.simple.parser.JSONParser();
        try {
            newJObject = (JSONObject) parser.parse(output);
        } catch (org.json.simple.parser.ParseException e) {
            e.printStackTrace();
        }

        // the access token is used as the Bearer token for the PCS REST call
        return newJObject.get("access_token") + "";
    }

 

The PCS resource can now be accessed using the client access token. The following code snippet invokes the PCS REST API to create a new instance of the “Funds Transfer” business process. The payload consists of the details of the process to be created, such as the process definition id, the service name and the input values entered by the user in a JSP page. Please note that the OAuth token obtained in the previous step is set in the “Authorization” header.

    public String invokeFundsTransferProcess(String token,FundsTransferRequest ftr) throws Exception {
     
        StringBuffer payload = new StringBuffer();
        payload.append("{");
        payload.append("\"processDefId\":\""+entryMap.get("PCS_PROCESS_DEF_ID").toString()+"\",");
        payload.append("\"serviceName\":\""+entryMap.get("PCS_FTS_SVC_NAME").toString()+"\",");
        payload.append("\"operation\":\"start\",");
        payload.append("\"params\": {");
        payload.append("\"incidentId\":\""+ftr.getIncidentId()+"\",");
        payload.append("\"sourceAcctNo\":\""+ftr.getSourceAcctNo()+"\",");
        payload.append("\"destAcctNo\":\""+ftr.getDestAcctNo()+"\",");
        payload.append("\"amount\":"+ftr.getAmount()+",");
        String tsfrType;
        if(ftr.getTransferType().equals("tparty"))
            tsfrType = "intra";
        else
            tsfrType = "inter";

        payload.append("\"transferType\":\""+tsfrType+"\"");
        payload.append("}, \"action\":\"Submit\"");
        payload.append("}");
     
        MultiPart multiPart = new MultiPart().bodyPart(new BodyPart(payload.toString(), MediaType.APPLICATION_JSON_TYPE));

        resource = client.resource(entryMap.get("PCS_URL").toString());
        ClientResponse res = null;  
        try {
        res = 
            resource.header("Authorization", "Bearer " + token)
            .type("multipart/mixed")
            .accept(MediaType.APPLICATION_JSON)
            .post(ClientResponse.class, multiPart);
        } catch (Exception e) {
            e.printStackTrace();
            throw e;
        }
        
        if (res != null && res.getStatus() != 200) {
            System.out.println("Server Problem (PCSRestOAuthClient.invokeFundsTransferProcess): "+res.getStatusInfo() +" while invoking "+entryMap.get("PCS_URL").toString());
            throw new Exception (res.getStatusInfo().getReasonPhrase());
        }
    
    return res.getStatus()+"";
    }
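
For clarity, the request body assembled above looks like this (the input values are illustrative):

{
  "processDefId": "default~MyApplication!1.0~FundsTransferProcess",
  "serviceName": "FundsTransferProcess.service",
  "operation": "start",
  "params": {
    "incidentId": "INC-001",
    "sourceAcctNo": "12345",
    "destAcctNo": "67890",
    "amount": 1000,
    "transferType": "inter"
  },
  "action": "Submit"
}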

 

A simple JSP page is used to capture user input and trigger the Funds Transfer Business Process.

pcs_oauth_blog_image_4.png

Upon successful initiation of the Funds Transfer process, you should be able to see an instance of the process getting created and processed in the "Tracking" page of PCS as shown below:

pcs_oauth_blog_image_3.png

 

Known Issues:

Depending on the JDK you use, you might see a "javax.net.ssl.SSLHandshakeException: server certificate change is restricted during renegotiation" error when trying to invoke the PCS REST API from JCS-SX. Please set the following system properties on your JCS-SX instance (for example, as JVM start-up arguments, sketched after the list) and restart the server to work around this issue:

  1. Set "weblogic.security.SSL.minimumProtocolVersion" to "TLSv1.2" in JCS - SaaS Extension and restart JCS - SX
  2. If the problem still persists, set "jdk.tls.allowunsafeservercertchange" to "true" and restart JCS - SaaS Extension
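
If these are passed as JVM start-up arguments for the JCS-SX managed server, they would look roughly like this (a sketch; how the arguments are actually applied depends on your JCS-SX setup):

-Dweblogic.security.SSL.minimumProtocolVersion=TLSv1.2
-Djdk.tls.allowUnsafeServerCertChange=true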

 

Appendix:

Funds Transfer business process (PCS export) for reference -  MyApplication.zip (attached)

 

References:

 

The views expressed in this post are my own and do not necessarily reflect the views of Oracle.

This blog will demonstrate how to build and run a WebSocket based microservice. Here is what the blog will cover at a high level

 

  • Overview of WebSocket and the sample Java application
  • Continuous Integration setup: from source code in the IDE to a build artifact in Oracle Developer Cloud
  • Continuous Deployment setup: from a build artifact in Developer Cloud Service to an application running in Oracle Application Container Cloud
  • Testing the application

 

 

Overview

 

WebSocket: the standard

 

WebSocket is an IETF standard recognized by RFC 6455 and has the following key characteristics which make it a great fit for real time applications

  • Bi-directional: both server and client can initiate a communication
  • Full duplex: once the WebSocket session is established, both server and client can communicate independent of each other
  • Less verbose (compared to HTTP)

 

A deep dive into the protocol is out of scope of this blog. Please refer to the RFC for further details

 

Java Websocket API

 

A standard Java equivalent (API) for this technology is defined by JSR 356. It is backed by a specification which makes it possible to have multiple implementations of the same. JSR 356 is also included as a part of the Java Enterprise Edition 7 (Java EE 7) Platform. This includes a pre-packaged (default) implementation of this API as well as integration with other Java EE technologies like EJB, CDI etc.

 

Tyrus

 

Tyrus is the reference implementation of the Java WebSocket API. It is the default implementation packaged with Java EE 7 containers like WebLogic 12.2.1 (and above) and GlassFish (4.x). It provides both server and client side APIs for building WebSocket applications.

 

Tyrus grizzly module

 

Tyrus has a modular architecture, i.e. it has different modules for the server and client implementations, an SPI etc. It supports the notion of containers (you can think of them as connectors) for specific runtime support (these build on the modular setup). Grizzly is one of the supported containers and can be used for server or client (or both) modes as per your requirements; the sample application leverages it (a minimal bootstrap sketch follows below).
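
To give an idea of what bootstrapping the Grizzly container looks like, here is a minimal standalone sketch (this is not the sample's WebSocketServerManager, just an illustration of the Tyrus standalone server API; it assumes the tyrus-server and tyrus-container-grizzly-server modules are on the classpath):

import org.glassfish.tyrus.server.Server;

public class WebSocketBootstrapSketch {
    public static void main(String[] args) throws Exception {
        //host the ChatServer endpoint (shown later in this post) on port 8080
        Server server = new Server("localhost", 8080, "/", null, ChatServer.class);
        server.start();
        System.out.println("WebSocket server running - press Enter to stop");
        System.in.read();
        server.stop();
    }
}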

 

About the sample application

 

The sample is a chat application – a canonical use case for WebSockets (this is by no means a full-blown chat service). Users can

  • Join the chat room (duplicate usernames not allowed)
  • Get notified about new users joining
  • Send public messages
  • Send private messages
  • Leave the chat room (other users get notified)

 

The application is quite simple

  • It has a server side component which is a (fat) JAR based Java application deployed to Application Container Cloud
  • The client can be any component which has support for the WebSocket API, e.g. your browser. The unit tests use the Java client API implementation of Tyrus (a minimal client sketch is shown below)
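
For reference, a minimal JSR 356 based Java client (with Tyrus as the implementation on the classpath) might look like the sketch below; the endpoint URI is a placeholder and the exact text format expected by the server depends on the decoder discussed later:

import java.net.URI;
import javax.websocket.ClientEndpoint;
import javax.websocket.ContainerProvider;
import javax.websocket.OnMessage;
import javax.websocket.Session;
import javax.websocket.WebSocketContainer;

@ClientEndpoint
public class ChatTestClientSketch {

    @OnMessage
    public void onMessage(String message) {
        //messages broadcast by the chat server arrive here
        System.out.println("received: " + message);
    }

    public static void main(String[] args) throws Exception {
        WebSocketContainer container = ContainerProvider.getWebSocketContainer();
        try (Session session = container.connectToServer(ChatTestClientSketch.class,
                URI.create("wss://<acc-app-url>/chat/testuser/"))) {
            session.getBasicRemote().sendText("hello everyone"); //format must match ChatMessageDecoder
            Thread.sleep(5000); //wait briefly for broadcasts before closing
        }
    }
}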

 
Code

 

Here is a summary of the various classes and their roles

 

  • ChatServer (Core): contains the core business logic of the application
  • WebSocketServerManager (Bootstrap): manages the bootstrap and shutdown process of the WebSocket container
  • ChatMessage, DuplicateUserNotification, LogOutNotification, NewJoineeNotification, Reply, WelcomeMessage (Domain objects): simple POJOs to model the application level entities
  • ChatMessageDecoder (Decoder): converts chats sent by users into Java (domain) objects which can be used within the application
  • DuplicateUserMessageEncoder, LogOutMessageEncoder, NewJoineeMessageEncoder, ReplyEncoder, WelcomeMessageEncoder (Encoders): convert Java (domain) objects into native (text) payloads which can be sent over the wire using the WebSocket protocol

 

Here is the WebSocket endpoint implementation (ChatServer.java)

 

@ServerEndpoint(
        value = "/chat/{user}/",
        encoders = {ReplyEncoder.class, 
                    WelcomeMessageEncoder.class, 
                    NewJoineeMessageEncoder.class, 
                    LogOutMessageEncoder.class,
                    DuplicateUserMessageEncoder.class},
        decoders = {ChatMessageDecoder.class}
)


public class ChatServer {


    private static final Set<String> USERS = new ConcurrentSkipListSet<>();
    private String user;
    private Session s;
    private boolean dupUserDetected;


    @OnOpen
    public void userConnectedCallback(@PathParam("user") String user, Session s) {
        if (USERS.contains(user)) {
            try {
                dupUserDetected = true;
                s.getBasicRemote().sendText("Username " + user + " has been taken. Retry with a different name");
                s.close();
                return;
            } catch (IOException ex) {
                Logger.getLogger(ChatServer.class.getName()).log(Level.SEVERE, null, ex);
            }


        }
        this.s = s;
        s.getUserProperties().put("user", user);
        this.user = user;
        USERS.add(user);


        welcomeNewJoinee();
        announceNewJoinee();
    }


    private void welcomeNewJoinee() {
        try {
            s.getBasicRemote().sendObject(new WelcomeMessage(this.user));
        } catch (Exception ex) {
            Logger.getLogger(ChatServer.class.getName()).log(Level.SEVERE, null, ex);
        }
    }


    private void announceNewJoinee() {
        s.getOpenSessions().stream()
                .filter((sn) -> !sn.getUserProperties().get("user").equals(this.user))
                //.filter((s) -> s.isOpen())
                .forEach((sn) -> sn.getAsyncRemote().sendObject(new NewJoineeNotification(user, USERS)));
    }


    public static final String LOGOUT_MSG = "[logout]";


    @OnMessage
    public void msgReceived(ChatMessage msg, Session s) {
        if (msg.getMsg().equals(LOGOUT_MSG)) {
            try {
                s.close();
                return;
            } catch (IOException ex) {
                Logger.getLogger(ChatServer.class.getName()).log(Level.SEVERE, null, ex);
            }
        }
        Predicate<Session> filterCriteria = null;
        if (!msg.isPrivate()) {
            //for ALL (except self)
            filterCriteria = (session) -> !session.getUserProperties().get("user").equals(user);
        } else {
            String privateRecepient = msg.getRecepient();
            //private IM
            filterCriteria = (session) -> privateRecepient.equals(session.getUserProperties().get("user"));
        }


        s.getOpenSessions().stream()
                .filter(filterCriteria)
                //.forEach((session) -> session.getAsyncRemote().sendText(msgContent));
                .forEach((session) -> session.getAsyncRemote().sendObject(new Reply(msg.getMsg(), user, msg.isPrivate())));


    }


    @OnClose
    public void onCloseCallback() {
        if(!dupUserDetected){
            processLogout();
        }
        
    }


    private void processLogout() {
        try {
            USERS.remove(this.user);
            s.getOpenSessions().stream()
                    .filter((sn) -> sn.isOpen())
                    .forEach((session) -> session.getAsyncRemote().sendObject(new LogOutNotification(user)));


        } catch (Exception ex) {
            Logger.getLogger(ChatServer.class.getName()).log(Level.SEVERE, null, ex);
        }
    }


}

 

Setting up Continuous Integration & Deployment

 

The sections below deal with the configurations to be made within the Oracle Developer Cloud service

 

Project & code repository creation

 

Please refer to the Project & code repository creation section in the Tracking JUnit test results in Developer Cloud service blog or check the product documentation for more details

 

Configure source code in Git repository

 

Push the project from your local system to the Developer Cloud Git repo you just created. We will do this via the command line, and all you need is a Git client installed on your local machine. You can use Git or any other tool of your choice

 

cd <project_folder> 
git init  
git remote add origin <developer_cloud_git_repo>  
//e.g. https://john.doe@developer.us.oraclecloud.com/developer007-foodomain/s/developer007-foodomain-project_2009/scm/acc-websocket-sample.git
git add .  
git commit -m "first commit"  
git push -u origin master  //Please enter the password for your Oracle Developer Cloud account when prompted

 

Configure build

 

Create a New Job

 

 

Select JDK

 

 

 

Continuous Integration (CI)

 

Choose Git repo

 

 

 

Set the build trigger - this build job will be triggered in response to updates within the Git repository (e.g. via git push)

 

 

 

Add Maven Build Step

 

 

 

Activate the following post build actions

  • Archive the Maven artifacts (contains deployable zip file)
  • Publish JUnit test result reports

 

 

 

Execute Build & check JUnit test results

 

Before configuring deployment, we need to trigger the build in order to produce the artifacts which can be referenced by the deployment configuration

 

 

 

After the build is complete, you can

  • Check the build logs
  • Check JUnit test results
  • Confirm archived Maven artifacts

 

 

 

 

Test results

 

 

 

Build logs

 

 

 

 

Continuous Deployment (CD) to Application Container Cloud

 

Create a New Configuration for deployment

 

 

 

Enter the required details and configure the Deployment Target

 

 

 

Configure the Application Container Cloud instance

 

 

 

 

 

Configure Automatic deployment option on the final confirmation page

 

 

 

Confirmation screen

 

 

 

 

Test the CI/CD flow

 

Make some code changes and push them to the Developer Cloud service Git repo. This should

 

  • Automatically trigger the build, which once successful will
  • Automatically trigger the deployment process

 

 

 

 

 

 

 

Check your application in Application Container Cloud

 

 

 

 

 

Here is the detailed view

 

 

 

 

Test

 

You will need a WebSocket client for this example. I would personally recommend the Simple WebSocket Client, which can be installed into the Chrome browser as a plugin. See the snapshot below for a general usage template of this client

 

 

 

The following is a template for the URL of the WebSocket endpoint

 

wss://<acc-app-url>/chat/<user-handle>/
e.g. wss://acc-websocket-chat-domain007.apaas.em2.oraclecloud.com/chat/abhi/

 

 

Test transcript

 

Here is a sequence of events which you can execute to test things out

 

Users foo and bar join the chatroom

 

wss://acc-websocket-chat-domain007.apaas.em2.oraclecloud.com/chat/foo/
wss://acc-websocket-chat-domain007.apaas.em2.oraclecloud.com/chat/bar/

 

   

 

 

foo gets notified about bar

 

 

 

 

User john joins

 

wss://acc-websocket-chat-domain007.apaas.em2.oraclecloud.com/chat/john/

 

 

 

foo and bar are notified

 

 

     

 

 

foo sends a message to everyone (public)

 

 

 

Both bar and john get the message

             

 

bar sends a private message to foo

 

 

Only foo gets it

 

 

In the meanwhile, john gets bored and decides to leave the chat room

 

 

 

Both foo and bar get notified

 

         

 

That's all folks !

 

**The views expressed in this post are my own and do not necessarily reflect the views of Oracle.

The primary motive of this blog is to show how to

  • Enable Facebook login in Oracle MCS (16.4.x)
  • Implement Facebook login in Oracle JET Hybrid application by using Oracle MCS Cordova SDK
  • Call a custom API secured using social login

 

Key Components

The table below summarizes the key components involved in this solution:

 

  • Social Login - Facebook: Facebook Login is a secure, fast and convenient way for people to log into your mobile application. This mode of authentication is particularly useful for apps targeting consumers
  • Mobile Middleware - Oracle MCS: in Oracle MCS, configure a mobile backend to enable users to log in through Facebook login, and add custom APIs to this mobile backend secured with social login
  • Client Application - Oracle JET Hybrid application: an Oracle JET hybrid application is based on the Apache Cordova framework; we will integrate the Oracle MCS Cordova SDK to simplify social login authentication

 

Functional Flow

Below is the application flow:

  1. The user presses Facebook login button created in our hybrid mobile application.
  2. The MCS Cordova SDK gets initialized and calls the Facebook Login service.
  3. The Facebook login page appears within the mobile app and the user is prompted to enter credentials
  4. The social login security check validates the credentials.
  5. Upon success, the social login returns the access token
  6. The mobile app then uses the access token to access secure MCS APIs.

 

Component: Social Login - Facebook

The first step is to create a Facebook developer app. For more details, please refer to the Facebook developer documentation

 

Below are the steps to create the same:

 

Create a new App

Visit the Facebook Developer Apps Page and follow below steps:

 

Add-A-New-app.png

 

Basic Settings

Add basic settings of the application. "App ID" and "App Secret" will be used in Oracle MCS backend configuration.

 

Settings-basic.png

 

Advanced Settings

Please refer to the advanced settings in the image below:

 

Settings-advanced.png

 

 

 

Facebook Login Settings

Add the redirect OAuth URL as shown below.

FacebookLoginSettings.png

App Review

Once everything is tested, make your app public

AppReview.png

 

Once the above steps are done, your Facebook login setup is ready.

Component: Mobile Middleware - Oracle MCS

The next step is to create a new mobile backend in Oracle MCS and perform the following settings

 

Mobile Backend - Settings

  • Enable Facebook option
  • Enter the Facebook App ID and Secret you received when you registered your application.

mcs-settings.png

 

Secure Custom API

Create a custom API and enable the security using Social Identity

mcs-security.png

 

API Test

Once the custom API is ready, the next step is to test it using a Facebook access token.

mcs-test-endpoint.png

 

Facebook User Access Token

Please follow the steps below to obtain a Facebook user access token

  1. Log into your Facebook account (the one with which you registered the mobile app).

  2. Navigate to https://developers.facebook.com/tools/accesstoken/ and find your app.

  3. Click the You need to grant permissions to your app to get an access token link to generate the token. A token is generated for you on the next page.

 

Copy the user token and paste it in the MCS test screen as shown above

facebook-user-token.png

Please follow the link "Getting a Facebook User Access Token Manually" for more details

 

Test Endpoint

After entering the user token, press the "Test Endpoint" button to verify the result

mcs-test-endpoint.png

 

At this stage, the mobile backend provides social login and a secured custom API. The next step is to create a client application utilizing this backend and the Facebook login page!

 

Component: Client Application - Oracle JET Hybrid application

 

Once the mobile back-end is up and ready, our next step is to develop the client-side application.

 

Please refer to the Troubleshooting while developing your First JET based Hybrid Application blog in case you face initial development or configuration issues.

Project Setup using Yeoman

Yeoman generator for Oracle JET lets you quickly set up a project for use as a Web application or mobile-hybrid application for Android and iOS.

Use the following command to generate a hybrid application for Android:

 

 

yo oraclejet:hybrid fbmcssample --appId=com.rdh.fbmcs --appName="fbmcssample" --template=navBar --platforms=android  

 

 

Cordova Plugin Required

Please refer to the Cordova Applications section in Oracle Mobile Cloud Service to obtain details of the Cordova plugin. The following Cordova plugin needs to be added to our application:

 

  • cordova-plugin-inappbrowser: Plugin to provide Facebook authentication in your app
cordova plugin add cordova-plugin-inappbrowser --save

 

Adding Oracle MCS Cordova SDK

In order to communicate with Oracle MCS, the following steps are required:

  1. Download the Cordova SDK from Oracle MCS and extract it on your local machine. It contains the JavaScript based Cordova SDK, configuration files and documentation
  2. Add the Oracle MCS Cordova SDK to your application: copy mcs.js, mcs.min.js and oracle_mobile_cloud_config.js into the directory where you keep your JavaScript libraries.

 

For example, in this implementation, I have kept these files in an mcs folder added under the js/libs folder, as shown in the image below:

mcs-additions.png

 

Code Addition

 

Configuring SDK Properties for Cordova

Fill in your mobile backend details in oracle_mobile_cloud_config.js.

 

var mcs_config = {
  "logLevel": 3,
  "mobileBackends": {
    "RDXTESTSSO": {
      "default": true,
      "baseUrl": "http://XXX.us.oracle.com:7777",
      "applicationKey": "YOUR_BACKEND_APPLICATION_KEY",
      "synchronization": {
        "periodicRefreshPolicy": "PERIODIC_REFRESH_POLICY_REFRESH_NONE",
        "policies": [
          {
            "path": '/mobile/custom/taskApi/*',
            "fetchPolicy": 'FETCH_FROM_SERVICE_ON_CACHE_MISS_OR_EXPIRY',
            "expiryPolicy": 'EXPIRE_ON_RESTART',
            "evictionPolicy": 'EVICT_ON_EXPIRY_AT_STARTUP',
            "updatePolicy": 'QUEUE_IF_OFFLINE',
            "noCache" : false
          },
          {
            "path": '/mobile/custom/firstApi/tasks',
            "fetchPolicy": 'FETCH_FROM_SERVICE_ON_CACHE_MISS'
          },
          {
            "path": '/mobile/custom/secondApi/tasks',
          }
        ],
        "default" :{
          "fetchPolicy": 'FETCH_FROM_SERVICE_IF_ONLINE',
          "expiryPolicy": 'EXPIRE_ON_RESTART'
        }
      },
        "authorization": {
        "basicAuth": {
          "backendId": "YOUR_BACKEND_ID",
          "anonymousToken": "YOUR_BACKEND_ANONYMOUS_TOKEN"
        },
        "oAuth": {
          "clientId": "YOUR_CLIENT_ID",
          "clientSecret": "YOUR_ClIENT_SECRET",
          "tokenEndpoint": "YOUR_TOKEN_ENDPOINT"
        },
        "facebookAuth":{
          "facebookAppId": "21664XXXX175",
          "backendId": "cdXX781f-7fd4-4b42-88e1-XX409de0823f",
          "anonymousToken": "UFJJTUVfREVDRVBUSUNPTXXXT0JJTEVfQU5PTllNT1VTX0FQUElEOnZXXXXmwuamEwbTdu"
        }
        
      }
    }
  }
};

 

For details, please refer to this link

 

Update main.js for path mapping

After adding the physical files, update the path mappings for mcs and mcsconf in the main.js file under the requirejs.config section:

 

 paths:
                    //injector:mainReleasePaths
                            {
                                'knockout': 'libs/knockout/knockout-3.4.0.debug',
                                'jquery': 'libs/jquery/jquery-2.1.3',
                                'jqueryui-amd': 'libs/jquery/jqueryui-amd-1.11.4',
                                'promise': 'libs/es6-promise/promise-1.0.0',
                                'hammerjs': 'libs/hammer/hammer-2.0.4',
                                'ojdnd': 'libs/dnd-polyfill/dnd-polyfill-1.0.0',
                                'ojs': 'libs/oj/v2.0.2/debug',
                                'ojL10n': 'libs/oj/v2.0.2/ojL10n',
                                'ojtranslations': 'libs/oj/v2.0.2/resources',
                                'text': 'libs/require/text',
                                'signals': 'libs/js-signals/signals',
                                'mcs': 'libs/mcs/mcs',
                                'mcsconf': 'libs/mcs/oracle_mobile_cloud_config'
                            }

 

Implementation Steps

To keep things simple, we will implement the entire code in dashboard.html and dashboard.js.

 

Add the additional modules mcs and mcsconf to be loaded in the dashboard.js file:

 

define(['ojs/ojcore', 'knockout', 'jquery', 'mcs', 'mcsconf', 'ojs/ojknockout','ojs/ojbutton'],
        function (oj, ko, $,mcs) {

Note: Please note that I have added "mcs" as a parameter to the function in the dashboard.js file. This is required as I am using MCS SDK 16.3.x. In 16.3.3, Oracle added support for RequireJS, so when the MCS library is loaded in a RequireJS environment, the global “mcs” variable is not declared like it was in earlier versions of the SDK.

 

Step 1: Loading Mobile Backend's Configuration

Get the mobile backend and set the authentication type to facebookAuth.

 

function initializeMCS() {
                    mcs.MobileBackendManager.platform = new mcs.CordovaPlatform();
                    mcs.MobileBackendManager.setConfig(mcs_config);
                    backend = mcs.MobileBackendManager.getMobileBackend("RDXTESTSSO");
                    if (backend != null) {
                        backend.setAuthenticationType("facebookAuth");
                        fbLogin();
                    }
                }

 

Step 2: Authenticate

Then add a function that calls Authorization.authenticate

 

function fbLogin() {
                    backend.Authorization.authenticate(
                            function (statusCode, data) {
                                console.log(data);
                                console.log(statusCode);
                                alert("FB Login success, status:" + statusCode);                                
                                invokeCustomTestAPI();
                            },
                            function (statusCode, data) {
                                console.log(statusCode + " with message:  " + data);
                                alert("FB Login failed, statusCode" + statusCode);
                            });
                }

Step 3: Invoke Custom API

Finally call the custom API secured by social identity:

 

function invokeCustomTestAPI()
                {
                    backend.CustomCode.invokeCustomCodeJSONRequest("TestFB/test", "GET", null, function (statusCode, data) {
                        console.log("statusCode"+statusCode);                        
                        console.log("data"+JSON.stringify(data)); 
                        alert("Status="+statusCode+"data="+JSON.stringify(data));                        
                    },
                            function (statusCode, data) {
                                console.log("statusCode"+statusCode);                        
                                console.log("data"+data);       
                                alert("Status="+statusCode+"data="+JSON.stringify(data));
                            });
                }

Build and Run

 

In your command prompt, change directory to the project folder and run the following commands:

 

Build the application using the following command:

 

grunt build --platform=android

 

Once the build succeeds, run the application using the following command, assuming the Android emulator is already up and running:

 

 

grunt serve --platform=android  --disableLiveReload=true    

 

Output

Open App and Click on Login

  1. On opening, "Login via Facebook" button is shown
  2. Also open chrome://inspect to view logs
  3. Touch/Click Login via Facebook button

output1.png

 

Login via Facebook

Enter your facebook credentials

output2.png

 

Facebook Authentication result

output3.png

Calling MCS Custom API Secured using Social Identity

output4.png

This is the first of a two-part blog series. It leverages the Oracle Cloud platform (in concert with some widely used open source technologies) to demonstrate message based, loosely coupled and asynchronous interaction between microservices with the help of a sample application. It deals with

 

  • Development of individual microservices
  • Using asynchronous messaging for loosely coupled interactions
  • Setup & deployment on respective Oracle Cloud services

 

The second part is available here

 

 

 

Technical components

 

Oracle Cloud

The following Oracle Cloud services have been leveraged

 

  • Application Container Cloud: serves as a scalable platform for deploying our Java SE microservices
  • Compute Cloud: hosts the Kafka cluster (broker)

 

 

 

Open source technologies

The following open source components were used to build the sample application

 

  • Apache Kafka: a scalable, pub-sub message hub
  • Jersey: used to implement REST and SSE services; uses Grizzly as a (pluggable) runtime/container
  • Maven: used as the standard Java build tool (along with its assembly plugin)

 

Messaging in Microservices

 

A microservice based system comprises multiple applications (services) which typically focus on a specialized aspect (business scenario) within the overall system. It’s possible for these individual services to function independently without any interaction whatsoever, but that’s rarely the case. They cannot function in isolation and need to communicate with each other to get the job done. There are multiple strategies used to implement inter-microservice communication, and they are often categorized under buckets such as synchronous vs asynchronous styles, choreography vs orchestration, REST (HTTP) vs messaging etc.

 

 

About the sample application

Architecture

 

The use case chosen for the sample application in this example is a simple one. It works with randomly generated data (the producer microservice) which is received by another entity (the consumer microservice) and ultimately made available via the browser for the user to see in real time

 

A highly available setup has not been taken into account in this post. What we have is a single Kafka node i.e. there is just one server in the Kafka cluster and both the Producer and Consumer microservices are deployed in Application Container Cloud (both have a single instance each)

 

Let’s look at the individual components depicted in the above diagram

 

Apache Kafka

Apache Kafka is popularly referred to as a ‘messaging system or a streaming platform implemented as a distributed commit log’. It would be nice to have a simpler explanation

 

  • Basic: Kafka is a publish-subscribe based messaging system written in Scala (runs on the JVM) where publishers write to topics and consumers poll these topics to get data
  • Distributed: the parts (broker, publisher and consumer) are designed to be horizontally scalable
  • Master slave architecture: data in topics is distributed amongst multiple nodes in a cluster (based on the replication factor). Only one node serves as a master for a specific piece of data while 0 or more nodes can contain copies of that data i.e. act as followers
  • Partitions: Topics are further divided into partitions. Each partition basically acts as a commit log where the data (key-value pairs) is stored. The data is immutable, has strict ordering (offset is assigned for each data entry), is persisted and retained to disk (based on configuration)
  • Fitment: Kafka is suitable for handling high volume, high velocity, real time streaming data
  • Not JMS: Similar yet different from JMS. It does not implement the JMS specification, neither is it meant to serve as a drop in replacement for a JMS based solution

The Kafka broker is nothing but a Kafka server process (node). Multiple such nodes can form a cluster which act as a distributed, fault-tolerant and horizontally scalable message hub.

 

Producer Microservice

 

It leverages the Kafka Java API and Jersey (the JAX-RS implementation). This microservice publishes a sample set of events at a rapid pace, since the goal is to showcase a real time data pub-sub pipeline.

 

Sample data

 

Data emitted by the producer is modeled around metrics. In this example it’s the CPU usage of a particular machine and can be thought of as simple key-value pairs (name, % usage etc.). Here is what it looks like (ignore the Partition attribute info)

 

: Partition 0
event: machine-2
id: 19
data: 14%

: Partition 1
event: machine-1
id: 20
data: 5%

 

 

Consumer Microservice

 

This is the 2nd microservice in our system. Just like the Producer, it makes use of Jersey as well as the Kafka Java (consumer) API. Another noteworthy Jersey component which is used is the Server Sent Events module which helps implement subscribe-and-broadcast semantics required by our sample application (more on this later)

 

Both the microservices are deployed as separate applications on the Application Container Cloud platform and can be managed and scaled independently

 

Setting up Apache Kafka on Oracle Compute Cloud

 

You have a couple of options for setting up Apache Kafka on Oracle Compute Cloud (IaaS)

 

Bootstrap a Kafka instance using Oracle Cloud Marketplace

Use the Bitnami image for Apache Kafka from the marketplace (for detailed documentation, please refer this link)

 

 

 

Use a VM on Oracle Compute Cloud

Start by provisioning a Compute Cloud VM on the operating system of your choice – this documentation provides an excellent starting point

 

Enable SSH access to VM

 

To execute any of the configurations, you first need to enable SSH access (create security policies/rules) to your Oracle Compute Cloud VM. Please find the instructions for Oracle Linux and Oracle Solaris based VMs respectively

 

 

Install Kafka on the VM

 

This section assumes Oracle Enterprise Linux based VM

 

Here are the commands

 

sudo yum install java-1.8.0-openjdk
sudo yum install wget
mkdir -p ~/kafka-download
wget "http://redrockdigimark.com/apachemirror/kafka/0.10.1.0/kafka_2.11-0.10.1.0.tgz" -O ~/kafka-download/kafka-binary.tgz
mkdir -p ~/kafka-install && cd ~/kafka-install
tar -xvzf ~/kafka-download/kafka-binary.tgz --strip 1

 

 

 

Open Kafka listener port

 

You need to allow access to Kafka broker service (on port 9092 in this case) for the microservices deployed on Oracle Application Container Cloud. This documentation provides a great reference in the form of a use case. Create a Security Application to specify the protocol and the respective port – detailed documentation here

 

 

Reference the Security Application created in the previous step to configure the Security Rule. This will allow traffic from public internet (as defined in the rule) onto port 9092 (as per Security Application configuration). Please refer to the following documentation for details

 

 

You will end up with a configuration similar to what's depicted below

 

 

 

Configure Kafka broker

 

Make sure that you edit the below mentioned attributes in Kafka server properties (<KAFKA_INSTALL>/config/server.properties) as per your Compute Cloud environment

 

Public DNS of your Compute Cloud instance: if the public IP is 140.44.88.200, then the public DNS will be oc-140-44-88-200.compute.oraclecloud.com

 

  • listeners: PLAINTEXT://<oracle-compute-private-IP>:<kafka-listen-port>, e.g. PLAINTEXT://10.190.210.199:9092
  • advertised.listeners: PLAINTEXT://<oracle-compute-public-DNS>:<kafka-listen-port>, e.g. PLAINTEXT://oc-140-44-88-200.compute.oraclecloud.com:9092

 

 

Here is a snapshot of the server.properties file
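
For example, with the sample values from the table above, the relevant lines would read:

listeners=PLAINTEXT://10.190.210.199:9092
advertised.listeners=PLAINTEXT://oc-140-44-88-200.compute.oraclecloud.com:9092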

 

Start Zookeeper by executing KAFKA_INSTALL/bin/zookeeper-server-start.sh config/zookeeper.properties

 

 

Start Kafka Broker by executing KAFKA_INSTALL/bin/kafka-server-start.sh config/server.properties

 

 

Do not start Kafka broker before Zookeeper

 

High level solution overview

 

Event flow/sequence

Let’s look at how these components work together to support the entire use case

 

 

The producer pushes events into the Kafka broker

 

On the consumer end

 

  • The application polls Kafka broker for the data (yes, the poll/pull model is used in Kafka as opposed to the more commonly seen push model)
  • A client (browser/HTTP client) subscribes for events by simply sending a HTTP GET to a specific URL (e.g. https://<acc-app-url>/metrics). This is a one-time subscription, after which the client will get events as they are produced within the application, and it can choose to disconnect at any time

 

 

Asynchronous, loosely coupled: the metrics data produced by the producer is consumed asynchronously by the consumer. One consumer makes it available as a real time feed for browser based clients, but there can be multiple such consuming entities which can implement a different set of business logic around the same data set, e.g. push the metrics data to a persistent data store for processing/analysis etc.

 

More on Server Sent Events (SSE)

 

SSE is the middle ground between HTTP and WebSocket. The client sends the request, and once established, the connection is kept open and it can continue to receive data from server

 

  • This is more efficient compared to HTTP request-response paradigm for every single request i.e. polling the server can be avoided
  • It’s not the same as WebSockets, which are full duplex in nature, i.e. the client and server can exchange messages at any time after the connection is established. In SSE, the client only sends a request once

 

This model suits our sample application since the client just needs to connect and wait for data to arrive (it does not need to interact with the server after the initial subscription)
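
Although the browser is the most natural SSE client here, the same subscription can also be exercised from Java. Here is a minimal sketch using the Jersey SSE client module (the class name and the target URL are placeholders, not part of the sample code):

import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;
import javax.ws.rs.client.WebTarget;
import org.glassfish.jersey.media.sse.EventSource;
import org.glassfish.jersey.media.sse.SseFeature;

public class MetricsSubscriberSketch {
    public static void main(String[] args) throws Exception {
        Client client = ClientBuilder.newBuilder().register(SseFeature.class).build();
        WebTarget target = client.target("https://<acc-app-url>/metrics");

        //open the SSE connection and print each metrics event as it is broadcast
        EventSource eventSource = EventSource.target(target).build();
        eventSource.register(event -> System.out.println(event.getName() + " -> " + event.readData()));
        eventSource.open();

        Thread.sleep(30000); //listen for a while, then disconnect
        eventSource.close();
        client.close();
    }
}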

 

Other noteworthy points

  • SSE is a formal W3C specification
  • It defines a specific media type for the data
  • Has JavaScript implementation in most browsers

 

Scalability

It’s worth noting that all the parts of this system are stateless and horizontally scalable in order to maintain high throughput and performance. The second part of this blog will dive deeper into the scalability aspects and see how Application Container Cloud makes it easy to achieve this

Code

 

This section will briefly cover the code used for this sample and highlight the important points (for both our microservices)

 

Producer microservice

 

It consists of a cohesive bunch of classes which handle application bootstrapping, event production etc.

 

  • ProducerBootstrap.java: entry point for the application; kicks off the Grizzly container
  • Producer.java: runs in a dedicated thread; contains the core logic for producing events
  • ProducerManagerResource.java: exposes a HTTP(s) endpoint to start/stop the producer process
  • ProducerLifecycleManager.java: implements the logic to manage the Producer thread using an ExecutorService; used internally by ProducerManagerResource

 

 

ProducerBootstrap.java

 

public class ProducerBootstrap {
    private static final Logger LOGGER = Logger.getLogger(ProducerBootstrap.class.getName());


    private static void bootstrap() throws IOException {


        String hostname = Optional.ofNullable(System.getenv("HOSTNAME")).orElse("localhost");
        String port = Optional.ofNullable(System.getenv("PORT")).orElse("8080");


        URI baseUri = UriBuilder.fromUri("http://" + hostname + "/").port(Integer.parseInt(port)).build();


        ResourceConfig config = new ResourceConfig(ProducerManagerResource.class);


        HttpServer server = GrizzlyHttpServerFactory.createHttpServer(baseUri, config);
        LOGGER.log(Level.INFO,  "Application accessible at {0}", baseUri.toString());


        //gracefully exit Grizzly services when app is shut down
        Runtime.getRuntime().addShutdownHook(new Thread(new Runnable() {
            @Override
            public void run() {
                LOGGER.log(Level.INFO, "Exiting......");
                try {
                    server.shutdownNow();
                  LOGGER.log(Level.INFO, "REST services stopped");


                    ProducerLifecycleManager.getInstance().stop();
                    LOGGER.log(Level.INFO, "Kafka producer thread stopped");
                } catch (Exception ex) {
                    //log & continue....
                    LOGGER.log(Level.SEVERE, ex, ex::getMessage);
                }


            }
        }));
        server.start();


    }


    public static void main(String[] args) throws Exception {


        bootstrap();


    }
}

 

 

Producer.java

 

public class Producer implements Runnable {
    private static final Logger LOGGER = Logger.getLogger(Producer.class.getName());
    private static final String TOPIC_NAME = "cpu-metrics-topic";
    private KafkaProducer<String, String> kafkaProducer = null;
    private final String KAFKA_CLUSTER_ENV_VAR_NAME = "KAFKA_CLUSTER";
    public Producer() {
        LOGGER.log(Level.INFO, "Kafka Producer running in thread {0}", Thread.currentThread().getName());
        Properties kafkaProps = new Properties();
        String defaultClusterValue = "localhost";
        String kafkaCluster = System.getenv().getOrDefault(KAFKA_CLUSTER_ENV_VAR_NAME, defaultClusterValue);
        LOGGER.log(Level.INFO, "Kafka cluster {0}", kafkaCluster);
        kafkaProps.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, kafkaCluster);
        kafkaProps.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer");
        kafkaProps.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer");
        kafkaProps.put(ProducerConfig.ACKS_CONFIG, "0");
        this.kafkaProducer = new KafkaProducer<>(kafkaProps);
    }
    @Override
    public void run() {
        try {
            produce();
        } catch (Exception e) {
            LOGGER.log(Level.SEVERE, e.getMessage(), e);
        }
    }
    /**
    * produce messages
    *
    * @throws Exception
    */
    private void produce() throws Exception {
        ProducerRecord<String, String> record = null;
        try {
            Random rnd = new Random();
            while (true) {
                String key = "machine-" + rnd.nextInt(5);
                String value = String.valueOf(rnd.nextInt(20));
                record = new ProducerRecord<>(TOPIC_NAME, key, value);
                kafkaProducer.send(record, new Callback() {
                    @Override
                    public void onCompletion(RecordMetadata rm, Exception excptn) {
                        if (excptn != null) {
                            LOGGER.log(Level.WARNING, "Error sending message with key {0}\n{1}", new Object[]{key, excptn.getMessage()});
                        } else {
                            LOGGER.log(Level.INFO, "Partition for key {0} is {1}", new Object[]{key, rm.partition()});
                        }
                    }
                });
                /**
                * wait before sending next message. this has been done on
                * purpose
                */
                Thread.sleep(1000);
            }
        } catch (Exception e) {
            LOGGER.log(Level.SEVERE, "Producer thread was interrupted");
        } finally {
            kafkaProducer.close();
            LOGGER.log(Level.INFO, "Producer closed");
        }
    }
}

 

ProducerLifecycleManager.java

 

public final class ProducerLifecycleManager {
private static final Logger LOGGER = Logger.getLogger(ProducerLifecycleManager.class.getName());
    private ExecutorService es;
    private static ProducerLifecycleManager INSTANCE = null;
    private final AtomicBoolean RUNNING = new AtomicBoolean(false);
    private ProducerLifecycleManager() {
        es = Executors.newSingleThreadExecutor();
    }
    public static ProducerLifecycleManager getInstance(){
        if(INSTANCE == null){
            INSTANCE = new ProducerLifecycleManager();
        }
        return INSTANCE;
    }
    public void start() throws Exception{
        if(RUNNING.get()){
            throw new IllegalStateException("Service is already running");
        }
        if(es.isShutdown()){
            es = Executors.newSingleThreadExecutor();
            System.out.println("Reinit executor service");
        }
        es.execute(new Producer());
        LOGGER.info("started producer thread");
        RUNNING.set(true);
    }
    public void stop() throws Exception{
        if(!RUNNING.get()){
            throw new IllegalStateException("Service is NOT running. Cannot stop");
        }
        es.shutdownNow();
        LOGGER.info("stopped producer thread");
        RUNNING.set(false);
    }
}

 

ProducerManagerResource.java

 

@Path("producer")

public class ProducerManagerResource {

    /**

    * start the Kafka Producer service

    * @return 200 OK for success, 500 in case of issues

    */

    @GET

    public Response start() {

        Response r = null;

        try {

            ProducerLifecycleManager.getInstance().start();

            r = Response.ok("Kafka Producer started")

                .build();

        } catch (Exception ex) {

            Logger.getLogger(ProducerManagerResource.class.getName()).log(Level.SEVERE, null, ex);

            r = Response.serverError().build();

        }

        return r;

    }

    /**

    * stop consumer

    * @return 200 OK for success, 500 in case of issues

    */

    @DELETE

    public Response stop() {

        Response r = null;

        try {

            ProducerLifecycleManager.getInstance().stop();

            r = Response.ok("Kafka Producer stopped")

                .build();

        } catch (Exception ex) {

            Logger.getLogger(ProducerManagerResource.class.getName()).log(Level.SEVERE, null, ex);

            r = Response.serverError().build();

        }

        return r;

    }

}
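
Once deployed, the producer can be started and stopped over plain HTTP; for example (the application URL is a placeholder):

curl -X GET https://<producer-app-url>/producer     //starts the Kafka producer thread
curl -X DELETE https://<producer-app-url>/producer  //stops it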

 

 

Consumer microservice

 

  • ConsumerBootstrap.java: entry point for the application; kicks off the Grizzly container and triggers the Consumer process
  • Consumer.java: runs in a dedicated thread; contains the core logic for consuming events
  • ConsumerEventResource.java: exposes a HTTP(s) endpoint for end users to consume events
  • EventCoordinator.java: wrapper around the Jersey SseBroadcaster to implement event subscription & broadcasting; used internally by ConsumerEventResource

 

 

Consumer.java

 

public class Consumer implements Runnable {
    private static final Logger LOGGER = Logger.getLogger(Consumer.class.getName());
    private static final String TOPIC_NAME = "cpu-metrics-topic";
    private static final String CONSUMER_GROUP = "cpu-metrics-group";
    private final AtomicBoolean CONSUMER_STOPPED = new AtomicBoolean(false);
    private KafkaConsumer<String, String> consumer = null;
    private final String KAFKA_CLUSTER_ENV_VAR_NAME = "KAFKA_CLUSTER";
    /**
    * c'tor
    */
    public Consumer() {
        Properties kafkaProps = new Properties();
        LOGGER.log(Level.INFO, "Kafka Consumer running in thread {0}", Thread.currentThread().getName());
        String defaultClusterValue = "localhost";
        String kafkaCluster = System.getenv().getOrDefault(KAFKA_CLUSTER_ENV_VAR_NAME, defaultClusterValue);
        LOGGER.log(Level.INFO, "Kafka cluster {0}", kafkaCluster);
        kafkaProps.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, kafkaCluster);
        kafkaProps.put(ConsumerConfig.GROUP_ID_CONFIG, CONSUMER_GROUP);
        kafkaProps.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringDeserializer");
        kafkaProps.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringDeserializer");
        this.consumer = new KafkaConsumer<>(kafkaProps);
    }
    /**
    * invoke this to stop this consumer from a different thread
    */
    public void stop() {
        if(CONSUMER_STOPPED.get()){
            throw new IllegalStateException("Kafka consumer service thread is not running");
        }
        LOGGER.log(Level.INFO, "signalling shut down for consumer");
        if (consumer != null) {
            CONSUMER_STOPPED.set(true);
            consumer.wakeup();
        }
    }
    @Override
    public void run() {
        consume();
    }
    /**
    * poll the topic and invoke broadcast service to send information to connected SSE clients
    */
    private void consume() {
        consumer.subscribe(Arrays.asList(TOPIC_NAME));
        LOGGER.log(Level.INFO, "Subcribed to: {0}", TOPIC_NAME);
        try {
            while (!CONSUMER_STOPPED.get()) {
                LOGGER.log(Level.INFO, "Polling broker");
                ConsumerRecords<String, String> msg = consumer.poll(1000);
                for (ConsumerRecord<String, String> record : msg) {
                    EventCoordinator.getInstance().broadcast(record);
                }
            }
            LOGGER.log(Level.INFO, "Poll loop interrupted");
        } catch (Exception e) {
            //ignored
        } finally {
            consumer.close();
            LOGGER.log(Level.INFO, "consumer shut down complete");
        }
    }
}

 

ConsumerBootstrap.java

 

public final class ConsumerBootstrap {
    private static final Logger LOGGER = Logger.getLogger(ConsumerBootstrap.class.getName());
    /**
    * Start Grizzly services and Kafka consumer thread
    *
    * @throws IOException
    */
    private static void bootstrap() throws IOException {
        String hostname = Optional.ofNullable(System.getenv("HOSTNAME")).orElse("localhost");
        String port = Optional.ofNullable(System.getenv("PORT")).orElse("8081");
        URI baseUri = UriBuilder.fromUri("http://" + hostname + "/").port(Integer.parseInt(port)).build();
        ResourceConfig config = new ResourceConfig(ConsumerEventResource.class, SseFeature.class);
        HttpServer server = GrizzlyHttpServerFactory.createHttpServer(baseUri, config);
        Logger.getLogger(ConsumerBootstrap.class.getName()).log(Level.INFO, "Application accessible at {0}", baseUri.toString());
        Consumer kafkaConsumer = new Consumer(); //will initiate connection to Kafka broker
        //gracefully exit Grizzly services and close Kafka consumer when app is shut down
        Runtime.getRuntime().addShutdownHook(new Thread(new Runnable() {
            @Override
            public void run() {
              LOGGER.log(Level.INFO, "Exiting......");
                try {
                    server.shutdownNow();
                    LOGGER.log(Level.INFO, "Grizzly services stopped");
                    kafkaConsumer.stop();
                    LOGGER.log(Level.INFO, "Kafka consumer thread stopped");
                } catch (Exception ex) {
                    //log & continue....
                    LOGGER.log(Level.SEVERE, ex, ex::getMessage);
                }
            }
        }));
        server.start();
        new Thread(kafkaConsumer).start();
    }
    /**
    * Entry point
    *
    * @param args
    * @throws Exception
    */
    public static void main(String[] args) throws Exception {
        bootstrap();
    }
}

 

ConsumerEventResource.java

 

/**
* This class allows clients to subscribe to events by
* sending an HTTP GET to host:port/metrics. The server will keep the connection open
* and send events (as and when received) unless closed by the client
*
*/
@Path("metrics")
public final class ConsumerEventResource {
    //private static final Logger LOGGER = Logger.getLogger(ConsumerEventResource.class.getName());
    /**
    * Call me to subscribe to events. Delegates to EventCoordinator
    *
    * @return EventOutput which will keep the connection open
    */
    @GET
    @Produces(SseFeature.SERVER_SENT_EVENTS)
    public EventOutput subscribe() {
        return EventCoordinator.getInstance().subscribe();
    }
}

 

EventCoordinator.java

 

public final class EventCoordinator {
    private static final Logger LOGGER = Logger.getLogger(EventCoordinator.class.getName());
    private EventCoordinator() {
    }
    /**
    * SseBroadcaster is used because
    * 1. it tracks client stats
    * 2. automatically dispose server resources if clients disconnect
    * 3. it's thread safe
    */
    private final SseBroadcaster broadcaster = new SseBroadcaster();
    private static final EventCoordinator INSTANCE = new EventCoordinator();
    public static EventCoordinator getInstance() {
        return INSTANCE;
    }
    /**
    * add to SSE broadcaster list of clients/subscribers
    * @return EventOutput which will keep the connection open.
    *
    * Note: broadcaster.add(output) is a slow operation
    * Please see (https://jersey.java.net/apidocs/2.23.2/jersey/org/glassfish/jersey/server/Broadcaster.html#add(org.glassfish.jersey.server.BroadcasterListener))
    */
    public EventOutput subscribe() {
        final EventOutput eOutput = new EventOutput();
        broadcaster.add(eOutput);
        LOGGER.log(Level.INFO, "Client Subscribed successfully {0}", eOutput.toString());
        return eOutput;
    }
    /**
    * broadcast record details to all connected clients
    * @param record kafka record obtained from broker
    */
    public void broadcast(ConsumerRecord<String, String> record) {
        OutboundEvent.Builder eventBuilder = new OutboundEvent.Builder();
        OutboundEvent event = eventBuilder.name(record.key())
                                        .id(String.valueOf(record.offset()))
                                        .data(String.class, record.value()+"%")
                                        .comment("Partition "+Integer.toString(record.partition()))
                                        .mediaType(MediaType.TEXT_PLAIN_TYPE)
                                        .build();
        broadcaster.broadcast(event);
        LOGGER.log(Level.INFO, "Broadcasted record {0}", record);
    }
}

 

The Jersey SSE Broadcaster is used because of the following characteristics

  • it tracks client statistics
  • automatically disposes server resources if clients disconnect
  • it's thread safe

 

 

Deploy to Oracle Application Container Cloud

 

Now that you have a fair idea of the application, it’s time to look at the build, packaging & deployment

 

Metadata files

 

manifest.json: You can use these files in their original state (one each for the producer and consumer microservices)

 

{
    "runtime": {
        "majorVersion": "8"
    },
    "command": "java -jar accs-kafka-producer.jar",
    "release": {
        "build": "12042016.1400",
        "commit": "007",
        "version": "0.0.1"
    },
    "notes": "Kafka Producer powered by Oracle Application Container Cloud"
}

 

 

{
    "runtime": {
        "majorVersion": "8"
    },
    "command": "java -jar accs-kafka-consumer.jar",
    "release": {
        "build": "12042016.1400",
        "commit": "007",
        "version": "0.0.1"
    },
    "notes": "Kafka consumer powered by Oracle Application Container Cloud"
}

 

deployment.json

It contains the environment variable corresponding to your Kafka broker. The value is left as a placeholder for you to fill in prior to deployment.

 

{
    "environment": {
        "KAFKA_CLUSTER":"<as-configured-in-kafka-server-properties>"
    }
}

 

 

This value (Oracle Compute Cloud instance public DNS) should be the same as the one you configured in the advertised.listeners attribute of the Kafka server.properties file

 

Please refer to the following documentation for more details on metadata files

 

Build & zip

 

Build JAR and zip it with (only) the manifest.json file to create a cloud-ready artifact

 

Producer application

 

cd <code_dir>/producer //maven project directory
mvn clean install
zip accs-kafka-producer.zip manifest.json target/accs-kafka-producer.jar //you can also use tar to create a tgz file

 

 

Consumer application

 

cd <code_dir> //maven project directory
mvn clean install 
zip accs-kafka-consumer.zip manifest.json target/accs-kafka-consumer.jar

 

Upload application zip to Oracle Storage cloud

You would first need to upload your application ZIP file to Oracle Storage Cloud and then reference it in the subsequent steps. Here are the required cURL commands

 

Create a container in Oracle Storage cloud (if it doesn't already exist)

curl -i -X PUT -u <USER_ID>:<USER_PASSWORD> <STORAGE_CLOUD_CONTAINER_URL>
e.g. curl -X PUT -u jdoe:foobar "https://domain007.storage.oraclecloud.com/v1/Storage-domain007/accs-kafka-consumer/"

Upload your zip file into the container (zip file is nothing but a Storage Cloud object)

curl -X PUT -u <USER_ID>:<USER_PASSWORD> -T <zip_file> "<storage_cloud_object_URL>" //template
e.g. curl -X PUT -u jdoe:foobar -T accs-kafka-consumer.zip "https://domain007.storage.oraclecloud.com/v1/Storage-domain007/accs-kafka-consumer/accs-kafka-consumer.zip"

 

Repeat the same for the producer microservice

 

Deploy to Application Container Cloud

Once you have finished uploading the ZIP, you can now reference its (Oracle Storage cloud) path while using the Application Container Cloud REST API which you would use in order to deploy the application. Here is a sample cURL command which makes use of the REST API

 

curl -X POST -u joe@example.com:password \  
-H "X-ID-TENANT-NAME:domain007" \  
-H "Content-Type: multipart/form-data" -F "name=accs-kafka-consumer" \  
-F "runtime=java" -F "subscription=Monthly" \  
-F "deployment=@deployment.json" \  
-F "archiveURL=accs-kafka-consumer/accs-kafka-consumer.zip" \  
-F "notes=notes for deployment" \  
https://apaas.oraclecloud.com/paas/service/apaas/api/v1.1/apps/domain007

 

Repeat the same for the producer microservice

 

Post deployment

 

You should be able to see your microservices under the Applications section in Application Container Cloud console

 

 

If you look at the details of a specific application, the environment variable should also be present

 

 

Test the application

 

Producer

For the accs-kafka-producer microservice, the Kafka Producer process (thread) needs to be started by the user (this is just meant to provide flexibility). Manage the producer process by issuing the appropriate HTTP requests listed below (using cURL, Postman etc.); a minimal Java client sketch follows as well

 

  • Start: HTTP GET to https://<ACCS-APP-URL>/producer (e.g. https://accs-kafka-producer-domain007.apaas.us.oraclecloud.com/producer)
  • Stop: HTTP DELETE to the same URI

Once you start the producer, it will continue publishing events to the Kafka broker until it is stopped
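If you prefer doing this from Java rather than cURL or Postman, here is a minimal JAX-RS client sketch (not part of the sample code); the application URL below is the example one from the list above and should be replaced with your own

import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;
import javax.ws.rs.core.Response;

public class ProducerControlClient {
    public static void main(String[] args) {
        //example ACCS application URL - replace with your own
        String producerUrl = "https://accs-kafka-producer-domain007.apaas.us.oraclecloud.com/producer";
        Client client = ClientBuilder.newClient();
        //HTTP GET starts the Kafka producer process
        Response start = client.target(producerUrl).request().get();
        System.out.println("Start request returned HTTP " + start.getStatus());
        //HTTP DELETE (to the same URI) stops it
        Response stop = client.target(producerUrl).request().delete();
        System.out.println("Stop request returned HTTP " + stop.getStatus());
        client.close();
    }
}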

 

Consumer

In the accs-kafka-consumer microservice, the Kafka consumer process starts along with the application itself i.e. it starts polling the Kafka broker for metrics. As previously mentioned, the consumer application provides an HTTP(S) endpoint (powered by Server-Sent Events) to look at the metric data in real time

 

 

You should see a real-time stream of data similar to the one below. The event attribute is the machine name/id and the data attribute represents (models) CPU usage

 

Please ignore the Partition attribute as it is meant to demonstrate a specific concept (scalability & load distribution) which will be covered in the second part of this blog
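For reference, here is a minimal sketch of a Java client for this SSE stream, using the same Jersey SSE module used on the server side; the application URL is a placeholder and the /metrics path comes from ConsumerEventResource shown above

import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;
import javax.ws.rs.client.WebTarget;
import org.glassfish.jersey.media.sse.EventSource;
import org.glassfish.jersey.media.sse.InboundEvent;
import org.glassfish.jersey.media.sse.SseFeature;

public class MetricsSseClient {
    public static void main(String[] args) throws InterruptedException {
        Client client = ClientBuilder.newBuilder().register(SseFeature.class).build();
        //placeholder application URL - replace with your own
        WebTarget target = client.target("https://<ACCS-APP-URL>/metrics");
        EventSource eventSource = EventSource.target(target).build();
        //event name = machine name/id, data = CPU usage (see EventCoordinator.broadcast above)
        eventSource.register((InboundEvent event) ->
                System.out.println(event.getName() + " => " + event.readData(String.class)));
        eventSource.open();
        Thread.sleep(60000); //listen for a minute, then clean up
        eventSource.close();
        client.close();
    }
}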

 

 


**The views expressed in this post are my own and do not necessarily reflect the views of Oracle.

 

This blog post demonstrates usage of Oracle Application Container Cloud and Database Cloud service. To be precise, it covers the following

 

  • An introduction to Service bindings (in Application Container Cloud) including setup + configuration and leveraging them to integrate with Oracle Database Cloud service
  • Developing a sample (Java SE based) application using JPA (Eclipselink implementation) for persistence and JAX-RS (Jersey framework) to expose a REST API
  • Build, package and deploy the solution to the Application Container cloud using its REST APIs

 

 

 

 

The components/services used are listed below

 

  • Oracle Application Container Cloud: the aPaaS solution which hosts the Fat JAR based Java application exposing REST APIs
  • Oracle Database Cloud: hosts the application data
  • Oracle Storage Cloud: stores the application zip (for deployment)
  • Eclipselink: used as the JPA implementation (v 2.5.2)
  • Jersey: JAX-RS implementation (v 2.23.2)
  • Maven: build tool; makes use of the shade plugin to create a Fat JAR packaged with all dependent libraries

 

 

 

 

Service bindings: the concept

 

Service bindings serve as references to other Oracle Cloud services. At their core, they are a set of environment variables for a particular cloud service which are automatically seeded once you configure them. You can refer to these variables from within your application code. For example

 

String port = Optional.ofNullable(System.getenv("PORT")).orElse("8080"); //PORT is the environment variable

 

At the time of writing of this blog post, integration with the following services is supported as far as service bindings are concerned: Oracle Database Cloud, Oracle Java Cloud and Oracle MySQL Cloud

 

What purpose do service bindings solve?

 

Service bindings make life easier for developers

 

  • It's possible to consume other Oracle Cloud services in a declarative fashion
  • They allow secure storage of credentials required to access the service
  • Connection details are conveniently stored as environment variables (the de facto standard) and can be easily used within code. This, in turn, shields you from hard-coding connectivity information or using/building a custom mechanism to handle these concerns

 

How to configure them?

 

Service bindings can be configured in a couple of ways

 

  • Metadata file (deployment.json) - It is possible to use this method both during as well as post application deployment
  • Application Container Cloud console - this is possible only post application deployment i.e. the application specific menu exposes this feature

 

 

We will use option #1 in the sample presented in this blog; it is covered in depth later

 

About the sample

 

Pre-requisites

 

You need access to the below-mentioned Oracle Cloud Platform services to execute this sample end to end. Please refer to the links to find out more about procuring service instances

 

 

An Oracle Storage Cloud account is automatically  provisioned along with your Application Container Cloud instance

Database Cloud service needs to be in the same identity domain as the Application Container Cloud for it to be available as a service binding

Architecture

 

The sample application presented in this blog is not complicated, yet, it makes sense to grasp the high level details with the help of a diagram

 

 

 

  • As already mentioned, the application leverages JPA (DB persistence) and JAX-RS (RESTful) APIs
  • The client invokes a HTTP(s) URL (GET request) which internally calls the JAX-RS resource, which in turn invokes the JPA (persistence) layer to communicate with Oracle Database Cloud instance
  • Connectivity to the Oracle Database Cloud instance is achieved with the help of service bindings which expose database connectivity details as environment variables
  • These variables are used within the code

 

Persistence (JPA) layer

 

Here is a summary of the JPA piece. Eclipselink is used as the JPA implementation and the sample makes use of specific JPA 2.1 features like automated schema creation and data seeding during application bootstrap phase

The table and its associated data will be created in Oracle Database Cloud during the application deployment phase. This approach has been used on purpose in order to make it easy for you to test the application. It’s possible to manually bootstrap your Oracle Database Cloud instance with the table and some test data instead; you can turn off this feature by commenting out the corresponding (highlighted) line in persistence.xml
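The persistence.xml itself was shown as an image and is not reproduced here. Purely as an illustration (the property names are the standard JPA 2.1 ones; the script name is hypothetical), the same schema-creation and data-seeding behaviour could also be enabled programmatically when bootstrapping the EntityManagerFactory

import java.util.HashMap;
import java.util.Map;
import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;

public class SchemaGenerationSketch {
    public static void main(String[] args) {
        Map<String, String> props = new HashMap<>();
        //standard JPA 2.1 properties: create the tables and run a seed script at deploy time
        props.put("javax.persistence.schema-generation.database.action", "drop-and-create");
        props.put("javax.persistence.sql-load-script-source", "META-INF/seed-data.sql"); //hypothetical script name
        //the JDBC url/user/password properties (taken from the service binding, see below) would be added here as well
        EntityManagerFactory emf = Persistence.createEntityManagerFactory("oracle-cloud-db-PU", props);
        emf.close();
    }
}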

 

 

 

 

 

 

The important classes/components are as follows

 

  • persistence.xml: JPA deployment descriptor
  • PaasAppDev.java: JPA entity class
  • JPAFacade.java: manages the EntityManagerFactory life cycle and provides access to the EntityManager

 

PaasAppDev.java

 

/**
 * JPA entity
 * 
 */
@Entity
@Table(name = "PAAS_APPDEV_PRODUCTS")
@XmlRootElement
public class PaasAppDev implements Serializable {


    @Id
    private String name;


    @Column(nullable = false)
    private String webUrl;


    public PaasAppDev() {
        //for JPA
    }
//getters & setters omitted
}

 

JPAFacade.java

 

public class JPAFacade {


    private static EntityManagerFactory emf;


    private JPAFacade() {
    }


    public static void bootstrapEMF(String persistenceUnitName, Map<String, String> props) {
        if (emf == null) {
            emf = Persistence.createEntityManagerFactory(persistenceUnitName, props);
            emf.createEntityManager().close(); //a hack to initiate 'eager' deployment of persistence unit during deploy time as opposed to on-demand
        }
    }


    public static EntityManager getEM() {
        if (emf == null) {
            throw new IllegalStateException("Please call bootstrapEMF(String persistenceUnitName, Map<String, String> props) first");
        }


        return emf.createEntityManager();
    }


    public static void closeEMF() {


        if (emf == null) {
            throw new IllegalStateException("Please call bootstrapEMF(String persistenceUnitName, Map<String, String> props) first");
        }


        emf.close();


    }


}

 

 

 

Leveraging service binding information

 

It’s important to note how the service bindings are being used in this case. Generally, in case of standalone (with RESOURCE_LOCAL transactions) JPA usage, the DB connectivity information is stored as a part of the persistence.xml. In our sample, we are using programmatic configuration of EntityManagerFactory because the DB connection info can be extracted only at runtime using the following environment variables

 

  • DBAAS_DEFAULT_CONNECT_DESCRIPTOR
  • DBAAS_USER_NAME
  • DBAAS_USER_PASSWORD

 

This is leveraged in Bootstrap.java (which serves as the entry point to the application)

 

/**
 * The 'bootstrap' class. Sets up persistence and starts Grizzly HTTP server
 *
 */
public class Bootstrap {


    static void bootstrapREST() throws IOException {


        String hostname = Optional.ofNullable(System.getenv("HOSTNAME")).orElse("localhost");
        String port = Optional.ofNullable(System.getenv("PORT")).orElse("8080");


        URI baseUri = UriBuilder.fromUri("http://" + hostname + "/").port(Integer.parseInt(port)).build();


        ResourceConfig config = new ResourceConfig(PaasAppDevProductsResource.class)
                                                    .register(MoxyJsonFeature.class);


        HttpServer server = GrizzlyHttpServerFactory.createHttpServer(baseUri, config);
        Logger.getLogger(Bootstrap.class.getName()).log(Level.INFO, "Application accessible at {0}", baseUri.toString());


        //gracefully exit Grizzly and Eclipselink services when app is shut down
        Runtime.getRuntime().addShutdownHook(new Thread(new Runnable() {
            @Override
            public void run() {
                Logger.getLogger(Bootstrap.class.getName()).info("Exiting......");
                server.shutdownNow();
                JPAFacade.closeEMF();
                Logger.getLogger(Bootstrap.class.getName()).info("REST and Persistence services stopped");
            }
        }));
        server.start();


    }


    private static final String PERSISTENCE_UNIT_NAME = "oracle-cloud-db-PU";


    static void bootstrapJPA(String puName, Map<String, String> props) {


        JPAFacade.bootstrapEMF(puName, props);
        Logger.getLogger(Bootstrap.class.getName()).info("EMF bootstrapped");


    }


    public static void main(String[] args) throws IOException {
        Map<String, String> props = new HashMap<>();
        props.put("javax.persistence.jdbc.url", "jdbc:oracle:thin:@" + System.getenv("DBAAS_DEFAULT_CONNECT_DESCRIPTOR"));
        props.put("javax.persistence.jdbc.user", System.getenv("DBAAS_USER_NAME"));
        props.put("javax.persistence.jdbc.password", System.getenv("DBAAS_USER_PASSWORD"));
        bootstrapREST();
        bootstrapJPA(PERSISTENCE_UNIT_NAME, props);


    }
}

 

 

REST (JAX-RS) layer

 

Jersey is used as the JAX-RS implementation. It has support for multiple containers – Grizzly being one of them, and it’s used in this example as well. Also, the MOXy media provider is leveraged to ensure that the JAXB-annotated (JPA) entity class can be marshaled as both XML and JSON without any additional code

 

Important classes

 

  • PaasAppDevProductsResource.java: contains logic to GET information about all (appdev/products) or a specific PaaS product (e.g. appdev/products/ACC)

 

PaasAppDevProductsResource.java

 

@Path("appdev/products")
public class PaasAppDevProductsResource {


    @GET
    @Path("{name}")
    public Response paasOffering(@PathParam("name") String name) {


        EntityManager em = null;
        PaasAppDev product = null;
        try {
            em = JPAFacade.getEM();
            product = em.find(PaasAppDev.class, name);
        } catch (Exception e) {
            throw e;
        } finally {


            if (em != null) {
                em.close();
            }


        }
        
        return Response.ok(product).build();
    }
    
    @GET
    public Response all() {


        EntityManager em = null;
        List<PaasAppDev> products = null;
        try {
            em = JPAFacade.getEM();
            products = em.createQuery("SELECT c FROM PaasAppDev c").getResultList();
        } catch (Exception e) {
            throw e;
        } finally {


            if (em != null) {
                em.close();
            }


        }
        GenericEntity<List<PaasAppDev>> list = new GenericEntity<List<PaasAppDev>>(products) {
        };
        return Response.ok(list).build();
    }


}

Build & cloud deployment

 

Now that you have a fair idea of the application, it’s time to look at the build, packaging & deployment

 

Seed Maven with ojdbc7 driver JAR

 

 

mvn install:install-file -DgroupId=com.oracle -DartifactId=ojdbc7 -Dversion=12.1.0.1 -Dpackaging=jar -Dfile=<download_path>\ojdbc7.jar -DgeneratePom=true

 

Here is a snippet from the pom.xml

 

 

 

Metadata files

 

The manifest.json

 

You can use the manifest.json file as it is

 

{
    "runtime": {
        "majorVersion": "8"
    },
    "command": "java -jar accs-dbcs-service-binding-sample-1.0.jar",
    "release": {
        "build": "27092016.1020",
        "commit": "007",
        "version": "0.0.2"
    },
    "notes": "notes related to release"
}

 

Service bindings in deployment.json

 

The deployment.json file should contain your service bindings and you would need to upload this file during deployment (explained below) for them to be associated with your Application Container cloud instance.

 

{
    "services": [
    {
        "identifier": "DBService",
        "type": "DBAAS",
        "name": <Oracle DB Cloud service name>,
        "username": <Oracle DB Cloud username>,
        "password": <Oracle DB Cloud password>
    }]
}

 

 

You need to replace the placeholders with the appropriate values. Here is an example

 

{
    "services": [
    {
        "identifier": "OraDBService",
        "type": "DBAAS",
        "name": OracleCloudTestDB,
        "username": db_user,
        "password": Foo@Bar_007
    }]
}

 

 

In case of multiple service bindings for the same service (e.g. Java Cloud), the Application Container Cloud service automatically generates a unique set of environment variables for each service instance

 

Please refer to the following documentation if you need further details

 

Build & zip

 

Build JAR and zip it with (only) the manifest.json file to create a cloud-ready artifact

 

cd <code_dir> 
mvn clean install
zip accs-dbcs-service-binding-sample.zip manifest.json target\accs-dbcs-service-binding-sample-1.0.jar

 

 

Upload application zip to Oracle Storage cloud

 

You would first need to upload your application ZIP file to Oracle Storage Cloud and then reference it later. Here are the steps along with the cURL commands

Please refer to the following documentation for more details

 

Get authentication token for Oracle Storage cloud

 

You will receive the token in the HTTP response header and you can use it to execute subsequent operations

 

curl -X GET -H "X-Storage-User: Storage-<identity_domain>:<user_name>" -H "X-Storage-Pass: <user_password>" "storage_cloud_URL" //template

curl -X GET -H "X-Storage-User: Storage-domain007:john.doe" -H "X-Storage-Pass: foo@bar" "https://domain007.storage.oraclecloud.com/auth/v1.0" //example

 

Create a container in Oracle Storage cloud (if it doesn't already exist)

 

curl -X PUT -H "X-Auth-Token: <your_auth_token>" "<storage_cloud_container_URL>" //template

curl -X PUT -H "X-Auth-Token: AUTH_foobaar007" "https://domain007.storage.oraclecloud.com/v1/Storage-domain007/accscontainer/" //example

 

Upload your zip file into the container (zip file is nothing but a Storage Cloud object)

 

curl -X PUT -H "X-Auth-Token: <your_auth_token>" -T <zip_file> "<storage_cloud_object_URL>" //template

curl -X PUT -H "X-Auth-Token: AUTH_foobaar007" -T accs-dbcs-service-binding-sample.zip "https://domain007.storage.oraclecloud.com/v1/Storage-domain007/accscontainer/accs-dbcs-service-binding-sample.zip" //example

 

Things to note

  • the <zip_file> is the application zip file which needs to be uploaded and should be present on the file system from where you're executing these commands
  • the (storage cloud) object name needs to end with .zip extension (in this context/case)

 

Deploy to Application Container Cloud

 

Once you have finished uploading the ZIP, you can now reference its (Oracle Storage cloud) path while using the Application Container Cloud REST API which you would use in order to deploy the application. Here is a sample cURL command which makes use of the REST API

 

curl -X POST -u joe@example.com:password \
-H "X-ID-TENANT-NAME:domain007" \
-H "Content-Type: multipart/form-data" -F "name=accs-dbcs-service-binding-sample" \
-F "runtime=java" -F "subscription=Monthly" \
-F "deployment=@deployment.json" \
-F "archiveURL=accscontainer/accs-dbcs-service-binding-sample.zip" \
-F "notes=notes for deployment" \
https://apaas.oraclecloud.com/paas/service/apaas/api/v1.1/apps/domain007

 

 

During this process, you will also be pushing the deployment.json metadata file which contains the service bindings info (should be present on the file system on which the command is being executed). This in turn will automatically seed the required environment variables during application creation phase

 

More details available here

 

Test the application

 

Once deployment is successful, you should be able to see the deployed application and its details in Application Container cloud

 

 

 

Access your Oracle Database Cloud service

 

You can use Oracle SQL developer or similar client to confirm that the required table has been created and seeded with some test data

 

Details on how to configure your Oracle Database Cloud instance to connect via external tool (like SQL Developer) is out of scope of this article, but you can follow the steps outlined in the official documentation to set things up quickly

 

 

Access the REST endpoints

 

 

  • GET all products: curl -H "Accept: application/json" https://<application_URL>/appdev/products
  • GET a specific product: curl -H "Accept: application/json" https://<application_URL>/appdev/products/<product_name>

 

e.g. curl -H "Accept: application/json" https://<application_URL>/appdev/products/ACC

 

Refer to the image above for all product names

 

 

Please use the below mentioned application URL formats

 

https://accs-dbcs-service-binding-sample-<identity_domain>.apaas.<region>.oraclecloud.com/  //template

https://accs-dbcs-service-binding-sample-domain007.apaas.us2.oraclecloud.com/  //example

 

 

**The views expressed in this post are my own and do not necessarily reflect the views of Oracle.

As part of the DevOps culture, and in particular the Ops processes, Stack Automation Engineers will always be interested in orchestrating the infrastructure and making it available for various environments: Dev & Test, Benchmark, Production and others.

 

Depending on your preferences, you could deploy a Chef Server in Oracle IaaS and orchestrate all your infrastructure (the nodes) from it; that is probably one of the most common setups, also my preferred one, and the one I will refer to in this article.

                     ChefServer-in-OPC.png

 

Alternatively, you could work directly from your workstation where you have installed the Chef DK and orchestrate all your infrastructure in Oracle IaaS (the various nodes) through a Chef Server that you can deploy anywhere:

 

Chef_101.png

 

Or you could have your nodes orchestrate themselves with Chef and opc-init, using remotely accessible URL(s) where you deployed the desired (pre-bootstrap) scripts; you can read more about it here.

                                         Chef_103.png

Follow this link for a full tutorial on how to deploy Chef Server on Oracle Compute Cloud and how to knife a simple cookbook named "custom-ssh-banner".

 

Once you have uploaded your cookbook to the Chef server and knifed it onto a node (presuming your orchestrated node is named "chef-node1"), you should receive a notification similar to the one below when you ssh into "chef-node1":

 

This banner was brought to you by Chef using custom-ssh-banner.
[opc@chef-node1 ~]$

 

Tip: to knife bootstrap your node with a public IP <Node_Public_IP> and the hostname chef-node1:

 

knife bootstrap <Node_Public_IP> --ssh-user opc --sudo --identity-file <SSH_private_key> --node-name chef-node1.compute-identity-domain.oraclecloud.internal --run-list 'recipe[your_recipe]'

 

  • opc refers to the Oracle Public Cloud user, through which you can sudo to root on the machine if needed
  • chef-node1.compute-identity_domain.oraclecloud.internal refers to the internal hostname as qualified in Oracle IaaS

 

Once you finished the above tutorial you should be able to build your own Cookbooks and recipes or use the Chef Market.

 

Interested to learn more on how to deploy a Standalone Oracle Database with Chef cookbooks ? Check this tutorial.

Interested to learn more on how to deploy Middleware/Weblogic Chef cookbooks ? Check this GitHub Chef Samples.

 

Happy coding.

 

**The views expressed in this post are my own and do not necessarily reflect the views of Oracle.

I explained in the first post From Disaster to Recovery part I the importance of having a DR site set up for your IT infrastructure and for your business overall.

 

Here are three examples of architectures that could be used to put a DR solution in place:

 

     1. Using Database Backup as a Service

 

 

 

 

This is effective when all your database backups are done using Database Backup Cloud Service and all the backups from the production environment go to Cloud Storage. This allows you to restore your databases both on the on-premise servers and/or on Cloud Services like Database Cloud Service or Exadata Cloud Service.

 

     2. Using Data Guard or Active Data Guard.

 

 

 

If backing up the databases to cloud storage is not enough, you could opt for Active Data Guard replication between the sites.

The Active Data Guard replication could be implemented using the Maximum Availability Architecture, which also gives you data consistency and protection.

This way the standby database would be in read-only mode and could also be used for reporting, queries or sandbox creation.

 

     

     3. Full Stack Disaster Recovery

 

 

 

  • Database
    • Use Database Backup Service to send on-premise database backups to Oracle Cloud using RMAN
    • Restore the database in the cloud from the backup
  • Application
    • Use JAVA / REST calls to copy on-premise application data to Storage Cloud Service (a minimal Java sketch follows this list)
    • You can also use OSC Appliance for the copy
    • Restore the data into the compute cloud from the object storage
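As a rough illustration of the JAVA / REST option above, copying a file to Oracle Storage Cloud boils down to an authenticated HTTP PUT, mirroring the Storage Cloud cURL upload commands shown earlier in this document; all names, URLs and tokens below are placeholders.

import java.io.IOException;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.file.Files;
import java.nio.file.Paths;

public class StorageCloudUploadSketch {
    public static void main(String[] args) throws IOException {
        //placeholder values - substitute your own object URL and authentication token
        String objectUrl = "https://domain007.storage.oraclecloud.com/v1/Storage-domain007/appdata/app-backup.zip";
        String authToken = "AUTH_xxxxxxx";
        byte[] payload = Files.readAllBytes(Paths.get("app-backup.zip"));

        HttpURLConnection conn = (HttpURLConnection) new URL(objectUrl).openConnection();
        conn.setRequestMethod("PUT");
        conn.setRequestProperty("X-Auth-Token", authToken);
        conn.setDoOutput(true);
        try (OutputStream out = conn.getOutputStream()) {
            out.write(payload); //equivalent of curl -T app-backup.zip
        }
        System.out.println("Upload returned HTTP " + conn.getResponseCode());
        conn.disconnect();
    }
}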

 

The solutions and architectures may also vary according to each company’s restrictions, SLAs or prerequisites. Using Database Cloud Service as a standby site for disaster recovery would prove to be both efficient and cost-effective.

 

**The views expressed in this post are my own and do not necessarily reflect the views of Oracle.

In this blog, I will present a sample implementation for a hybrid mobile app using Oracle JET (version 2.0.x) that will display data fetched from Oracle Sales Cloud via Oracle Mobile Cloud Service (version 16.3.x)

 

Building enterprise-grade bespoke mobile apps poses a number of challenges. Implementing data retrieval in mobile application client code is incredibly complex because the data needs to be shaped into the required format within reasonable performance limits, with all aspects of security handled appropriately. These requirements are addressed by a mobile back-end implemented in Mobile Cloud Service (MCS), resulting in a superior user experience and, in turn, increased user adoption.

 

In this sample, we will retrieve Opportunities for a specific Account from Sales Cloud and display them in the mobile application.

 

Key Components

The key components involved in this solution are summarized below:

 

  • Client Application (JET Hybrid app): a JET hybrid application that communicates with Oracle MCS
  • Oracle MCS: MCS (Mobile Cloud Service) delivers cloud-based, server-side mobile services to enable quick and easy app development. It reduces complexity and redundancy when creating code blocks by providing ready-to-integrate features and templates.
  • Oracle Sales Cloud: Oracle Sales Cloud delivers a wide range of functionality to improve sales effectiveness, better understand customers, and build a pipeline for success. It exposes multiple public REST APIs that can be used to access data stored in Sales Cloud and construct integrations to other systems.

 

Life-Cycle Flow

The diagram below shows the life-cycle flow of data from Oracle Sales Cloud to the mobile device via MCS:

 

jet-mcs-salescloud.png

 

Below are the main steps involved:

  1. The mobile app initiates a connection with the mobile back-end server (MCS) and authenticates the logged-in user.
  2. The mobile app makes a REST call to the custom API
  3. The custom API internally uses the connector API to interact with Oracle Sales Cloud
  4. The response received by the custom API is sent back and the data is displayed in the app

 

Below is the security approach followed:

  • Client Side: Hybrid Mobile Application:
    1. Authenticate the mobile application via SSO: on success (status 200 OK), the API returns an SSO token
    2. Use this SSO token in the Authorization header while calling the MCS custom API. This token is used to propagate identity to the MCS connector

 

  • Mobile Back-end: Oracle Mobile Cloud Service
    1. MCS API:
      • Developed custom API which in turn calls REST based connector (Oracle Sales Cloud)
    2. MCS Connector:
      • Developed Connector API pointing to Sales Cloud
      • Set the security policy to “oracle/http_saml20_token_bearer_over_ssl_client_policy”, keeping everything else as default

1. In order to follow the security approach mentioned in this blog, please ensure that single sign-on (SSO) is set up for your Oracle Cloud account.

2. Both MCS and the Sales Cloud service should be in the same identity domain.

Component: Oracle Mobile Cloud Service

Step 1: Create a mobile back-end

  • Login into Oracle MCS and create a new mobile back-end, provide a suitable Name and description
  • Enable OAuth Consumer
  • Check-box to select "Enable Single Sign-On"

If the Mobile Backend is in Draft state and you don't see the Enable SSO checkbox, SSO isn't set up for your account. If you want to set it up, have your team's identity domain administrator go to the Oracle Cloud My Services page and configure SSO.

mcs-backend-settings.png

 

Step 2: Create a custom API

Here are the steps to create a custom API:

  • General: Create a new custom API and provide the API display name and API name. This API name translates into the external URL that our mobile apps use to connect to the custom API.

mcs-api-general.png

  • Endpoints:  The next step is to define REST end points

mcs-api-endpoints.png

          Add details to the endpoint; in this case it is a GET request:

mcs-api-endpoint.png

 

  • Security: For the sake of simplicity, we are disabling the login requirement (credentials) for accessing this API

 

  • Implementation: Here is the implementation code for this custom API; to know more about what you can do with custom code, I would recommend visiting “Implementing Custom APIs”

Package.json

{
  "name" : "rdhsalesapplapi",
  "version" : "1.0.0",
  "description" : "Sales Cloud API ",
  "main" : "rdhsalesapplapi.js",
  "oracleMobile" : {
    "dependencies" : {
      "apis" : { },
      "connectors" : { "/mobile/connector/RDOSCAPIConnectorOriginSecured": "1.0"}
    }
  }
}

 

rdhsalesapplapi.js

 

/**
 * Mobile Cloud custom code service entry point.
 * @param {external:ExpressApplicationObject}
 * service 
 */
module.exports = function (service) {
    /**
     *  The file samples.txt in the archive that this file was packaged with contains some example code.
     */
    service.get('/mobile/custom/RDHSalesApplAPI/opportunities', function (req, res) {
        var sdk = req.oracleMobile;
        var result = [];
        var statusCodeOk = 200;
        var statusCodeError = 404;
        // handling the Rest call response and processing the data
        var handler = function (error, response, body) {
            var responseMessage = JSON.parse(body);
            if (error) {
                responseMessage = error.message;
            } else {
                if (!responseMessage.items) {
                    return res.send(statusCodeError, 'No Opportunities found for the user');
                }
            }
            var opps = [];
            opps = responseMessage.items;
            opps.forEach(function (oppty) {
                var temp = {};
                temp.TargetPartyId = oppty.TargetPartyId;
                temp.OptyId = oppty.OptyId;
                temp.OptyNumber = oppty.OptyNumber;
                temp.SalesStage = oppty.SalesStage;
                temp.Name = oppty.Name;
                temp.Revenue = oppty.Revenue;
                result.push(temp);
            });
            res.send(statusCodeOk, result);
            res.end();
        };
        //call for REST api to get list of opportunities
        var optionsList = {};
        var SalesAccountConnectorBaseURI = '/mobile/connector/RDOSCAPIConnectorOriginSecured';       
        optionsList.uri = SalesAccountConnectorBaseURI + '/salesApi/resources/latest/opportunities/';
        console.log('optionList  = ' + JSON.stringify(optionsList));
        sdk.rest.get(optionsList, handler);
    });
};

 

Step 3: Create a connector API

Before moving ahead, I would recommend going through the blog "MCS Connector APIs: why should you be using them?" to understand why connector APIs are required in spite of having the option to implement external service integration using custom APIs directly

 

The next step is to create REST connector for accessing Oracle Sales Cloud:

 

  • General Configuration: Provide a name and description for new REST Connector API, and the path to the remote service it will expose.

  mcs-connector-general.png

  • Create Rules: You can create rules that automatically add default parameters when calling specific resources on this service. For this sample, we do not require any such rule, so skip this option
  • Security Configuration: Use SAML as the client-side security policy in the MCS Connector, i.e. oracle/http_saml20_token_bearer_over_ssl_client_policy

mcs-connector-securityconfiguration.png

 

Step 4: Select API and associate with your Mobile Backend

 

The next step is to select the custom API created and associate it with your mobile  backend.

mcs-backend-api.png

 

Step 5: Test the Custom API

The last step is to test the custom API

 

A. Get Single Sign-On Auth Token

Open the following URL in an incognito or private browser window. The URL format is as below:

 

<SSO_Token_Endpoint>?clientID=<client_ID>

 

example: https://xyz.oraclecorp.com:443/mobile/platform/sso/token?clientID=5xxxx7-bf49-45c1-aeda-2xxx4

 

The browser login screen shall look like:

browser-sso-login-screen.png

 

Upon Success, the browser will show Single Sign-On OAuth Token:

 

browser-sso-token-screen.png

 

B.  Test Custom API using MCS UI Test Endpoint

  • Select Mobile Backend
  • Paste SSO Token
  • Click Test Endpoint

mcs-api-testendpoint.png

 

 

Upon success (status 200), the data will be displayed:

mcs-api-testendpoint-result.png

 

Component: Oracle JET Hybrid Mobile App

Once the mobile back-end is up and ready, our next step is to display the data fetched from MCS on the mobile interface.

You may refer to the "Troubleshooting while developing your First JET based Hybrid Application" blog in case you face initial development or configuration issues.

Project Setup using Yeoman

Yeoman generator for Oracle JET lets you quickly set up a project for use as a Web application or mobile-hybrid application for Android and iOS.

Use the following command to generate a hybrid application for Android:

yo oraclejet:hybrid oscmcssample --appId=com.rdh.oscmcs --appName="oscmcssample" --template=navBar --platforms=android  

 

 

Cordova Plugin Required

Please refer to the Cordova Applications section in the Oracle Mobile Cloud Service documentation to obtain details of the Cordova plugins.

 

The following Cordova plugins need to be added to our application:

  • oracle-mobile-cloud-cookies: Plugin for authenticating with MCS via SSO
  • cordova-plugin-inappbrowser: Plugin to provide a web browser view, required for displaying SSO Login Page

 

Adding Oracle MCS Cordova SDK

In order to communicate with Oracle MCS, the following steps are required:

  1. Download the Cordova SDK from Oracle MCS and extract it on your local machine. It contains the JavaScript-based Cordova SDK, configuration files and documentation
  2. Add the Oracle MCS Cordova SDK to your application: copy mcs.js, mcs.min.js and oracle_mobile_cloud_config.js into the directory where you keep your JavaScript libraries.

 

For example, in this implementation, I have kept these files in an mcs folder added under the js/libs folder, as shown in the image below:

mcs-additions.png

 

3. Fill in your mobile backend details in oracle_mobile_cloud_config.js.

var mcs_config = {
  "logLevel": 3,
  "mobileBackends": {
    "RDXTESTSSO": {
      "default": true,
      "baseUrl": "https://xxx.oraclecorp.com:443",
      "applicationKey": "YOUR_BACKEND_APPLICATION_KEY",
      "synchronization": {
        "periodicRefreshPolicy": "PERIODIC_REFRESH_POLICY_REFRESH_NONE",
        "policies": [
          {
            "path": '/mobile/custom/taskApi/*',
            "fetchPolicy": 'FETCH_FROM_SERVICE_ON_CACHE_MISS_OR_EXPIRY',
            "expiryPolicy": 'EXPIRE_ON_RESTART',
            "evictionPolicy": 'EVICT_ON_EXPIRY_AT_STARTUP',
            "updatePolicy": 'QUEUE_IF_OFFLINE',
            "noCache" : false
          },
          {
            "path": '/mobile/custom/firstApi/tasks',
            "fetchPolicy": 'FETCH_FROM_SERVICE_ON_CACHE_MISS'
          },
          {
            "path": '/mobile/custom/secondApi/tasks',
          }
        ],
        "default" :{
          "fetchPolicy": 'FETCH_FROM_SERVICE_IF_ONLINE',
          "expiryPolicy": 'EXPIRE_ON_RESTART'
        }
      },
        "authorization": {
        "basicAuth": {
          "backendId": "YOUR_BACKEND_ID",
          "anonymousToken": "YOUR_BACKEND_ANONYMOUS_TOKEN"
        },
        "oAuth": {
          "clientId": "YOUR_CLIENT_ID",
          "clientSecret": "YOUR_ClIENT_SECRET",
          "tokenEndpoint": "YOUR_TOKEN_ENDPOINT"
        },
        "facebookAuth":{
          "facebookAppId": "YOUR_FACEBOOK_APP_ID",
          "backendId": "YOUR_BACKEND_ID",
          "anonymousToken": "YOUR_BACKEND_ANONYMOUS_TOKEN"
        },
        "ssoAuth":{
          "clientId": "5xxxxxx7-bf49-45c1-aeda-2xxxxx4",
          "clientSecret": "yxxxxxxx",
          "tokenEndpoint": "https://xxx.oraclecorp.com:443/mobile/platform/sso/token"
        }
      }
    }
  }
};

 

For more details, please See Configuring SDK Properties for Cordova.

 

After adding the physical files, update the path mappings for mcs and mcsconf in the main.js file under the requirejs.config section:

 paths:
                    //injector:mainReleasePaths
                            {
                                'knockout': 'libs/knockout/knockout-3.4.0.debug',
                                'jquery': 'libs/jquery/jquery-2.1.3',
                                'jqueryui-amd': 'libs/jquery/jqueryui-amd-1.11.4',
                                'promise': 'libs/es6-promise/promise-1.0.0',
                                'hammerjs': 'libs/hammer/hammer-2.0.4',
                                'ojdnd': 'libs/dnd-polyfill/dnd-polyfill-1.0.0',
                                'ojs': 'libs/oj/v2.0.2/debug',
                                'ojL10n': 'libs/oj/v2.0.2/ojL10n',
                                'ojtranslations': 'libs/oj/v2.0.2/resources',
                                'text': 'libs/require/text',
                                'signals': 'libs/js-signals/signals',
                                'mcs': 'libs/mcs/mcs',
                                'mcsconf': 'libs/mcs/oracle_mobile_cloud_config'
                            }
                    //endinjector

Implementation Steps

We will implement the entire code in dashboard.html and dashboard.js for simplicity.

 

Add the additional modules mcs and mcsconf to the list of modules loaded in the dashboard.js file:

define(['ojs/ojcore', 'knockout', 'jquery', 'mcs', 'mcsconf', 'ojs/ojknockout', 'ojs/ojfilmstrip', 'ojs/ojpagingcontrol', 'ojs/ojbutton'],

 

Initialize the following variables in the dashboard view model function

var backend = 'empty';
                self.model = ko.observable('filmstrip-navdots-example');
                self.dataReady = ko.observable(false);
                self.data = ko.observableArray();
                self.buttonSSOLogin = function () {
                    initializeMCS();
                };

 

 

Updated dashboard.html file: Please find attached the updated dashboard.html file

 

Step 1.  Load Mobile Back end’s Configuration into the application:

function initializeMCS() {
                    mcs.MobileBackendManager.platform = new mcs.CordovaPlatform();
                    mcs.MobileBackendManager.setConfig(mcs_config);
                    backend = mcs.MobileBackendManager.getMobileBackend("RDXTESTSSO");
                    if (backend != null) {
                        backend.setAuthenticationType("ssoAuth");
                        ssoLogin();
                    }
                }

Step 2. Authenticate and Log In Using the SDK:

function ssoLogin() {
                    backend.Authorization.authenticate(
                            function (statusCode, data) {
                                console.log(data);
                                console.log(statusCode);
                                alert("SSO Login success, status:" + statusCode);                               
                                fetchSalesCloudData(data.access_token);
                            },
                            function (statusCode, data) {
                                console.log(statusCode + " with message:  " + data);
                                alert("SSO Login failed, statusCode" + statusCode);
                            });
                }

Step 3. In all REST calls to MCS APIs, include the given token in the Authorization header. In this case, we will pass this token while calling the API to fetch data from our Sales Cloud custom API:

function fetchSalesCloudData(ssoToken)
                {
                    var mcsbackendURL = mcs_config.mobileBackends.RDXTESTSSO.baseUrl + "/mobile/custom/RDHSalesApplAPI/opportunities";
                    console.log(mcsbackendURL);
                    var token = "Bearer " + ssoToken;
                    console.log(token);
                    var settings = {
                        "async": true,
                        "crossDomain": true,
                        "url": mcsbackendURL,
                        "method": "GET",
                        "headers": {
                            "authorization": token
                        }
                    };
                    $.ajax(settings).done(function (response) {
                        console.log(response);
                        $.each(response, function () {
                            self.data.push({
                                Name: this.Name,
                                OptyId: this.OptyId,
                                OptyNumber: this.OptyNumber,
                                SalesStage: this.SalesStage,
                                Revenue: this.Revenue
                            });
                        });
                        self.dataReady(true); //dataReady is a Knockout observable
                        displaySalesCloudData();
                    });
                }

Step 4: Display data using Oracle JET filmstrip component

function displaySalesCloudData()
                {
                    console.log("inside displayFilmStrip");
                    self.pagingModel = null;
                    getItemInitialDisplay = function (index)
                    {
                        return index < 1 ? '' : 'none';
                    };
                    getPagingModel = function ()
                    {
                        if (!self.pagingModel)
                        {
                            var filmStrip = $("#filmStrip");
                            var pagingModel = filmStrip.ojFilmStrip("getPagingModel");
                            self.pagingModel = pagingModel;
                        }
                        return self.pagingModel;
                    };
                }

 

Build and Run the application on Android emulator/device

 

In your command prompt, please change directory to the project folder and run the following commands:

 

Build the application using the following command

 grunt build --platform=android

 

Once the build succeeds, run the application using the following command, assuming an Android emulator is already up and running:

grunt serve --platform=android  --disableLiveReload=true  

 

Output

Here are screenshots of the emulator output:

1. Application Icon:

app-icon.png

 

2. SSO Login

 

app-dashboard-sso-login-button.png

 

3. SSO Login view in embedded web-view. Enter mobile user credentials and press SignIn Button

app-login-page.png

 

4. Success Alert message

app-login-success.png

5. Display of Sales Cloud data on view using JET Filmstrip component

app-dashboard-filmstrip.png

Although the picture below seems extremely funny, in theory screaming for help is the last thing that we should do when disaster happens. But the theory is not reality!

 

 

We would expect that Disaster Recovery plans are just an extra measure for natural disasters. So we build our datacenters deep in the ground, in earthquake-free zones and in different geographic areas. But I saw some dazzling numbers that contradicted me:

 

It is rather difficult to keep your entire business alive when a junior electrical engineer misunderstands which cable is the most important power cord for the datacenter and disconnects it. And this is just the simplest example of why you can’t predict all downtime.

A disaster recovery solution should cover at least these aspects:

    • Avoid Single Point of Failure
    • Prevent Data Loss
    • Reduce Downtime Cost & Revenue Impact for Planned & Unplanned Outages
    • Disaster and Data Protection for Compliance & Regulatory Purposes

 

Depending on the tools and architecture used, a Disaster Recovery plan can lead to complex and expensive implementations that will make you think twice: why should we do it?

 

It is just a matter of choosing the right tools.

A low-cost, simple and low-risk solution for Oracle databases would be using Data Guard to replicate to an Oracle Cloud environment.

The combo would easily cover:

  • Primary / DR synchronisation

Using Data Guard assures synchronization between the sites, following the various MAA best practices.

 

  • On-Demand elasticity after migration to the DR site

Using the Database Cloud Service offers you the elasticity needed to fit your database in the cloud.

 

  • High investment in Hardware and Software & DR site operational aspects

DR requires high investments in new hardware, software licenses and additional staff to operate the site... Using Oracle Database Cloud Service would drastically reduce these costs.

 

  • Data inconsistency or corruption

Data corruption can happen if remote storage mirroring solutions are used to replicate database files instead of using Oracle Data Guard or GoldenGate.

 

From Disaster to Recovery it’s just a matter of choosing the proper tools, software and the right partners. In my next post I will detail 3 Disaster Recovery architectures using Database Cloud Backup Service, Data Guard, and obviously the Oracle Public Cloud environment.

 

**The views expressed in this post are my own and do not necessarily reflect the views of Oracle.

Oracle Developer Cloud Service provides you with the following capabilities as far as JUnit is concerned

 

  • Viewing the list of all executed tests and the cumulative test metrics
  • Test Result History to track the history of all tests in a graphical form

 

This blog contains a simple JPA based project which uses an in-memory Derby database to execute JUnit tests. You will see how to

 

  • Set up the source code in your Developer Cloud Service instance Git repository
  • Configure the build process along with the JUnit test related actions
  • Execute the build and track the test results

 

 

Steps

 

Unit test

 

The JUnit test case

 

public class JSRRepositoryTest {


    public JSRRepositoryTest() {
    }
    static Map<String, String> props = new HashMap<>();
    final static String PU_NAME = "derby-in-memory-PU";


    @BeforeClass
    public static void setUpClass() {


        props.put("javax.persistence.jdbc.url", "jdbc:derby:target/derbydb;create=true");
        props.put("javax.persistence.jdbc.driver", "org.apache.derby.jdbc.EmbeddedDriver");
        JPAFacade.bootstrapEMF(PU_NAME, props);


    }


    @AfterClass
    public static void tearDownClass() {
        props.clear();
        props = null;


        JPAFacade.closeEMF();
    }


    JSRRepository cut;


    @Before
    public void setUp() {
        cut = new JSRRepository();
    }


    @After
    public void tearDown() {
        //nothing to do
    }


    @Test
    public void getSingleJSRTest() {
        JavaEESpecification spec = cut.get("123");
        assertNotNull("Spec was null!", spec);
        assertEquals("Wrong spec id", spec.getJsrId(), new Integer(123));
        assertEquals("Wrong spec name", spec.getName(), "jsonb");
    }


    @Test(expected = RuntimeException.class)
    public void getSingleJSRTestForNullValue() {
        cut.get(null);


    }


    @Test(expected = RuntimeException.class)
    public void getSingleJSRTestForBlankValue() {
        cut.get("");


    }


    @Test
    public void getSingleJSRTestForInvalidValue() {
        JavaEESpecification spec = cut.get("007");
        assertNull("Spec was not null!", spec);
    }


    @Test
    public void getAllJSRsTest() {
        List<JavaEESpecification> specs = cut.all();
        assertNotNull("Specs list was null!", specs);
        assertEquals("2 specs were not found", specs.size(), 2);
    }


    @Test
    public void createNewJSRTest() {
        JavaEESpecification newSpec = new JavaEESpecification(366, "Java EE Platform", "8");
        cut.newJSR(newSpec);
        JavaEESpecification spec = cut.get("366");
        assertNotNull("Spec was null!", spec);
        assertEquals("Wrong spec id", spec.getJsrId(), new Integer(366));
        assertEquals("Wrong spec name", spec.getName(), "Java EE Platform");
        assertEquals("Wrong spec version", spec.getVersion(), "8");
    }


    @Test
    public void updateJSRDescTest() {


        String specID = "375";
        String oldDesc = "security for the Java EE platform";
        String newDesc = "updated desc on " + new Date();


        JavaEESpecification newSpec = new JavaEESpecification(Integer.parseInt(specID), oldDesc, "Security", "1.0");
        cut.newJSR(newSpec);
        JavaEESpecification updatedSpec = new JavaEESpecification(Integer.parseInt(specID), newDesc, "Security", "1.0");


        cut.updateJSRDescription(updatedSpec);
        JavaEESpecification spec = cut.get(specID);


        assertNotNull("Spec was null!", spec);
        assertEquals("Description was not updated", spec.getDescription(), newDesc);
        assertEquals("Wrong spec id", spec.getJsrId(), new Integer(specID));
        assertEquals("Wrong spec name", spec.getName(), "Security");
        assertEquals("Wrong spec version", spec.getVersion(), "1.0");
    }
}

 

Project & code repository creation

 

Create a project in your Oracle Developer Cloud instance

 

 

 

 

 

 

 

Create a Git repository – browse to the Home tab, click New Repository and follow the steps

 

 

 

 

 

You should see your new repository created

 

Populating the Git repo

 

Push the project from your local system to the Developer Cloud Git repo you just created. We will do this via the command line; all you need is a Git client installed on your local machine. You can use this or any other tool of your choice

 

cd <project_folder> 
git init
git remote add origin <developer_cloud_git_repo>
//e.g. https://john.doe@developer.us.oraclecloud.com/developer007-foodomain/s/developer007-foodomain-project_2009/scm/junit-sample-app-repo.git 
git add .
git commit -m "first commit"
git push -u origin master  //Please enter the password for your Oracle Developer Cloud account when prompted

 

You should be able to see the code in your Developer Cloud console

 

 

 

Configure build job

 

 

 

 

 

 

 

Important

 

Activate the following post build actions

 

  • Publishing of JUnit test result reports
  • Archiving of test reports (if needed)

 

 

 

 

Trigger build

 

 

Check test results

 

After the build process is over (it will fail in this case), check the top right corner of your build page and click Tests

 

 

Overall metrics

 

 

Failed tests snapshot

 

 

Failed test details

 

 

Example of a passed test

 

 

Result History

 

**The views expressed in this post are my own and do not necessarily reflect the views of Oracle.

Oracle Documents Cloud Service provides “Conversations” – a great way for people in an enterprise to share information with each other. You can extend this benefit to the developer teams in your enterprise by integrating with Developer Cloud Service to automatically provide updates about the latest status of a project. Furthermore, conversations can be accessed on the go using the Android/iPhone mobile app for Documents Cloud Service (for downloads refer to this page), ensuring that everybody is always on the same page with regards to what is happening in the project.

 

Do note that conversations are actually a feature of Oracle Social Network Cloud (OSN), which is bundled with a Documents Cloud Service instance, and I will be using the two terms interchangeably in this blog.

 

This blog post will demonstrate how to achieve this via webhooks provided by Developer Cloud Service (version 16.3.5) and Documents Cloud Service (version 16.3.5). Webhooks are a callback mechanism for a producer application to provide updates about itself to consumer applications in real time. This eliminates the need for the consumer applications to poll and improves efficiency on both sides.

 

Any activity in the project, like issue updates, commits, branch creation etc., will be automatically published into Documents Cloud Service (or Oracle Social Network Cloud) as posts to a conversation. This way the entire team (everyone with access to the conversation) stays aware of the latest updates to the project and can collaborate to clear development impediments like failed builds, merge conflicts etc.

 

In the case of Documents Cloud Service and Developer Cloud Service, this can be accomplished without any code, using an incoming webhook on the Documents Cloud Service side and an outgoing webhook on the Developer Cloud Service side, as illustrated in the figure below.

 

Webhooks system.png

An outgoing webhook is used to push data out of the source system, i.e. it converts the event data into the required payload and POSTs it to the target URL – in this case, the URL of the DOCS/OSN incoming webhook. An incoming webhook is a service at the destination system that listens for these POST requests and converts the payload into meaningful destination data – in this case, conversation posts.
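 

To make the producer/consumer roles concrete, here is a minimal Java sketch of what an outgoing webhook delivery boils down to: convert the event into the payload the consumer expects and POST it to the consumer's incoming webhook URL. The endpoint, the X-Webhook-Token header and the payload shape below are purely illustrative assumptions, not the actual contract between Developer Cloud Service and OSN; the real wiring is handled for you by the webhook configuration described in the steps below.

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class OutgoingWebhookSketch {

    public static void main(String[] args) throws Exception {
        // Hypothetical incoming webhook endpoint and token (placeholders, not real values)
        String incomingWebhookUrl = "https://consumer.example.com/hooks/incoming";
        String token = "replace-with-your-token";

        // Event data converted into the payload shape the consumer expects
        String payload = "{ \"data\": { \"projectId\": \"test\", \"message\": \"Build #42 failed\" } }";

        HttpURLConnection conn = (HttpURLConnection) new URL(incomingWebhookUrl).openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        conn.setRequestProperty("Content-Type", "application/json");
        // How the token is presented (header vs. payload) is an assumption in this sketch
        conn.setRequestProperty("X-Webhook-Token", token);

        try (OutputStream out = conn.getOutputStream()) {
            out.write(payload.getBytes(StandardCharsets.UTF_8));
        }

        // The consumer (incoming webhook) acknowledges the delivery
        System.out.println("Consumer responded with HTTP " + conn.getResponseCode());
    }
}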

 

Below are the steps for achieving this setup:

 

Step 1 – Create a conversation in Documents Cloud Service

First, we need to create the conversation to which all the Developer Cloud Service updates will be automatically posted. Go to the Documents Cloud Service console, click on “Conversations” in the left-hand menu and create a conversation. I am going to name this conversation “Webhooks 2016”.

 

incoming webhook - 1.png

incoming webhook - 2.png

 

Step 2 – Create a DCS (Developer Cloud Service) incoming webhook in Documents Cloud Service

In the Documents Cloud Service, click on your user name in the top right-hand corner and select “Administration” from the menu. This will take you to the Documents Cloud Service Administration console. Select “Webhooks” from the administration menu as shown:

 

incoming webhook - 3.png

 

You will see 3 options:

  • DCS Incoming Webhook – handles incoming updates from a Developer Cloud Service Instance
  • Generic Incoming Webhook – handles incoming updates from any application
  • JIRA Incoming Webhook – handles incoming updates from a JIRA application

 

 

incoming webhook - 4.png

 

We will create a DCS Incoming Webhook as our source is a Developer Cloud Service project.  Click on New Instance and provide the various parameters to create the webhook.

 

incoming webhook - 5.png

 

For the fields:

Webhook Name – a relevant name to identify the webhook

Webhook enabled – tick this to enable the webhook

Target conversation – select the conversation created in Step 1

Message template – populated by default; it contains two variables, data.projectId and data.message. The structure of the data object is as follows:

data {
    message : 'test',
    projectId : 'projectId'
}

 

Once you save this webhook, a webhook URL is generated along with a token.

 

incoming webhook - 6.png

 

Here the webhook URL is a relative URL.  The full webhook URL would be the following:

https://<OSN instance URL>/osn-public/hooks/webhooks

 

And the token here would be ‘6361690deedf10aad6147533636dfd49’

 

To find out the OSN (Oracle Social Network) instance URL attached to the Documents Cloud Service, open the following endpoint in your browser (note that you have to be logged in to Documents Cloud Service to access this):

https://<documents instance URL>/documents/web?IdcService=GET_SERVER_INFO

This will return a JSON payload containing a lot of server-side information. Look for an attribute called OSNServiceURL:

[
    "OSNServiceURL",
    "https://instancexyz-test.socialnetwork.us.oraclecloud.com:443/osn"
],

 

Replace osn with osn-public; this becomes your webhook base URL.

So in this example, the webhook URL will be

https://instancexyz-test.socialnetwork.us.oraclecloud.com:443/osn-public/hooks/webhooks

 

And the token will be 6361690deedf10aad6147533636dfd49
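 

If you would rather script this lookup than do it in the browser, here is a small Java sketch along the same lines: it calls the GET_SERVER_INFO endpoint, pulls out the OSNServiceURL value and derives the webhook URL by swapping osn for osn-public. It assumes the endpoint accepts HTTP basic authentication with your Documents Cloud Service credentials (an assumption; in the browser you are simply reusing your login session), and it uses a crude regular expression instead of a JSON parser to keep the sketch short.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Base64;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class OsnWebhookUrlLookup {

    public static void main(String[] args) throws Exception {
        String docsInstanceUrl = "https://<documents instance URL>"; // replace with your instance
        String credentials = "jane.doe@example.com:password";        // replace with your login

        // Call the server info endpoint (assumes basic auth is accepted for this endpoint)
        URL url = new URL(docsInstanceUrl + "/documents/web?IdcService=GET_SERVER_INFO");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        String basicAuth = Base64.getEncoder().encodeToString(credentials.getBytes(StandardCharsets.UTF_8));
        conn.setRequestProperty("Authorization", "Basic " + basicAuth);

        StringBuilder body = new StringBuilder();
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
            String line;
            while ((line = reader.readLine()) != null) {
                body.append(line);
            }
        }

        // Crude extraction of the value that follows "OSNServiceURL" in the JSON payload
        Matcher matcher = Pattern.compile("\"OSNServiceURL\"\\s*,\\s*\"([^\"]+)\"").matcher(body);
        if (matcher.find()) {
            String osnServiceUrl = matcher.group(1); // e.g. https://instancexyz-test...:443/osn
            String base = osnServiceUrl.endsWith("/osn")
                    ? osnServiceUrl.substring(0, osnServiceUrl.length() - "/osn".length()) + "/osn-public"
                    : osnServiceUrl;
            System.out.println("Incoming webhook URL: " + base + "/hooks/webhooks");
        } else {
            System.out.println("OSNServiceURL not found in the response");
        }
    }
}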

 

Step 3 – Create an outgoing webhook in Developer Cloud Service

The next step is to create an outgoing webhook in Developer Cloud Service to post updates to the Documents Cloud Service incoming webhook. Note that you have to be an administrator of the project for which you want to configure the webhook.

 

For demonstration purposes, I have created an empty Git project called “test” for which I am the administrator. Select the project, go to the Administration tab, and click on Webhooks.

 

outgoing webhook - 1.png

 

Select New Webhook, which will take you to the ‘Create Webhook’ form

 

outgoing webhook - 2.png

 

In the “Create Webhook” form, use the following

Type = Oracle Social Network

Name = any description, I am using “test webhook” in this example

Active = Yes

URL = give the OSN webhook URL i.e. https://instancexyz-test.socialnetwork.us.oraclecloud.com:443/osn-public/hooks/webhooks

Authentication token = give the token value, i.e. 6361690deedf10aad6147533636dfd49

Event groups – You can select specific events, or Failed Builds.  For the purpose of this example, I am going to select “All events”

Click on Done when all the details are complete.  This will create a new Webhook ready to be used.

 

outgoing webhook - 3.png

 

Step 4 – Test the outgoing webhook


Developer Cloud Service provides a way to test the webhook by simply clicking on the “Test” button.  You can also go to Logs and see the full details of the webhook interactions including the message payload.  If the test is successful, it will show up in green as below.

 

outgoing webhook - 4.png

 

This should show up in our “Webhooks 2016” conversation in Documents Cloud Service.

 

conversation.png

 

If the test is successful, try performing some activity in the Developer Cloud Service project, like a commit or creating a wiki page, and check whether it shows up in the conversation.

 

You can track this conversation from the mobile client for Documents Cloud Service as well (available for iPhone and Android).  An effortless way of increasing developer collaboration and keeping track of project activities!  Consult the official documentation of Developer Cloud Service and Documents Cloud Service for more on Webhooks.

 

**The views expressed in this post are my own and do not necessarily reflect the views of Oracle.

oow-logo-2015.png

 

 

Oracle Application Builder Cloud Service (ABCS) is Oracle’s offering that empowers business users to quickly create and publish web and mobile applications with a no-coding-required, browser-based development experience.

 

 

Oracle will showcase the product at Oracle Open World 2016 San Francisco through the following Sessions, Hands-on Labs and Demo Booths:

 

Visual Application Development and Data Mashups for Busy Professionals [CON7296]                                                                                                       

Thursday, Sep 22, 12:00 p.m. - 12:45 p.m. | Marriott Marquis - Golden Gate C3

 

Solving IT Backlogs with Next-Generation Citizen Developer Tools [CON2888]                                                                                                    

Thursday, Sep 22, 9:30 a.m. - 10:15 a.m. | Marriott Marquis - Golden Gate B                                                                                                                                       

 

Dashboards and Custom Application Development for Oracle Customer Experience Users [CON2891]

Wednesday, Sep 21, 3:00 p.m. - 3:45 p.m. | Moscone West - 2016

 

No Code Required: Application Development and Publishing Made Easy [HOL7402]

Tuesday, Sep 20, 11:30 a.m. - 12:30 p.m. | Hotel Nikko - Nikko Ballroom III (3rd Floor)

Monday, Sep 19, 4:15 p.m. - 5:15 p.m. | Hotel Nikko - Nikko Ballroom III (3rd Floor)

 

HTML5 and JavaScript User Interface Development with Oracle’s Platform, Tools, and Frameworks [CON6492]

Wednesday, Sep 21, 12:15 p.m. - 1:00 p.m. | Moscone South - 306

 

Extend Digital Experiences through Component-Driven Integrations [CON7265]

Thursday, Sep 22, 9:30 a.m. - 10:15 a.m. | Moscone West - 2014

 

Simplified Multichannel App Development for Business Users [CON2884]

Monday, Sep 19, 1:45 p.m. - 2:30 p.m. | Moscone West – 2005

 

ABCS demos at the mobile mini-theater

Tuesday, Sep 20, 10:30 a.m. - 11:30 a.m. | Moscone Mobile mini-theater

Wednesday, Sep 21, 10:30 a.m. - 11:30 a.m. | Moscone Mobile mini-theater

 

Meet the product experts at Demogrounds

Monday-Wednesday, Sep 19-21, 10:15 a.m. - 5:15 p.m. | Moscone South and Moscone West

 

More details regarding the sessions can be found here: http://bit.ly/OOW16ABCS

Register for the events to learn how to start rapidly creating applications from the comfort of your browser!