
This blog will demonstrate how to build and run a WebSocket based microservice. Here is what the blog will cover at a high level

 

  • Overview of WebSocket and the sample Java application
  • Continuous Integration setup: from source code in the IDE to a build artifact in Oracle Developer Cloud
  • Continuous Deployment setup: from a build artifact in Developer Cloud Service to an application running in Oracle Application Container Cloud
  • Testing the application

 

 

Overview

 

WebSocket: the standard

 

WebSocket is an IETF standard defined in RFC 6455. It has the following key characteristics which make it a great fit for real-time applications

  • Bi-directional: both server and client can initiate a communication
  • Full duplex: once the WebSocket session is established, both server and client can communicate independently of each other
  • Less verbose (compared to HTTP)

 

A deep dive into the protocol is out of scope for this blog. Please refer to the RFC for further details

 

Java WebSocket API

 

A standard Java API for this technology is defined by JSR 356. It is backed by a specification, which makes multiple implementations possible. JSR 356 is also included as a part of the Java Enterprise Edition 7 (Java EE 7) platform, which ships a pre-packaged (default) implementation of this API as well as integration with other Java EE technologies like EJB, CDI etc.

 

Tyrus

 

Tyrus is the reference implementation of the Java WebSocket API. It is the default implementation packaged with Java EE 7 containers such as WebLogic 12.2.1 (and above) and GlassFish (4.x). It provides both server and client side APIs for building WebSocket applications.

 

Tyrus Grizzly module

 

Tyrus has a modular architecture, i.e. it has separate modules for the server and client implementations, an SPI etc. It supports the notion of containers (think of them as connectors) for specific runtime support, which build on this modular setup. Grizzly is one of the supported containers and can be used in server or client (or both) modes as per your requirements; the sample application leverages it.
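
To make this concrete, here is a minimal sketch of bootstrapping a standalone WebSocket server with Tyrus on the Grizzly container. The host, port and context path are illustrative, and it assumes the tyrus-container-grizzly-server module is on the classpath along with an annotated endpoint class like the sample's ChatServer (shown later)

import org.glassfish.tyrus.server.Server;

public class WebSocketServerBootstrap {

    public static void main(String[] args) throws Exception {
        //Tyrus picks up the Grizzly container implementation from the classpath
        Server server = new Server("localhost", 8080, "/", null, ChatServer.class);
        server.start();
        System.out.println("WebSocket server started. Press Enter to stop...");
        System.in.read();
        server.stop();
    }
}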

 

About the sample application

 

The sample is a chat application, a canonical use case for WebSockets (though this is by no means a full-blown chat service). Users can

  • Join the chat room (duplicate usernames not allowed)
  • Get notified about new users joining
  • Send public messages
  • Send private messages
  • Leave the chat room (other users get notified)

 

The application is quite simple

  • It has a server-side component: a (fat) JAR based Java application deployed to Application Container Cloud
  • The client can be any component which supports the WebSocket API, e.g. your browser. The unit tests use the Java client API implementation from Tyrus

Code

 

Here is a summary of the various classes and their roles

 

Class(es) | Category | Description
ChatServer | Core | Contains the core business logic of the application
WebSocketServerManager | Bootstrap | Manages the bootstrap and shutdown process of the WebSocket container
ChatMessage, DuplicateUserNotification, LogOutNotification, NewJoineeNotification, Reply, WelcomeMessage | Domain objects | Simple POJOs to model the application level entities
ChatMessageDecoder | Decoder | Converts chats sent by users into Java (domain) objects which can be used within the application
DuplicateUserMessageEncoder, LogOutMessageEncoder, NewJoineeMessageEncoder, ReplyEncoder, WelcomeMessageEncoder | Encoders | Convert Java (domain) objects into native (text) payloads which can be sent over the wire using the WebSocket protocol

 
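Before looking at the endpoint itself, here is a minimal sketch of how a decoder and an encoder plug into the Java WebSocket API. The wire format ("recipient:message") and the domain object constructors/getters used below are assumptions for illustration; the actual classes in the sample define their own format

import javax.websocket.DecodeException;
import javax.websocket.Decoder;
import javax.websocket.Encoder;
import javax.websocket.EndpointConfig;

//Decoder: converts a raw text frame into a domain object
//(assumed format: "recipient:message", where an empty recipient means a public message)
public class ChatMessageDecoder implements Decoder.Text<ChatMessage> {

    @Override
    public ChatMessage decode(String s) throws DecodeException {
        int sep = s.indexOf(':');
        String recipient = s.substring(0, sep);
        String msg = s.substring(sep + 1);
        return new ChatMessage(msg, recipient, !recipient.isEmpty()); //constructor assumed for illustration
    }

    @Override
    public boolean willDecode(String s) {
        return s != null && s.contains(":"); //only decode frames matching the assumed format
    }

    @Override
    public void init(EndpointConfig config) {
    }

    @Override
    public void destroy() {
    }
}

//Encoder: converts a domain object into the text payload sent to the client
class ReplyEncoder implements Encoder.Text<Reply> {

    @Override
    public String encode(Reply reply) {
        //getters assumed for illustration
        return (reply.isPrivate() ? "[private] " : "") + reply.getSender() + ": " + reply.getMsg();
    }

    @Override
    public void init(EndpointConfig config) {
    }

    @Override
    public void destroy() {
    }
}
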

Here is the WebSocket endpoint implementation (ChatServer.java)

 

import java.io.IOException;
import java.util.Set;
import java.util.concurrent.ConcurrentSkipListSet;
import java.util.function.Predicate;
import java.util.logging.Level;
import java.util.logging.Logger;
import javax.websocket.OnClose;
import javax.websocket.OnMessage;
import javax.websocket.OnOpen;
import javax.websocket.Session;
import javax.websocket.server.PathParam;
import javax.websocket.server.ServerEndpoint;

@ServerEndpoint(
        value = "/chat/{user}/",
        encoders = {ReplyEncoder.class, 
                    WelcomeMessageEncoder.class, 
                    NewJoineeMessageEncoder.class, 
                    LogOutMessageEncoder.class,
                    DuplicateUserMessageEncoder.class},
        decoders = {ChatMessageDecoder.class}
)


public class ChatServer {


    private static final Set<String> USERS = new ConcurrentSkipListSet<>();
    private String user;
    private Session s;
    private boolean dupUserDetected;


    @OnOpen
    public void userConnectedCallback(@PathParam("user") String user, Session s) {
        if (USERS.contains(user)) {
            try {
                dupUserDetected = true;
                s.getBasicRemote().sendText("Username " + user + " has been taken. Retry with a different name");
                s.close();
                return;
            } catch (IOException ex) {
                Logger.getLogger(ChatServer.class.getName()).log(Level.SEVERE, null, ex);
            }


        }
        this.s = s;
        s.getUserProperties().put("user", user);
        this.user = user;
        USERS.add(user);


        welcomeNewJoinee();
        announceNewJoinee();
    }


    private void welcomeNewJoinee() {
        try {
            s.getBasicRemote().sendObject(new WelcomeMessage(this.user));
        } catch (Exception ex) {
            Logger.getLogger(ChatServer.class.getName()).log(Level.SEVERE, null, ex);
        }
    }


    private void announceNewJoinee() {
        s.getOpenSessions().stream()
                .filter((sn) -> !sn.getUserProperties().get("user").equals(this.user))
                //.filter((s) -> s.isOpen())
                .forEach((sn) -> sn.getAsyncRemote().sendObject(new NewJoineeNotification(user, USERS)));
    }


    public static final String LOGOUT_MSG = "[logout]";


    @OnMessage
    public void msgReceived(ChatMessage msg, Session s) {
        if (msg.getMsg().equals(LOGOUT_MSG)) {
            try {
                s.close();
                return;
            } catch (IOException ex) {
                Logger.getLogger(ChatServer.class.getName()).log(Level.SEVERE, null, ex);
            }
        }
        Predicate<Session> filterCriteria = null;
        if (!msg.isPrivate()) {
            //for ALL (except self)
            filterCriteria = (session) -> !session.getUserProperties().get("user").equals(user);
        } else {
            String privateRecepient = msg.getRecepient();
            //private IM
            filterCriteria = (session) -> privateRecepient.equals(session.getUserProperties().get("user"));
        }


        s.getOpenSessions().stream()
                .filter(filterCriteria)
                //.forEach((session) -> session.getAsyncRemote().sendText(msgContent));
                .forEach((session) -> session.getAsyncRemote().sendObject(new Reply(msg.getMsg(), user, msg.isPrivate())));


    }


    @OnClose
    public void onCloseCallback() {
        if(!dupUserDetected){
            processLogout();
        }
        
    }


    private void processLogout() {
        try {
            USERS.remove(this.user);
            s.getOpenSessions().stream()
                    .filter((sn) -> sn.isOpen())
                    .forEach((session) -> session.getAsyncRemote().sendObject(new LogOutNotification(user)));


        } catch (Exception ex) {
            Logger.getLogger(ChatServer.class.getName()).log(Level.SEVERE, null, ex);
        }
    }


}

 

Setting up Continuous Integration & Deployment

 

The sections below deal with the configurations to be made within the Oracle Developer Cloud service

 

Project & code repository creation

 

Please refer to the Project & code repository creation section in the Tracking JUnit test results in Developer Cloud service blog, or check the product documentation for more details

 

Configure source code in Git repository

 

Push the project from your local system to the Developer Cloud Git repo you just created. We will do this via the command line, and all you need is a Git client installed on your local machine. You can use the Git CLI or any other tool of your choice

 

cd <project_folder> 
git init  
git remote add origin <developer_cloud_git_repo>  
//e.g. https://john.doe@developer.us.oraclecloud.com/developer007-foodomain/s/developer007-foodomain-project_2009/scm/acc-websocket-sample.git
git add .  
git commit -m "first commit"  
git push -u origin master  //Please enter the password for your Oracle Developer Cloud account when prompted

 

Configure build

 

Create a New Job

 

 

Select JDK

 

 

 

Continuous Integration (CI)

 

Choose Git repo

 

 

 

Set the build trigger - this build job will be triggered in response to updates within the Git repository (e.g. via git push)

 

 

 

Add Maven Build Step

 

 

 

Activate the following post build actions

  • Archive the Maven artifacts (contains deployable zip file)
  • Publish JUnit test result reports

 

 

 

Execute Build & check JUnit test results

 

Before configuring deployment, we need to trigger the build in order to produce the artifacts which can be referenced by the deployment configuration

 

 

 

After the build is complete, you can

  • Check the build logs
  • Check JUnit test results
  • Confirm archived Maven artifacts

 

 

 

 

Test results

 

 

 

Build logs

 

 

 

 

Continuous Deployment (CD) to Application Container Cloud

 

Create a New Configuration for deployment

 

 

 

Enter the required details and configure the Deployment Target

 

 

 

Configure the Application Container Cloud instance

 

 

 

 

 

Configure the Automatic deployment option on the final confirmation page

 

 

 

Confirmation screen

 

 

 

 

Test the CI/CD flow

 

Make some code changes and push them to the Developer Cloud service Git repo. This should

 

  • Automatically trigger the build which, once successful, will
  • Automatically trigger the deployment process

 

 

 

 

 

 

 

Check your application in Application Container Cloud

 

 

 

 

 

Here is the detailed view

 

 

 

 

Test

 

You will need a WebSocket client for this example. I would personally recommend Simple WebSocket Client, which can be installed as a Chrome browser plugin. See the snapshot below for a general usage template of this client

 

 

 

The following is a template for the URL of the WebSocket endpoint

 

wss://<acc-app-url>/chat/<user-handle>/
e.g. wss://acc-websocket-chat-domain007.apaas.em2.oraclecloud.com/chat/abhi/
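
If you prefer testing from Java, here is a minimal sketch using the standard WebSocket client API (the unit tests in the sample use the Tyrus client implementation of this same API; the URL placeholder and the idle wait are illustrative)

import java.net.URI;
import javax.websocket.ClientEndpoint;
import javax.websocket.ContainerProvider;
import javax.websocket.OnMessage;
import javax.websocket.Session;
import javax.websocket.WebSocketContainer;

@ClientEndpoint
public class ChatTestClient {

    @OnMessage
    public void onMessage(String msg, Session session) {
        System.out.println("Received: " + msg); //print whatever the server pushes
    }

    public static void main(String[] args) throws Exception {
        WebSocketContainer container = ContainerProvider.getWebSocketContainer();
        //replace <acc-app-url> with your application URL
        Session session = container.connectToServer(ChatTestClient.class,
                URI.create("wss://<acc-app-url>/chat/foo/"));
        Thread.sleep(10000); //stay connected for a while to receive notifications
        session.close();
    }
}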

 

 

Test transcript

 

Here is a sequence of events which you can execute to test things out

 

Users foo and bar join the chatroom

 

wss://acc-websocket-chat-domain007.apaas.em2.oraclecloud.com/chat/foo/
wss://acc-websocket-chat-domain007.apaas.em2.oraclecloud.com/chat/bar/

 

   

 

 

foo gets notified about bar

 

 

 

 

User john joins

 

wss://acc-websocket-chat-domain007.apaas.em2.oraclecloud.com/chat/john/

 

 

 

foo and bar are notified

 

 

     

 

 

foo sends a message to everyone (public)

 

 

 

Both bar and john get the message

             

 

bar sends a private message to foo

 

 

Only foo gets it

 

 

Meanwhile, john gets bored and decides to leave the chat room

 

 

 

Both foo and bar get notified

 

         

 

That's all folks!

 

**The views expressed in this post are my own and do not necessarily reflect the views of Oracle.

This is the first of a two-part blog series. It leverages the Oracle Cloud platform (in concert with some widely used open source technologies) to demonstrate message-based, loosely coupled and asynchronous interaction between microservices, with the help of a sample application. It deals with

 

  • Development of individual microservices
  • Using asynchronous messaging for loosely coupled interactions
  • Setup & deployment on respective Oracle Cloud services

 

The second part is available here

 

 

 

Technical components

 

Oracle Cloud

The following Oracle Cloud services have been leveraged

 

Oracle Cloud Service | Description
Application Container Cloud | Serves as a scalable platform for deploying our Java SE microservices
Compute Cloud | Hosts the Kafka cluster (broker)

 

 

 

Open source technologies

The following open source components were used to build the sample application

 

Component | Description
Apache Kafka | A scalable, pub-sub message hub
Jersey | Used to implement REST and SSE services. Uses Grizzly as a (pluggable) runtime/container
Maven | Used as the standard Java build tool (along with its assembly plugin)

 

Messaging in Microservices

 

A microservice based system comprises multiple applications (services), each of which typically focuses on a specialized aspect (business scenario) within the overall system. It's possible for these individual services to function without any interaction whatsoever, but that's rarely the case. They cannot function in isolation and need to communicate with each other to get the job done. There are multiple strategies used to implement inter-microservice communication, often categorized under buckets such as synchronous vs asynchronous styles, choreography vs orchestration, REST (HTTP) vs messaging etc.

 

 

About the sample application

Architecture

 

The use case chosen for the sample application is a simple one. It works with randomly generated data (the producer microservice) which is received by another entity (the consumer microservice) and ultimately made available in the browser for the user to see in real time

A highly available setup has not been taken into account in this post. What we have is a single Kafka node, i.e. there is just one server in the Kafka cluster, and both the Producer and Consumer microservices are deployed in Application Container Cloud (a single instance of each)

 

Let’s look at the individual components depicted in the above diagram

 

Apache Kafka

Apache Kafka is popularly referred to as a 'messaging system or a streaming platform implemented as a distributed commit log'. Here is a simpler explanation

 

  • Basic: Kafka is a publish-subscribe based messaging system written in Scala (runs on the JVM) where publishers write to topics and consumers poll these topics to get data
  • Distributed: the parts (broker, publisher and consumer) are designed to be horizontally scalable
  • Master-slave architecture: data in topics is distributed amongst multiple nodes in a cluster (based on the replication factor). Only one node serves as the master for a specific piece of data, while zero or more nodes can contain copies of that data, i.e. act as followers
  • Partitions: topics are further divided into partitions. Each partition basically acts as a commit log where the data (key-value pairs) is stored. The data is immutable, has strict ordering (an offset is assigned to each entry), and is persisted and retained on disk (based on configuration)
  • Fitment: Kafka is suitable for handling high volume, high velocity, real time streaming data
  • Not JMS: similar to, yet different from, JMS. It does not implement the JMS specification, nor is it meant to serve as a drop-in replacement for a JMS based solution

The Kafka broker is nothing but a Kafka server process (node). Multiple such nodes can form a cluster which act as a distributed, fault-tolerant and horizontally scalable message hub.

 

Producer Microservice

 

It leverages the Kafka Java API and Jersey (the JAX-RS implementation). This microservice publishes a sample set of events at a rapid pace, since the goal is to showcase a real time data pub-sub pipeline.

 

Sample data

 

Data emitted by the producer is modeled around metrics. In this example it's the CPU usage of a particular machine, which can be thought of as simple key-value pairs (name, % usage etc.). Here is what it looks like (ignore the Partition attribute for now)

 

: Partition 0
event: machine-2
id: 19
data: 14%

: Partition 1
event: machine-1
id: 20
data: 5%

 

 

Consumer Microservice

 

This is the second microservice in our system. Just like the Producer, it makes use of Jersey as well as the Kafka Java (consumer) API. Another noteworthy Jersey component used here is the Server-Sent Events (SSE) module, which helps implement the subscribe-and-broadcast semantics required by our sample application (more on this later)

 

Both the microservices are deployed as separate applications on the Application Container Cloud platform and can be managed and scaled independently

 

Setting up Apache Kafka on Oracle Compute Cloud

 

You have a couple of options for setting up Apache Kafka on Oracle Compute Cloud (IaaS)

 

Bootstrap a Kafka instance using Oracle Cloud Marketplace

Use the Bitnami image for Apache Kafka from the marketplace (for detailed documentation, please refer to this link)

 

 

 

Use a VM on Oracle Compute Cloud

Start by provisioning a Compute Cloud VM on the operating system of your choice – this documentation provides an excellent starting point

 

Enable SSH access to VM

 

To execute any of the configurations, you first need to enable SSH access (create security policies/rules) to your Oracle Compute Cloud VM. Please find the instructions for Oracle Linux and Oracle Solaris based VMs respectively

 

 

Install Kafka on the VM

 

This section assumes an Oracle Enterprise Linux based VM

 

Here are the commands

 

sudo yum install java-1.8.0-openjdk
sudo yum install wget
mkdir -p ~/kafka-download
wget "http://redrockdigimark.com/apachemirror/kafka/0.10.1.0/kafka_2.11-0.10.1.0.tgz" -O ~/kafka-download/kafka-binary.tgz
mkdir -p ~/kafka-install && cd ~/kafka-install
tar -xvzf ~/kafka-download/kafka-binary.tgz --strip 1

 

 

 

Open Kafka listener port

 

You need to allow access to the Kafka broker service (on port 9092 in this case) for the microservices deployed on Oracle Application Container Cloud. This documentation provides a great reference in the form of a use case. Create a Security Application to specify the protocol and the respective port (detailed documentation here)

 

 

Reference the Security Application created in the previous step to configure the Security Rule. This will allow traffic from the public internet (as defined in the rule) onto port 9092 (as per the Security Application configuration). Please refer to the following documentation for details

 

 

You will end up with a configuration similar to what's depicted below

 

 

 

Configure Kafka broker

 

Make sure that you edit the below mentioned attributes in Kafka server properties (<KAFKA_INSTALL>/config/server.properties) as per your Compute Cloud environment

 

Public DNS of your Compute Cloud instance: if the public IP is 140.44.88.200, then the public DNS will be oc-140-44-88-200.compute.oraclecloud.com

 

Attribute | Value
listeners | PLAINTEXT://<oracle-compute-private-IP>:<kafka-listen-port>, e.g. PLAINTEXT://10.190.210.199:9092
advertised.listeners | PLAINTEXT://<oracle-compute-public-DNS>:<kafka-listen-port>, e.g. PLAINTEXT://oc-140-44-88-200.compute.oraclecloud.com:9092

 

 

Here are the relevant lines of the server.properties file, using the example values from the table above
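
listeners=PLAINTEXT://10.190.210.199:9092
advertised.listeners=PLAINTEXT://oc-140-44-88-200.compute.oraclecloud.com:9092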

 

Start ZooKeeper by executing <KAFKA_INSTALL>/bin/zookeeper-server-start.sh config/zookeeper.properties (run it from the <KAFKA_INSTALL> directory so that the relative config path resolves)

 

 

Start the Kafka broker by executing <KAFKA_INSTALL>/bin/kafka-server-start.sh config/server.properties

 

 

Do not start the Kafka broker before ZooKeeper is up
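
Optionally, you can pre-create the topic used by the sample application (Kafka auto-creates topics on first use by default, so this is just a precaution; the topic name matches the one used in the code, and the partition count matches the sample output shown earlier)

cd ~/kafka-install
bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 2 --topic cpu-metrics-topic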

 

High level solution overview

 

Event flow/sequence

Let’s look at how these components work together to support the entire use case

 

 

The producer pushes events into the Kafka broker

 

On the consumer end

 

  • The application polls the Kafka broker for data (yes, the poll/pull model is used in Kafka as opposed to the more commonly seen push model)
  • A client (browser/HTTP client) subscribes for events by simply sending an HTTP GET to a specific URL (e.g. https://<acc-app-url>/metrics). This is a one-time subscription, after which the client will receive events as they are produced within the application, and it can choose to disconnect at any time

 

 

Asynchronous, loosely coupled: the metrics data is produced by the producer microservice. One consumer makes it available as a real-time feed for browser-based clients, but there can be multiple such consuming entities, each implementing a different set of business logic around the same data set, e.g. pushing the metrics data to a persistent data store for processing/analysis etc.

 

More on Server Sent Events (SSE)

 

SSE is the middle ground between HTTP and WebSocket. The client sends a request and, once the connection is established, it is kept open and the client can continue to receive data from the server

 

  • This is more efficient compared to the HTTP request-response paradigm for every single request, i.e. polling the server can be avoided
  • It's not the same as WebSocket, which is full duplex in nature, i.e. the client and server can exchange messages anytime after the connection is established. In SSE, the client only sends a request once

 

This model suits our sample application since the client just needs to connect and wait for data to arrive (it does not need to interact with the server after the initial subscription)

 

Other noteworthy points

  • SSE is a formal W3C specification
  • It defines a specific media type for the data
  • Has JavaScript implementations in most browsers
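
As an aside, the feed can also be consumed programmatically. Here is a minimal sketch using the Jersey SSE client API (the same SSE module used on the server side; the application URL is a placeholder and the fixed listening window is illustrative)

import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;
import javax.ws.rs.client.WebTarget;
import org.glassfish.jersey.media.sse.EventSource;
import org.glassfish.jersey.media.sse.InboundEvent;
import org.glassfish.jersey.media.sse.SseFeature;

public class MetricsSseClient {

    public static void main(String[] args) throws Exception {
        Client client = ClientBuilder.newBuilder().register(SseFeature.class).build();
        WebTarget target = client.target("https://<acc-app-url>/metrics"); //replace the placeholder

        EventSource eventSource = EventSource.target(target).build();
        //callback invoked for every event broadcast by the consumer microservice
        eventSource.register((InboundEvent event) ->
                System.out.println(event.getName() + " -> " + event.readData(String.class)));
        eventSource.open();

        Thread.sleep(30000); //listen for 30 seconds
        eventSource.close();
        client.close();
    }
}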

 

Scalability

It's worth noting that all the parts of this system are stateless and horizontally scalable in order to maintain high throughput and performance. The second part of this blog will dive deeper into the scalability aspects and show how Application Container Cloud makes this easy to achieve

Code

 

This section will briefly cover the code used for this sample and highlight the important points (for both our microservices)

 

Producer microservice

 

It consists of a cohesive bunch of classes which handle application bootstrapping, event production etc.

 

Class | Details
ProducerBootstrap.java | Entry point for the application. Kicks off the Grizzly container
Producer.java | Runs in a dedicated thread. Contains the core logic for producing events
ProducerManagerResource.java | Exposes an HTTP(s) endpoint to start/stop the producer process
ProducerLifecycleManager.java | Implements logic to manage the Producer thread using an ExecutorService. Used internally by ProducerManagerResource

 

 

ProducerBootstrap.java

 

public class ProducerBootstrap {
    private static final Logger LOGGER = Logger.getLogger(ProducerBootstrap.class.getName());


    private static void bootstrap() throws IOException {


        String hostname = Optional.ofNullable(System.getenv("HOSTNAME")).orElse("localhost");
        String port = Optional.ofNullable(System.getenv("PORT")).orElse("8080");


        URI baseUri = UriBuilder.fromUri("http://" + hostname + "/").port(Integer.parseInt(port)).build();


        ResourceConfig config = new ResourceConfig(ProducerManagerResource.class);


        HttpServer server = GrizzlyHttpServerFactory.createHttpServer(baseUri, config);
        LOGGER.log(Level.INFO,  "Application accessible at {0}", baseUri.toString());


        //gracefully exit Grizzly services when app is shut down
        Runtime.getRuntime().addShutdownHook(new Thread(new Runnable() {
            @Override
            public void run() {
                LOGGER.log(Level.INFO, "Exiting......");
                try {
                    server.shutdownNow();
                  LOGGER.log(Level.INFO, "REST services stopped");


                    ProducerLifecycleManager.getInstance().stop();
                    LOGGER.log(Level.INFO, "Kafka producer thread stopped");
                } catch (Exception ex) {
                    //log & continue....
                    LOGGER.log(Level.SEVERE, ex, ex::getMessage);
                }


            }
        }));
        server.start();


    }


    public static void main(String[] args) throws Exception {


        bootstrap();


    }
}

 

 

Producer.java

 

public class Producer implements Runnable {
    private static final Logger LOGGER = Logger.getLogger(Producer.class.getName());
    private static final String TOPIC_NAME = "cpu-metrics-topic";
    private KafkaProducer<String, String> kafkaProducer = null;
    private final String KAFKA_CLUSTER_ENV_VAR_NAME = "KAFKA_CLUSTER";
    public Producer() {
        LOGGER.log(Level.INFO, "Kafka Producer running in thread {0}", Thread.currentThread().getName());
        Properties kafkaProps = new Properties();
        String defaultClusterValue = "localhost";
        String kafkaCluster = System.getenv().getOrDefault(KAFKA_CLUSTER_ENV_VAR_NAME, defaultClusterValue);
        LOGGER.log(Level.INFO, "Kafka cluster {0}", kafkaCluster);
        kafkaProps.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, kafkaCluster);
        kafkaProps.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer");
        kafkaProps.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer");
        kafkaProps.put(ProducerConfig.ACKS_CONFIG, "0");
        this.kafkaProducer = new KafkaProducer<>(kafkaProps);
    }
    @Override
    public void run() {
        try {
            produce();
        } catch (Exception e) {
            LOGGER.log(Level.SEVERE, e.getMessage(), e);
        }
    }
    /**
    * produce messages
    *
    * @throws Exception
    */
    private void produce() throws Exception {
        ProducerRecord<String, String> record = null;
        try {
            Random rnd = new Random();
            while (true) {
                String key = "machine-" + rnd.nextInt(5);
                String value = String.valueOf(rnd.nextInt(20));
                record = new ProducerRecord<>(TOPIC_NAME, key, value);
                kafkaProducer.send(record, new Callback() {
                    @Override
                    public void onCompletion(RecordMetadata rm, Exception excptn) {
                        if (excptn != null) {
                            LOGGER.log(Level.WARNING, "Error sending message with key {0}\n{1}", new Object[]{key, excptn.getMessage()});
                        } else {
                            LOGGER.log(Level.INFO, "Partition for key {0} is {1}", new Object[]{key, rm.partition()});
                        }
                    }
                });
                /**
                * wait before sending next message. this has been done on
                * purpose
                */
                Thread.sleep(1000);
            }
        } catch (Exception e) {
            LOGGER.log(Level.SEVERE, "Producer thread was interrupted");
        } finally {
            kafkaProducer.close();
            LOGGER.log(Level.INFO, "Producer closed");
        }
    }
}

 

ProducerLifecycleManager.java

 

public final class ProducerLifecycleManager {
    private static final Logger LOGGER = Logger.getLogger(ProducerLifecycleManager.class.getName());
    private ExecutorService es;
    private static ProducerLifecycleManager INSTANCE = null;
    private final AtomicBoolean RUNNING = new AtomicBoolean(false);
    private ProducerLifecycleManager() {
        es = Executors.newSingleThreadExecutor();
    }
    //synchronized so that concurrent requests cannot create multiple instances
    public static synchronized ProducerLifecycleManager getInstance(){
        if(INSTANCE == null){
            INSTANCE = new ProducerLifecycleManager();
        }
        return INSTANCE;
    }
    public void start() throws Exception{
        if(RUNNING.get()){
            throw new IllegalStateException("Service is already running");
        }
        if(es.isShutdown()){
            es = Executors.newSingleThreadExecutor();
            System.out.println("Reinit executor service");
        }
        es.execute(new Producer());
        LOGGER.info("started producer thread");
        RUNNING.set(true);
    }
    public void stop() throws Exception{
        if(!RUNNING.get()){
            throw new IllegalStateException("Service is NOT running. Cannot stop");
        }
        es.shutdownNow();
        LOGGER.info("stopped producer thread");
        RUNNING.set(false);
    }
}

 

ProducerManagerResource.java

 

@Path("producer")

public class ProducerManagerResource {

    /**

    * start the Kafka Producer service

    * @return 200 OK for success, 500 in case of issues

    */

    @GET

    public Response start() {

        Response r = null;

        try {

            ProducerLifecycleManager.getInstance().start();

            r = Response.ok("Kafka Producer started")

                .build();

        } catch (Exception ex) {

            Logger.getLogger(ProducerManagerResource.class.getName()).log(Level.SEVERE, null, ex);

            r = Response.serverError().build();

        }

        return r;

    }

    /**

    * stop consumer

    * @return 200 OK for success, 500 in case of issues

    */

    @DELETE

    public Response stop() {

        Response r = null;

        try {

            ProducerLifecycleManager.getInstance().stop();

            r = Response.ok("Kafka Producer stopped")

                .build();

        } catch (Exception ex) {

            Logger.getLogger(ProducerManagerResource.class.getName()).log(Level.SEVERE, null, ex);

            r = Response.serverError().build();

        }

        return r;

    }

}

 

 

Consumer microservice

 

Class | Details
ConsumerBootstrap.java | Entry point for the application. Kicks off the Grizzly container and triggers the Consumer process
Consumer.java | Runs in a dedicated thread. Contains the core logic for consuming events
ConsumerEventResource.java | Exposes an HTTP(s) endpoint for end users to consume events
EventCoordinator.java | Wrapper around the Jersey SseBroadcaster to implement event subscription & broadcasting. Used internally by ConsumerEventResource

 

 

Consumer.java

 

public class Consumer implements Runnable {
    private static final Logger LOGGER = Logger.getLogger(Consumer.class.getName());
    private static final String TOPIC_NAME = "cpu-metrics-topic";
    private static final String CONSUMER_GROUP = "cpu-metrics-group";
    private final AtomicBoolean CONSUMER_STOPPED = new AtomicBoolean(false);
    private KafkaConsumer<String, String> consumer = null;
    private final String KAFKA_CLUSTER_ENV_VAR_NAME = "KAFKA_CLUSTER";
    /**
    * c'tor
    */
    public Consumer() {
        Properties kafkaProps = new Properties();
        LOGGER.log(Level.INFO, "Kafka Consumer running in thread {0}", Thread.currentThread().getName());
        String defaultClusterValue = "localhost";
        String kafkaCluster = System.getenv().getOrDefault(KAFKA_CLUSTER_ENV_VAR_NAME, defaultClusterValue);
        LOGGER.log(Level.INFO, "Kafka cluster {0}", kafkaCluster);
        kafkaProps.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, kafkaCluster);
        kafkaProps.put(ConsumerConfig.GROUP_ID_CONFIG, CONSUMER_GROUP);
        kafkaProps.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringDeserializer");
        kafkaProps.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringDeserializer");
        this.consumer = new KafkaConsumer<>(kafkaProps);
    }
    /**
    * invoke this to stop this consumer from a different thread
    */
    public void stop() {
        if(CONSUMER_STOPPED.get()){
            throw new IllegalStateException("Kafka consumer service thread is not running");
        }
        LOGGER.log(Level.INFO, "signalling shut down for consumer");
        if (consumer != null) {
            CONSUMER_STOPPED.set(true);
            consumer.wakeup();
        }
    }
    @Override
    public void run() {
        consume();
    }
    /**
    * poll the topic and invoke broadcast service to send information to connected SSE clients
    */
    private void consume() {
        consumer.subscribe(Arrays.asList(TOPIC_NAME));
        LOGGER.log(Level.INFO, "Subcribed to: {0}", TOPIC_NAME);
        try {
            while (!CONSUMER_STOPPED.get()) {
                LOGGER.log(Level.INFO, "Polling broker");
                ConsumerRecords<String, String> msg = consumer.poll(1000);
                for (ConsumerRecord<String, String> record : msg) {
                    EventCoordinator.getInstance().broadcast(record);
                }
            }
            LOGGER.log(Level.INFO, "Poll loop interrupted");
        } catch (Exception e) {
            //expected: stop() calls consumer.wakeup(), which makes poll() throw a WakeupException to break the loop
        } finally {
            consumer.close();
            LOGGER.log(Level.INFO, "consumer shut down complete");
        }
    }
}

 

ConsumerBootstrap.java

 

public final class ConsumerBootstrap {
    private static final Logger LOGGER = Logger.getLogger(ConsumerBootstrap.class.getName());
    /**
    * Start Grizzly services and Kafka consumer thread
    *
    * @throws IOException
    */
    private static void bootstrap() throws IOException {
        String hostname = Optional.ofNullable(System.getenv("HOSTNAME")).orElse("localhost");
        String port = Optional.ofNullable(System.getenv("PORT")).orElse("8081");
        URI baseUri = UriBuilder.fromUri("http://" + hostname + "/").port(Integer.parseInt(port)).build();
        ResourceConfig config = new ResourceConfig(ConsumerEventResource.class, SseFeature.class);
        HttpServer server = GrizzlyHttpServerFactory.createHttpServer(baseUri, config);
        Logger.getLogger(ConsumerBootstrap.class.getName()).log(Level.INFO, "Application accessible at {0}", baseUri.toString());
        Consumer kafkaConsumer = new Consumer(); //will initiate connection to Kafka broker
        //gracefully exit Grizzly services and close Kafka consumer when app is shut down
        Runtime.getRuntime().addShutdownHook(new Thread(new Runnable() {
            @Override
            public void run() {
              LOGGER.log(Level.INFO, "Exiting......");
                try {
                    server.shutdownNow();
                    LOGGER.log(Level.INFO, "Grizzly services stopped");
                    kafkaConsumer.stop();
                    LOGGER.log(Level.INFO, "Kafka consumer thread stopped");
                } catch (Exception ex) {
                    //log & continue....
                    LOGGER.log(Level.SEVERE, ex, ex::getMessage);
                }
            }
        }));
        server.start();
        new Thread(kafkaConsumer).start();
    }
    /**
    * Entry point
    *
    * @param args
    * @throws Exception
    */
    public static void main(String[] args) throws Exception {
        bootstrap();
    }
}

 

ConsumerEventResource.java

 

/**
* This class allows clients to subscribe to events by
* sending a HTTP GET to host:port/events. The server will keep the connection open
* and send events (as and when received) unless closed by the client
*
*/
@Path("metrics")
public final class ConsumerEventResource {
    //private static final Logger LOGGER = Logger.getLogger(ConsumerEventResource.class.getName());
    /**
    * Call me to subscribe to events. Delegates to EventCoordinator
    *
    * @return EventOutput which will keep the connection open
    */
    @GET
    @Produces(SseFeature.SERVER_SENT_EVENTS)
    public EventOutput subscribe() {
        return EventCoordinator.getInstance().subscribe();
    }
}

 

EventCoordinator.java

 

public final class EventCoordinator {
    private static final Logger LOGGER = Logger.getLogger(EventCoordinator.class.getName());
    private EventCoordinator() {
    }
    /**
    * SseBroadcaster is used because
    * 1. it tracks client stats
    * 2. automatically dispose server resources if clients disconnect
    * 3. it's thread safe
    */
    private final SseBroadcaster broadcaster = new SseBroadcaster();
    private static final EventCoordinator INSTANCE = new EventCoordinator();
    public static EventCoordinator getInstance() {
        return INSTANCE;
    }
    /**
    * add to SSE broadcaster list of clients/subscribers
    * @return EventOutput which will keep the connection open.
    *
    * Note: broadcaster.add(output) is a slow operation
    * Please see (https://jersey.java.net/apidocs/2.23.2/jersey/org/glassfish/jersey/server/Broadcaster.html#add(org.glassfish.jersey.server.BroadcasterListener))
    */
    public EventOutput subscribe() {
        final EventOutput eOutput = new EventOutput();
        broadcaster.add(eOutput);
        LOGGER.log(Level.INFO, "Client Subscribed successfully {0}", eOutput.toString());
        return eOutput;
    }
    /**
    * broadcast record details to all connected clients
    * @param record kafka record obtained from broker
    */
    public void broadcast(ConsumerRecord<String, String> record) {
        OutboundEvent.Builder eventBuilder = new OutboundEvent.Builder();
        OutboundEvent event = eventBuilder.name(record.key())
                                        .id(String.valueOf(record.offset()))
                                        .data(String.class, record.value()+"%")
                                        .comment("Partition "+Integer.toString(record.partition()))
                                        .mediaType(MediaType.TEXT_PLAIN_TYPE)
                                        .build();
        broadcaster.broadcast(event);
        LOGGER.log(Level.INFO, "Broadcasted record {0}", record);
    }
}

 

The Jersey SSE Broadcaster is used because of its following characteristics

  • it tracks client statistics
  • automatically disposes server resources if clients disconnect
  • it's thread safe

 

 

Deploy to Oracle Application Container Cloud

 

Now that you have a fair idea of the application, it’s time to look at the build, packaging & deployment

 

Metadata files

 

manifest.json: You can use this file in its original state (for both producer and consumer microservices)

 

{
    "runtime": {
        "majorVersion": "8"
    },
    "command": "java -jar accs-kafka-producer.jar",
    "release": {
        "build": "12042016.1400",
        "commit": "007",
        "version": "0.0.1"
    },
    "notes": "Kafka Producer powered by Oracle Application Container Cloud"
}

 

 

{
    "runtime": {
        "majorVersion": "8"
    },
    "command": "java -jar accs-kafka-consumer.jar",
    "release": {
        "build": "12042016.1400",
        "commit": "007",
        "version": "0.0.1"
    },
    "notes": "Kafka consumer powered by Oracle Application Container Cloud"
}

 

deployment.json

It contains an environment variable corresponding to your Kafka broker. The value is left as a placeholder for you to fill in prior to deployment.

 

{
    "environment": {
        "KAFKA_CLUSTER":"<as-configured-in-kafka-server-properties>"
    }
}

 

 

This value (the Oracle Compute Cloud instance's public DNS) should be the same as the one you configured in the advertised.listeners attribute of the Kafka server.properties file
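
For example, using the public DNS and broker port configured earlier, the file would look like this

{
    "environment": {
        "KAFKA_CLUSTER":"oc-140-44-88-200.compute.oraclecloud.com:9092"
    }
}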

 

Please refer to the following documentation for more details on metadata files

 

Build & zip

 

Build JAR and zip it with (only) the manifest.json file to create a cloud-ready artifact

 

Producer application

 

cd <code_dir>/producer //maven project directory
mvn clean install
zip accs-kafka-producer.zip manifest.json target/accs-kafka-producer.jar //you can also use tar to create a tgz file

 

 

Consumer application

 

cd <code_dir> //maven project directory
mvn clean install 
zip accs-kafka-consumer.zip manifest.json target/accs-kafka-consumer.jar

 

Upload application zip to Oracle Storage cloud

You first need to upload your application ZIP file to Oracle Storage Cloud and then reference it in the subsequent steps. Here are the required cURL commands

 

Create a container in Oracle Storage cloud (if it doesn't already exist)

curl -i -X PUT -u <USER_ID>:<USER_PASSWORD> <STORAGE_CLOUD_CONTAINER_URL>
e.g. curl -X PUT -u jdoe:foobar "https://domain007.storage.oraclecloud.com/v1/Storage-domain007/accs-kafka-consumer/"

Upload your zip file into the container (zip file is nothing but a Storage Cloud object)

curl -X PUT -u <USER_ID>:<USER_PASSWORD> -T <zip_file> "<STORAGE_CLOUD_OBJECT_URL>" //template
e.g. curl -X PUT -u jdoe:foobar -T accs-kafka-consumer.zip "https://domain007.storage.oraclecloud.com/v1/Storage-domain007/accs-kafka-consumer/accs-kafka-consumer.zip"

 

Repeat the same for the producer microservice

 

Deploy to Application Container Cloud

Once you have finished uploading the ZIP, you can reference its (Oracle Storage Cloud) path in the Application Container Cloud REST API call used to deploy the application. Here is a sample cURL command which makes use of the REST API

 

curl -X POST -u joe@example.com:password \  
-H "X-ID-TENANT-NAME:domain007" \  
-H "Content-Type: multipart/form-data" -F "name=accs-kafka-consumer" \  
-F "runtime=java" -F "subscription=Monthly" \  
-F "deployment=@deployment.json" \  
-F "archiveURL=accs-kafka-consumer/accs-kafka-consumer.zip" \  
-F "notes=notes for deployment" \  
https://apaas.oraclecloud.com/paas/service/apaas/api/v1.1/apps/domain007

 

Repeat the same for the producer microservice

 

Post deployment

 

You should be able to see your microservices under the Applications section in Application Container Cloud console

 

 

If you look at the details of a specific application, the environment variable should also be present

 

 

Test the application

 

Producer

For the accs-kafka-producer microservice, the Kafka Producer process (thread) needs to be started by the user (this is just meant to provide flexibility). Manage the producer process by issuing the appropriate commands as per the table below (using cURL, Postman etc.)

 

Action | HTTP verb | URI
Start | GET | https://<ACCS-APP-URL>/producer, e.g. https://accs-kafka-producer-domain007.apaas.us.oraclecloud.com/producer
Stop | DELETE | Same as above

 

 

Once you start the producer, it will continue publishing events to the Kafka broker until it is stopped
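
For example, using cURL

curl -X GET https://accs-kafka-producer-domain007.apaas.us.oraclecloud.com/producer   //start the producer
curl -X DELETE https://accs-kafka-producer-domain007.apaas.us.oraclecloud.com/producer   //stop the producer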

 

Consumer

In the accs-kafka-consumer microservice, the Kafka consumer process starts along with the application itself, i.e. it starts polling the Kafka broker for metrics. As previously mentioned, the consumer application provides an HTTP(s) endpoint (powered by Server-Sent Events) to look at the metric data in real time
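
You can subscribe from the command line as well; cURL keeps the connection open and prints events as they arrive (-N disables output buffering)

curl -N -H "Accept: text/event-stream" https://<acc-app-url>/metrics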

 

 

You should see a real time stream of data similar to the one below. The event attribute is the machine name/id and the data attribute represents (models) the CPU usage

 

Please ignore the Partition attribute as it is meant to demonstrate a specific concept (scalability & load distribution) which will be covered in the second part of this blog

 

 


 

 

**The views expressed in this post are my own and do not necessarily reflect the views of Oracle.

 

This blog post demonstrates usage of Oracle Application Container Cloud and Database Cloud service. To be precise, it covers the following

 

  • An introduction to Service bindings (in Application Container Cloud) including setup + configuration and leveraging them to integrate with Oracle Database Cloud service
  • Developing a sample (Java SE based) application using JPA (Eclipselink implementation) for persistence and JAX-RS (Jersey framework) to expose a REST API
  • Build, package and deploy the solution to the Application Container cloud using its REST APIs

 

 

 

 

The table below lists the components used

 

Component/service name | Description
Oracle Application Container Cloud | The aPaaS solution which hosts the fat JAR based Java application exposing REST APIs
Oracle Database Cloud | Hosts the application data
Oracle Storage Cloud | Stores the application zip (for deployment)
Eclipselink | Used as the JPA implementation (v2.5.2)
Jersey | JAX-RS implementation (v2.23.2)
Maven | Build tool. Makes use of the shade plugin to create a fat JAR packaged with all dependent libraries

 

 

 

 

Service bindings: the concept

 

Service bindings serve as references to other Oracle Cloud services. At their core, they are a set of environment variables for a particular cloud service which are automatically seeded once you configure them. You can refer to these variables from within your application code. For example

 

String port = Optional.ofNullable(System.getenv("PORT")).orElse("8080"); //PORT is the environment variable

 

At the time of writing of this blog post, the following services are supported as far as service bindings are concerned: Oracle Database Cloud, Oracle Java Cloud and Oracle MySQL Cloud

 

What purpose do service bindings serve?

 

Service bindings make lives easier for developers

 

  • It's possible to consume other Oracle Cloud services in a declarative fashion
  • They allow secure storage of credentials required to access the service
  • Connection details are conveniently stored as environment variables (de facto standard) and can be easily used within code. This in turn, shields you from hard coding connectivity information or using/building a custom mechanism to handle these concerns

 

How to configure them?

 

Service bindings can be configured in a couple of ways

 

  • Metadata file (deployment.json) - this method can be used both during and after application deployment
  • Application Container Cloud console - this is possible only post deployment, i.e. the application specific menu exposes this feature

 

 

We will use option #1 in the sample presented in this blog; it is covered in depth later

 

About the sample

 

Pre-requisites

 

You need to have access to the below mentioned Oracle Cloud Platform services to execute this sample end to end. Please refer to the links to find out more about procuring service instances

 

 

An Oracle Storage Cloud account is automatically provisioned along with your Application Container Cloud instance

Database Cloud service needs to be in the same identity domain as the Application Container Cloud for it to be available as a service binding

Architecture

 

The sample application presented in this blog is not complicated, yet, it makes sense to grasp the high level details with the help of a diagram

 

 

 

  • As already mentioned, the application leverages JPA (DB persistence) and JAX-RS (RESTful) APIs
  • The client invokes a HTTP(s) URL (GET request) which internally calls the JAX-RS resource, which in turn invokes the JPA (persistence) layer to communicate with Oracle Database Cloud instance
  • Connectivity to the Oracle Database Cloud instance is achieved with the help of service bindings which expose database connectivity details as environment variables
  • These variables are used within the code

 

Persistence (JPA) layer

 

Here is a summary of the JPA piece. Eclipselink is used as the JPA implementation, and the sample makes use of specific JPA 2.1 features like automated schema creation and data seeding during the application bootstrap phase.

The table and its associated data will be created in Oracle Database Cloud during the application deployment phase. This approach has been used on purpose in order to make it easy for you to test the application; it's also possible to manually bootstrap your Oracle Database Cloud instance with the table and some test data. You can turn off this feature by commenting out the schema-generation line in persistence.xml (see the sketch below)
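
Here is a minimal sketch of what the relevant persistence.xml might contain, using the standard JPA 2.1 schema-generation properties (the entity package and the load script name are illustrative; the persistence unit name matches the one used in Bootstrap.java)

<?xml version="1.0" encoding="UTF-8"?>
<persistence version="2.1" xmlns="http://xmlns.jcp.org/xml/ns/persistence">
    <persistence-unit name="oracle-cloud-db-PU" transaction-type="RESOURCE_LOCAL">
        <provider>org.eclipse.persistence.jpa.PersistenceProvider</provider>
        <!-- fully qualified entity class name; package is illustrative -->
        <class>com.example.PaasAppDev</class>
        <properties>
            <!-- creates the table during deployment; comment this property out to disable -->
            <property name="javax.persistence.schema-generation.database.action" value="drop-and-create"/>
            <!-- seeds test data from a script bundled with the app; file name is illustrative -->
            <property name="javax.persistence.sql-load-script-source" value="META-INF/seed-data.sql"/>
        </properties>
    </persistence-unit>
</persistence>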

 


The important classes/components are as follows

 

Name | Description
persistence.xml | JPA deployment descriptor
PaasAppDev.java | JPA entity class
JPAFacade.java | Manages the EntityManagerFactory life cycle and provides access to the EntityManager

 

PaasAppDev.java

 

/**
 * JPA entity
 * 
 */
@Entity
@Table(name = "PAAS_APPDEV_PRODUCTS")
@XmlRootElement
public class PaasAppDev implements Serializable {


    @Id
    private String name;


    @Column(nullable = false)
    private String webUrl;


    public PaasAppDev() {
        //for JPA
    }
//getters & setters omitted
}

 

JPAFacade.java

 

public class JPAFacade {


    private static EntityManagerFactory emf;


    private JPAFacade() {
    }


    public static void bootstrapEMF(String persistenceUnitName, Map<String, String> props) {
        if (emf == null) {
            emf = Persistence.createEntityManagerFactory(persistenceUnitName, props);
            emf.createEntityManager().close(); //a hack to initiate 'eager' deployment of persistence unit during deploy time as opposed to on-demand
        }
    }


    public static EntityManager getEM() {
        if (emf == null) {
            throw new IllegalStateException("Please call bootstrapEMF(String persistenceUnitName, Map<String, String> props) first");
        }


        return emf.createEntityManager();
    }


    public static void closeEMF() {


        if (emf == null) {
            throw new IllegalStateException("Please call bootstrapEMF(String persistenceUnitName, Map<String, String> props) first");
        }


        emf.close();


    }


}

 

 

 

Leveraging service binding information

 

It's important to note how the service bindings are being used in this case. Generally, in the case of standalone JPA usage (with RESOURCE_LOCAL transactions), the DB connectivity information is stored as part of persistence.xml. In our sample, we use programmatic configuration of the EntityManagerFactory, because the DB connection info can be extracted only at runtime, using the following environment variables

 

  • DBAAS_DEFAULT_CONNECT_DESCRIPTOR
  • DBAAS_USER_NAME
  • DBAAS_USER_PASSWORD

 

This is leveraged in Bootstrap.java (which serves as the entry point to the application)

 

/**
 * The 'bootstrap' class. Sets up persistence and starts Grizzly HTTP server
 *
 */
public class Bootstrap {


    static void bootstrapREST() throws IOException {


        String hostname = Optional.ofNullable(System.getenv("HOSTNAME")).orElse("localhost");
        String port = Optional.ofNullable(System.getenv("PORT")).orElse("8080");


        URI baseUri = UriBuilder.fromUri("http://" + hostname + "/").port(Integer.parseInt(port)).build();


        ResourceConfig config = new ResourceConfig(PaasAppDevProductsResource.class)
                                                    .register(MoxyJsonFeature.class);


        HttpServer server = GrizzlyHttpServerFactory.createHttpServer(baseUri, config);
        Logger.getLogger(Bootstrap.class.getName()).log(Level.INFO, "Application accessible at {0}", baseUri.toString());


        //gracefully exit Grizzly and Eclipselink services when app is shut down
        Runtime.getRuntime().addShutdownHook(new Thread(new Runnable() {
            @Override
            public void run() {
                Logger.getLogger(Bootstrap.class.getName()).info("Exiting......");
                server.shutdownNow();
                JPAFacade.closeEMF();
                Logger.getLogger(Bootstrap.class.getName()).info("REST and Persistence services stopped");
            }
        }));
        server.start();


    }


    private static final String PERSISTENCE_UNIT_NAME = "oracle-cloud-db-PU";


    static void bootstrapJPA(String puName, Map<String, String> props) {


        JPAFacade.bootstrapEMF(puName, props);
        Logger.getLogger(Bootstrap.class.getName()).info("EMF bootstrapped");


    }


    public static void main(String[] args) throws IOException {
        Map<String, String> props = new HashMap<>();
        props.put("javax.persistence.jdbc.url", "jdbc:oracle:thin:@" + System.getenv("DBAAS_DEFAULT_CONNECT_DESCRIPTOR"));
        props.put("javax.persistence.jdbc.user", System.getenv("DBAAS_USER_NAME"));
        props.put("javax.persistence.jdbc.password", System.getenv("DBAAS_USER_PASSWORD"));
        bootstrapREST();
        bootstrapJPA(PERSISTENCE_UNIT_NAME, props);


    }
}

 

 

REST (JAX-RS) layer

 

Jersey is used as the JAX-RS implementation. It has support for multiple containers, Grizzly being one of them, and that is what's used in this example as well. Also, the MOXy media provider is leveraged in order to ensure that the JAXB annotated (JPA) entity class can be marshaled as both XML and JSON without any additional code

 

Important classes

 

Name

Description

 

 

PaasAppDevProductsResource.java

Contains logic to GET information about all (appdev/products) or a specific PaaS product (e.g. appdev/products/ACC)

 

PaasAppDevProductsResource.java

 

@Path("appdev/products")
public class PaasAppDevProductsResource {


    @GET
    @Path("{name}")
    public Response paasOffering(@PathParam("name") String name) {


        EntityManager em = null;
        PaasAppDev product = null;
        try {
            em = JPAFacade.getEM();
            product = em.find(PaasAppDev.class, name);
        } catch (Exception e) {
            throw e;
        } finally {


            if (em != null) {
                em.close();
            }


        }
        
        return Response.ok(product).build();
    }
    
    @GET
    public Response all() {


        EntityManager em = null;
        List<PaasAppDev> products = null;
        try {
            em = JPAFacade.getEM();
            products = em.createQuery("SELECT c FROM PaasAppDev c").getResultList();
        } catch (Exception e) {
            throw e;
        } finally {


            if (em != null) {
                em.close();
            }


        }
        GenericEntity<List<PaasAppDev>> list = new GenericEntity<List<PaasAppDev>>(products) {
        };
        return Response.ok(list).build();
    }


}

Build & cloud deployment

 

Now that you have a fair idea of the application, it’s time to look at the build, packaging & deployment

 

Seed Maven with ojdbc7 driver JAR

 

 

mvn install:install-file -DgroupId=com.oracle -DartifactId=ojdbc7 -Dversion=12.1.0.1 -Dpackaging=jar -Dfile=<download_path>\ojdbc7.jar -DgeneratePom=true

 

Here is the corresponding snippet from the pom.xml (reconstructed from the coordinates used in the install command above)
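
<dependency>
    <groupId>com.oracle</groupId>
    <artifactId>ojdbc7</artifactId>
    <version>12.1.0.1</version>
</dependency>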

 

 

 

Metadata files

 

The manifest.json

 

You can use the manifest.json file as it is

 

{
    "runtime": {
        "majorVersion": "8"
    },
    "command": "java -jar accs-dbcs-service-binding-sample-1.0.jar",
    "release": {
        "build": "27092016.1020",
        "commit": "007",
        "version": "0.0.2"
    },
    "notes": "notes related to release"
}

 

Service bindings in deployment.json

 

The deployment.json file should contain your service bindings and you would need to upload this file during deployment (explained below) for them to be associated with your Application Container cloud instance.

 

{
    "services": [
    {
        "identifier": "DBService",
        "type": "DBAAS",
        "name": <Oracle DB Cloud service name>,
        "username": <Oracle DB Cloud username>,
        "password": <Oracle DB Cloud password>
    }]
}

 

 

You need to replace the placeholders with the appropriate values. Here is an example

 

{
    "services": [
    {
        "identifier": "OraDBService",
        "type": "DBAAS",
        "name": "OracleCloudTestDB",
        "username": "db_user",
        "password": "Foo@Bar_007"
    }]
}

 

 

In case of multiple service bindings for the same service (e.g. Java Cloud), the Application Container Cloud service automatically generates a unique set of environment variables for each service instance

 

Please refer to the following documentation if you need further details

 

Build & zip

 

Build JAR and zip it with (only) the manifest.json file to create a cloud-ready artifact

 

cd <code_dir> 
mvn clean install
zip accs-dbcs-service-binding-sample.zip manifest.json target/accs-dbcs-service-binding-sample-1.0.jar

 

 

Upload application zip to Oracle Storage cloud

 

You would first need to upload your application ZIP file to Oracle Storage Cloud and then reference it later. Here are the steps along with the cURL commands

Please refer to the following documentation for more details

 

Get authentication token for Oracle Storage cloud

 

You will receive the token in an HTTP response header; you can use it to execute the subsequent operations

 

curl -X GET -H "X-Storage-User: Storage-<identity_domain>:<user_name>" -H "X-Storage-Pass: <user_password>" "storage_cloud_URL" //template

curl -X GET -H "X-Storage-User: Storage-domain007:john.doe" -H "X-Storage-Pass: foo@bar" "https://domain007.storage.oraclecloud.com/auth/v1.0" //example

 

Create a container in Oracle Storage cloud (if it doesn't already exist)

 

curl -X PUT -H "X-Auth-Token: <your_auth_token>" "<storage_cloud_container_URL>" //template

curl -X PUT -H "X-Auth-Token: AUTH_foobaar007" "https://domain007.storage.oraclecloud.com/v1/Storage-domain007/accscontainer/" //example

 

Upload your zip file into the container (zip file is nothing but a Storage Cloud object)

 

curl -X PUT -H "X-Auth-Token: <your_auth_token>" -T <zip_file> "<storage_cloud_object_URL>" //template

curl -X PUT -H "X-Auth-Token: AUTH_foobaar007" -T accs-dbcs-service-binding-sample.zip "https://domain007.storage.oraclecloud.com/v1/Storage-domain007/accscontainer/accs-dbcs-service-binding-sample.zip" //example

 

Things to note

  • the <zip_file> is the application zip file which needs to be uploaded; it should be present on the file system from which you're executing these commands
  • the (storage cloud) object name needs to end with .zip extension (in this context/case)

 

Deploy to Application Container Cloud

 

Once you have uploaded the ZIP, you can reference its (Oracle Storage cloud) path in the Application Container Cloud REST API call used to deploy the application. Here is a sample cURL command which makes use of the REST API

 

curl -X POST -u joe@example.com:password \
-H "X-ID-TENANT-NAME:domain007" \
-H "Content-Type: multipart/form-data" -F "name=accs-dbcs-service-binding-sample" \
-F "runtime=java" -F "subscription=Monthly" \
-F "deployment=@deployment.json" \
-F "archiveURL=accscontainer/accs-dbcs-service-binding-sample.zip" \
-F "notes=notes for deployment" \
https://apaas.oraclecloud.com/paas/service/apaas/api/v1.1/apps/domain007

 

 

During this process, you will also be pushing the deployment.json metadata file which contains the service bindings info (it should be present on the file system from which the command is executed). This in turn automatically seeds the required environment variables during the application creation phase

 

More details available here

 

Test the application

 

Once deployment is successful, you should be able to see the deployed application and its details in Application Container cloud

 

 

 

Access your Oracle Database Cloud service

 

You can use Oracle SQL Developer or a similar client to confirm that the required table has been created and seeded with some test data

 

Configuring your Oracle Database Cloud instance for connections from an external tool (like SQL Developer) is out of scope of this article, but you can follow the steps outlined in the official documentation to set things up quickly

 

 

Access the REST endpoints

 

 

GET all products

curl -H "Accept: application/json" https://<application_URL>/appdev/products

GET a specific product

curl -H "Accept: application/json" https://<application_URL>/appdev/products/<product_name>

e.g. curl -H "Accept: application/json" https://<application_URL>/appdev/products/ACC

Refer to the image above for all product names

 

 

Use the following application URL format

 

https://accs-dbcs-service-binding-sample-<identity_domain>.apaas.<region>.oraclecloud.com/  //template

https://accs-dbcs-service-binding-sample-domain007.apaas.us2.oraclecloud.com/  //example

 

 

**The views expressed in this post are my own and do not necessarily reflect the views of Oracle.

Oracle Developer Cloud Service provides the following capabilities as far as JUnit is concerned

 

  • Viewing the list of all executed tests and the cumulative test metrics
  • Test Result History to track the history of all tests in a graphical form

 

This blog contains a simple JPA-based project which uses an in-memory Derby database to execute JUnit tests. You will see how to

 

  • Set up the source code in your Developer Cloud Service instance Git repository
  • Configure the build process along with the JUnit test related actions
  • Execute the build and track the test results

 

 

Steps

 

Unit test

 

The JUnit test case

 

import java.util.Date;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

import org.junit.After;
import org.junit.AfterClass;
import org.junit.Before;
import org.junit.BeforeClass;
import org.junit.Test;

import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertNotNull;
import static org.junit.Assert.assertNull;

public class JSRRepositoryTest {

    public JSRRepositoryTest() {
    }

    static Map<String, String> props = new HashMap<>();
    final static String PU_NAME = "derby-in-memory-PU";

    @BeforeClass
    public static void setUpClass() {
        //bootstrap the JPA EntityManagerFactory against an embedded Derby instance
        props.put("javax.persistence.jdbc.url", "jdbc:derby:target/derbydb;create=true");
        props.put("javax.persistence.jdbc.driver", "org.apache.derby.jdbc.EmbeddedDriver");
        JPAFacade.bootstrapEMF(PU_NAME, props);
    }

    @AfterClass
    public static void tearDownClass() {
        props.clear();
        props = null;
        JPAFacade.closeEMF();
    }

    JSRRepository cut;

    @Before
    public void setUp() {
        cut = new JSRRepository();
    }

    @After
    public void tearDown() {
        //nothing to do
    }

    @Test
    public void getSingleJSRTest() {
        JavaEESpecification spec = cut.get("123");
        assertNotNull("Spec was null!", spec);
        assertEquals("Wrong spec id", spec.getJsrId(), new Integer(123));
        assertEquals("Wrong spec name", spec.getName(), "jsonb");
    }

    @Test(expected = RuntimeException.class)
    public void getSingleJSRTestForNullValue() {
        cut.get(null);
    }

    @Test(expected = RuntimeException.class)
    public void getSingleJSRTestForBlankValue() {
        cut.get("");
    }

    @Test
    public void getSingleJSRTestForInvalidValue() {
        JavaEESpecification spec = cut.get("007");
        assertNull("Spec was not null!", spec);
    }

    @Test
    public void getAllJSRsTest() {
        List<JavaEESpecification> specs = cut.all();
        assertNotNull("Specs list was null!", specs);
        assertEquals("2 specs were not found", specs.size(), 2);
    }

    @Test
    public void createNewJSRTest() {
        JavaEESpecification newSpec = new JavaEESpecification(366, "Java EE Platform", "8");
        cut.newJSR(newSpec);
        JavaEESpecification spec = cut.get("366");
        assertNotNull("Spec was null!", spec);
        assertEquals("Wrong spec id", spec.getJsrId(), new Integer(366));
        assertEquals("Wrong spec name", spec.getName(), "Java EE Platform");
        assertEquals("Wrong spec version", spec.getVersion(), "8");
    }

    @Test
    public void updateJSRDescTest() {
        String specID = "375";
        String oldDesc = "security for the Java EE platform";
        String newDesc = "updated desc on " + new Date();

        JavaEESpecification newSpec = new JavaEESpecification(Integer.parseInt(specID), oldDesc, "Security", "1.0");
        cut.newJSR(newSpec);
        JavaEESpecification updatedSpec = new JavaEESpecification(Integer.parseInt(specID), newDesc, "Security", "1.0");

        cut.updateJSRDescription(updatedSpec);
        JavaEESpecification spec = cut.get(specID);

        assertNotNull("Spec was null!", spec);
        assertEquals("Description was not updated", spec.getDescription(), newDesc);
        assertEquals("Wrong spec id", spec.getJsrId(), new Integer(specID));
        assertEquals("Wrong spec name", spec.getName(), "Security");
        assertEquals("Wrong spec version", spec.getVersion(), "1.0");
    }
}
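
 

The JPAFacade helper used above is not listed in this post; a minimal sketch of what it needs to provide (bootstrapping and closing a single EntityManagerFactory) could look like this

 

import java.util.Map;
import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;

public class JPAFacade {

    private static EntityManagerFactory emf;

    //bootstraps the EntityManagerFactory for the given persistence unit
    public static void bootstrapEMF(String puName, Map<String, String> props) {
        if (emf == null) {
            emf = Persistence.createEntityManagerFactory(puName, props);
        }
    }

    //hands out a fresh EntityManager (callers are responsible for closing it)
    public static EntityManager getEM() {
        return emf.createEntityManager();
    }

    public static void closeEMF() {
        if (emf != null) {
            emf.close();
            emf = null;
        }
    }
}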

 

Project & code repository creation

 

Create a project in your Oracle Developer Cloud instance

 

 

 

 

 

 

 

Create a Git repository – browse to the Home tab, click New Repository and follow the steps

 

 

 

 

 

You should see your new repository created

 

Populating the Git repo

 

Push the project from your local system to the Developer Cloud Git repo you just created. We will do this via the command line, and all you need is a Git client installed on your local machine (you can also use any other tool of your choice)

 

cd <project_folder> 
git init
git remote add origin <developer_cloud_git_repo>
//e.g. https://john.doe@developer.us.oraclecloud.com/developer007-foodomain/s/developer007-foodomain-project_2009/scm/junit-sample-app-repo.git 
git add .
git commit -m "first commit"
git push -u origin master  //Please enter the password for your Oracle Developer Cloud account when prompted

 

You should be able to see the code in your Developer Cloud console

 

 

 

Configure build job

 

 

 

 

 

 

 

Important

 

Activate the following post build actions

 

  • Publishing of JUnit test result reports
  • Archiving of test reports (if needed)

 

 

 

 

Trigger build

 

 

Check test results

 

After the build process is over (it will fail in this case), check the top right corner of your build page and click Tests

 

 

Overall metrics

 

 

Failed tests snapshot

 

 

Failed test details

 

 

Example of a passed test

 

 

Result History

 

 

 

 

**The views expressed in this post are my own and do not necessarily reflect the views of Oracle.

Part I of this blog series covered the specifics of building our lightweight Java EE application - Ticker Tracker. This part will focus on the following aspects

 

  • Leveraging NetBeans IDE while working with Oracle Developer Cloud service. This includes
    • Setting up the Team Server plugin
    • Leveraging it to interact with Developer Cloud service instance from within your IDE
  • Priming Developer Cloud service for Continuous Deployment

 

 

 

 

Oracle Developer Cloud Service is a cloud-based software development Platform as a Service (PaaS) and a hosted environment for your application development infrastructure. It provides an open source standards-based solution to develop, collaborate, build, and deploy applications within Oracle Cloud. It provides a number of services such as

 

  • Source code management
  • Build Automation
  • Continuous Integration
  • Issue tracking
  • Code review
  • Deployment automation
  • Agile process management
  • Wiki
  • Activity Stream

 

 

 

Let’s dive into the details now…

 

Using the Team Server plugin with NetBeans IDE

Leveraging NetBeans Team Server plugin along with the automated build & deploy features in Developer Cloud (explained in sections below) lets you stay within your IDE and enjoy a seamless development experience. Using NetBeans IDE to Create an Oracle Developer Cloud Service Project is a great tutorial to understand the details

 

 

Download & setup the Team Server plugin in NetBeans IDE

This is a fairly straightforward process and you can refer to the Installing the Team Server Plugin for NetBeans IDE section from Using NetBeans IDE to Create an Oracle Developer Cloud Service Project tutorial to get this up and running

 

Importing the project into NetBeans

 

This section provides a quick outline of the steps involved

 

 

 

 

You should see a similar project structure in your IDE

 

 

Configure the Oracle Developer Cloud instance within NetBeans

The Adding the Team Server for Your Oracle Cloud Developer Service Instance to Your NetBeans IDE Instance section provides clear instructions for these steps

 

Creating the project in Oracle Developer Cloud

The steps involved are outlined below

 

Use the Team pane, select the Developer Cloud instance and choose New Project

 

 

Enter the details such as project name, description etc.

 

 

In this step, we will link our Ticker Tracker application repository (source) to the project which is being created

 

 

 

 

 

 

 

Log into your Oracle Developer Cloud Service instance to explore the new project

 

 

 

 

Push your project to the Oracle Developer Cloud Git repository

What follows is a series of Git related steps which are applicable when working with any such Git repository. In this case, we are working against the repository which was automatically set up for us when we initially created the project in Developer Cloud Service

 

 

 

 

Since we already linked our sources to the project’s Git repository, all that’s required is a Git > Remote > Push to Upstream, without having to explicitly provide Git repo information like location, credentials etc.

 

 

 

 

Check your Oracle Developer Cloud service instance to ensure that the source code is now present in the master branch

 

 

Configure Oracle Developer Cloud

 

Configure build job

We will need to configure the build job in Developer Cloud. It will take care of the following

  • Invoking our Maven build based off the source in our Git repo
  • Preparing our ZIP artifact, a deployable artifact compliant with Application Container Cloud

All in all, this is a traditional Build configuration which you can read about in the documentation here. The components of the build configuration have been highlighted below

 

 

 

Note: The business logic in the Ticker Tracker application itself does not have any dependency on JDK 8 and can be compiled with JDK 7. In case you’re wondering why we need JDK 8 – it is because the Wildfly Swarm build plugin depends on it, hence the build process requires it as well. The good part is that Oracle Developer Cloud Service makes this configurable and provides you the flexibility

 

Select the project Git repository as the Source for the build process

 

 

 

Configure your build process to trigger on commits to your Git repository (read more in the Configuring Build Triggers section). This is important from a Continuous Integration perspective

 

 

 

 

 

A note on ‘cloud-ready’ package

Before you explore the next set of steps, it would be good to recap the fact that, in the context of Application Container Cloud, a valid deployment artifact is a ZIP file which contains not only the (uber) JAR but also metadata files such as manifest.json, deployment.json etc.

 

The Developer Cloud build configuration invokes the Maven build which creates the Uber JAR (thanks to the Wildfly Swarm Maven plugin), but the Application Container Cloud compliant ZIP is created by including an additional Build Step (highlighted below)

 

 

 

 

 

zip -j target/ticker-tracker.zip target/ticker-tracker-swarm.jar manifest.json

 

 

The above shell command is executed after the Maven build and it packages the Uber JAR along with the mandatory deployment descriptor (manifest.json) as a ZIP file

 

Configure & initiate (manual) deployment

 

In addition to automating your build process (as explained previously), Developer Cloud service also enables you to deploy your Java applications to Application Container Cloud (details are available in the product documentation)

 

Initiate the process by navigating to Deploy > New Configuration

 

 

 

Enter the details corresponding to your Application Container cloud instance

Note: You can deploy to any instance of Application Container Cloud (irrespective of whether or not it is in the same identity domain as the Developer Cloud service)

 

 

 

Complete the deployment configuration and start the process

 

 

 

 

 

At the same time, log into Application Container Cloud service and look at the Applications section and verify that the deployment is in fact in flight

 

 

 

 

The below screenshot depicts a successfully completed deployment process

 

 

Post deployment completion, you can also explore further details about the application (in Application Container Cloud) by clicking on the application name

 

 

 

You might also want to refer to this tutorial - Creating a Project from a Template and Deploying It to Oracle Application Container Cloud Service

 

But we need Continuous Deployment !

 

So far, you were able to

  • configure the NetBeans project along with the Team Server plugin
  • push the project to Oracle DevCS git repository
  • configure the build and initiate it manually

 

Now, we’ll look at the final piece of the puzzle – Continuous Deployment from Developer Cloud to Application Container Cloud

 

Back to the deployment configuration

Your post build process should be configured to trigger automatic deployment to Application Container Cloud (details available here). You can tweak the existing deployment configuration to weave the CI/CD magic!

 

 

Ensure that you check the Automatic radio button. In this scenario, we have opted to deploy stable builds only

 

 

 

Make a code change, commit, push & monitor build from your IDE

After you initiate Push to Upstream (as demonstrated previously), and refresh the application view on your Team view, you will notice (after some time) that the build process has been automatically kicked off (thanks to the configuration we just made)

 

 

You can track the build progress in real time from within your IDE

 

 

 

 

 

The same would reflect in your Developer Cloud console as well (as expected)

 

 

 

Quick recap..

 

Since this post had more to do with configuration (than code!), it’s easy to lose track. Here is a quick summary of what we did

  • Bootstrapped our project, application (and its source) within Developer Cloud service using the Team Server plugin for NetBeans – this served as the foundation for handling other life cycle activities related to our application
  • Configured Developer Cloud Service
    • Build process
    • Deployment process
    • Tweaked both to enable CI/CD

 

Conclusion

This marks the end of this 2-part blog series. In case you stumbled onto this post directly, here is the link for the first part. There is obviously more to Oracle Developer Cloud (as well as Application Container Cloud) than what has been covered via this blog post and the official product documentation is probably the best place to start digging in further.

 

**The views expressed in this post are my own and do not necessarily reflect the views of Oracle.

This is a two-part blog series which covers setup, development, integration and deployment aspects of developing lightweight Java EE applications for the Oracle Application Container Cloud service – all with the help of a sample application. The following are the key items which will be covered in this post

 

  • Developing ‘right sized’ applications by using ’just enough’ of the Java EE platform, thanks to Wildfly Swarm
  • Leveraging a scalable and easy to use cloud platform: Oracle Application Container Cloud service
  • Evolutionary not revolutionary: use your existing Java EE skills and continue building WARs
  • Second part of this blog will focus more on Oracle Developer Cloud service

 

 

 

 

 

 

 

Introduction

Here is a quick peek into the major services/tools/frameworks which will be used for the sample application

 

  • Oracle JDK 7
  • Java EE Platform: Java EE 7
  • Oracle Application Container Cloud Service 16.3.3
  • Wildfly Swarm 1.0.0
  • Maven 3.3.9
  • NetBeans IDE 8.1

 

Oracle Cloud services

 

Oracle Application Container Cloud

 

Oracle Application Container Cloud service provides a robust, polyglot PaaS infrastructure for building lightweight applications in Oracle Cloud. Without going into the details, here are the major highlights of this service

 

  • Open & Polyglot: you're free to leverage any of the thousands of open source or commercial Java SE, Node or PHP (latest addition) frameworks. More programming languages will be added in future
  • Docker based: Built on the proven containerization technology
  • Elastic: Your applications can be easily scaled in or out using the REST API or the service console
  • Easy to manage: update your language runtimes to their latest releases with a single click
  • Profiling: Java based applications can leverage the Flight Recorder to monitor the JVM and analyze using Mission Control
  • Provides integration with other Oracle Cloud services: Developer Cloud, Java Cloud, Database Cloud

 

Oracle Developer Cloud

 

Oracle Developer Cloud Service is a cloud-based software development Platform as a Service (PaaS) and a hosted environment for your application development infrastructure. It provides an open source standards-based solution to develop, collaborate, build, and deploy applications within Oracle Cloud.

 

Platform, frameworks & tools

 

Wildfly Swarm

 

WildFly Swarm is an open source framework that allows the selective reconstitution of Java EE APIs (JAX-RS, CDI, EJB, JPA, JTA etc.) within your applications. The goal is to use just enough of a Java EE application server to support whatever subset of the APIs your application requires.

 

Maven

 

Apache Maven is a software project management and comprehension tool. Based on the concept of a project object model (POM), Maven can manage a project's build, reporting and documentation from a central piece of information.

 

IDE

 

For development, this sample uses NetBeans which is a free, open source and immensely popular IDE. For more details, please feel free to check https://netbeans.org/features/index.html

 

Why NetBeans ?

 

  • Integration with Oracle Developer Cloud service powered by the Team Server plug-in
  • Great for Java EE development with support for the latest (v7) platform version
  • Excellent Maven support

 

Diving in…

 

Introducing the Ticker Tracker application

 

Here is a quick intro to the functional aspect of the application. Ticker Tracker revolves around the ability to keep track of stock prices of NASDAQ scrips.

 

  • Users can check the stock price of a scrip (listed on NASDAQ) using a simple REST interface
  • Real time price tracking is also available – but this is only for Oracle (ORCL)

 

The logic itself is kept simple in order to focus on the core concepts rather than getting bogged down in implementation complexity. The sample application uses the Java EE APIs mentioned below

 

  • JAX-RS 2.0 (JSR 339)
  • WebSocket 1.0 (JSR 356)
  • EJB 3.2 (JSR 345)
  • CDI 1.1 (JSR 346)
  • JSON-P 1.0 (JSR 353)

 

Evolutionary not revolutionary

Wildfly Swarm supports two development modes

 

Uber JAR (Java SE)

 

  • Include the required components/fractions (e.g. JAX-RS, CDI, JPA etc.) in pom.xml
  • Write your code
  • Configure your components and deployment using the Swarm Container API within a main method, and run the result as a plain Java application (see the sketch below)
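
 

For illustration only, here is a minimal sketch of such a main method, assuming the jaxrs fraction and the StockPriceResource class shown later in this post (a sketch, not the sample application’s actual code)

 

import org.jboss.shrinkwrap.api.ShrinkWrap;
import org.wildfly.swarm.container.Container;
import org.wildfly.swarm.jaxrs.JAXRSArchive;

public class Main {

    public static void main(String[] args) throws Exception {
        Container container = new Container();
        container.start();

        //build and deploy the JAX-RS deployment programmatically
        JAXRSArchive deployment = ShrinkWrap.create(JAXRSArchive.class);
        deployment.addResource(StockPriceResource.class);
        container.deploy(deployment);
    }
}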

 

Hybrid Mode i.e. WAR to JAR (Java EE roots)

 

  • Stick to the business logic which you have already developed
  • Let Swarm auto-magically detect and configure required fractions and create an Uber JAR from your WAR

 

Why go with WAR?

 

  • Less intrusive: lets you continue your development process with minimum changes/additions
  • Suitable for existing applications: in addition to building Uber JAR Java EE applications from scratch, you may also want to retrofit existing deployments to the Fat/Uber JAR model
  • Developer friendly: as a developer, you do not need to wrap your head around auxiliary concerns such as WAR to Uber JAR conversion – just continue writing your Java EE application and let the framework help you out
  • As mentioned before, the Swarm Maven plugin does the job of creating your Fat/Uber JAR

 

Implementation details

 

Let’s go over this in a step-by-step manner

 

Initialize NetBeans project

Start with a Maven based Java EE project in NetBeans. If you want to explore this in further detail, feel free to refer NetBeans documentation around this topic (starting with this source)

 

 

 

 

 

It’s not necessary to include/link a Java EE application server at this point. You can opt for ‘No Server selected’ and click Finish to complete the process

 

 

After bootstrapping the Maven based Java EE Web application project, you should see the following project structure along with a pom.xml

 

 

 

Note: the only dependency defined in your pom.xml would be the Java EE 7 APIs

 

..........
<dependencies>
        <dependency>
            <groupId>javax</groupId>
            <artifactId>javaee-web-api</artifactId>
            <version>7.0</version>
            <scope>provided</scope>
        </dependency>
    </dependencies>
..........

 

Add Swarm specific magic

 

Once NetBeans sets up the base version of your project, all you need to do is include the wildfly-swarm-plugin configuration in your Maven pom.xml

 

......
<plugin>
                <groupId>org.wildfly.swarm</groupId>
                <artifactId>wildfly-swarm-plugin</artifactId>
                <version>1.0.0.Final</version>
                <executions>
                    <execution>
                        <goals>
                            <goal>package</goal>
                        </goals>
                    </execution>
                </executions>
</plugin>
......

 

Note: You can choose to explicitly add dependencies for the fractions (the required Java EE components) which you are using in your application or you can offload this duty to Swarm which in turn will automatically detect the required fractions (after code introspection) and pull them during the build phase. Below is an example of how you can add a JAX-RS fraction for RESTful services

 

.....
<dependency>  
    <groupId>org.wildfly.swarm</groupId>  
    <artifactId>jaxrs</artifactId>  
</dependency>
.....

 

Once the setup process is complete, we can proceed with our application development.

 

Develop business logic

Let’s take a quick look at the classes/code artifacts along with some code snippets to get you warmed up.

 

 

RealTimeStockTicker.java

 

  • A server side WebSocket endpoint (@ServerEndpoint) to broadcast stock prices to connected clients/users
  • The data is pushed to clients in an asynchronous manner, thanks to this feature in the Java WebSocket API
  • Please note that for this sample, users can track real time prices for the Oracle (ORCL) stock only

 

.....
public void broadcast(@Observes @StockDataEventQualifier String tickTock) {

        for (final Session s : CLIENTS) {
            if (s != null && s.isOpen()) {
                /**
                 * Asynchronous push
                 */
                s.getAsyncRemote().sendText(tickTock, new SendHandler() {
                    @Override
                    public void onResult(SendResult result) {
                        if (result.isOK()) {
                            Logger.getLogger(RealTimeStockTicker.class.getName()).log(Level.INFO, "Price sent to client {0}", s.getId());
                        } else {
                            Logger.getLogger(RealTimeStockTicker.class.getName()).log(Level.SEVERE, "Could not send price update to client " + s.getId(),
                                    result.getException());
                        }
                    }
                });
            }


        }
}
.....

 

StockPriceScheduler.java

 

    • A Singleton EJB (@Singleton) acts as the source of stock price data
    • It uses native (EJB) scheduling (combination of TimerService and @Timeout) capabilities to periodically poll the Google Finance REST endpoint using the JAX-RS client API to pull stock prices
    • Leverages CDI capabilities like events and qualifiers (@Qualifier) to push latest stock prices to the connected WebSocket clients in real time
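
 

The timer setup itself is not part of the snippet below; a typical initialization (a sketch with an illustrative polling interval, not necessarily the sample’s exact code) creates a non-persistent interval timer at startup

 

import javax.annotation.PostConstruct;
import javax.annotation.Resource;
import javax.ejb.Singleton;
import javax.ejb.Startup;
import javax.ejb.TimerConfig;
import javax.ejb.TimerService;

@Singleton
@Startup
public class StockPriceScheduler {

    @Resource
    TimerService timerService;

    //creates a non-persistent timer firing every 5 seconds (illustrative value)
    @PostConstruct
    public void init() {
        timerService.createIntervalTimer(0, 5000, new TimerConfig("stock-price-poll", false));
    }

    //... @Timeout callback shown below
}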

 

.....
@Timeout
public void timeout(Timer timer) {

        /**
         * Invoked asynchronously
         */
        Future<String> tickFuture = ClientBuilder.newClient().
                target("https://www.google.com/finance/info?q=NASDAQ:ORCL").
                request().buildGet().submit(String.class);

        /**
         * Extracting result immediately with a timeout (3 seconds) limit. This
         * is a workaround since we cannot impose timeouts for synchronous
         * invocations
         */
        String tick = null;
        try {
            tick = tickFuture.get(3, TimeUnit.SECONDS);
        } catch (InterruptedException | ExecutionException | TimeoutException ex) {
            Logger.getLogger(StockPriceScheduler.class.getName()).log(Level.INFO, "GET timed out. Next iteration due on - {0}", timer.getNextTimeout());
            return;
        }

        if (tick != null) {
            /**
             * cleaning the JSON payload
             */
            tick = tick.replace("// [", "");
            tick = tick.replace("]", "");
            msgEvent.fire(StockDataParser.parse(tick));
        }
}
.....

 

StockPriceResource.java

A simple REST endpoint (@Path, @GET) exposed for the end users to be able to query (on demand) the stock price of any index listed on NYSE

 

.....
@GET
public String getQuote(@QueryParam("ticker") final String ticker) {


        Response response = ClientBuilder.newClient().
                target("https://www.google.com/finance/info?q=NASDAQ:" + ticker).
                request().get();


        if (response.getStatus() != 200) {
            //throw new WebApplicationException(Response.Status.NOT_FOUND);
            return String.format("Could not find price for ticker %s", ticker);
        }
        String tick = response.readEntity(String.class);
        tick = tick.replace("// [", "");
        tick = tick.replace("]", "");


        return StockDataParser.parse(tick);
}
.....

 

RESTConfig.java

Basic configuration class for the JAX-RS container

 

StockDataParser.java

A simple utility class which leverages the JSON Processing (JSON-P) API to filter the JSON payload obtained from the Google Finance REST endpoint and returns data useful for the end users
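
 

The parser code itself is not listed; a minimal JSON-P based sketch, assuming the cleaned payload is a single JSON object carrying the ticker symbol in the "t" attribute and the last price in "l" (which is what the Google Finance endpoint returned at the time), could be

 

import java.io.StringReader;
import javax.json.Json;
import javax.json.JsonObject;
import javax.json.JsonReader;

public class StockDataParser {

    //extracts the ticker symbol and last traded price from the cleaned JSON payload
    public static String parse(String json) {
        try (JsonReader reader = Json.createReader(new StringReader(json))) {
            JsonObject quote = reader.readObject();
            return quote.getString("t") + " : " + quote.getString("l");
        }
    }
}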

 

beans.xml

This is the CDI configuration file. Although it does not have too much content, it does enable the CDI events feature

 

Execution

 

In this section, we’ll see how to

  • Build and run the example on your local machine
  • Package it as per Oracle Application Container semantics, upload it and see our sample app in action in the cloud

 

Standalone mode

Now that we have our business logic ready, it’s time to build our application! You can do so by right-clicking on your project in NetBeans and choosing Build with dependencies (please note that this might take time during the first iteration)

 

 

The file structure of your maven build directory (target) will look similar to this. Please note that the ticker-tracker-swarm.jar is the Uber JAR produced by Wildfly Swarm plugin

 

 

Since the artifact produced by the build process is just a JAR file, running it is very simple (all you need is a JRE). Here is a template for the launch command

 

java -jar -Dswarm.http.port=<port_number> -Dswarm.context.path=/<custom_context_root> <full_path_to_jarfile>

 

Here is an example

 

java -jar -Dswarm.http.port=9090 -Dswarm.context.path=/ticker-tracker /work/demo/ticker-tracker-swarm.jar

 

Your application should now be up and running on port 9090 with the ticker-tracker context root

 

  • Access the REST endpoint as follows - http://localhost:9090/ticker-tracker/api/stocks?ticker=AAPL
  • Here is the WebSocket endpoint - ws://localhost:9090/ticker-tracker/rt/stocks
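
 

If you’d rather exercise the WebSocket endpoint programmatically than via a browser, here is a minimal JSR 356 client sketch (it requires a client implementation such as Tyrus on the classpath)

 

import java.net.URI;
import javax.websocket.ClientEndpoint;
import javax.websocket.ContainerProvider;
import javax.websocket.OnMessage;
import javax.websocket.WebSocketContainer;

@ClientEndpoint
public class TickerClient {

    @OnMessage
    public void onPriceUpdate(String price) {
        System.out.println("price update: " + price);
    }

    public static void main(String[] args) throws Exception {
        WebSocketContainer container = ContainerProvider.getWebSocketContainer();
        container.connectToServer(TickerClient.class,
                URI.create("ws://localhost:9090/ticker-tracker/rt/stocks"));
        Thread.sleep(60000); //keep the JVM alive long enough to receive a few pushes
    }
}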

 

Configuration

The below mentioned parameters are specific to Wildfly Swarm and can be used to configure its containers and components

 

  • swarm.http.port: the port on which the Servlet container (Undertow, in the case of Wildfly) accepts incoming connections; 8080 by default
  • swarm.context.path: defines a custom context root for your web application

 

Package, upload to Application Container Cloud

 

Package

 

Here are the important points to note in terms of packaging as far as the sample application in this post is concerned (some of these are generally applicable to all JAR based deployments on Application Container Cloud). For a deep dive into this topic, please refer to the Packaging Your Application section in the product documentation

 

  • It’s packaged as an Uber (Fat) JAR
  • Uses a manifest.json (compulsory deployment artifact) which includes the launch command as well

 

{
    "runtime": {
        "majorVersion": "8"
    },
    "command": "java -jar -Dswarm.https.port=$PORT -Dswarm.context.path=/ticker-tracker ticker-tracker-swarm.jar",
    "release": {
        "build": "24082016.2052",
        "commit": "007",
        "version": "0.0.1"
    },
    "notes": "notes related to release"
}

 

 

A note about the $PORT environment variable

As a platform, Application Container Cloud is ephemeral by nature: parameters like hostname and port are allocated dynamically by the platform and are subject to change (in case your application is redeployed). The startup command takes this into account and binds to the dynamic port allotted by the underlying Application Container Cloud instance. More on this in the following sections - Making the Application Configurable at Runtime, Design Considerations
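
 

If your application resolves the port in code rather than via a system property on the command line, the same idea applies; an illustrative fallback pattern

 

.....
//illustrative: bind to the platform-assigned port, with a local development fallback
int port = Integer.parseInt(System.getenv().getOrDefault("PORT", "8080"));
.....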

 

Upload

Oracle Application Container Cloud provides multiple channels for uploading our application

 

Using the Web UI

  • You can choose to upload your ZIP file directly, or
  • You can upload it to your Storage Cloud Service instance first and provide its path

 

Using REST API – Refer Create an Application for the API semantics

 

You can refer to the Getting Started with Oracle Application Container Cloud Service tutorial to get a better understanding of the points mentioned above

 

Test out your Cloud deployment

 

To check the price of a specific stock, just issue a GET request to the appropriate URL. Mentioned below is the template

 

https://<your-accs-app-url>/ticker-tracker/api/stocks?ticker=<ticker_symbol>

 

Example:

 

https://ticker-tracker-mydomain.apaas.em1.oraclecloud.com/ticker-tracker/api/stocks?ticker=AAPL

 

For real time tracking of Oracle stock prices, you would need a WebSocket client. I would personally recommend the client which can be installed into the Chrome browser as a plugin – Simple WebSocket Client. This client will be used to demonstrate the real time stock tracking feature. The following is a template for the URL of the WebSocket endpoint

 

wss://<your-accs-app-url>/ticker-tracker/rt/stocks

 

Use this as the value for the URL attribute in the client and click on the Open button to start tracking

 

Example

 

wss://ticker-tracker-mydomain.apaas.em1.oraclecloud.com/ticker-tracker/rt/stocks

 

Note: We're using secure channels for both the REST (https) and WebSocket (wss) protocols since applications deployed on Application Container Cloud service only listen on secured ports (via a load balancer which end users and developers do not need to worry about)

 

 

 

 

You should start receiving real time price updates in the Message Log box. You can choose to disconnect any time by clicking the Close button. You now have a lightweight Java EE application running and fully functional on the Oracle Application Container Cloud !

 

This marks the end of part I. As promised, the second part of this post will focus on Oracle Developer Cloud service features such as IDE Integration for source code management, project configuration, build monitoring, seamless deployment to Application Container and more…

 

 

 

**The views expressed in this post are my own and do not necessarily reflect the views of Oracle.

The first part of this blog covered aspects that are important for developers to understand in order to build bespoke mobile applications that expose functionality empowered by Oracle eBusiness Suite (EBS). This blog builds on the information provided in Part I and deals with some of the implementation details

 

 

Implementation details

 

Overview

 

The content of this post primarily deals with details around the following components

 

  • Oracle Mobile Cloud Service: mobile backend, custom API and connector configuration
  • Oracle Integration Cloud Service: DB adapter, REST connection, mapping, agent configuration etc.

 

Please note that, for the ICS related aspects of the solution, a lot of technical details have already been explained in other blogs, thanks to the Oracle A-team

 

 

Therefore, this blog will not repeat those details but will only briefly touch upon the ICS part in order to capture the solution comprehensively.

 

Create Mobile Backend in Oracle MCS

 

The first and foremost step is to create a Mobile Backend for our integration. It serves as a way to group APIs and other resources for a set of mobile apps. A mobile application can only access MCS APIs in the context of a mobile back-end. By and large, a mobile back-end encapsulates the following

 

  • Connectivity information
  • Security related configuration: HTTP Basic, OAuth, Enterprise SSO etc.
  • Security Realm configuration
  • Mobile Application registration information and more..

 

[Figure: MCS Mobile Backend]

 

 

For a step-by-step guide on how to work with Mobile Backend, please refer to the following section from the official product documentation

 

Custom API Design

 

Next, we will design the interface/contract of the API which will be exposed to the mobile clients. This will involve creation of the necessary REST endpoints along with the actions (HTTP verbs like GET, POST etc.) and required parameters (query, path etc.)

 

[Figure: custom API endpoints]

 

The above diagram depicts two endpoints – one which deals with risk orders and the other deals with unpaid invoices

 

[Figure: GET operation details for the riskOrders endpoint]

 

The figure above lists the details of the GET operation on the riskOrders endpoint – it consists of a set of mandatory query parameters which need to be passed along with the HTTP GET request. Oracle MCS gives you the capability to design your Custom APIs from scratch using its declarative UI. For a deep dive into this process, please refer to the ‘Creating a Complete Custom API’ section of the product documentation

 

Data Integration

 

The following sequence of screenshots provides an overview of the stored procedures which contain the business logic to work with data in EBS

 

[Screenshots: EBS stored procedure definitions]

 

 

Test the stored procedure as follows

 

[Screenshot: testing the stored procedure]

 

You should get results like these

 

[Screenshot: stored procedure test results]

 

The following screenshots showcase the ICS EBS DB adapter configuration, which includes both the ICS agent and EBS connectivity configurations

 

ICS Agent Group

 

[Screenshot: ICS Agent Group configuration]

 

 

The following section from the product documentation covers the ICS Agent installation process

 

ICS DB Adapter

 

[Screenshots: ICS DB adapter configuration]

 

 

ICS (inbound) REST Endpoint

 

The ICS REST endpoint is a gateway for external callers (e.g. Oracle MCS REST Connector API) to invoke business logic associated with EBS integration. The following screens depict the associated configuration

 

[Screenshots: ICS (inbound) REST endpoint configuration]

 

 

Once the REST endpoint configuration is complete, one can test the associated operations. Here is an example of a GET operation which fetches sales orders from EBS for a specific customer e.g.

https://icssandbox-a167512.integration.us2.oraclecloud.com/integration/flowapi/rest/GET_SALES_ORDERS_FROM_EBS/v01/salesOrders?customerName=Pinnacle%20Technologies

 

[Screenshots: testing the GET operation]

 

 

Oracle MCS: Connector API Configuration

 

The REST Connector API in MCS acts as a client for the inbound ICS REST endpoint (whose configuration was outlined above). The Connector API greatly aids in declarative security and testing.

 

[Screenshot: MCS REST Connector API configuration]

 

 

The Remote URL highlighted in the above figure is the URL of the ICS REST endpoint. For details on Connector API configuration in MCS, please refer to the respective sections in the product documentation – REST, ICS, SOAP

 

Oracle MCS: Custom API Implementation

 

The custom API implementation

 

  • Uses custom Node.js code to implement the contract/interface sketched out initially
  • It internally calls the MCS REST Connector APIs (for ICS)

 

[Screenshot: custom API implementation (Node.js)]

 

 

This snippet demonstrates an HTTP GET operation on the riskOrders endpoint. It takes care of the following

  • Build Basic authorization token (base-64 encoded)
  • Execution of business logic by making a secured call to the Connector API
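
 

Although the implementation itself is in Node.js, the Basic token construction is straightforward; for illustration, the equivalent logic in Java (with placeholder credentials) could be

 

import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class BasicAuthTokenDemo {

    public static void main(String[] args) {
        //placeholder credentials in the form user:password
        String credentials = "your_user_name:your_password";

        //builds the value of the HTTP Authorization header for Basic authentication
        String authHeader = "Basic " + Base64.getEncoder()
                .encodeToString(credentials.getBytes(StandardCharsets.UTF_8));

        System.out.println(authHeader);
    }
}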

 

Please note how the Connector API is invoked

 

  • the constant (represented by ‘ICS4EBSConnectorBaseURI’) is nothing but the base URI exposed by the MCS REST Connector API i.e. /mobile/custom/AccountHealthEbsAPI
  • the remaining URI (highlighted) is the one which we defined during the ICS (inbound) REST endpoint configuration

 

[Screenshots: custom API implementation details]

 

 

Mobile UI Development

 

As mentioned in the Key Integration Components section of part I of this blog, one can choose from a number of supported client frameworks in order to build the mobile application. This mostly involves leveraging the REST APIs exposed by the mobile back end. If we consider the example of a hybrid mobile application, here is the JavaScript jQuery Ajax call for invoking a GET request

 

var settings =
        {
            "async": true,
            "url": "https://my-mcs-instance:443/mobile/custom/AccountHealthEbsAPI/riskOrders?customer=your_customer_name&username=your_user_name&password=your_password",
            "method": "GET",
            "headers": {
                "oracle-mobile-backend-id": "oracle_mcs_backend_id",
                "authorization": "Basic eW91cl91c2VyX25hbWU6eW91cl9wYXNzd29yZA==",
                "cache-control": "no-cache"
            }
        }


$.ajax(settings).done(function (response) {
    console.log(response);
});
     

MCS Platform Aspects

 

It is rather tough to cover all the features of Oracle MCS in a couple of blog posts. The important thing to note is that MCS provides a comprehensive mobile back end platform which includes support for features such as Push Notifications, Analytics, Storage, offline data and sync etc. These can best be explored via the Platform APIs section in the official product documentation

 

Security

 

This blog does not cover the relevant security aspects from a code & implementation perspective, but the below mentioned items are worth a deeper dive

 

  • Authentication: identifying valid mobile users using options ranging from HTTP Basic Auth, OAuth, Enterprise SSO as well as social login (Facebook is supported at the time of writing)
  • Authorization: protecting Custom APIs based on roles configured within MCS
  • Outbound Security: Connector APIs leverage declarative security policies to connect to external services
  • Identity Propagation: this involves transmission of the authenticated user context from the mobile app right down to the external system (using Connector APIs) based on tokens/assertions (without exchanging sensitive user credentials)

 

With this, we have reached the end of this two-part blog series !

 

The views expressed in this post are my own and do not necessarily reflect the views of Oracle.

This blog covers topics which are important to understand in order to build bespoke mobile applications which expose functionality powered by Oracle eBusiness Suite (EBS). Teams can leverage the information presented in this blog for multiple aspects of their mobility projects, including planning, design and development

 

 

Current offerings & potential opportunities

 

EBS has a considerable number of mobile apps available as part of the product stack. Also, the latest Oracle Mobile Cloud Service (MCS) offerings contain a few mobile apps released for EBS self-service and expense management features. However, as a developer in an enterprise IT organization with EBS deployed, you may still need to build custom mobile apps that expose business critical data, fetched from EBS, to end users. This could be a mobile app which presents valuable information aggregated from disparate cloud and on-premises applications on a single, intuitive interface, and is therefore completely separate from the EBS mobile apps available out of the box.

As an example, consider a fictitious company VISION CORP that sells computing hardware and data center equipment. They used to manage customer experience focused aspects using on-premises Oracle Apps, but have now invested in Sales Cloud for managing deals and opportunities, and in Service Cloud for customer service requests. They still use on-premises EBS for tracking sales orders and invoices. Vision tracks customer data in several systems, and pulling together the right information at the right time is quite challenging. Some existing customers may also represent new opportunities, based on their potential need to purchase new products and services. Therefore, Vision's IT department needs to build an “Account Health” mobile app for sales reps that aggregates customer information into a single, easy to use interface available from any smart phone. This single app should fetch data from Sales Cloud, Service Cloud and on-premises EBS, as shown in the figure below. Sales reps should then be able to quickly review and act on the critical account health information before meeting existing customers (who are also prospective opportunities for additional sales) in order to successfully conclude relevant deals.

 

[Figure: Mobile application fetching data from Oracle Sales Cloud, Service Cloud and on-premises EBS]

 

Selecting the right Cloud Platform for your Mobile Backend

 

A comprehensive Mobile Backend is the foundation for a successful mobile app development practice. A wise approach would be to leverage the compelling features which Oracle Mobile Cloud Service (MCS) gives you right out of the box

 

[Figure: Oracle MCS high level architecture]

 

These include

 

  • Platform to host back-end APIs and implementations
  • Support for mobile platform aspects – Security, Storage, Offline Data synchronization, Device Notifications, Location Services and Analytics.
  • Easy interfaces for no-code UI development and SDKs for diverse set of UI development frameworks
  • Quick & secure Integration with back-end systems, using connectors

 

It’s important to note that all of this is available without you having to install and configure – and thereafter monitor, patch and upgrade – mobile back-end software infrastructure in your data centers.

 

Architecture

 

The reference architecture presented below can be used for building bespoke mobile apps exposing data from on-premises EBS. It is the same architecture covered in the other blog from the Oracle A-team.

 

[Figure: architecture for a mobile app fetching data from on-premises EBS]

 

Key integration components

 

Client Side

 

The client side is nothing but the mobile application. It needs to fetch relevant information from on-premises EBS e.g. information related to sales orders and invoices. It could be developed using any client technology which MCS supports

 

  • Native client SDKs: Android, iOS, Windows
  • Hybrid clients SDK: Apache Cordova
  • JavaScript SDK: for use with browser based apps and hybrid frameworks (other than Cordova)
  • Support for Xamarin and Swift: you can also use the MCS SDK with Swift applications, and the Xamarin platform also provides an SDK which you can use with MCS

 

You can refer to the official documentation (Part II) for a detailed insight into all of the SDKs mentioned above.

 

In addition to existing SDKs, you can also use the following

 

  • Oracle JavaScript Extension Toolkit (JET): in the context of the use case presented in this blog, JET (which is a JavaScript development toolkit) can serve as a suitable platform for hybrid application development
  • Oracle Mobile Application Framework (MAF): It is a comprehensive mobile application development platform using which you can build applications and deploy to iOS, Android and Windows platforms, all using a single code base. MAF is ideal for building mobile applications on top of backends built with Oracle MCS since it can leverage the REST APIs which MCS exposes

 

Target Systems

 

Oracle EBS serves as our target system in this case. It is the repository of information on top of which integrations are built in order to work with the data it contains. Data can be collected from EBS by setting up Integrated SOA Gateway and accessing the SOAP/REST interfaces. Relevant details are mentioned in the My Oracle Support documents for EBS version 12.2.x (Document ID 1311068.1) and 12.1.x (Document ID 1998019.1). With this integration approach, the Oracle Integration Cloud Service (ICS) EBS Adapter can be used to build the integrations. If the EBS version is 11.x, the provision to set up Integrated SOA Gateway is not available, so data has to be collected by directly accessing the EBS database and invoking PL/SQL APIs; with this approach, the ICS Database Adapter can be used to build the integration in ICS

 

Integration

 

Oracle Mobile Cloud Service

 

MCS would be used to model the mobile backend for integration with the mobile application. MCS REST connector will be configured to connect with ICS integration source REST endpoint and get the data fetched from EBS. MCS Custom API implementation (Node.js code) will invoke the MCS REST connector to gather EBS data, and will expose a REST endpoint for the Mobile app client to invoke.

 

Oracle Integration Cloud Service 

 

ICS is feature-abundant, but, from the context of this post, the following capabilities are critical

  • Inbound REST adapter (source)
  • Outbound DB adapter
  • On-premise agent

 

It will provide a suitable platform for integrating with EBS and exposing the fetched data to Oracle Cloud services (like MCS) in a desirable format. The usage of ICS gains more relevance when EBS is behind the corporate firewall. Enterprises do not want to open firewall ports to have cloud services directly invoke EBS integration interfaces (even HTTP enabled ones). The solution to this problem lies in an ‘agent’ based approach and is achieved through a combination of an on-premise ICS Agent and Oracle Messaging Cloud Service.

 

ICS On-Premises Agent

 

ICS On-Premises Agent would be used as a communication channel between EBS and ICS. The key point to remember is that you’re not obliged to allow incoming requests from Oracle cloud to on-premises EBS – thanks to the combination of ICS and Oracle Messaging Cloud Service

 

Oracle Messaging Cloud Service

 

It would act as a mediator between the ICS on-premises agent and ICS in Oracle Cloud. The ICS agent reads messages from, and publishes to, the messaging infrastructure powered by Oracle Messaging Cloud Service. Integrations should be built in ICS with the EBS or DB adapter as the target and REST as the source. This lets ICS fetch data from EBS and expose it via REST endpoints.

 

Here is the sequence of events which takes place when the mobile application interacts with the server side integration

 

  • The hybrid mobile app makes a call to the REST API exposed by the Custom API layer within Oracle MCS
  • The Custom API implementation internally invokes the ICS Connector API (configured in MCS)
  • The ICS REST endpoint is called by virtue of the ICS Connector API integration
  • Now, the control flow passes on to the EBS DB adapter integration within ICS, which internally translates the request into a message and puts it into the Oracle Messaging Cloud Service queue.
  • The ICS on-premise agent picks up the message from the queue and processes it – this results in the invocation of EBS
  • The results obtained from EBS are put into an Oracle Messaging Cloud Service queue by the ICS on-premise agent and picked up by the ICS component running in Oracle Public Cloud
  • The call chain now reverses i.e. the response is passed back to the Connector API, from where it reaches the Custom API and finally back to the mobile client

 

Mobile App Development Approach

 

Using the Oracle mobility platform, the following approach is recommended for building mobile apps. Though most of these considerations are applicable as general best practices, a few are specific to integrating the mobile implementation with on-premises EBS

 

Building the Core Functionality

 

Based on the requirements of the mobile app and the exact information required from the back end systems (EBS, in the current context), the core functionality of the solution matters most: what is the purpose of the app, who are the users, which screens are required, what should be displayed on those screens, and where will that data be obtained from?

 

Leveraging Platform Aspects

 

Once the core functionality is defined, the mobility platform aspects need to be devised as well – security, offline synchronization/caching of relevant data, storage of mobile content in the cloud (not on the device), device notifications, and analytics. Implementing these aspects impacts how developers build the various tiers of the overall solution – UI, mobile service, integration, and EBS specific data collection.

Development Strategy

Having considered these aspects, and based on the architecture presented in the previous section, the figure below shows a project planning approach for the various tasks involved. Using the Oracle cloud platform and these recommendations, enterprises can rapidly build EBS mobility solutions by having different developer personas execute in parallel without tight dependencies on each other. For building a mobile app that fetches data from on-premises EBS using the Oracle Cloud Platform, the team should at minimum comprise the following skill-based personas

  • Mobile Developer: creates the Custom API endpoints and the UI which will invoke them. Also owns the major work of implementing the mobility platform aspects (security, notifications etc.) and works on configuring secure mobile app access, possibly with the help of a security/Identity and Access Management (IAM) expert. Depending on the amount of work involved and the level of segregation, one or more mobile developers could work in parallel – for example, one expert in hybrid app development (using JET) building the UI with a focus on look-and-feel (LAF), and another building the custom API endpoints and configurations for push notifications, storage collections, offline synchronization and so forth.
  • Service Developer: creates the backend service implementation invoked by the MCS custom API, which includes the MCS connector configuration and the API implementation code developed in Node.js. Would also configure identity propagation to the backend system, possibly with the help of an IAM expert.
  • Integration Developer: creates ICS connections with the required systems (like EBS) and the actual integrations, including data flows with attribute mappings and transformations.
  • EBS SME (Subject Matter Expert): as an expert on EBS, analyzes available EBS APIs or the (database) data model to devise the data extraction logic.

 

In addition to the personas listed above, please also take into consideration some of the other relevant roles found in a traditional IT project: Enterprise Architect, Program/Release Manager, Project Manager and IAM expert.

 

[Figure: MCS application development personas]

 

 

In the initial phase of the project, diligent requirement analysis should be carried out. It should lead to an exact UI prototype of the mobile app, covering the details of each screen and the overall flow. Once the UI prototype is ready, it becomes clear exactly what information is required from EBS and what the design/structure of the MCS custom API (which the app invokes to render the relevant information on the UI) should be.

 

Therefore the following two activities can be executed in parallel:

 

  • EBS Integration Analysis: based on the EBS version and multiple other factors (refer to the Architecture section), the EBS SME will have finalized the EBS integration approach – Integrated SOA Gateway for a Web Services based approach, or direct database access. Based on the UI prototype details, if the information required from EBS is not available through the out-of-the-box APIs, the EBS SME will have to develop and deploy EBS custom integration interfaces for gathering the required data (as described here) or develop custom stored procedures and deploy them to the EBS DB schema. These activities are part of the EBS Integration Configuration.
  • MCS Custom API definition: based on the UI requirements and the information required, it will be clear which screen default view, swipe, or button click should render what data, and hence what the input and output of each corresponding MCS custom API invocation should be. The mobile developer can therefore move forward defining the custom API skeleton and all the corresponding REST endpoints.

 

With the REST endpoints concluded as part of the MCS Custom API design, the Mobile Developer can go ahead and build the entire UI using the relevant client technology, without any dependency on other development activities in the project. Once the EBS integration is ready, the Integration Developer (if having access to the on-premises systems) or even the EBS SME can carry out the ICS Agent installation, directly on the EBS host or on another machine with proper connectivity to EBS. Thereafter, the Integration Developer would proceed with ICS integration development. This includes the creation of ICS connections and the actual integrations (EBS as target and REST as source), with attribute mappings for the request and response data flows. ICS has adapters for both EBS integration approaches: Web Services based, using Integrated SOA Gateway (for EBS R12 onwards), or direct database access invoking stored procedures.

With the ICS integration available to fetch data from EBS via the corresponding REST endpoint, it can be invoked from MCS using an MCS REST Connector configuration. In addition to making this configuration in MCS, the Service Developer would develop the Custom API implementation, building the logic in Node.js, to invoke the corresponding MCS REST Connector (see the sketch below). With all the pieces finally ready, integration testing of the end-to-end flow can be carried out to deliver a quality app.
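To make that hand-off concrete, here is a minimal sketch of what such a Custom API implementation could look like. All names (the API path, connector name and resource) are hypothetical, and the connector client call shape reflects my reading of the MCS custom code SDK; treat it as an illustration under those assumptions, not the definitive implementation.

/* ebsorders.js: hypothetical Custom API implementation (Node.js).
   Assumes the promise-based connector client exposed to MCS custom
   code via req.oracleMobile.connectors; all names are made up. */
module.exports = function (service) {

  // GET /mobile/custom/ebsorders/orders: fetch order data from EBS,
  // via the ICS integration surfaced through an MCS REST Connector.
  service.get('/mobile/custom/ebsorders/orders', function (req, res) {
    req.oracleMobile.connectors.ebsrestconnector
      .get('orders', {inType: 'json', outType: 'json'})
      .then(
        function (result) {
          // Relay the connector's JSON payload back to the mobile app
          res.status(result.statusCode).send(result.result);
        },
        function (error) {
          // Surface integration failures to the caller
          res.status(error.statusCode).send(error.error);
        });
  });
};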

 

Based on the development strategy described in this post, the implementation details are covered in the second part of this blog.

 

The views expressed in this post are my own and do not necessarily reflect the views of Oracle.

In this post, we’ll look at why Connector APIs are required despite the option to implement external service integration directly using Custom APIs. In the process, we’ll discover some useful features of Oracle Mobile Cloud Service.

 

 

Custom & Connector APIs

 

In addition to the core Platform APIs, Oracle Mobile Cloud Service provides Custom APIs. At a high level, using Custom APIs is a two-step process:

 

  • Declarative interface design: defining the service contract
  • Concrete implementation: server side business logic to satisfy the interfaces

 

Custom API implementations are built using Node.js, which makes it possible to implement almost any server-side business logic, including integration with heterogeneous external services.
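For a sense of what "direct" integration means in practice, here is a rough sketch of a Custom API implementation calling an external REST service using nothing but Node's built-in https module. The hostname, path and credentials below are made up; the point is that you end up owning every concern yourself, which the rest of this post argues a Connector API can absorb for you.

var https = require('https');

module.exports = function (service) {
  service.get('/mobile/custom/weather/forecast', function (req, res) {
    // Without a Connector API, endpoint details, credentials,
    // error handling and timeouts all live in your own code.
    var options = {
      hostname: 'api.example.com',   // hypothetical external service
      path: '/v1/forecast',
      auth: 'serviceAccount:secret'  // hard to store & rotate securely here
    };
    https.get(options, function (extRes) {
      var body = '';
      extRes.on('data', function (chunk) { body += chunk; });
      extRes.on('end', function () {
        res.status(extRes.statusCode).send(body);
      });
    }).on('error', function (err) {
      res.status(500).send({message: err.message});
    });
  });
};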

 

Oracle MCS also supports Connector APIs. Here are some of their important characteristics

 

  • They provide a layer of abstraction when it comes to integrating with heterogeneous systems and platforms on the cloud as well as on-premises
  • Form a bridge between the Custom API and the target system
  • Cannot be accessed by mobile clients directly, i.e. Connector API endpoints are designed to be invoked by Custom APIs only (see the sketch after this list)
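A minimal sketch of this bridging role follows; the connector name and paths are hypothetical, and the connectors client shape is assumed from the MCS custom code SDK. The mobile app only ever calls the Custom API endpoint, and the custom code invokes the Connector API on its behalf:

module.exports = function (service) {
  // The mobile app calls this Custom API endpoint; only the custom
  // code, never the app, talks to the Connector API underneath.
  service.get('/mobile/custom/crm/accounts', function (req, res) {
    req.oracleMobile.connectors.crmconnector
      .get('accounts', {inType: 'json'})
      .then(
        function (result) { res.status(result.statusCode).send(result.result); },
        function (error) { res.status(error.statusCode).send(error.error); });
  });
};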

 

 

[Image: Custom & Connector APIs: the big picture]

 

 

Let’s look at some of the most compelling features of Oracle MCS Connector APIs

 

Ease of Security configuration

 

Imagine you have a REST web service which can be accessed by providing service account credentials using HTTP Basic (over SSL, of course). Here are some of the things you would have to think about if you plan on integrating with this REST service directly from a Custom API

 

  • Come up with a secure way of storing & managing the service account credentials
  • Revamp your implementation if the security requirements of the target service change (e.g. from HTTP Basic to OAuth)
  • Dive into an enhancement development cycle when you are required to integrate with multiple services, each having different security constraints

 

Thanks to the Connector API, you can bank on the following capabilities

 

Support for heterogeneous security policies

 

Different flavors of the Connector API support different policies. These are quite extensive in nature; refer to the official product documentation for the latest information on the precise security policies applicable to your use case.

 

Declarative & flexible security configuration

 

Oracle MCS allows UI-based security configuration for the external services which you want to integrate with. You can do so by visiting the ‘Security’ navigation link in the Connector API wizard page. Here is a pointer from the official documentation

 

[Image: Enforcing HTTP Basic over SSL policy for a REST Connector]

 

Secure & centralized management of credentials/certificates

 

Credentials for the external service can be configured in a secure manner, using a simple storage mechanism based on a unique identifier (referred to as the csf-key). You can choose to create a new CSF key or select an existing one. Behind the scenes, this leverages the Credential Store Framework in OPSS.

 

[Image: Creating a new CSF key to store credentials]

 

You can refer to this section of the product documentation for more details on how to work with CSF keys and certificates.

 

Logging & diagnostics

 

Oracle MCS provides rich logging capabilities by default, both for Custom as well as Connector APIs. There is no extra ‘logging’ code which you would need to write (although you are free to add application-specific logs within your Custom API implementations). Listed below are examples of a few of the diagnostic capabilities which come bundled with Connector APIs.

 

Troubleshooting Connector API issues

 

This can prove invaluable when things do not work as expected, e.g. the screenshot below depicts a scenario where the invocation failed due to an issue in the Connector API integration layer.

 

[Image: Troubleshooting a Connector API issue]

 

 

Follow this link to the product documentation to get started. Please note that one can also debug Custom APIs in a similar fashion: Oracle MCS provides the stack traces of your custom Node.js code in exceptional scenarios, and you can add your own log statements, as sketched below.
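Here is a small illustration of sprinkling application-specific log statements into a Custom API implementation. It assumes, per my understanding of the MCS custom code environment (Node.js), that console output is captured by the MCS log viewer; the endpoint name is hypothetical.

module.exports = function (service) {
  service.get('/mobile/custom/ebsorders/orders', function (req, res) {
    console.info('orders endpoint invoked');   // appears in the MCS logs
    try {
      // ... business logic would go here ...
      res.status(200).send([]);                // placeholder response
    } catch (e) {
      console.error('orders endpoint failed: ' + e.message);
      res.status(500).send({message: e.message});
    }
  });
};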

 

Request tracking

 

Each request to a Mobile Backend in Oracle MCS is assigned a unique identifier which is ‘attached’ to all the components involved in the request, e.g. you can track the entire flow, starting with the API call from your mobile app to a Custom API associated with a Mobile Backend, all the way to the Connector API and the end target system.

 

[Image: Inbound call to the Custom API (in Mobile Backend)]

 

 

[Image: Call from the Custom API to the Connector API]

 

 

[Image: Outbound call from the Connector API to the external system]

 

 

 

For more on the logging and diagnostic features, refer to the Oracle Mobile Cloud Service documentation.

 

 

Ease of testing

 

Typically, there are several components involved in creating a full-fledged mobile-ready API using Oracle MCS. In such a scenario, it’s very important for these components to be testable in isolation. The Connector API gives you the ability to test it on its own, before wiring it to the Custom API.

 

For example, let’s look at a REST Connector API

 

[Image: Configuring a REST Connector API]

 

 

[Image: Testing an HTTP GET request]

 

 

[Image: Complete URL for the remote REST endpoint]

 

 

[Image: Endpoint invocation result]

 

 

For detailed insight into testing your REST Connector API, please visit this link from the official documentation.

 

 

Beauty of SOAP Connector APIs

 

There are a couple of interesting options available exclusively when working with SOAP Connector APIs

 

SOAP to REST for free

 

Here is the basic premise of SOAP Connector APIs

 

  • Oracle MCS consumes a SOAP WSDL (provided by the user)
  • WSDL port operations are auto-magically converted to REST endpoints (for the Custom APIs to invoke; see the sketch further below)

 

 

[Image: WSDL port operations]

 

[Image: SOAP operation to REST endpoint]
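A hypothetical sketch of invoking such an auto-generated REST endpoint from a Custom API implementation could look as follows. The connector name ('hrsoapconnector') and operation ('GetEmployee') are made up, and the connectors client shape is assumed from the MCS custom code SDK. Note that the body is plain JSON, with MCS handling the SOAP/XML translation (see the next section):

module.exports = function (service) {
  service.get('/mobile/custom/hr/employee/:id', function (req, res) {
    // JSON in, JSON out; MCS transforms to/from the SOAP XML payload
    var body = { GetEmployee: { employeeId: req.params.id } };
    req.oracleMobile.connectors.hrsoapconnector
      .post('GetEmployee', body, {inType: 'json', outType: 'json'})
      .then(
        function (result) { res.status(result.statusCode).send(result.result); },
        function (error) { res.status(error.statusCode).send(error.error); });
  });
};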

 

 

 

Note: You’re free to replace the default endpoint names which MCS auto-generates, as well as the default operation names from the WSDL, with custom names.

 

Bi-directional transformations between XML & JSON

 

MCS takes care of the following

 

  • Transformation of inbound calls (from the mobile app to the Custom API to the SOAP Connector, all the way to the SOAP target) from JSON to XML
  • Conversion of the XML response (outbound calls) from the SOAP target to JSON (for consumption by the Custom API and, in turn, the mobile app)

 

You can also continue to use XML as your payload format by tweaking some of the request parameters. For further details, refer to the official product documentation.

 

Declarative enforcement of other policies

HTTP Rules

 

These are specific to REST Connector APIs and are nothing but one or more HTTP parameters (header or query) provided at design time (static configuration). This is perfect e.g. for storing API keys which will be automatically added to your HTTP requests. You can choose to apply them at a resource or HTTP method level.

 

[Image: Configuring Rules]

 

 

Timeouts

 

This facility is common to REST and SOAP Connector APIs and is used for static configuration of connectivity SLAs. Think about a scenario where the external service is unavailable: you definitely do not want your connection request to hang or stall indefinitely, since that can hamper the performance of your mobile app and create bottlenecks. A sketch of handling such a failure in custom code follows the screenshot below.

 

[Image: Configuring connectivity SLAs]
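Even with the timeout configured declaratively at the connector level, the Custom API implementation should still handle the rejected invocation gracefully rather than letting the mobile app wait on an opaque error. A minimal sketch follows; the names are hypothetical, the connectors client shape is assumed from the MCS custom code SDK, and mapping the failure to a 503 is my own choice:

module.exports = function (service) {
  service.get('/mobile/custom/ebsorders/orders', function (req, res) {
    req.oracleMobile.connectors.ebsrestconnector
      .get('orders', {inType: 'json'})
      .then(
        function (result) { res.status(result.statusCode).send(result.result); },
        function (error) {
          // Triggered e.g. when the connector's configured timeout fires
          res.status(503).send({message: 'Backend unavailable, please retry later'});
        });
  });
};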

 

 

Conclusion

 

I hope this gives you a starting point to explore Connector APIs further. As always, the choice often depends on your specific use case, but more often than not, you stand to gain a lot by leveraging the Oracle Mobile Cloud Service Connector APIs.

 

 

The views expressed in this post are my own and do not necessarily reflect the views of Oracle.