
This blog will demonstrate how to get started with a simple MongoDB-based application:

 

  • Run it on Oracle Application Container Cloud
  • Unit test and CI/CD using Oracle Developer Cloud
  • Our MongoDB instance will run in a Docker container on Oracle Container Cloud

 

 

 

Application

The sample project is relatively simple

 

  • It uses JPA to define the data layer, along with Hibernate OGM
  • Fongo (an in-memory MongoDB) is used for unit testing
  • Jersey (the JAX-RS implementation) is used to provide a REST interface

 

You can check out the project here

 

MongoDB, Hibernate OGM

MongoDB is an open source, document-based, distributed database. More information here. Hibernate OGM is a framework which helps you use JPA (Java Persistence API) to work with NoSQL stores instead of the relational databases that JPA was originally designed for.

 

  • It has support for a variety of NoSQL stores (document, column, key-value, graph)
  • NoSQL databases it supports include MongoDB (as demonstrated in this blog), Neo4j, Redis, Cassandra, etc.

 

More details here

 

In this application

 

  • We define our entities and data operations (create, read) using plain old JPA
  • Hibernate OGM translates the JPA calls into MongoDB operations using the native MongoDB Java driver behind the scenes; we do not interact with or write code on top of the Java driver explicitly (a minimal sketch follows)
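To make this concrete, here is a minimal, illustrative sketch of what "plain old JPA" on top of MongoDB looks like. It is not the exact code from the project; the entity fields and persistence unit name are assumptions (the EMPLOYEES collection name matches the one shown later in this post):

import javax.persistence.Entity;
import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
import javax.persistence.Id;
import javax.persistence.Persistence;
import javax.persistence.Table;

@Entity
@Table(name = "EMPLOYEES") // ends up as the EMPLOYEES collection in MongoDB
public class Employee {

    @Id
    private String empId;
    private String name;

    public Employee() {
        // no-arg constructor required by JPA
    }

    public Employee(String empId, String name) {
        this.empId = empId;
        this.name = name;
    }

    // create and read using nothing but the JPA API; Hibernate OGM drives the MongoDB Java driver underneath
    public static void main(String[] args) {
        EntityManagerFactory emf = Persistence.createEntityManagerFactory("employee-pu");
        EntityManager em = emf.createEntityManager();

        em.getTransaction().begin();
        em.persist(new Employee("42", "abhirockzz")); // create
        em.getTransaction().commit();

        Employee found = em.find(Employee.class, "42"); // read

        em.close();
        emf.close();
    }
}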

 

The Hibernate OGM related configuration lives in the project's persistence.xml; essentially it declares org.hibernate.ogm.jpa.HibernateOgmPersistence as the persistence provider and sets the hibernate.ogm.datastore.* properties (provider, database, host, port) that point at the MongoDB instance

 

Setup

Let's configure/set up our Cloud services and get the application up and running...

MongoDB on Oracle Container Cloud

 

 

 

 

Oracle Developer Cloud

You will need to configure Developer Cloud for the continuous build as well as the deployment process. You can refer to previous blogs for the details; the parts specific to this example are highlighted here

 

References

 

Make sure you set up Oracle Developer Cloud to provide JUnit results

 

Provide Oracle Application Container Cloud (configuration) descriptor

 

As a part of the Deployment configuration, we will provide the deployment.json details to Oracle Developer Cloud - in this case, specifically for setting up the MongoDB co-ordinates in the form of environment variables. Oracle Developer Cloud will deal with the intricacies of the deployment to Oracle Application Container Cloud
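Inside the application, those environment variables can then be read at startup and fed into the Hibernate OGM datastore properties. A tiny sketch, with the caveat that the variable names below are assumptions rather than the ones the project actually uses:

public class MongoConfig {

    // variable names are illustrative; the real ones are whatever deployment.json seeds into the environment
    public static String host() {
        return System.getenv().getOrDefault("MONGODB_HOST", "localhost");
    }

    public static int port() {
        return Integer.parseInt(System.getenv().getOrDefault("MONGODB_PORT", "27017"));
    }
}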

 

 

JUnit results in Oracle Developer Cloud

 

From the build logs

 

 

From the test reports

 

 

Deployment confirmation in Oracle Developer Cloud

 

 

Post-deployment status in Application Container Cloud

 

Note that the environment variables were seeded during deployment

 

 

Test the application

  • We use cURL to interact with our application REST endpoints, and
  • Robomongo as a (thick) client to verify data in Mongo DB

 

Check the URL for the ACCS application first

 

Add employee(s)

 

curl -X POST https://my-accs-app/employees -d 42:abhirockzz
curl -X POST https://my-accs-app/employees -d 43:john
curl -X POST https://my-accs-app/employees -d 44:jane

 

The request payload is a ':' delimited string containing the employee ID and name; a sketch of the kind of JAX-RS resource method that handles it follows
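For illustration only (the actual resource class in the project may look different), a Jersey/JAX-RS method handling that payload could be as simple as this, reusing the Employee entity sketched earlier:

import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;
import javax.ws.rs.Consumes;
import javax.ws.rs.POST;
import javax.ws.rs.Path;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;

@Path("employees")
public class EmployeeResource {

    private static final EntityManagerFactory EMF = Persistence.createEntityManagerFactory("employee-pu");

    @POST
    @Consumes(MediaType.TEXT_PLAIN)
    public Response create(String payload) {
        // payload is the ':' delimited string, e.g. 42:abhirockzz
        String[] parts = payload.split(":");

        EntityManager em = EMF.createEntityManager();
        em.getTransaction().begin();
        em.persist(new Employee(parts[0], parts[1]));
        em.getTransaction().commit();
        em.close();

        return Response.ok().build();
    }
}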

 

Get employee(s)

 

You will get back an XML payload in response

 

curl -X GET https://my-accs-app/employees - all employees
curl -X GET https://my-accs-app/employees/44 - specific employee (by ID)

 

 

Let's peek into MongoDB as well

 

  • mongotest is the database
  • EMPLOYEES is the MongoDB collection (equivalent to @Table in JPA)

 

 

Test the CI/CD flow

Make some code changes and push them to the Developer Cloud service Git repo. This should

  • Automatically trigger the build, which once successful will
  • Automatically trigger the deployment process, and
  • Redeploy the new application version to Application Container Cloud

 

 

The views expressed in this post are my own and do not necessarily reflect the views of Oracle.

It's time to take Java EE 8 for a spin and try out GlassFish 5 builds on Docker using Oracle Container Cloud. Java EE specifications covered:

 

  • Server Sent Events in JAX-RS 2.1 (JSR 370) - new in Java EE 8
  • Asynchronous Events in CDI 2.0 (JSR 365) - new in Java EE 8
  • Websocket 1.1 (JSR 356) - part of the existing Java EE 7 specification

 

 

Application

 

Here is a quick summary of what's going on; a minimal code sketch of the flow follows the list

 

  • A Java EE scheduler triggers asynchronous CDI events (fireAsync())
    • These CDI events are qualified (using a custom Qualifier)
    • It also uses a custom java.util.concurrent.Executor (based on the Java EE Concurrency Utility ManagedExecutorService) – thanks to the NotificationOptions supported by the CDI API
  • Two (asynchronous) CDI observers (@ObservesAsync) – a JAX-RS SSE broadcaster and a Websocket endpoint
  • SSE & Websocket endpoints cater to their respective clients
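Here is a minimal sketch of that flow. The qualifier and class names are assumptions, not the project's actual ones, and the observer bodies are elided:

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import javax.annotation.Resource;
import javax.ejb.Schedule;
import javax.ejb.Singleton;
import javax.enterprise.concurrent.ManagedExecutorService;
import javax.enterprise.event.Event;
import javax.enterprise.event.NotificationOptions;
import javax.enterprise.event.ObservesAsync;
import javax.inject.Inject;
import javax.inject.Qualifier;

@Qualifier
@Retention(RetentionPolicy.RUNTIME)
@Target({ElementType.FIELD, ElementType.PARAMETER, ElementType.METHOD, ElementType.TYPE})
@interface AppEvent {
    // custom qualifier used to qualify the asynchronous CDI events
}

@Singleton
public class EventSource {

    @Inject
    @AppEvent
    private Event<String> event;

    @Resource
    private ManagedExecutorService executor; // Java EE Concurrency Utilities

    // the Java EE scheduler fires an asynchronous CDI event every five seconds
    @Schedule(second = "*/5", minute = "*", hour = "*", persistent = false)
    public void fire() {
        // run the observers on the managed executor instead of the container's default thread
        event.fireAsync("event at " + System.currentTimeMillis(),
                NotificationOptions.ofExecutor(executor));
    }
}

// each asynchronous observer (the SSE broadcaster, the Websocket endpoint) receives the event like this
class EventObserver {

    void onEvent(@ObservesAsync @AppEvent String payload) {
        // push the payload out to the connected SSE / Websocket clients
    }
}

Dropping the NotificationOptions argument (a plain event.fireAsync(payload)) is what lets things run on the default, container-chosen thread mentioned below.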

 

Notice the asynchronous events running in a Managed Executor Service thread


You can also choose to let things run in the default (container-chosen) thread

 


 

Build the Docker images

 

Please note that I have used my personal Docker Hub account (abhirockzz) as the registry. Feel free to use any Docker registry of your choice

 

git clone https://github.com/abhirockzz/cdi-async-events.git
mvn clean install
docker build -t abhirockzz/gf5-nightly -f Dockerfile_gf5_nightly .
docker build -t abhirockzz/gf5-cdi-example -f Dockerfile_app .

 

Push it to a registry

 

docker push abhirockzz/gf5-cdi-example

 

Run in Oracle Container Cloud

 

Create a service

 

 

 

Deploy it

 

 

You will see this once the container (and the application) starts...

 

 

 

Drill down into the (Docker) container and check the IP for the host where it's running and note it down

Test it

 

Make use of the Host IP you just noted down

 

http://<occs_host_ip>:8080/cdi-async-events/events/subscribe - You should see a continuous stream of (SSE) events

 


 

Pick a Websocket client and use it to connect to the Websocket endpoint ws://<occs_host_ip>:8080/cdi-async-events/

 

You will see the same event stream... this time, delivered by a Websocket endpoint

 

 

 

You can try this with multiple clients - for both SSE and Websocket

 

Enjoy Java EE 8 and GlassFish!

 

The views expressed in this post are my own and do not necessarily reflect the views of Oracle.

This blog walks through an example of how to create a test pipeline which incorporates unit as well as integration testing - in the Cloud. What's critical to note is that the cloud service instances (used for testing) are started on demand and then stopped/terminated after test execution

  • Treat infrastructure as code and control it within our pipeline
  • Pay for what you use = cost control

 

We will be leveraging the following Oracle Cloud services

  • Oracle Developer Cloud
  • Oracle Database Cloud
  • Oracle Application Container Cloud

 

 

 

 

 

Oracle Developer Cloud: key enablers

The following capabilities play a critical role

 

The features mentioned below are available within the Build module

 

  • Integration with Oracle PaaS Service Manager (PSM): It's possible to add a PSMcli build step that invokes Oracle PaaS Service Manager command line interface (CLI) commands when the build runs. More details in the documentation
  • Integration with SQLcl: This makes it possible to invoke SQL statements on an Oracle Database when the build runs. Details here

 

Application

The sample application uses JAX-RS (the Jersey implementation) to expose data over REST and JPA as the ORM solution to interact with the Oracle Database Cloud service (more on GitHub)

 

 

Here is the test setup

 

Tests

 

Unit

There are two different unit tests in the application, which use the Maven Surefire plugin

  • Using the in-memory/embedded (Derby) database: this is invoked using mvn test
  • Using a (remote) Oracle Database Cloud service instance: this test is activated by a specific profile in the pom.xml and is executed using mvn test -Pdbcs-test (a rough sketch of such a test follows)
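As a rough illustration of how the same test code can be pointed at the remote database, the DBCS-flavoured test might pick up its connection co-ordinates from system properties supplied by that Maven profile. All property, unit and class names below are assumptions, not the project's actual ones:

import java.util.HashMap;
import java.util.Map;
import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;
import org.junit.Assert;
import org.junit.Test;

public class RepositoryDbcsTest {

    @Test
    public void persistAndRead() {
        // connection details injected by the dbcs-test profile (property names are illustrative)
        Map<String, String> props = new HashMap<>();
        props.put("javax.persistence.jdbc.url", System.getProperty("dbcs.jdbc.url"));
        props.put("javax.persistence.jdbc.user", System.getProperty("dbcs.user"));
        props.put("javax.persistence.jdbc.password", System.getProperty("dbcs.password"));

        EntityManagerFactory emf = Persistence.createEntityManagerFactory("test-pu", props);

        // ...exercise the same JPA logic as the embedded Derby test...

        Assert.assertNotNull(emf);
        emf.close();
    }
}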

 

 

Extract from pom.xml

 

 

 

Integration

In addition to the unit test, we have an integration test layer which is handled using the Maven Failsafe plugin

 

 

It's invoked by mvn integration-test or mvn verify

 

 

Packaging

It's handled using Maven Shade plugin (fat JAR) and Maven assembly plugin (to create a zip file with the ACCS manifest.json)

 

Developer Cloud service configuration

 

Setup

Before we dive into the details, let’s get a high level overview of how you can set this up

 

Project & code repository creation

Please refer to the Project & code repository creation section in the Tracking JUnit test results in Developer Cloud service blog or check the product documentation for more details

 

Configure source code in Git repository

Push the project from your local system to the Developer Cloud Git repo you just created. We will do this via the command line; all you need is a Git client installed on your local machine. You can also use any other Git tool of your choice

 

cd <project_folder> //where you unzipped the source code  
git init  
git remote add origin <developer_cloud_git_repo>  
//e.g. https://john.doe@developer.us.oraclecloud.com/developer007-foodomain/s/developer007-foodomain-project_2009/scm/sample.git
git add .  
git commit -m "first commit"  
git push -u origin master  //Please enter the password for your Oracle Developer Cloud account when prompted

 

Once this is done, we can now start configuring our Build

 

  • The pipeline is divided into multiple phases each of which corresponds to a Build
  • These individual phases/builds are then stitched together to create an end-to-end test flow. Let’s explore each phase and its corresponding build configuration

 

Phases

 

Unit test: part I

The JPA logic is tested using the embedded Derby database. It links to the Git repo where we pushed the code and also connects to the Oracle Maven repository

 

 

 

 

The build step invokes Maven

 

 

The post-build step

  • Invokes the next job in the pipeline
  • Archives the test results and enables JUnit test reports availability

 

 

 

 

 

Bootstrap Oracle Database Cloud service

 

  • This phase leverages PSMcli to first start the Oracle Database Cloud service, and then
  • SQLcl to create the table and load it with test data. It is invoked by the previous job

 

 

Please note that the PSM command is asynchronous in nature and returns a Job ID which you can further use (within a shell script) in order to poll the status of the job

 

Here is an example of such a script

 

VALUE=`psm dbcs stop --service-name test`

echo $VALUE

# split on ':' -- the Job ID is on the right-hand side of the ':'
OIFS=$IFS
IFS=':'
JSONDATA=${VALUE}

# skip over the left-hand side of the ':' to get the Job ID
COUNTER=0
for X in $JSONDATA
do
  if [ "$COUNTER" -eq 1 ]
  then
    # clean the string, removing leading whitespace and tabs
    X=$(echo $X | sed -e 's/^[ \t]*//')
    JOBID=$X
  else
    COUNTER=$(($COUNTER+1))
  fi
done
IFS=$OIFS

echo "Job ID is "$JOBID

# repeat the check until SUCCEED appears in the status
PSMSTATUS=-1
while [ $PSMSTATUS -ne 0 ]; do

  CHECKSTATUS=`psm dbcs operation-status --job-id $JOBID`

  if [[ $CHECKSTATUS == *"SUCCEED"* ]]
  then
    PSMSTATUS=0
    echo "PSM operation Succeeded!"
  else
    echo "Waiting for PSM operation to complete"
  fi
  sleep 60
done

 

 

Here is the SQLcl configuration which populates the Oracle Database Cloud service table

 

 

 

 

 

Unit test: part II

  • Runs tests against the Oracle Database Cloud service instance which we just bootstrapped
  • Triggers application deployment (to Oracle Application Container Cloud)
  • and, like the previous job, this too links to the Git repo and connects to the Oracle Maven repository

 

Certain values for the test code are passed in as parameters

 

 

 

The build step involves invocation of a specific (Maven) profile defined in the pom.xml

 

 

 

The post build section does a bunch of things

  • Invokes the next job in the pipeline
  • Archives the deployment artifact (in this case, a ZIP file for ACCS)
  • Archives the test results and enables test reports availability
  • Invocation of the Deployment step to Application Container Cloud

 

 

 

 

Integration test

Now that we have executed the unit tests and our application is deployed, it's time to execute the integration test against the live application. In this case, we test the REST API exposed by our application
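As an illustration, a Failsafe-style integration test (a class whose name ends in IT) could simply hit the deployed REST endpoint with the JAX-RS client; the property carrying the application URL below is an assumption:

import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;
import javax.ws.rs.core.Response;
import org.junit.Assert;
import org.junit.Test;

public class ApplicationIT {

    @Test
    public void restEndpointResponds() {
        // the ACCS application URL is passed in by the build (the property name is illustrative)
        String baseUrl = System.getProperty("accs.app.url");

        Client client = ClientBuilder.newClient();
        Response response = client.target(baseUrl).request().get();

        Assert.assertEquals(200, response.getStatus());
        client.close();
    }
}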

 

 

 

Build step invokes Maven goal

 

 

We use the HTTPS proxy in order to access an external URL (the ACCS application in this case) from within the Oracle Developer Cloud build machines

 

The post-build section invokes two subsequent jobs (both of them can run in parallel) and also archives the test results

 

 

 

 

Tear Down

 

  • PSMcli is used to stop the ACCS application; this runs in parallel with another job which uses SQLcl to clean up the data in Oracle Database Cloud (drop the table)
  • After that, the final tear down job is invoked, which shuts down the Oracle Database Cloud service instance (again, using PSMcli)

 

 

 

 

 

 

 

Finally, shut down the Oracle Database Cloud service instance

 

 

 

Total recall...

 

  • Split the pipeline into phases and implement them using Build jobs - the choice of granularity is up to you, e.g. you can invoke the PSMcli and SQLcl steps in the same job
  • Treat infrastructure (cloud services) as code and manage it from within your pipeline - Developer Cloud makes this easy across the entire Oracle PaaS platform through its PSMcli integration

 

The views expressed in this post are my own and do not necessarily reflect the views of Oracle.

With Oracle Developer Cloud Service, you can integrate your existing Jenkins or Hudson setup - whether it is on-premises or cloud based. Currently, there are three different integration points, which are enabled using Webhooks. Let's look at each of these

 

Jenkins Build notifications

This is made possible by an inbound Webhook which accepts build notifications from a remote Jenkins server

 

Configuration summary

  • Create a Webhook in Developer Cloud (type: Jenkins - Notification Plugin)
  • Configure your external Jenkins to use the URL provided in the Developer Cloud Service Webhook configuration

 

Here is a snapshot of the configuration in Oracle Developer Cloud

 

 

This is how the resulting Activity Stream looks in Oracle Developer Cloud. Clicking on the hyperlinks available in the Activity Stream will redirect you to the artifacts in the remote Jenkins instance, e.g. the build, commit, Git repository, etc.

 

 

 

You can refer to this documentation section for more details

 

Jenkins Build Trigger integration

You can configure an outbound Webhook which triggers a build on a remote Hudson or Jenkins build server when a Git push occurs in the selected repository in Developer Cloud

 

Configuration summary

  • Configure external Jenkins to allow remote invocation of builds
  • Create a Webhook of type Hudson/Jenkins - Build Trigger
    • Provide basic info, configure authentication and trigger

 

Here is a snapshot of the configuration in Oracle Developer Cloud

 

 

 

 

You can refer to this documentation section for more details.

 

Jenkins Git Plugin integration

This is another outbound Webhook which can notify a Hudson or Jenkins build job in response to a Git push in Developer Cloud service. The difference from the previous Webhook is that this one triggers builds of all the jobs configured for the same Git repository (in Developer Cloud service) as the one sent in the Webhook payload

 

Configuration summary

  • Create a Webhook of type Hudson/Jenkins Git Plugin
  • Provide the Git repository details as a part of the external Jenkins configuration and activate SCM polling

 

 

You can refer to this documentation section for more details.

 

The views expressed in this post are my own and do not necessarily reflect the views of Oracle.

In this blog we will look at

 

 

 

Application

The application is a simple one which fetches the price of a stock from the cache. It demonstrates other features (in addition to basic caching) such as

  • Cache loader – if the key (stock name) does not exist in the cache (since it was never searched for or has expired), the cache loader logic kicks in and fetches the price using a REST call to an endpoint
  • Serializer – Allows us to work with our domain object (Ticker) and takes care of the transformation logic
  • Expiry – A cache-level expiry is enforced after which the entry is purged from the cache
  • Metrics – get common metrics such as cache size, hits, misses, etc.

 

Code

Let’s look at some code snippets for our application and each of the features mentioned above

 

Project is available on Github

 

Cache operations

This example exposes the get cache operation over a REST endpoint implemented using Jersey (JAX-RS API)
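For illustration, the shape of such a resource might look like the sketch below; the cache client interface here is a simplified stand-in, not the actual Application Container Cloud caching API used by the project:

import javax.inject.Inject;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;

@Path("price")
public class PriceResource {

    // simplified stand-in for the project's cache client; wiring to the "test-cache" instance is omitted
    public interface PriceCache {
        String get(String tickerSymbol); // a miss triggers the cache loader
    }

    @Inject
    private PriceCache cache;

    @GET
    @Path("{ticker}")
    public String getPrice(@PathParam("ticker") String ticker) {
        // returns the cached price, or loads it (via PriceLoader) if absent or expired
        return cache.get(ticker);
    }
}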

 

 

Cache Loader

PriceLoader.java contains the logic to fetch the price from an external source
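A rough sketch of what such a loader does (the quote endpoint below is a placeholder, not the one the project actually calls):

import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;

public class PriceLoader {

    private static final String QUOTE_URL = "https://example.com/quote/"; // placeholder endpoint

    // invoked on a cache miss: fetch the latest price over REST and hand it back to the cache
    public String load(String ticker) {
        Client client = ClientBuilder.newClient();
        try {
            return client.target(QUOTE_URL + ticker).request().get(String.class);
        } finally {
            client.close();
        }
    }
}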

 

 

Serializer

TickerSerializer.java converts between Ticker.java and its String representation
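A minimal sketch of the idea; the actual Ticker fields and serialized format in the project may differ:

public class Ticker {

    private final String symbol;
    private final double price;
    private final long time; // timestamp of the quote

    public Ticker(String symbol, double price, long time) {
        this.symbol = symbol;
        this.price = price;
        this.time = time;
    }

    // serialize: Ticker -> String, e.g. "ORCL|51.2|1497427200000"
    public String toCacheValue() {
        return symbol + "|" + price + "|" + time;
    }

    // deserialize: String -> Ticker
    public static Ticker fromCacheValue(String value) {
        String[] parts = value.split("\\|");
        return new Ticker(parts[0], Double.parseDouble(parts[1]), Long.parseLong(parts[2]));
    }
}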

 

 

 

 

Expiry

A cache entry is purged when this threshold is hit; a subsequent lookup (get) of the expired entry causes the cache loader to be invoked again

 

 

Metrics

Many cache metrics can be extracted – the common ones are exposed over a REST endpoint

Some of the metrics are global and others are not. Please refer to the CacheMetrics javadoc for details

 

 

Setup

 

Oracle Application Container Cloud

The only setup required is to create the Cache. It’s very simple and can be done quickly using the documentation.

 

Please make sure that the name of the cache is the same as the one used in the code and configuration (Developer Cloud), i.e. test-cache. If not, please update the references

 

Oracle Developer Cloud

You will need to configure Developer Cloud for the build as well as the continuous deployment process. You can refer to previous blogs for the details; the parts specific to this example are highlighted here

 

References

 

Provide Oracle App Container Cloud (configuration) descriptors

 

  • The manifest.json provided here will override the one in your zip file (if any) - it's not compulsory to provide it here
  • Providing the deployment.json details is compulsory (in this CI/CD scenario) since it cannot be included in the zip file

 

 

 

Deployment confirmation in Developer Cloud

 

 

 

Status in Application Container Cloud

 

Application URL has been highlighted

 

 

 

 

 

Test the application

Check price

Invoke a HTTP GET (use curl or browser) to the REST endpoint (check the application URL) e.g. https://acc-cache-dcs-domain007.apaas.us6.oraclecloud.com/price/ORCL

 

 

If you try fetching the price of the stock after the expiry (default is 5 seconds), you should see a change in the time attribute (and the price as well - if it has actually changed)

 

Check cache metrics

 

Invoke a HTTP GET (use curl or browser) to the REST endpoint (check the application URL) e.g. https://acc-cache-dcs-domain007.apaas.us6.oraclecloud.com/metrics

 

 

Test the CI/CD flow

Make some code changes and push them to the Developer Cloud service Git repo. This should

  • Automatically trigger the build, which once successful will
  • Automatically trigger the deployment process, and
  • Redeploy the new application version to Application Container Cloud

 

Additional reading/references

 

 

 

The views expressed in this post are my own and do not necessarily reflect the views of Oracle.

Introduction

An application programming interface (API) is an interface to a service at an endpoint that provides controlled access to a business process or data. Businesses today are treating APIs as a primary product and are adopting the "API First" strategy for increased efficiency, revenue, partner contribution and customer engagement. These companies want to expose their core business capabilities as APIs to partners to bring in more revenue, while at the same time securing their core data and processes, which enables faster service delivery at lower cost.

 

APIs, Microservices and Cloud

Microservices is an architectural style that is increasingly being used for building cloud native applications, where each application is built as a set of services. These services communicate with one another through contractually agreed upon interfaces – APIs. It is an alternative architecture for building applications which provides a better way of decoupling components within an application boundary.

 

The number and granularity of APIs that a microservice-based application exposes demands robust API management that involves creating and publishing APIs, monitoring their life cycle and enforcing usage policies in a secure and scalable environment.

 

Also, with more and more enterprises of all sizes leveraging cloud platforms to build innovative applications, effective API management in the cloud and/or on-premises is pivotal to meet their business and customer needs.

 

Oracle API Platform Cloud Service

Oracle's API Platform Cloud Service (API Platform CS) provides an integrated platform to build, expose and monitor APIs to backend services, with capabilities like security enforcement, message routing and usage tracking. This blog will introduce you to the various capabilities offered by Oracle API Platform Cloud Service with the help of a simple use case. The following diagram illustrates the architecture overview of Oracle's API Platform CS.

 

(Diagram: architecture overview of Oracle API Platform Cloud Service)

 

API Platform Cloud Service offers centralized API design with a distributed API runtime, which makes it easy to manage, secure, and publicize services for application developers.

There are several components in the Oracle API Platform Cloud Service - Cloud Service Console, the Gateway, the Management Portal, and the Developer Portal. Following is a short description of each of these components:

  • API Platform Cloud Service Console: Provision new service instances, start and stop service instances, initiate backups, and perform other life cycle management tasks.
  • Gateway: This is the security and access control runtime layer for APIs. Each API is deployed to a gateway node from the Management Portal or via the REST API.
  • Management Portal: This is used to create and manage APIs, deploy APIs to gateways, manage gateways, and create and manage applications. You can also manage and deploy APIs and manage gateways with the REST API.
  • Developer Portal: Application developers subscribe to APIs and get the necessary information to invoke them from this portal.

 

Personas of API life cycle

  • API Designer/API Product Manager - Collects customer requirements, documents the API and gets agreement with the consumer on the design of the API
  • API Manager/Implementer - Creates, tests, deploys, monitors and manages APIs, apply policies supporting the design and ensuring security
  • Gateway Manager - Deploys and configures gateway nodes, reviews and approves API deployment requests, monitors and manages the gateways
  • API Consumer - Application developer who consumes APIs to meet the requirements of an application. Searches the API catalog to identify existing APIs, registers desired APIs with application.

 

Sample Use case

In this blog we will build a simple API called Book Store that exposes functions of an online book store. This blog emphasizes the API implementation-specific features of API Platform CS. To keep things simple, we will mock one simple function – list books – which returns a list of books along with their titles and authors, and expose it as an API. This blog does not cover all the features of Oracle API Platform Cloud Service; please refer here for the comprehensive documentation.

 

Note: The steps defined in the subsequent sections of this blog assume that you have access to an instance of Oracle API platform Cloud service with Gateway configurations and appropriate user grants to be able to implement and deploy APIs.

 
Create an API

In this section you create an entry for an API you want to manage in the API Platform CS Management Portal.

1) Sign in to the Management Portal as a user with the API Manager role


2) Click on the “Create API” button to create a new API by providing name, version and description


3) The newly created Book Store API should be listed under the APIs page as shown below:


Register Application to an API

API consumers register their applications to APIs they want to use. Application developers use the Developer Portal to register to APIs while API managers use Developer Portal and the Management Portal. In this section you register an application to the BookStore API.

 

1) From the APIs page, click the API you want to register an application to and click on the Registrations tab. Click on Register Application to register an application to this API.


2) The following Register Application page comes up listing all the existing applications from which you can choose or you can create a new application.


3) In this case, click on the Create an application link to create a new application and provide the details as in the below screenshot. Click Register button to register the Books App application with Book Store API


4) Once the application is registered with the API, it should be displayed under the “Registrations” tab as follows. Also notice that you can suspend this registration or de-register this application by clicking the respective buttons that appear when you hover on the application name. You can also approve or reject a developer’s request to register their application to an API from this page.


5) Each application that is registered is issued an App key, which can be sent along with the request to ensure that access to the API is granted only to registered applications. Click on the Applications tab and click on the Books App application to view the application details along with the App key. You can also re-issue the App key by clicking on the Reissue key button.


Implement the API

Now that we have created an API and registered an application that can access the API, in this section we implement the API by applying policies to configure the request and response flows.

 

Click on the BookStore API to start implementing the API. The following page comes up with API Implementation activity highlighted.


As a first step of API implementation, we configure the API endpoints. Endpoints are locations at which an API sends or receives requests or responses. APIs in API Platform CS have two endpoints in the request flow:

  1. API Request
  2. Service Request

 

Configure API Request URL

The API Request URL is the endpoint at which the gateway will receive requests from users or applications for your API

1) When you hover over the API Request section, you will see an "Edit" button with which you can configure the API request URL.


2) Click Next to configure the URL as follows. In the API Endpoint URL field, provide the endpoint URL for the Book Store API, apply and save the changes.


In this case we have specified /bookstore/books to be the relative URI. You can also choose the protocol to be HTTP or HTTPS or both.

 

Create a backend service

We need to create a backend service that would process the requests forwarded by the bookstore/books API. As mentioned earlier, we create a mock implementation of this service using Apiary. Oracle Apiary provides you with the ability to design APIs using either API Blueprint or Swagger 2.0. Please refer to http://apiary.io to learn more about Oracle Apiary, its features and to register for a free account.

 

Note: This task assumes that you have already registered with Apiary and have valid access credentials to login into Apiary.

 

1) Navigate to http://apiary.io  and Sign In using your account


2) Create a new API by clicking on Create New API project. You can choose to design your API using API Blueprint or Swagger; in this case we use API Blueprint. Click on the Create API button to create the Book Store API.


3) The API editor opens with a sample API definition which can be edited to define the API for /books as follows:

For the sake of simplicity, just replace the content in the left window with the following text. We have mocked the implementation of /bookstore/books service by providing two book entries.

 

FORMAT: 1A
HOST: http://bookstore.apiblueprint.org/

# BookStoreAPI

Bookstore is a simple API allowing consumers to view all the books along with their title and author.

## Books Collection [/bookstore/books]

### List All Books [GET]

+ Response 200 (application/json)

        [
            {
                "Title": "Thus Spoke Zarathustra",
                "Author": "Friedrich Nietzsche"
            },
            {
                "Title": "The Fountainhead",
                "Author": "Ayn Rand"
            }
        ]

 

4) When you click on Save button, you should see the right side window updated accordingly, based on the content you just provided.


5) When you click on the List All Books link, you will see the following page with a mock server URL (https://private-2dd84-bookstoreapi.apiary-mock.com/bookstore/books , note that this URL would be different when you try to execute this example) for the /bookstore/books API service implementation


6) Click on the Try button to invoke the mock service URL and validate the output. You will see a HTTP 200 response with the following output on the right side window. This confirms that the mock service URL is returning the book entries in response.


 

Configure Service Request URL

The service request is the URL at which your backend service receives requests. The gateway routes the request to this URL when a request meets all policy conditions to invoke your service.

 

1) Click on the “Edit” button you see when you hover on the Service Request section on the API Implementation page


2) Enter the policy name and provide description and click on Next as shown below


3) In the Backend service URL input field, provide the mock server URL that was noted in step #5 in the above section as follows. Apply and Save the changes.


Apply Policies

You can apply policies to an API to secure, throttle, route, or log requests sent to it. Requests can be rejected depending on the policies applied, if they do not meet criteria specified for each policy.

 

1) The API Implementation page lists all the policies currently supported. You can apply any policy by hovering over the policy name and clicking on the Apply button.


 

Configuring all the policies is beyond the scope of this blog; we apply a couple of security and traffic management policies to the BookStore API to illustrate how to manage APIs by applying policies. Please refer to the Applying Policies section of the API Platform CS documentation for more details.

 

Note: Policies in the request flow can be used to secure, throttle, route, manipulate, or log requests before they reach the backend service, while policies in the response flow manipulate and log responses before they reach the requester.

 

2) Let us say we want to restrict the BookStore API so that it can be consumed only by a specific application; a key validation policy can be applied on the request flow, which ensures that requests from unregistered (anonymous) applications are rejected.

  • As discussed in Register Application to an API section above applications can be registered to an API and a unique App key is generated and assigned for each application.
  • These keys can be distributed to clients when they register to use an API on the Developer Portal.
  • At runtime, if this key is not present in the given header or query parameter, or if the application is not registered, the request is rejected.

To apply this policy, hover over the key validation policy under Security and click on the Apply button. In the policy configuration page, give the policy a name; you can also specify the order in which this policy is triggered by selecting a policy from the "Place after the following policy" drop-down. Currently we only have the API Request policy configured.


When you click the Next button, you can specify the key delivery approach: the application key can be passed in a header or as a query parameter. In this case we choose Header and specify the key name as "api-key"; click on Apply and save the changes. At runtime, the request is parsed for this key name and, if found, its value is validated against the registered application's App key value. The request is processed only if the values match; otherwise it is rejected.


3) Let us apply another policy to restrict the number of requests our BookStore API accepts within a specific time period. An API rate limiting policy can be used to limit the total number of requests an API allows over a time period that you specify; this time period can be defined in seconds, minutes, hours, days, weeks, or months.

 

To configure this policy, hover over the API Rate Limiting policy under Traffic Management and click on the Apply button. In the resulting page, provide a policy name and specify the order in which this policy should be triggered; in this case we want it to be triggered after the key validation policy.


Click Next to configure the time period and the number of requests. We want the Gateway to reject requests for this API if they exceed 5 per minute.


Note: Other traffic management related policies like API throttling can be implemented to delay request processing if they exceed the set threshold. Please refer to the API platform CS documentation for more details.

The API implementation should look like below after configuring the above policies:


 

Deploy the API

In this section you will deploy the BookStore API to a gateway and activate the API. To deploy an endpoint, API Managers must have the Manage API or Deploy API grant for the API in addition to the Deploy to Gateway or Request Deployment to Gateway grant for a gateway.

 

Note:  This task assumes that the gateway nodes are configured and the user has the required grants to be able to deploy the API to the gateway. Please refer to Managing Gateways section for more details on configuring Gateways and their topology and Grants section for more details on granting users access to resources.

 

1) To deploy the API to the gateway, click on the Deployments icon just below the API Implementation.


2) Click on the Deploy API button. The resulting page lists all the gateways configured and allows you to choose the gateway onto which you want to deploy this API. You can also choose the initial deployment state of the API


Please note that the gateways can be configured anywhere - on Oracle Cloud, on a third-party cloud, or on-premises.

 

3) When you click on the Deploy button, a request for API deployment is submitted. Once the deployment is successful, the Deployments page shows the Gateway Load Balancer URL, which is your endpoint for sending API requests.


 

Invoke the API

Now that you have successfully implemented your API and deployed the API to the gateway, you can send requests to the API and validate if the policies work as intended. 

You can use Postman or any other REST client to send requests to the API. In this case we use Postman to invoke the API; an equivalent invocation from Java code is sketched below.
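For readers who prefer code over a GUI client, here is a minimal JAX-RS client sketch of the same call; the gateway URL and App key are passed in as arguments since they are specific to your environment:

import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;
import javax.ws.rs.core.Response;

public class BookStoreClient {

    public static void main(String[] args) {
        String gatewayUrl = args[0]; // e.g. the Gateway Load Balancer URL plus /bookstore/books
        String appKey = args[1];     // the App key of the registered Books App

        Client client = ClientBuilder.newClient();
        Response response = client.target(gatewayUrl)
                .request()
                .header("api-key", appKey) // the header name configured in the key validation policy
                .get();

        System.out.println(response.getStatus());             // 401 without a valid key, 200 otherwise
        System.out.println(response.readEntity(String.class)); // the mocked book entries
        client.close();
    }
}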

 

Scenario # 1

Open Postman and initiate a GET request to the Load Balancer URL that has been shown on the Deployments page.

The request to the API fails with a 401 (Unauthorized) error; this is because the key validation policy was triggered and looked for a header called "api-key", which we did not set while submitting the request.


Scenario # 2

Add a request header with the key "api-key", provide the App key of the registered Books App as the value, and submit the request. This returns the couple of book entries which we mocked as part of the API service implementation in Apiary.


Scenario # 3

Submit 5 requests to this API within a one-minute period; you will see responses from the API retrieving the book entries. When the request is made for the 6th time within 1 minute, the API rate limiting policy that we configured gets triggered and rejects the request as shown below:


Requests submitted after some time (when the invocation rate falls back within the acceptable limit) are accepted and processed as usual, until a policy check fails again.

 

Once the API has been tested, you can publish it to the Developer Portal, from where developers can discover the API and register apps to consume it. The API Platform Cloud Service Management Portal also provides analytics around who is using your API, how APIs are being used, and how many requests are rejected, along with other metrics like request volumes. A discussion of these aspects is beyond the scope of this blog.

 

Conclusion

This blog discussed the concepts of API management and its importance in the context of microservices and cloud native applications. It provided an overview of Oracle API Platform Cloud Service and briefly described its key components. Using a simple use case, we illustrated how APIs can be created, configured, deployed, consumed and monitored using Oracle API Platform Cloud Service. This blog covered only specific aspects of Oracle API Platform CS; please refer to the Oracle API Platform CS documentation for further details.

 

The views expressed in this post are my own and do not necessarily reflect the views of Oracle.