by Marcelo Parisi
Introduction
One of the major challenges that companies face in adopting a cloud computing platform is the secure provisioning of services in the cloud. Oracle API Gateway (OAG) 11g can be a very powerful tool in this sense, since it focuses on service protection, with authentication mechanisms, message encryption, and security/policy functionalities.
In this article, we will see how to create a cloud-based OAG infrastructure, with high-availability and scalability support. Both high-availability and scalability operations will be covered here. We’ll be using virtual machines (VMs) and storage concepts, along with OAG and Oracle Traffic Director (OTD). While a physical load balancer will also be necessary, its configuration is beyond the scope of this article.
The service infrastructure—Oracle SOA Suite, Oracle Service Bus or any other kind of service provider environment that needs to be exposed in a secure manner through the environment we’ll be building—will also not be covered in this article.
This article assumes an environment with Network File System (NFS) v4 and Network Information Service (NIS) or Lightweight Directory Access Protocol (LDAP) in place. If your environment doesn't support them, the article will indicate the changes needed so that you can run on an NFSv3 environment without NIS/LDAP.
No capacity planning or sizing work was done for this article. The number of CPUs, the memory and the filesystem sizes are for demonstration purposes only and should be revisited for a production environment.
OAG and OTD documentation should always be consulted. This document is not intended to replace any of the product’s official documentation.
Finally, please note that OTD is supported only in Exalogic environments.
Infrastructure Architecture
In this article, we’re going to build a brand new infrastructure from scratch to support this environment. We’ll consider two VMs for OTD and, initially, three VMs for OAG, one of them for administration purposes only. The environment infrastructure architecture will resemble the architecture in Figure 1, below:

Figure 1
As you can see, we have high availability on both the OTD layer and the OAG layer. Both layers are scalable either horizontally or vertically. This article discusses scalability only on the OAG layer.
We’re going to create five VMs: three for OAG, running Oracle Linux 5.6, and two for OTD, running Oracle Linux 6.6. I suggest using VM Templates or cloning to make this task easier. The VMs’ configuration should resemble the table in Figure 2, below:

Figure 2
On the storage side, to hold this environment we’re going to create six filesystems that will be shared across the tools. Make sure your VMs have all the packages necessary to run NFS mounts. The storage configuration should resemble the table in Figure 3, below:

Figure 3
On the network side, we’ll be using two virtual network interfaces on each VM, as Figure 2 shows. One of the virtual network interfaces is for the storage network and administrative traffic, and the other one is for the public network. Figure 4, below, depicts network and virtual IP (VIP) configuration:

Figure 4
Some other VIPs must be mapped in order to use the services we are going to provide, as shown in Figure 5, below:

Figure 5
Software Requirements
As mentioned above, the base operating system for this environment will be Oracle Linux. The following downloads must be performed:
Environment Preparation
- If you don’t have NIS/LDAP running, create an oracle user and group on all VMs with the same User ID (UID) and Group ID (GID). For example, on vmoag01 it would look like this:
[root@vmoag01 ~]# groupadd -g 54321 oinstall
[root@vmoag01 ~]# useradd -g 54321 -u 54321 -m oracle
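Since shared NFS storage maps permissions by numeric IDs, it is worth confirming that the oracle user resolves to the same UID/GID everywhere. A quick check could look like this (a sketch; the hostnames are the ones used in this article, and it assumes root SSH access between the VMs):

```shell
# Verify the oracle user has the same UID/GID on every VM.
# Hostnames below are from this environment; adjust as needed.
for host in vmoagadm vmoag01 vmoag02 vmotd01 vmotd02; do
    ssh root@"$host" 'getent passwd oracle | cut -d: -f1,3,4'
done
# Every line should read: oracle:54321:54321
```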
- Create a directory to hold the environment, and give write permissions to the oracle user. These steps should be done on all VMs. As an example, on vmotd02, it would look like this:
[root@vmotd02 ~]# mkdir /u01
[root@vmotd02 ~]# chown oracle:oinstall /u01
- As the oracle user created earlier, create the commons directory to hold the installation files. It should be created the same way across the entire environment. For example, on vmoagadm, it should look like this:
[oracle@vmoagadm ~]$ mkdir -p /u01/oracle/commons
- Create the directory structure for OAG. On vmoagadm, vmoag01 and vmoag02, perform these steps:
[oracle@vmoagadm ~]$ mkdir -p /u01/oracle/logs
[oracle@vmoagadm ~]$ mkdir -p /u01/oracle/oag
- Map shared storage to those directories. Edit /etc/fstab as root and add the lines below (assuming my storage is on SN01.priv.parisi.spo) on vmoagadm, vmoag01 and vmoag02:
SN01.priv.parisi.spo:/export/COMMONS /u01/oracle/commons nfs rw,bg,hard,nointr,async,noatime,tcp,vers=4 0 0
SN01.priv.parisi.spo:/export/OAG/oag /u01/oracle/oag nfs rw,bg,hard,nointr,async,noatime,tcp,vers=4 0 0
SN01.priv.parisi.spo:/export/OAG/logs /u01/oracle/logs nfs rw,bg,hard,nointr,async,noatime,tcp,vers=4 0 0
- Mount the filesystems on all three VMs, with the following command as root:
[root@vmoagadm ~]# mount -a
[root@vmoagadm ~]# df -h
- To make sure the OAG instances communicate with each other through the private network, add the lines below to /etc/hosts on vmoagadm, vmoag01 and vmoag02:
172.16.14.22 vmoagadm
172.16.14.23 vmoag01
172.16.14.24 vmoag02
This addition is necessary because, when installing, an instance announces itself to the Node Manager (and Node Manager back to the instance) with its host name. Since the environment resolves only fully qualified domain names (FQDNs), we need to add those lines to the VMs’ hosts so they can reach the instances through the private network.
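With the entries in place, each short hostname should resolve to its private address. A quick sanity check (a sketch using the names and addresses from the table above):

```shell
# Confirm the short names resolve via /etc/hosts to the 172.16.14.x
# private network rather than through DNS
for h in vmoagadm vmoag01 vmoag02; do
    getent hosts "$h"
done
```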
- On vmotd01 and vmotd02, as the oracle user, perform the steps below to create the directory structure for the OTD environment:
[oracle@vmotd01 ~]$ mkdir -p /u01/oracle/logs
[oracle@vmotd01 ~]$ mkdir -p /u01/oracle/otd
[oracle@vmotd01 ~]$ mkdir -p /u01/oracle/middleware
- Map the shared storage to those directories. Edit /etc/fstab as root and add the lines below (assuming storage is still on SN01.priv.parisi.spo) on vmotd01 and vmotd02:
SN01.priv.parisi.spo:/export/COMMONS /u01/oracle/commons nfs rw,bg,hard,nointr,async,noatime,tcp,vers=4 0 0
SN01.priv.parisi.spo:/export/OTD/otd /u01/oracle/otd nfs rw,bg,hard,nointr,async,noatime,tcp,vers=4 0 0
SN01.priv.parisi.spo:/export/OTD/logs /u01/oracle/logs nfs rw,bg,hard,nointr,async,noatime,tcp,vers=4 0 0
SN01.priv.parisi.spo:/export/OTD/middleware /u01/oracle/middleware nfs rw,bg,hard,nointr,async,noatime,tcp,vers=4 0 0
- Mount the filesystems on both VMs, with the following command as root:
[root@vmotd01 ~]# mount -a
[root@vmotd01 ~]# df -h
Note: The commons directory is shared across all environments. Use the wsize and rsize parameters in fstab to fine-tune your NFS performance. Try to use jumbo frames on your network interface by changing the maximum transmission unit (MTU). If your environment is not ready for NFSv4, change vers=4 to vers=3 in the fstab file for all NFS mount points.
Remember: for NFSv4 to work, your environment must have a NIS or LDAP service in place.
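As an illustration of the tuning suggestions above (the values are assumptions for demonstration, not tested recommendations), an NFSv3 fstab entry with explicit transfer sizes, plus an MTU change for jumbo frames, could look like this:

```shell
# Example fstab entry: NFSv3 with explicit transfer sizes (rsize/wsize
# values are illustrative; tune them for your storage and network):
# SN01.priv.parisi.spo:/export/COMMONS /u01/oracle/commons nfs rw,bg,hard,nointr,async,noatime,tcp,vers=3,rsize=65536,wsize=65536 0 0

# Enable jumbo frames on the storage interface as root (the interface
# name eth1 is an assumption; check which NIC carries storage traffic):
ip link set dev eth1 mtu 9000
```

Jumbo frames only help if every hop on the storage network (switches included) supports the larger MTU.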
- To run OTD, install the packages below on vmotd01 and vmotd02:
`compat-libcap1.x86_64`
`compat-libstdc++-33.x86_64`
`compat-libstdc++-33.i686`
`libgcc.i686`
`libgcc.x86_64`
`glibc.i686`
`glibc.x86_64`
`libstdc++-devel.i686`
`libstdc++.x86_64`
`libstdc++-devel.x86_64`
`sysstat.x86_64`
`gcc.x86_64`
`gcc-c++.x86_64`
`libaio.i686`
`libaio.x86_64`
`libaio-devel.i686`
`libaio-devel.x86_64`
`glibc-devel.i686`
`glibc-devel.x86_64`
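Assuming the VMs have access to a yum repository, the whole list can be installed in a single transaction (a sketch; package availability depends on your repository configuration):

```shell
# Install all OTD prerequisite packages in one pass (run as root)
yum install -y \
    compat-libcap1.x86_64 compat-libstdc++-33.x86_64 compat-libstdc++-33.i686 \
    libgcc.i686 libgcc.x86_64 glibc.i686 glibc.x86_64 \
    libstdc++.x86_64 libstdc++-devel.i686 libstdc++-devel.x86_64 \
    sysstat.x86_64 gcc.x86_64 gcc-c++.x86_64 \
    libaio.i686 libaio.x86_64 libaio-devel.i686 libaio-devel.x86_64 \
    glibc-devel.i686 glibc-devel.x86_64
```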
- Finally, move the downloaded files below to /u01/oracle/commons:
ofm_oag_linux_11.1.2.3.0_disk1_1of1.zip
ofm_otd_linux_11.1.1.7.0_64_disk1_1of1.zip
The infrastructure is now ready to receive the new environment.
Installing OAG: vmoagadm
To install OAG on the admin virtual machine, as Oracle user, first uncompress the file:
[oracle@vmoagadm ~]$ mkdir -p /u01/oracle/commons/OAG
[oracle@vmoagadm ~]$ cd /u01/oracle/commons/OAG/
[oracle@vmoagadm OAG]$ unzip ../ofm_oag_linux_11.1.2.3.0_disk1_1of1.zip
Run the installer file:
[oracle@vmoagadm OAG]$ cd Linux/64bit/
[oracle@vmoagadm 64bit]$ chmod +x OAG_11.1.2.3.0-linux-x86-64-installer.run
[oracle@vmoagadm 64bit]$ ./OAG_11.1.2.3.0-linux-x86-64-installer.run --mode unattended --enable-components apigateway,analytics,policystudio,configurationstudio,apitester --prefix /u01/oracle/oag/vmoagadm --configureGatewayQuestion 0
After installation is complete, vmoagadm will be installed at /u01/oracle/oag/vmoagadm. To create a group for our instances in the web admin interface, go to https://vmoagadm.priv.parisi.spo:8090/. The default username is admin, and the default password is changeme. In the topology area, click Menu and select Create New Group, as shown in Figure 6, below:

Figure 6
Name the new group oagcluster:

Figure 7
The topology should now resemble Figure 8, below:

Figure 8
Installing OAG: vmoag01
To install OAG on the first node and create an OAG instance inside the oagcluster group, run the installer with the following parameters:
[oracle@vmoag01 ~]$ cd /u01/oracle/commons/OAG/Linux/64bit/
[oracle@vmoag01 64bit]$ ./OAG_11.1.2.3.0-linux-x86-64-installer.run --mode unattended --enable-components apigateway --prefix /u01/oracle/oag/vmoag01 --firstInNewDomain 0 --rnmConnectionUrl https://vmoagadm.priv.parisi.spo:8090 --gwName vmoag01inst1 --gwGroup oagcluster --gwMgmtPort 8085 --gwServicesPort 8080
The new instance will be called vmoag01inst1. The binary files will be located at /u01/oracle/oag/vmoag01. The instance management port is 8085; the instance traffic port is 8080.
Check the instance status by going to http://vmoag01.pub.parisi.spo:8080/healthcheck:
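The same check can be scripted with curl, which becomes handy once more instances exist (hostname and port are the ones configured above):

```shell
# Query the OAG healthcheck service from the command line; a healthy
# instance answers with <status>ok</status> in the response body
curl -s http://vmoag01.pub.parisi.spo:8080/healthcheck
```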

Figure 9
Installing OAG: vmoag02
To install OAG on the second node and create our OAG instance inside oagcluster in our topology, we need to run the installer with the following parameters:
[oracle@vmoag02 ~]$ cd /u01/oracle/commons/OAG/Linux/64bit/
[oracle@vmoag02 64bit]$ ./OAG_11.1.2.3.0-linux-x86-64-installer.run --mode unattended --enable-components apigateway --prefix /u01/oracle/oag/vmoag02 --firstInNewDomain 0 --rnmConnectionUrl https://vmoagadm.priv.parisi.spo:8090 --gwName vmoag02inst1 --gwGroup oagcluster --gwMgmtPort 8085 --gwServicesPort 8080
The new instance will be called vmoag02inst1. The binary files will be located at /u01/oracle/oag/vmoag02. The instance management port is 8085; the instance traffic port is 8080.
Check the instance status by going to http://vmoag02.pub.parisi.spo:8080/healthcheck:

Figure 10
Checking OAG Infrastructure
To confirm that the installation was fine, connect to the administration interface again and check the topology. The console is at https://vmoagadm.priv.parisi.spo:8090/. The default login is admin; the default password is changeme.
The topology should look like this:

Figure 11
Installing OTD Binaries
We’ll install OTD binaries once only on the shared filesystem that will be used by both VMs. To install OTD binaries, on vmotd01, as Oracle user, first uncompress the file:
[oracle@vmotd01 ~]$ mkdir -p /u01/oracle/commons/OTD
[oracle@vmotd01 ~]$ cd /u01/oracle/commons/OTD/
[oracle@vmotd01 OTD]$ unzip ../ofm_otd_linux_11.1.1.7.0_64_disk1_1of1.zip
Now run the installer file:
[oracle@vmotd01 OTD]$ cd Disk1/
[oracle@vmotd01 Disk1]$ chmod +x ./runInstaller
[oracle@vmotd01 Disk1]$ ./runInstaller
Only one step needs attention in the installation process.
Installation should be done on /u01/oracle/middleware/11.1.1.7.0/trafficdirector_Home_1, as shown in Figure 12 below:

Figure 12
Creating OTD Administration Nodes
After installing OTD binaries, set up the administration nodes to create the environment. To create the administration node on vmotd01 and start it, perform the commands below (you’ll be asked for an admin password: please remember it as you’ll need to use it again):
[oracle@vmotd01 ~]$ cd /u01/oracle/middleware/11.1.1.7.0/trafficdirector_Home_1/bin/
[oracle@vmotd01 bin]$ ./tadm configure-server --user=otd_admin --port=8989 --host=vmotd01.priv.parisi.spo --instance-home=/u01/oracle/otd/vmotd01
[oracle@vmotd01 bin]$ cd /u01/oracle/otd/vmotd01/admin-server/bin/
[oracle@vmotd01 bin]$ ./startserv
The administration node will be created at /u01/oracle/otd/vmotd01, with port 8989 and hostname vmotd01.priv.parisi.spo. The administration user will be otd_admin. If you need to access the web admin interface, go to https://vmotd01.priv.parisi.spo:8989/.
To create and start the administration node on vmotd02, perform the following commands (use the admin password created in the earlier step):
[oracle@vmotd02 ~]$ cd /u01/oracle/middleware/11.1.1.7.0/trafficdirector_Home_1/bin/
[oracle@vmotd02 bin]$ ./tadm configure-server --user=otd_admin --port=8989 --host=vmotd01.priv.parisi.spo --admin-node --node-port=8989 --instance-home=/u01/oracle/otd/vmotd02 --node-host=vmotd02.priv.parisi.spo
[oracle@vmotd02 bin]$ cd /u01/oracle/otd/vmotd02/admin-server/bin/
[oracle@vmotd02 bin]$ ./startserv
The administration node will be created at /u01/oracle/otd/vmotd02, with port 8989 and hostname vmotd02.priv.parisi.spo.
Setting Up OTD
Set up OTD using the command line tool. It is possible to perform all these steps on the web admin tool, but the configuration process is faster when the command line tool is used. To set up OTD, first get into its command line tool. To do that, run the following commands on vmotd01:
[oracle@vmotd01 ~]$ cd /u01/oracle/middleware/11.1.1.7.0/trafficdirector_Home_1/bin/
[oracle@vmotd01 bin]$ ./tadm --user=otd_admin --host=vmotd01.priv.parisi.spo --port=8989
Inside the command line tool, create a configuration file for our OTD environment. By default, when creating the configuration, OTD also creates an http-listener, an origin-server-pool and a virtual server. They’ll be deleted later. But, for now, to create the default-config, run the following command:
tadm> create-config --listener-port=8090 --server-name=services.parisi.spo --origin-server=vmotd01.pub.parisi.spo:8080,vmotd02.pub.parisi.spo:8080 default-config
Now create an origin-server-pool pointing to the OAG instances. Set its load balancing algorithm as “round robin”:
tadm> create-origin-server-pool --config=default-config --type=http --origin-server=vmoag01.pub.parisi.spo:8080,vmoag02.pub.parisi.spo:8080 oag-server-pool
tadm> set-origin-server-pool-prop --config=default-config --origin-server-pool=oag-server-pool load-distribution=round-robin
Now we’re going to adjust some probing parameters used by OTD to check if the Oracle API Gateway is live. As mentioned, OAG provides a healthcheck service; set up OTD to check for that service by changing the HTTP method, the URL of the service and the content expected in the results:
tadm> set-health-check-prop --config=default-config --origin-server-pool=oag-server-pool request-method=GET
tadm> set-health-check-prop --config=default-config --origin-server-pool=oag-server-pool request-uri=/healthcheck
tadm> set-health-check-prop --config=default-config --origin-server-pool=oag-server-pool response-body-match=<status>ok</status>
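To see what OTD now probes for, the matching logic can be reproduced locally: a probe succeeds only when the healthcheck body contains the configured string. The snippet below is a self-contained sketch of that check, not OTD's actual implementation (a real probe issues GET /healthcheck; here the reply is hard-coded):

```shell
# Simulate OTD's response-body-match check against a healthcheck reply
response='<status>ok</status>'
if printf '%s' "$response" | grep -q '<status>ok</status>'; then
    echo "origin server marked healthy"
else
    echo "origin server marked offline"
fi
```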
After adjusting the probing parameters, create a new http-listener on the OTD configuration, to listen to the requests on port 8080:
tadm> create-http-listener --config=default-config --listener-port=8080 --server-name=services.parisi.spo --default-virtual-server-name=default-config services-listener
Now, create a virtual-server on the OTD configuration, which is responsible for the domain name services.parisi.spo, the domain name that will point to the OAG environment. After that, point http-listeners to the created virtual-server:
tadm> create-virtual-server --config=default-config --http-listener=services-listener --host-pattern=services.parisi.spo --log-file=/u01/oracle/logs/services.parisi.spo.log --origin-server-pool=oag-server-pool services.parisi.spo
tadm> set-http-listener-prop --config=default-config --http-listener=services-listener default-virtual-server-name=services.parisi.spo
tadm> set-virtual-server-prop --config=default-config --vs=default-config http-listener-name=services-listener
tadm> set-http-listener-prop --config=default-config --http-listener=http-listener-1 default-virtual-server-name=services.parisi.spo
Next, enable logging, both for OTD itself and for the new virtual server, to help with troubleshooting:
tadm> enable-access-log --config=default-config --vs=services.parisi.spo --file=/u01/oracle/logs/services.parisi.spo-access.log
tadm> set-log-prop --config=default-config log-file=/u01/oracle/logs/otd-error.log
tadm> enable-access-log --config=default-config --file=/u01/oracle/logs/otd-access.log
Now, delete everything created by the create-config command that won’t be used:
tadm> delete-virtual-server --config=default-config default-config
tadm> delete-origin-server-pool --config=default-config origin-server-pool-1
tadm> delete-http-listener --config=default-config http-listener-1
To accept traffic, create OTD instances. As described in the architecture, there will be two instances:
tadm> create-instance --config=default-config vmotd01.priv.parisi.spo
tadm> create-instance --config=default-config vmotd02.priv.parisi.spo
Finally, start the OTD instances:
tadm> start-instance --config=default-config vmotd01.priv.parisi.spo
tadm> start-instance --config=default-config vmotd02.priv.parisi.spo
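Once the instances are up, traffic through OTD can be checked from any machine that resolves services.parisi.spo (a sketch; with round robin in place, repeated requests are spread across the OAG instances):

```shell
# Send a few requests through the OTD listener on port 8080; each one
# should return the OAG healthcheck body from one of the origin servers
for i in 1 2 3 4; do
    curl -s http://services.parisi.spo:8080/healthcheck
    echo
done
```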
Virtualizing Services
To expose services through OAG, virtualize them inside the topology. To demonstrate that, I’ll use a dummy service I have running in my environment, but you can use any service you have.
To virtualize our service, we’ll open OAG Policy Studio. You can launch OAG Policy Studio from vmoagadm. The tool is installed at /u01/oracle/oag/vmoagadm/oagpolicystudio. Run OAG Policy Studio pointing to vmoagadm.priv.parisi.spo on port 8090 using SSL. Default login is admin and default password is changeme. Figure 13 gives an example:

Figure 13
Edit the Active Configuration for oagcluster as shown in Figure 14, below:

Figure 14
Once inside OAG Policy Studio, click on Business Services in the OAG tree on the left side and click on Virtualise a Service, as shown in Figure 15:

Figure 15
The WSDL URL for the WebService should go through the internal Load Balancer/OTD/Oracle HTTP Server, as in Figure 16:

Figure 16
Now, select a user name for the imported WSDL and a Comment, as shown in Figure 17:

Figure 17
Next, select the WSDL Operations that will be virtualized:

Figure 18
In the following screen, WS-Policy Options, just click Next, as shown in Figure 19. (This is related to the service security and policy side of OAG, and won’t be covered here.)

Figure 19
Finally, select where to expose the virtualized service. We’ll select Default Services, as Figure 20 shows:

Figure 20
You should perform these steps for each service that is to be exposed through OAG. In my case, I have two virtualized services for demonstration purposes.
Testing
To test our environment, I suggest you use a load testing tool, pointing to services.parisi.spo, which is the host for our physical load balancer fronting OTD.
I have set up a simple load test tool to send transactions to the load balancer. You can use API Explorer (provided with OAG) to generate some sample request XMLs for testing purposes.
I won’t get into details about performing the tests—just do it the same way you test any other service. The main difference here is that you need to point the test to the balancer. Below, we’ll discuss the results of the tests I’ve run in the environment we’ve built so far.
As shown in Figure 21, in 10 minutes I could send ~127k transactions to my load balancer, and they hit our OAG environment:

Figure 21
Those transactions were handled by our oagcluster, as shown in Figure 22:

Figure 22
Figure 23 demonstrates that vmoag01inst1, our first OAG instance, has handled ~63k transactions, which is approximately half of our transaction total:

Figure 23
Finally, Figure 24 shows that our second instance, vmoag02inst1, has also handled ~63k transactions, approximately half of the total, meaning we have good load balancing within our cluster:

Figure 24
If one of the instances goes down, OTD will detect it, and all the traffic will be redirected to the other instances available. This procedure should be transparent, as shown in Figure 25 below, with vmoag02inst1 turned off, and no failures or exceptions:

Figure 25
Scaling Out
Scaling out the environment means scaling it by adding more computers or VMs to the environment. OAG is highly scalable and, in fact, this architecture we’ve built is very simple to scale.
We’ll create a new VM, based on vmoag01, which we’ll call vmoag03. It needs the same configuration so we won’t impact the SLA of our services: 2 vCPUs, 2 vNICs, 1536 MB of RAM and 20 GB of disk. This VM, like the others supporting OAG, should run Oracle Linux 5.6.
For networking, we’ll have vmoag03.priv.parisi.spo as 172.16.14.25 and vmoag03.pub.parisi.spo as 10.0.0.25.
First, we need to review the Environment Preparation section and prepare this VM accordingly. After everything is in place, including the required groups, users and shared storage mounts, we must add the following entries to its /etc/hosts:
172.16.14.22 vmoagadm
172.16.14.23 vmoag01
172.16.14.24 vmoag02
172.16.14.25 vmoag03
We’ll also need to add this line to /etc/hosts on vmoagadm, vmoag01 and vmoag02:
172.16.14.25 vmoag03
Now, we’ll execute OAG installer on vmoag03:
[oracle@vmoag03 ~]$ cd /u01/oracle/commons/OAG/Linux/64bit/
[oracle@vmoag03 64bit]$ ./OAG_11.1.2.3.0-linux-x86-64-installer.run --mode unattended --enable-components apigateway --prefix /u01/oracle/oag/vmoag03 --firstInNewDomain 0 --rnmConnectionUrl https://vmoagadm.priv.parisi.spo:8090 --gwName vmoag03inst1 --gwGroup oagcluster --gwMgmtPort 8085 --gwServicesPort 8080
The new instance is called vmoag03inst1 and the binary files are located at /u01/oracle/oag/vmoag03. The management port is 8085 and the instance traffic port is 8080.
After that, we’ll check our topology on the OAG web admin interface by going to https://vmoagadm.priv.parisi.spo:8090/ in the web browser.

Figure 26
As seen in Figure 26, the instance is already there, but there is no traffic coming to it. Now we just need to add the instance to our OTD origin-server-pool. To do that, we’ll first execute the command line tool on vmotd01:
[oracle@vmotd01 ~]$ cd /u01/oracle/middleware/11.1.1.7.0/trafficdirector_Home_1/bin/
[oracle@vmotd01 bin]$ ./tadm --user=otd_admin --host=vmotd01.priv.parisi.spo --port=8989
Inside the command line tool, let’s add our new instance to the oag-server-pool:
tadm> create-origin-server --config=default-config --origin-server-pool=oag-server-pool vmoag03.pub.parisi.spo:8080
tadm> deploy-config default-config
We should see traffic already coming to our new instance, as shown in Figure 27, below:

Figure 27
We’re done! We now have a scaled environment. Note that I’ve scaled this environment without stopping the testing tools; in other words, I scaled it with transactions running through, as if it were a live production environment, and it was completely transparent. No failures or exceptions were seen.
Scaling Up
Scaling up the environment means adding more OAG instances to the VMs we already have. This procedure should be even simpler than scaling out.
The table in Figure 28 shows the new instances that will be created:

Figure 28
To achieve this configuration, we need to create the instances through OAG’s web admin interface, available at https://vmoagadm.priv.parisi.spo:8090/. To create vmoag01inst2, click on oagcluster and select New Gateway Server, as in Figure 29, below:

Figure 29
Now we just need to fill in the fields for vmoag01inst2, as in Figure 30:

Figure 30
For vmoag02inst2, it should look like Figure 31:

Figure 31
Finally, for vmoag03inst2 it should resemble Figure 32:

Figure 32
To start the instances, click on each instance and select Start, as in Figure 33:

Figure 33
After starting all the instances created, our topology should resemble Figure 34:

Figure 34
As a last step, we just need to update OTD to send traffic to those new instances. To do that, we’ll log into vmotd01 and run command line tool:
[oracle@vmotd01 ~]$ cd /u01/oracle/middleware/11.1.1.7.0/trafficdirector_Home_1/bin/
[oracle@vmotd01 bin]$ ./tadm --user=otd_admin --host=vmotd01.priv.parisi.spo --port=8989
Now we update our oag-server-pool with the new instances, and we should see traffic coming to them:
tadm> create-origin-server --config=default-config --origin-server-pool=oag-server-pool vmoag01.pub.parisi.spo:9090
tadm> create-origin-server --config=default-config --origin-server-pool=oag-server-pool vmoag02.pub.parisi.spo:9090
tadm> create-origin-server --config=default-config --origin-server-pool=oag-server-pool vmoag03.pub.parisi.spo:9090
tadm> deploy-config default-config
As we can see in Figure 35, the traffic is already coming to one of our new instances:

Figure 35
And again, as with the scale out process, this process was done on the fly, with transactions going through as if it were a production environment. No errors or exceptions were raised.
Conclusion
Companies have been facing huge challenges—notably, security—in the effort to adapt to the new Cloud Computing paradigm. OAG is a robust tool for service provisioning and security in a Cloud environment, but building, creating and managing an OAG environment can turn into a set of really complex activities.
OAG is a highly scalable, high-availability tool. This article has documented a very simple way to build a complete OAG environment from scratch, and has demonstrated that the environment thus built is fault tolerant. Finally, the article has shown that the environment can be scaled right away, both horizontally and vertically, and with transactions going through, without raising any failures or exceptions on the running environment.
Note: This document is not official Oracle documentation and is not intended to replace any of the products’ official documentation. It is intended as a simple guideline for building an environment. Official product documentation should always be reviewed.
About the Author
Marcelo Parisi is an Oracle IT Architecture Certified Specialist working as a Senior Consultant for Oracle Consulting Services in Brazil, where he is a member of the Tech Infrastructure team. He works as an infrastructure architect and consultant for the Oracle Fusion Middleware product family. His main roles include architecture design, implementation and performance tuning of FMW infrastructure. Marcelo's fields of expertise include Oracle Reference Architecture, Oracle WebLogic Server, Oracle SOA Suite and Oracle WebCenter.
https://www.twitter.com/feitnomore
https://www.linkedin.com/in/marceloparisi
https://www.facebook.com/marcelo.f.pari