
Oracle Cloud Integration Part 3: Cloud to On-Premises [Tech Article]

gugalnikov · Apr 7 2016 — edited Oct 24 2017

Part 3 in the Oracle Cloud Integration article series by Oracle ACE Director Joel Perez and Oracle ACE Arturo Viveros focuses on cloud to on-premise integration and the implications and considerations of establishing communication between cloud-based applications and an on-premise environment.


By Joel Perez and Arturo Viveros

  • Part 3: Cloud to On-Premises

On-premises to cloud integration is at the top of the list for most organizations looking to establish a cloud-driven IT strategy. Moreover, a lack of clarity or knowledge about how to solve such scenarios can become a strong inhibitor of sustained adoption of cloud-based solutions.

[Image: image1-23.jpg]

There are several implications when attempting to establish communication--which will often be bi-directional--between cloud-based applications and an on-premises environment, where the “core” of the IT ecosystem commonly resides. So, before going any further, let’s take a look at the most relevant of these.

Security

Security is always the prime concern when we are talking about a public cloud. Information will be managed and protected according to the technological elements at hand, so before selecting a cloud provider, it is paramount to identify the sensitive data and the risks and conditions from which we wish to protect it.

The nature of a company’s information can be classified as follows:

  • Critical: Indispensable for the business’s operation
  • Valuable: An important asset to the organization
  • Sensitive: Only authorized access must be allowed; such data may even be protected by law

Information security is usually an enterprise-wide discipline that touches upon availability, communication, risk management, integrity, confidentiality, compliance and more. When correctly applied, such security will ensure that the organization’s information will always comply with the following essential characteristics:

  • Confidentiality: Disallows information delivery or broadcasting to unauthorized recipients, whether those are persons or systems. Access will be granted only to well-identified entities that possess an appropriate clearance level.
  • Integrity: Protects data from unauthorized modifications. Information must be delivered in a consistent and true manner, without any tampering or alteration by third persons or processes.
  • Availability: Information must be available to be accessed every time it is required by any party, person or system with the right level of authorization.

In order to guarantee information availability at the highest possible rate, systems and services must be prepared to prevent and/or recover from denial-of-service attacks.
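To make the integrity requirement concrete, here is a minimal sketch of tamper detection using an HMAC over a message exchanged across trust boundaries. The pre-shared key and the message payload are assumptions for illustration, not a specific Oracle mechanism:

```python
import hashlib
import hmac

def sign_message(key: bytes, message: bytes) -> str:
    """Produce an HMAC-SHA256 tag so the receiver can detect tampering."""
    return hmac.new(key, message, hashlib.sha256).hexdigest()

def verify_message(key: bytes, message: bytes, tag: str) -> bool:
    """Recompute the tag and compare in constant time."""
    return hmac.compare_digest(sign_message(key, message), tag)

key = b"shared-secret"  # hypothetical pre-shared key
msg = b'{"employee_id": 1234, "action": "hire"}'
tag = sign_message(key, msg)

assert verify_message(key, msg, tag)             # untouched message passes
assert not verify_message(key, msg + b" ", tag)  # any alteration is detected
```

In a real integration, confidentiality (encryption) and authentication (certificates) would be layered on top of this; the point here is only that integrity checks let the receiving side reject altered payloads.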

With all of this in mind, it is important to note that when integrating on-premises systems with the cloud, organizations are actually extending their existing trust boundaries in order to accommodate the information flow:

[Image: image2-25.jpeg]

Once again, this becomes particularly relevant when dealing with public clouds, where the multi-tenancy trait, which is so distinctive of this cloud deployment model, can potentially cause the overlap of multiple trust boundaries pertaining to different organizations. In such a scenario, additional security measures and design patterns must be put in place to prevent critical security breaches, which can eventually compromise cloud consumers.

SLAs and Consistency

When extending an organization’s technological solutions portfolio by integrating cloud-based applications with on-premises systems, we’ll probably have to deal with a set of distinct challenges regarding service level agreements and the consistency of our platforms. It's important to do so in order to avoid stumbling into some of the most common pitfalls of cloud integration:

  • Most transactions will happen at completely different speeds in one universe and another, due to a number of factors such as: infrastructure and processing power, throughput, resource pooling, “elasticity” of cloud solutions as opposed to the “rigid” and more deterministic nature of on-premises platforms, underlying technology, etc.
  • SLAs and thresholds are usually very detailed and increasingly aggressive for cloud-based solutions. On the other hand, traditional on-premises applications don’t always put a lot of emphasis on (or can completely overlook) SLAs, and general performance requirements are commonly moderate to relaxed. When designing and developing a cloud-to-on-premises integration, these differences must be accounted for to avoid unattainable expectations or a total lack of insight into the way the system will work, which may lead to unexpected results.
  • Change management is highly automated in the cloud, including constant cycles for upgrading, patching and improving the environment. This is just not the case for the vast majority of on-premises systems, which are set to evolve at a much slower pace, if any. Obviating the implementation of sound change management strategies to keep the environment consistent, and/or failing to identify and remove negative types of coupling while designing integrations, can lead to disastrous outcomes related to this particular challenge.
  • Availability and resiliency may also be totally disparaged when it comes to comparing cloud-based and on-premises solutions. This is usually seen as a benefit of incorporating cloud computing into integrations; however, architects must have a clear understanding of these elements in order to take proper advantage of them.
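The SLA gap described above can be made concrete with a quick back-of-the-envelope calculation; the 99.95% and 99% availability figures below are assumed for illustration, not taken from any particular Oracle SLA:

```python
def allowed_downtime_minutes(availability_pct: float,
                             period_minutes: int = 30 * 24 * 60) -> float:
    """Downtime budget per period (default: a 30-day month) implied by an availability SLA."""
    return period_minutes * (1 - availability_pct / 100)

# An aggressive cloud SLA vs. a relaxed on-premises target:
print(round(allowed_downtime_minutes(99.95), 1))  # 21.6  -> roughly twenty minutes a month
print(round(allowed_downtime_minutes(99.0), 1))   # 432.0 -> over seven hours a month
```

A difference of this magnitude between the two ends of an integration is exactly the kind of mismatch that must be surfaced at design time rather than discovered in production.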

[Image: image3-28.jpg]

Having these issues addressed at design time is key for the integrations to work correctly and produce the desired results, as well as to ensure the safety and stability of both ends and the information flows they will rely upon.

Monitoring/Traceability

By providing integration between the cloud and our on-premises environment, we are also opening the floodgates for huge flows of information exchanged both ways, which need to be supervised and measured:

[Image: image4-31.jpg]

As stated in this series' first chapter, one of cloud computing’s primary characteristics should always be “measured usage” of resources. In many cases, cloud subscriptions will follow a pay-as-you-go charge-back model, which makes it even more important to be able to quantify processing volumes and resource consumption per unit of time.
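A minimal sketch of what such measured usage enables is a per-tenant charge-back calculation. The metrics, tenants and unit prices below are all hypothetical, chosen only to show the shape of the computation:

```python
from collections import defaultdict

def chargeback(usage_events, rate_per_unit):
    """Aggregate metered usage per tenant and price it pay-as-you-go."""
    totals = defaultdict(float)
    for tenant, metric, units in usage_events:
        totals[tenant] += units * rate_per_unit[metric]
    return dict(totals)

# Hypothetical metering records: (tenant, metric, units consumed)
events = [
    ("xyz-corp", "api_calls", 10_000),
    ("xyz-corp", "gb_transferred", 25),
    ("acme", "api_calls", 2_000),
]
rates = {"api_calls": 0.0001, "gb_transferred": 0.09}  # assumed unit prices
bill = {tenant: round(total, 2) for tenant, total in chargeback(events, rates).items()}
print(bill)  # {'xyz-corp': 3.25, 'acme': 0.2}
```

Without this kind of quantification on both sides of the integration, a pay-as-you-go subscription becomes impossible to reconcile against actual consumption.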

Now, having reviewed and understood these concerns, let's turn to the Oracle stack and see if we can find adequate tools and capabilities for the kind of scenario we've been describing:

[Image: image5.jpg]

It’s becoming clear that we may need to dig into many different paradigms, including BPM, SOA, API Management, and Data Integration, in order to come up with the right mix for our cloud integration architecture. That’s not to say that the design should be intricate and complicated just for the sake of it; simplicity is always better, and here’s where a set of well-structured cloud integration design patterns will come in handy.

The picture above is probably too generic and there are some important pieces missing from it, so let’s take a look at more detailed example diagrams:

[Image: image6.jpg]

[Image: image7.jpg]

In the last two images we can begin to appreciate some of the working pieces of an architecture that is set up to deal with most of the typical concerns reviewed earlier: Identity Management, Authorization, Endpoint Security, Asset Management, Bulk Loads and much more. The Enterprise Gateway, for example, could be a valuable piece of the puzzle as a first line of defense that can also deal with:

  • Protocol bridging
  • XML processing
  • Security policy enforcement
  • Usage monitoring
  • Resource consumption metering
  • “API” publishing
  • Preventing and reacting to several kinds of attacks, such as denial of service
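As an illustration of the usage-monitoring and throttling duties in the list above, here is a minimal token-bucket rate limiter of the kind a gateway might apply per consumer. This is a generic sketch, not Oracle API Gateway code:

```python
import time

class TokenBucket:
    """Simple token-bucket limiter: absorbs short bursts up to `capacity`
    requests, then throttles until tokens refill at `refill_per_sec`."""

    def __init__(self, capacity: int, refill_per_sec: float):
        self.capacity = capacity
        self.tokens = float(capacity)
        self.refill_per_sec = refill_per_sec
        self.last = time.monotonic()

    def allow(self) -> bool:
        # Refill proportionally to elapsed time, capped at capacity.
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill_per_sec)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(capacity=5, refill_per_sec=1.0)
results = [bucket.allow() for _ in range(7)]
# The first 5 burst requests pass; the rest are throttled until tokens refill.
```

Placed at the trust boundary, this kind of control is one concrete way a gateway blunts denial-of-service attempts while still admitting legitimate bursts.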

For increasingly complex scenarios that require process/service orchestration, middleware technology such as Oracle SOA Suite can provide the necessary capabilities on the on-premises side, where most of these activities will be taking place. Additionally, different monitoring and/or governance tools (e.g., EM Cloud Control, API Platform, etc.) can also be leveraged in order to provide the required level of insight and control into the designed cloud integration solution.

This is not to say that in every case the customer will require a full stack of Oracle technology in order to construct a successful integration. The idea is rather to explore and identify the many options available and map them correctly to practical concerns and design patterns so we can come up with robust, efficient and scalable solutions.

With that purpose in mind, the following case study, very similar to the one discussed in the previous installment of this series, will attempt to depict and solve a scenario requiring cloud-to-on-premises integration:

  • “XYZ Corp,” in line with its aggressive cloud strategy, has been acquiring a number of subscriptions on the Oracle Cloud.
  • The most used application at the moment is Oracle Taleo, which has allowed XYZ to streamline and centrally manage their talent and human resources.
  • XYZ has a long-running Oracle EBS implementation (on premises), where the HR module supports a number of administrative tasks (e.g., hirings and any other personnel moves).
  • XYZ also uses Oracle SOA Suite for some on-premises integrations between the multiple systems present in its back-end.

The integration scenario to be resolved is the following:

  1. Since Oracle EBS Human Resources Management System (HRMS) has traditionally been used as the company’s HR system, every personnel move is captured, authorized and managed through it.
  2. Every time any of these moves opens up a job post, it has to be registered in Taleo in order to start the recruitment process, which includes online application, filtering, evaluation and candidate selection.
  3. Once the candidate has been selected, the business process turns back to EBS, where the new employee has to be loaded into the system. Once this happens, it will trigger other activities and business processes necessary to complete the employee’s onboarding.
  4. Currently, XYZ depends on file-based integration for this process, where a human operator exports data from Taleo using the native client and puts the files into a shared physical location, which EBS polls on predetermined intervals. Upon errors and/or omissions, which tend to be frequent, a service order is created so an EBS operator can be assigned to manually resolve the situation.
  5. The process is evidently inefficient even when it runs without error, and in case of a failure, a lot of time will be lost before the problems are identified and fixed. Further, the job postings on Taleo usually lack adequate timing and information and can easily be mixed up.

How do we solve this problem? Let’s take a detailed look at a proposed solution:

[Image: image7-40.png]

First of all, security will be addressed by the implementation of an SSL/TLS tunnel between XYZ and Oracle’s Public Cloud. All communications will be encrypted and any system that participates in the transaction will require appropriate SSL certificates to identify itself.
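As a sketch of the mutual authentication just described, Python's standard ssl module can build a client-side context that both verifies the server's certificate chain and presents the caller's own certificate. The file names are placeholders; in the actual solution this would be configured in the Oracle middleware and web tier rather than in custom code:

```python
import ssl

def build_mutual_tls_context(ca_file: str, cert_file: str, key_file: str) -> ssl.SSLContext:
    """Client TLS context for mutual authentication: verify the peer's
    certificate chain against a trusted CA and present our own
    certificate so each party identifies itself."""
    ctx = ssl.create_default_context(ssl.Purpose.SERVER_AUTH, cafile=ca_file)
    ctx.load_cert_chain(certfile=cert_file, keyfile=key_file)
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse legacy protocol versions
    ctx.check_hostname = True
    ctx.verify_mode = ssl.CERT_REQUIRED
    return ctx
```

The same principle applies regardless of technology: both endpoints refuse unauthenticated peers, and everything in transit is encrypted inside the tunnel.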

[Image: image8-43.jpg]

Both the cloud provider and the cloud consumer have their own security measures in place, which will not be compromised by the security context we will establish between them.

[Image: image9-46.jpg]

It’s important to note that the architecture depicted above will comply with the following characteristics:

  • Confidentiality: by encrypting the messages
  • Integrity: using digital signatures
  • Authentication: through X.509 certs and tokens

Once through the security layer, we can deliver the information to Oracle SOA Suite (OSB + BPEL/SCA), which will allow us to virtualize services and orchestrate information flows in a dynamic and scalable way by leveraging data model transformation capabilities and adapters.
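A minimal sketch of the kind of data model transformation the SOA layer would perform is shown below. The field names on both sides are hypothetical, not actual Taleo or EBS schemas; in the proposed architecture this mapping would live in OSB/BPEL, not in application code:

```python
def taleo_candidate_to_ebs_employee(candidate: dict) -> dict:
    """Map a hypothetical Taleo candidate record onto the field names a
    hypothetical EBS HR payload expects (canonical-model style mapping)."""
    first, _, last = candidate["full_name"].partition(" ")
    return {
        "FIRST_NAME": first,
        "LAST_NAME": last or None,
        "EMAIL_ADDRESS": candidate.get("email"),
        "JOB_REQUISITION": candidate["requisition_id"],
        "HIRE_DATE": candidate["offer_accepted_on"],
    }

record = {
    "full_name": "Ada Lovelace",
    "email": "ada@example.com",
    "requisition_id": "REQ-0042",
    "offer_accepted_on": "2016-04-07",
}
print(taleo_candidate_to_ebs_employee(record)["LAST_NAME"])  # Lovelace
```

Keeping this mapping in the integration layer, rather than in either endpoint, is what allows each system's data model to evolve without breaking the other.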

[Image: image11-52.jpg]

This architecture will also have the added benefit of avoiding negative types of coupling by adopting a service-oriented approach based on proven design principles:

[Image: image10-49.jpg]

Finally, we have a solution that should work efficiently and that makes use of the customer's existing infrastructure and licensing.

Conclusion

The final solution here is simplified because of the architectural decisions made beforehand. As it is, the implementation should be mostly straightforward with the bigger challenges and risks out of the way.

Just as in the last chapter, when we incorporate cloud-based elements into an organization’s architecture, it is important to avoid, at all costs, negative types of coupling, which can eventually lead to nightmare scenarios.

There are many options for achieving this kind of integration, even within the Oracle stack. For example, in some cases the ICS on-premises agent would be suitable, especially if there’s a ready-made adapter for the cloud application(s) we are trying to integrate. Another example relates to the use case we just reviewed: if cloud-to-on-premises integrations are to succeed, proliferate and grow in complexity and importance, the company would probably be better off replacing the web server (OHS) in the DMZ with a more powerful component (e.g., an API Gateway), even though that would imply additional licensing.

Again, it all comes back to good architectural and design choices, as well as to the controlled and continuous evolution of the integration platform.

About the Authors

Arturo Viveros is an outstanding Mexican professional currently based in Oslo, Norway, with 11 years of experience in the development, design, architecture and delivery of IT projects for a variety of industries. He is also a regular speaker at technology conferences, both in Mexico and abroad. He is an Oracle ACE and works as a principal architect at Sysco Middleware. Arturo is also part of the coordinating committee for ORAMEX (the Oracle User Group in Mexico) and has recently achieved the Oracle SOA Certified IT Architect certification as well as the Cloud Certified Architect and SOA Certified Architect grades from Arcitura Inc. He is a certified trainer authorized to deliver the SOA School and Cloud School modules in both English and Spanish. Arturo is also a regular contributor to SOA Magazine, Service Technology Magazine and the Oracle Technology Network.

Joel Perez is an expert DBA (Oracle ACE Director, OCM Cloud Admin. and OCM 11g) with over 15 years of real-world experience with Oracle technologies, specializing in the design and implementation of solutions for high availability, disaster recovery, upgrades, replication, tuning, cloud, and all areas related to Oracle databases. As an international consultant he has served clients and participated in conferences and activities in more than 50 countries on 5 continents. A prolific writer, Joel has published technical articles for OTN in Spanish and Portuguese, and is a regular speaker at Oracle events worldwide, including OTN LAD (Latin America), OTN MENA (Middle East & Africa), OTN APAC (Asia Pacific), DTCC China, and more. Recognized as a pioneer in Oracle technology, Joel was the first Latin American awarded “OTN Expert of the Year” (in 2003), and was one of the first to be awarded Oracle ACE status (2004). Joel was also one of the first OCP Database Cloud Administrators (2013) and, in the biggest professional achievement of his career, was honored as one of the first “OCM Database Cloud Administrators” in the world. Currently Joel works for Yunhe Enmo (Beijing) Technology Co., Ltd.


This article represents the expertise, findings, and opinions of the authors. It has been published by Oracle in this space as part of a larger effort to encourage the exchange of such information within this Community, and to promote evaluation and commentary by peers. This article has not been reviewed by the relevant Oracle product team for compliance with Oracle's standards and practices, and its publication should not be interpreted as an endorsement by Oracle of the statements expressed therein.

