There has been one well-known and widely used security feature that was not available on Exadata, at least not until earlier in 2016.

 

It may be worth a separate discussion why InfiniBand (IB) partitioning was not available on Exadata until 2016, but regardless of the reasons, it is now possible to enable IB partitioning on any previously deployed Exadata machine. IB partitioning is also available for new Exadata deployments, and a number of improvements and bug fixes were made in that area of the Oracle Exadata Deployment Assistant (OEDA).

Minimum Exadata storage software requirements still apply, of course, so you may need to patch your Exadata machine(s) first.
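If you are not sure what release you are running, the imageinfo utility on a database node or storage cell reports the active Exadata software version (the version string below is only an example; yours will differ):

    # imageinfo | grep "Image version"
    Image version: 12.1.2.3.0.160207.3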

 

Here is why IB partitioning is important in many cases and why you may want to consider it seriously.

With IB partitioning it is now possible to prevent Exadata Oracle RAC nodes from one cluster from communicating over the InfiniBand (IB) fabric (used for the Exadata RAC cluster interconnect) with nodes belonging to any other Oracle RAC cluster on the same Exadata machine, or even with remote RAC nodes on other IB "daisy-chained" Exadata machines.

 

Exadata machines, since their early incarnations going back to the 2008 era, have been fast, extremely fast actually, yet not exactly cheap in terms of hardware, software, and service costs.

As a result of this extreme performance at a somewhat premium price, Exadata machines have oftentimes been, well... shared... between different lines of business within the same company, or even between several external customers. Such sharing has been handled with either several Oracle RAC clusters or several ASM disk groups created in the same cluster, on either Bare Metal or Virtualized Exadata machines.

This approach does provide a certain level of isolation, although all node-to-node and node-to-storage-cell communication has been occurring over the same InfiniBand (IB) partition, with the same (default!) IB partition key that was shared by ALL Exadata machines in the world!
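You can see this for yourself on any node: each HCA port exposes its partition key table under /sys (the same path the MOS quote below refers to), and on a non-partitioned Exadata the only populated entry is the default full-membership key 0xffff. A quick check, assuming the usual mlx4_0 device name (unused table slots read back as 0x0000):

    # for f in /sys/class/infiniband/mlx4_0/ports/1/pkeys/*; do echo "$f: $(cat $f)"; done
    /sys/class/infiniband/mlx4_0/ports/1/pkeys/0: 0xffff
    /sys/class/infiniband/mlx4_0/ports/1/pkeys/1: 0x0000
    ...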

The good news with the new IB partitioning capability is that cluster nodes (BM and VM) can be forced to communicate only with the nodes from the same RAC cluster and/or with a dedicated set of storage cells.

What makes it even better is that this new "world order" can be enforced equally on any previously deployed Exadata machine as well as on any new Exadata deployment, as of April 2016.
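For the curious, the switch-side part of the work runs on the master Subnet Manager switch and is driven by the smpartition command. A rough sketch of the workflow is below; the partition name, P_Key value and port GUID are made-up examples, and OEDA normally generates and applies these steps for you:

    # smpartition start
    # smpartition create -n clu01 -pkey 0xa010 -m full
    # smpartition add -n clu01 -port 0x0021280001ef30f1 -m full
    # smpartition commit
    # smpartition list active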

 

Once IB partitioning is configured and enforced, RAC nodes can communicate only with members of the same IB partition, without even being aware of any other nodes on the same Exadata machine.

The same is true for compute node-to-storage cell IB communication: once the storage IB partition keys are created and assigned, compute nodes holding a given storage IB key are allowed to communicate only with the storage cells that were assigned the same IB P_Key.
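Under the covers, each assigned P_Key surfaces on a compute node as an IPoIB child interface bound to that key. OEDA takes care of this during deployment, but the underlying Linux mechanism is the generic create_child interface, sketched here with a made-up storage P_Key of 0xaa10 (the kernel names the child after the parent interface and the key):

    # echo 0xaa10 > /sys/class/net/ib0/create_child
    # ip addr show ib0.aa10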

 

This is clearly a concept that Facebook would not tolerate, but when it comes to Exadata cluster interconnect communication it makes perfect sense indeed.

This major Exadata security enhancement was made rather silently, yet it deserves serious consideration for Exadata environments where data from multiple business lines or customers could be co-hosted.

 

Below is a quote from the MOS document that describes this "walled English garden" approach to cluster interconnect communication quite well:

"Every Node within the infiniBand fabric has a partition key table which may be viewed under /sys/class/infiniband/mlx4_0/ports/[1-2]/pkeys. Every Queue Pair(QP) of the node has an index (P_Key) associated with it that maps to an entry in that table. Whenever a packet is sent from the QP’s send queue, the indexed P_Key is attached with it. Whenever a packet is received on the QP’s receive queue, the indexed P_Key is compared with that of the incoming packet. If it does not match, the packet is silently discarded. The receiving Channel Adapter does not know it arrived and the sending Channel Adapter gets no acknowledgement as well that it was received. The sent packet simply gets manifested as a lost packet. It is only when the P_Key of the incoming packet matches the indexed P_Key of the QP’s receive queue, a handshake is made and the packet is accepted and an acknowledgment is sent to the sending channel adapter. This is how only members of the same partition are able to communicate with each other and not with hosts that are not members of that partition (which means those hosts that does not have that P_Key in their partition table)."