SlavaUrbanovich

October 2016

Update on Dec 01, 2016:

Oracle has provided a fix for the "Dirty COW" vulnerability (CVE-2016-5195) on Exadata - delivered as the complete software release 12.1.2.3.3.161109.

It is important to apply 12.1.2.3.3 with the 161109 build date: there were earlier builds of this release, and they do not resolve the "Dirty COW" vulnerability.
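For a quick sanity check on a node, something along the following lines can confirm the installed build (a minimal sketch, assuming the standard Exadata imageinfo utility is on the PATH and that "imageinfo -ver" prints a bare version string such as 12.1.2.3.3.161109):

    import subprocess

    # "imageinfo -ver" prints the active Exadata image version, e.g. "12.1.2.3.3.161109"
    ver = subprocess.check_output(["imageinfo", "-ver"]).decode().strip()

    if ver == "12.1.2.3.3.161109":
        print("OK: this image includes the Dirty COW fix")
    elif ver.startswith("12.1.2.3.3."):
        # earlier builds of 12.1.2.3.3 (e.g. 161013) do NOT contain the fix
        print("WARNING: build %s predates the fix - re-patch with the 161109 build" % ver)
    else:
        print("Image version %s - see MOS Note 2181366.1 for guidance" % ver)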

 

Exadata 12.1.2.3.3 release and patch (24441458) (Doc ID 2181366.1)

 

Version: 12.1.2.3.3 (Recommended)

Patches:
  Patch 24441458 - Storage server and InfiniBand switch software (12.1.2.3.3.161109)
  Patch 24669306 - Database server bare metal / domU ULN exadata_dbserver_12.1.2.3.3_x86_64_base OL6 channel ISO image (12.1.2.3.3.161109)
  Patch 24669307 - Database server dom0 ULN exadata_dbserver_dom0_12.1.2.3.3_x86_64_base OVM3 channel ISO image (12.1.2.3.3.161109)

Notes:
  Supplemental README: Note 2181366.1
  See Note 1270094.1 for additional fixes that address critical issues.

12.1.2.3.3 was updated from 12.1.2.3.3.161013 to 12.1.2.3.3.161109 to include important fixes.  See the fix list in patch 24441458 for details.

===================

Original article below

===================

Since almost all Exadata, Exalogic and ZDLRA machines in the world run on Oracle Linux, they could be vulnerable to CVE-2016-5195, which exploits privilege elevation during Copy-On-Write operations - hence the "Dirty COW" nickname.

 

This vulnerability became widely known in mid-October 2016, and according to sources such as Risk Assessment | Ars Technica UK, the current state of patch development is as follows:

"The underlying bug was patched this week by the maintainers of the official Linux kernel. Downstream distributors are in the process of releasing updates that incorporate the fix. Red Hat has classified the vulnerability as "important.""

 

The dangers of this vulnerability are also described on Risk Assessment | Ars Technica UK as follows:

"As their names describe, privilege-escalation or privilege-elevation vulnerabilities allow attackers with only limited access to a targeted computer to gain much greater control. The exploits can be used against Web hosting providers that provide shell access, so that one customer can attack other customers or even service administrators. Privilege-escalation exploits can also be combined with attacks that target other vulnerabilities. A SQL injection weakness in a website, for instance, often allows attackers to run malicious code only as an untrusted user. Combined with an escalation exploit, however, such attacks can often achieve highly coveted root status."

 

Oracle has already categorized this vulnerability as of October 21, 2016, and RPMs that include the fix for this CVE have been released as well.

Please refer to this page for up-to-the-minute updates (and the patched RPMs) for this vulnerability: linux.oracle.com | CVE-2016-5195

Depending on your Exadata bundle patch level, the Oracle Linux version will be either 6 (hopefully) or 5, so please look for the appropriate errata links on the page above.
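To check whether an installed kernel package already carries the fix, one option is to search its RPM changelog for the CVE ID (a minimal sketch, assuming an RPM-based Oracle Linux node where the errata reference the CVE number in the changelog; note that it inspects the installed package, so a reboot may still be pending):

    import subprocess

    # Identify the running kernel, e.g. "2.6.39-400.286.3.el6uek.x86_64"
    release = subprocess.check_output(["uname", "-r"]).decode().strip()
    package = "kernel-uek" if "uek" in release else "kernel"

    # Oracle errata record fixed CVEs in the RPM changelog
    changelog = subprocess.check_output(["rpm", "-q", "--changelog", package]).decode()

    if "CVE-2016-5195" in changelog:
        print("%s changelog references CVE-2016-5195 - fix appears installed" % package)
    else:
        print("No CVE-2016-5195 entry found - apply the appropriate ELSA errata")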

 

Update on 10/27:

The main page that tracks this vulnerability, linux.oracle.com | CVE-2016-5195, is being updated pretty much daily now.

The updated RPMs mentioned there can be found at https://oss.oracle.com/sources/ under the Oracle 5 or Oracle 6 links.

 

Additionally, for the current Exadata, Exalogic and ZDLRA patch levels, which assume Oracle Linux 6, the following MOS document can be used as well:

 

Oracle Linux 6: Reference Index of Security Vulnerability Bug fixes, CVE IDs and Oracle Linux Errata (Doc ID 2112930.1)

...

Customers may find status of fixes for CVEs for Oracle Linux through our Unbreakable Linux Network (ULN). Please refer to Oracle Support Document 1593465.1 "Unbreakable Linux Network (ULN) Administrative Features for Errata and CVEs"

This listing is sorted by the date of publication by Oracle.

 

 

Errata Date    Component          CVE ID                     Errata
25-Oct-2016    Kernel-2.6.32      CVE-2016-5195 (dirty COW)  ELSA-2016-2105
21-Oct-2016    Kernel-UEK-2.6.39  CVE-2016-5195 (dirty COW)  ELSA-2016-3634
21-Oct-2016    Kernel-UEK-3.8.13  CVE-2016-5195 (dirty COW)  ELSA-2016-3633
21-Oct-2016    Kernel-UEK-4.1.12  CVE-2016-5195 (dirty COW)  ELSA-2016-3632

...

There is one well-known and widely used security feature that was not available on Exadata, at least not until earlier in 2016.

 

It may be worth a separate discussion why InfiniBand (IB) partitioning was not available on Exadata until 2016, but regardless of the reasons, it is now possible to enable IB partitioning on any previously deployed Exadata machine. IB partitioning is also available for new Exadata deployments, and a number of improvements and bug fixes were made in that area of the Oracle Exadata Deployment Assistant (OEDA).

Minimum Exadata storage server software requirements are still applicable, of course, so you may need to patch your Exadata machine(s) first.

 

Here is why IB partitioning is important in many cases and why you may want to consider it seriously.

With IB partitioning it is now possible to prevent Exadata Oracle RAC nodes in one cluster from communicating, via the InfiniBand (IB) fabric used for the Exadata RAC cluster interconnect, with nodes belonging to any other Oracle RAC cluster on the same Exadata machine, or even with remote RAC nodes on other IB "daisy-chained" Exadata machines.

 

Exadata machines, since their early incarnations going back to the 2008+ era, have been fast - extremely fast, actually - yet not exactly cheap in terms of hardware, software and service costs.

As a result of extreme performance at a somewhat premium price, Exadata machines were oftentimes, well ... shared ... between different lines of business within the same company, or even between several external customers. Such sharing has been handled with either several Oracle RAC clusters or with several ASM disk groups created for the same cluster, on either Bare Metal or Virtualized Exadata machines.

This approach does provide a certain level of isolation, although all node-to-node and node-to-storage-cell communication had been occurring over the same InfiniBand (IB) partition, with the same (default!) IB partition key shared by ALL Exadata machines in the world!

The good news with the new IB partitioning capability is that cluster nodes (BM and VM) can be forced to communicate only with the nodes from the same RAC cluster and/or with a dedicated set of storage cells.

What makes it even better is that this new "world order" can be enforced equally on any previously deployed Exadata as well as on any new Exadata deployment, as of April 2016.

 

Once IB partitioning is configured and enforced, RAC nodes can communicate only with the members of the same IB partition, without even being aware of any other nodes on the same Exadata machine.

The same is true for compute node-to-storage cell IB communication: once storage IB partition keys are created and assigned, the compute nodes holding a given storage IB key will be allowed to communicate only with the storage cells that were assigned the same IB P-key.
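To see which partition keys a node actually holds, the P_Key table can be read straight out of sysfs, as the MOS quote below mentions (a minimal sketch, assuming a Mellanox HCA exposed as mlx4_0, which is the usual case on Exadata):

    import glob

    # Each file under .../pkeys/ holds one P_Key table entry, e.g. "0xffff"
    for path in sorted(glob.glob("/sys/class/infiniband/mlx4_0/ports/*/pkeys/*")):
        with open(path) as f:
            pkey = f.read().strip()
        # 0x0000 marks an unused slot; 0xffff is the default full-membership key
        if pkey != "0x0000":
            print(path, pkey)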

 

This is clearly a concept that Facebook would not tolerate, but when it comes to Exadata cluster interconnect communication it makes perfect sense indeed.

This is a major Exadata security enhancement that was made rather quietly, yet it deserves serious consideration for Exadata environments where data from multiple business lines / customers could be co-hosted.

 

Below is a quote from the MOS document that describes this "walled English garden" approach to cluster interconnect communication quite well:

"Every Node within the infiniBand fabric has a partition key table which may be viewed under /sys/class/infiniband/mlx4_0/ports/[1-2]/pkeys. Every Queue Pair(QP) of the node has an index (P_Key) associated with it that maps to an entry in that table. Whenever a packet is sent from the QP’s send queue, the indexed P_Key is attached with it. Whenever a packet is received on the QP’s receive queue, the indexed P_Key is compared with that of the incoming packet. If it does not match, the packet is silently discarded. The receiving Channel Adapter does not know it arrived and the sending Channel Adapter gets no acknowledgement as well that it was received. The sent packet simply gets manifested as a lost packet. It is only when the P_Key of the incoming packet matches the indexed P_Key of the QP’s receive queue, a handshake is made and the packet is accepted and an acknowledgment is sent to the sending channel adapter. This is how only members of the same partition are able to communicate with each other and not with hosts that are not members of that partition (which means those hosts that does not have that P_Key in their partition table)."