
Since the dawn of database technology, the Achilles' heel — the key limiting factor — has been disk IO. To address this, traditional spinning-disk vendors historically offered 10K and even 15K RPM models. But more speed was needed. Then came solid-state drives (SSDs), where, depending upon the underlying technology (SLC vs. MLC vs. TLC), the speedup over spinning disk ranged between 10X and 100X. Once again, though, more speed was needed. Hardware vendors then offered SSD technology connected directly to the PCI Express (PCIe) bus, which delivers yet another 2X to 4X improvement. Finally, we now have the nonvolatile memory express (NVMe) host controller interface specification, designed from the ground up for SSD technology and offering another 2X to 4X speedup. Thus we now have IO technology that's at least 40X faster than spinning disks. Nonetheless, memory is still roughly 100X faster than that, so in-memory database technology offers DBAs the fastest method currently available for accessing data within the database. But just how good is it really?


For testing I used a star-schema data warehouse. I kept it small so that I could guarantee the entire database would fit within the memory I had available. My dimension tables ranged from 5,000 to 180,000 rows. My fact table was 34 million rows — tiny by today's data warehousing norms, but sufficient for testing purposes. The query was a typical multi-dimension-to-fact join with inline views, subqueries, complex filters, and group operations. I tested three scenarios:


  • Star schema table design using traditional b-tree indexes with a typical execution plan
  • Star schema table design using bitmap indexes with a star transformation execution plan
  • Star schema table design using no indexes, with all facts and dimensions 100% in-memory
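The three configurations above can be sketched in Oracle SQL roughly as follows. This is a minimal illustration, not the article's actual scripts — the table and index names (`sales`, `dates`, `products`, etc.) are hypothetical, since the real schema isn't shown:

```sql
-- Scenario 1: traditional b-tree indexes on the fact table's foreign keys
CREATE INDEX sales_date_fk_idx ON sales (date_id);
CREATE INDEX sales_prod_fk_idx ON sales (product_id);

-- Scenario 2: bitmap indexes plus the star transformation
CREATE BITMAP INDEX sales_date_bix ON sales (date_id);
CREATE BITMAP INDEX sales_prod_bix ON sales (product_id);
ALTER SESSION SET star_transformation_enabled = TRUE;

-- Scenario 3: no indexes; populate fact and dimension tables into the
-- in-memory column store (requires a nonzero INMEMORY_SIZE in the instance)
ALTER TABLE sales    INMEMORY PRIORITY CRITICAL;
ALTER TABLE dates    INMEMORY;
ALTER TABLE products INMEMORY;
```

Note that scenario 3 relies on the database-level `INMEMORY_SIZE` initialization parameter carving out part of the SGA for the column store; the `PRIORITY` clause merely controls how eagerly a table is populated after startup.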


In addition, I ran these three test cases on both Oracle 12c R1 and 12c R2, since the latter includes significant improvements to in-memory. The results are shown below in Figure 1.



Figure 1: Performance Comparison Results


The results are quite astounding on multiple levels. Of course we see the major benefit (6X in this case) of the star transformation execution plan, which relies upon bitmap rather than b-tree indexes. Now look at the in-memory results: yet another major gain, 12X faster than the star transformation and an astounding 69X vs. the traditional design. But there's still more — look now at Figure 2, comparing the in-memory improvements in Oracle 12c R2.



Figure 2: Oracle 12c R2 in-memory Improvement


As you can see, Oracle 12c R2 delivered about 4.5X better performance than 12c R1. That translates into 54X faster than the star transformation and a flabbergasting 309X vs. the traditional design. Furthermore, the in-memory column store holds data in columnar format, so I also saw a 3.85X compression factor: what took 1 GB just for the row format (not including indexes) took just 25 MB of memory. So even on this meager setup, with just 4 GB of the Oracle SGA allocated for in-memory, I could easily handle a table of over a billion rows. Oracle in-memory is not just fast; it's highly efficient as well.
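The compression factor observed above can be verified directly from the data dictionary. A query along these lines against Oracle's `V$IM_SEGMENTS` view compares each populated segment's on-disk size with its in-memory footprint (the formatting and aliases here are my own illustration):

```sql
-- Compare on-disk bytes to in-memory column store bytes per segment
SELECT segment_name,
       ROUND(bytes / 1024 / 1024)         AS disk_mb,
       ROUND(inmemory_size / 1024 / 1024) AS inmem_mb,
       ROUND(bytes / inmemory_size, 2)    AS compression_ratio
FROM   v$im_segments
ORDER  BY segment_name;
```

Running a query like this after population finishes is also a handy sanity check that the fact table is fully in memory (the view's `BYTES_NOT_POPULATED` column should be zero) before taking any timings.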