4 Replies Latest reply on Feb 8, 2012 6:07 PM by 912595

    Challenges after Exadata migration

      We are planning to move our OLTP and DW databases onto one Exadata X2 quarter rack as a two-node cluster. After the data is migrated, what challenges will the application face on Exadata?
      Does the entire schema's index strategy need to be revised? How will SQL queries behave for the OLTP and DW workloads? Do all the SQL statements need to be revised?
      Will it be an easy transition?

      Please advise me on the key steps that need to be taken care of.
        • 1. Re: Challenges after Exadata migration
          I would say that you would need to revisit your partitioning strategy if you want to take advantage of EHCC (assuming you are using partitioning). Indexing will definitely need to be revisited, since the current indexing implementation could negate the use of cell offloading, which accounts for a huge portion of the performance benefit of Exadata. The key to ensuring offloading is full table scans and full index scans executed via direct path reads.
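
          As a rough way to check whether offloading is actually happening after the migration, you can query V$SQL for the offload-related columns. This is just a sketch; the threshold and ordering are arbitrary:

          ```sql
          -- Show statements whose I/O was eligible for smart scan offload,
          -- and how many bytes the cells actually returned over the interconnect
          SELECT sql_id,
                 io_cell_offload_eligible_bytes AS eligible_bytes,
                 io_interconnect_bytes          AS interconnect_bytes
          FROM   v$sql
          WHERE  io_cell_offload_eligible_bytes > 0
          ORDER  BY io_cell_offload_eligible_bytes DESC;
          ```

          A statement with a large eligible-bytes value but little interconnect traffic is benefiting from offloading; zero eligible bytes usually means the plan never did a direct path full scan.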

          In addition, because you plan on using the X2 quarter rack to host both an OLTP and a DW database, you will probably need to look at IORM and DBRM to ensure that the resources of each environment are managed appropriately and the two don't negatively impact each other.
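
          For example, an inter-database IORM plan on the storage cells might look something like the following. This is a sketch run from CellCLI on each cell; the database names and allocation percentages are placeholders you would replace with your own:

          ```text
          CellCLI> ALTER IORMPLAN                                        -
                   dbplan=((name=oltpdb, level=1, allocation=70),        -
                           (name=dwdb,   level=1, allocation=30),        -
                           (name=other,  level=2, allocation=100))
          ```

          The level-2 "other" entry acts as a catch-all so that any database not named explicitly still gets I/O when the level-1 databases are idle.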
          • 2. Re: Challenges after Exadata migration
            Thanks Bill. Any advice on the flash area — how should I plan it? Some tables in our OLTP database are write- and response-time-intensive; can I put those tables in the flash area?
            • 3. Re: Challenges after Exadata migration
              I think the key point is just that: planning. It is all about knowing the data and how the applications are going to use it. The Exadata Smart Flash Cache (ESFC) is designed to act as what a lot of folks are calling "second-tier caching", in the sense that as data ages out of the buffer cache it is moved into the ESFC, eliminating the need to perform a disk read. The Exadata software has been written to be intelligent about what to store in the ESFC and what not to store.

              That said, if there are tables that you find (through tracing, DBMS_XPLAN, etc.) are not being stored in the ESFC, then yes, you can pin them manually. The other option is to reconfigure the ESFC to act as flash disk. You could then create a grid disk, and subsequently an ASM disk group, on that flash disk and use it to store a tablespace and the table(s). This would probably not be my first choice, as there is a lot more planning involved and, I believe, a higher risk (for example, when the tablespace grows to fill the available flash space).
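
              If you do decide to pin specific tables, the segment-level CELL_FLASH_CACHE storage attribute is the mechanism. A minimal sketch — ORDERS here is a made-up table name, not from the original post:

              ```sql
              -- Pin a hot table in the Exadata Smart Flash Cache
              ALTER TABLE orders STORAGE (CELL_FLASH_CACHE KEEP);

              -- Revert to the default caching behaviour later if needed
              ALTER TABLE orders STORAGE (CELL_FLASH_CACHE DEFAULT);
              ```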
              • 4. Re: Challenges after Exadata migration

                The most important points that need to be considered before migration:

                1) Your Exadata disk groups should use a 4 MB ASM allocation unit (AU) size. Choose a migration method that keeps the target ASM disk group AU size at 4 MB; otherwise you can't get the full benefit of Exadata.
                2) Your large objects' extent sizes should be multiples of 4 MB so they lay out properly on AU boundaries.
                3) Use CELL_PARTITION_LARGE_EXTENTS to enable large partitioned tables to automatically fit on AU boundaries.
                4) Use CELL_FLASH_CACHE KEEP for your most heavily accessed tables.
                5) Use IORM to divide the I/O load between OLTP and DW.
                6) Make use of EHCC on archive tables and large tables to save space and increase performance.
                7) Indexes can still be useful in an OLTP environment, so for critical queries you need to check whether an index is going to be beneficial or not. You can make the index invisible and then check the execution plan to see its impact.
                8) Compress data during direct-path loads.
                9) I hope partitioning is already in place; if so, it can make a remarkable performance improvement. If not, understand the business logic and build queries that can do partition pruning.
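
                A few of the items above (EHCC, invisible indexes, and compressed direct-path loads) can be sketched as follows. The table and index names (SALES, SALES_ARCHIVE, SALES_CUST_IX) are illustrative only:

                ```sql
                -- EHCC: rebuild an archive table with columnar compression
                ALTER TABLE sales_archive MOVE COMPRESS FOR ARCHIVE HIGH;

                -- Invisible index: hide it, then re-check a critical query's plan
                ALTER INDEX sales_cust_ix INVISIBLE;
                EXPLAIN PLAN FOR
                  SELECT cust_id, SUM(amount) FROM sales GROUP BY cust_id;
                SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);

                -- Direct-path load: the APPEND hint makes the insert direct path,
                -- so rows are compressed on load if the table has compression enabled
                INSERT /*+ APPEND */ INTO sales_archive
                  SELECT * FROM sales WHERE sale_date < DATE '2011-01-01';
                COMMIT;
                ```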

                Please weigh all of these considerations with respect to your application.