5 Replies Latest reply: Mar 21, 2013 5:32 AM by DavidGreenfield

    CPU Performance Issue

    892844
      We are facing a performance issue related to cube refresh. Even a small cube consumes around 70% of the CPU while refreshing.

      Details of the cube: The cube has 10 dimensions and 46 straightforward measures (a mix of SUM and AVG as the aggregation operators). No compression. The cube is partitioned (48 partitions). The main source of the data is a materialized view that is partitioned in the same way as the cube.

      Data volume: 1200 records in the source to be processed daily (almost evenly distributed across the partitions).

      Cube is refreshed using: DBMS_CUBE.BUILD(<<cube_name>>,'SS',true,5,false,true,false);
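      For reference, the same call expanded into an anonymous PL/SQL block. The parameter names and meanings below are my reading of the 11.2 DBMS_CUBE.BUILD documentation, so verify them against your release; the cube name is a placeholder.

```sql
BEGIN
  DBMS_CUBE.BUILD(
    'MY_CUBE',  -- script: the cube (or build script) to process; placeholder name
    'SS',       -- method: build method string, as used in the original call
    TRUE,       -- refresh_after_errors: continue the build after recoverable errors
    5,          -- parallelism: number of parallel build processes
    FALSE,      -- atomic_refresh: FALSE commits in stages rather than all-or-nothing
    TRUE,       -- automatic_order: let the server choose the build order
    FALSE);     -- add_dimensions: do not automatically build the dimensions first
END;
/
```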

      Environment - Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 (on AIX 6.1 64 bit), AWM - awm11.2.0.2.0A

      Hardware Configuration
      4.2 GHz, 3.5 CPU cores, 32 GB RAM

      Has anyone faced a similar issue? Is there any server-level / database-level parameter that needs tweaking to control this behaviour?
        • 1. Re: CPU Performance Issue
          DavidGreenfield
          I'm not sure that 70% CPU use is itself a sign of a problem, but you presumably checked this because the build was too slow.

          Did you disable compression because you wanted a mix of sum and average aggregation? Uncompressed cubes are generally much more expensive to calculate than compressed cubes, especially with large numbers of dimensions.

          It is technically possible to mix aggregation methods in a compressed cube, but this is not supported by AWM. So my advice would be to split your one cube into two compressed cubes if possible -- one for the SUM measures and one for the AVG measures. They can always be joined back together in SQL.
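          A sketch of what that SQL join might look like, assuming hypothetical relational views SALES_SUM_VIEW and SALES_AVG_VIEW exposed over the two cubes and keyed by the same dimension columns (all names here are placeholders):

```sql
-- Hypothetical view and column names, for illustration only.
SELECT s.time_id,
       s.product_id,
       s.total_sales,   -- from the SUM cube
       a.avg_sales      -- from the AVG cube
  FROM sales_sum_view s
  JOIN sales_avg_view a
    ON a.time_id    = s.time_id
   AND a.product_id = s.product_id;
```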
          • 2. Re: CPU Performance Issue
            Nasar-Oracle
            If you are creating a separate cube for the AVG measures (as David suggested), then see if you can use the method I explained in an earlier post, where I implemented averaging of data using SUM aggregation: Re: Cube Refresh Performance Issue

            SUM aggregation is faster.

            Measures can always be joined by creating Calculated Measures (in AWM) or in your SQL queries.
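            The idea behind averaging via SUM, as I understand that earlier post, is to store both the raw value and a per-row record count as SUM-aggregated measures, then derive the average as a ratio in a calculated measure or in SQL. A minimal sketch with hypothetical names (SALES_MEAS is a SUM measure, REC_COUNT is a SUM measure loaded as 1 per source row):

```sql
-- Hypothetical view/measure names. At any aggregation level the
-- average is the summed value divided by the summed row count.
SELECT time_id,
       product_id,
       CASE WHEN rec_count = 0 THEN NULL
            ELSE sales_meas / rec_count
       END AS avg_sales
  FROM sales_cube_view;
```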

            For database parameter settings, check out these 3 discussions.

            Olap Page Pool Cache

            Re: Cube Build Time

            Re: OLAP Cube comparison
            • 3. Re: CPU Performance Issue
              DavidGreenfield
              Here is another trick to combine sum and average measures in the same cube. It relies on the AGGCOUNT function in OLAP DML, which you can look up in the reference guide.

              Suppose that you have a cube with two measures, A and B, and that you want to aggregate A using SUM and aggregate B using AVG.

              Step 1: Make the cube be compressed and aggregate all measures (A and B) using SUM.

              If you do this using AWM, then the solve specification should include a MAINTAIN COUNT clause. To double check, look at the user_cubes view for your cube (TEST in my example).
              select consistent_solve_spec from user_cubes where cube_name = 'TEST';
              
              CONSISTENT_SOLVE_SPEC
              --------------------------------------------------------------------------------
              SOLVE
              (
                SUM
                  MAINTAIN COUNT
                  ALLOW OVERFLOW
                  ALLOW DIVISION BY ZERO
                  IGNORE NULLS OVER ALL
              )
              You can hand edit the XML for the cube if this hasn't happened. Here is what you want to see in the XML for the cube.
                  <ConsistentSolve>
                    <![CDATA[SOLVE 
              (
                SUM
                  MAINTAIN COUNT
                   OVER ALL
              )]]>
              Don't worry about the slight difference in syntax -- this is due to different printing routines in the client Java and the server C code.

              Step 2: Verify that the cube's VARIABLE has the WITH AGGCOUNT option.

              My cube is named TEST, so the variable is named TEST_STORED, and my cube is dimensioned by TIME and PRODUCT. You can run this in the OLAP Worksheet.
              dsc test_stored
              
              DEFINE TEST_STORED VARIABLE LOCKDFN NUMBER WITH NULLTRACKING WITH AGGCOUNT CHANGETRACKING <TEST_PRT_TEMPLATE <TEST_MEASURE_DIM TIME PRODUCT>>
              Step 3: Define a new Calculated Measure, B_AVG, in the cube to get the average for measure B.

              Select "OLAP DML Expression" as the "Calculation Type" and enter the following expression. Obviously you need to adjust for the cube and measure names. I am putting new lines into this for readability.
              QUAL(
                NVL2(AGGCOUNT(TEST_STORED), TEST_STORED / AGGCOUNT(TEST_STORED), TEST_STORED),
                TEST_MEASURE_DIM 'B')
              Step 4: Specify the $LOOP_VAR on the new measure

              Execute the following in the OLAP Worksheet. (Again, correct for measure and cube names.) It instructs the server to loop the cube sparsely. If you don't do this, you will get dense looping and poor query performance. You only need to do this once (per calculated measure). If you save the AW to XML after this, then the LOOP_VAR value will be saved in the XML itself.
              call set_property('$LOOP_VAR' 'TEST' 'B_AVG' 'TEST_STORED')
              For reporting purposes you should look at measures A and B_AVG.
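              For example, a reporting query against the cube's relational view might look like the following (TEST_VIEW and the column names are hypothetical; adjust to however your cube is exposed in SQL):

```sql
-- Hypothetical view exposing the stored SUM measure A and
-- the calculated average measure B_AVG side by side.
SELECT time, product, a, b_avg
  FROM test_view
 WHERE time = '2013';
```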
              • 4. Re: CPU Performance Issue
                892844
                Guys - Thanks very much for your quick responses!!

                Actually, it's not this cube alone; there are other cubes that do have compression enabled but still eat up a lot of CPU. It's not a question of the time taken for the cube refresh -- the hardware team is worried about the CPU consumption. Utilization goes to 100% when other processes run alongside this cube refresh (or when some of the cubes run on their own).

                We need to understand whether the hardware is simply not capable of handling this load, or whether there is a server-level parameter to be tweaked.

                Is there a benchmark hardware configuration we can compare against? Any ideas?

                thanks again!
                • 5. Re: CPU Performance Issue
                  DavidGreenfield
                  Please contact me directly (david.greenfield at oracle.com) and I will put you in touch with our OLAP DBA expert. He will be able to give you recommendations based on your requirements.