I'm not sure that 70% CPU use is itself a sign of a problem, but you presumably checked this because the build was too slow.
Did you disable compression because you wanted a mix of sum and average aggregation? Uncompressed cubes are generally much more expensive to calculate than compressed cubes, especially with large numbers of dimensions.
It is technically possible to mix aggregation methods in a compressed cube, but this is not supported by AWM. So my advice would be to split your one cube into two compressed cubes if possible -- one for the SUM measures and one for the AVG measures. They can always be joined back together in SQL.
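As a minimal sketch of that join, assuming the two cubes are exposed through relational views named SALES_SUM_VIEW and SALES_AVG_VIEW (hypothetical names) sharing the same TIME and PRODUCT dimension columns:

-- hypothetical view and column names; adjust for your own cubes
select s.time, s.product,
       s.sales_sum,   -- SUM measure from the first cube
       a.sales_avg    -- AVG measure from the second cube
from   sales_sum_view s
join   sales_avg_view a
on     s.time = a.time
and    s.product = a.product;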
If you are creating a separate cube for AVG measures (as David suggested), then see if you can use the method I explained in an earlier post, where I did averaging of data using SUM aggregation: Re: Cube Refresh Performance Issue
SUM aggregation is faster.
Measures can always be joined by creating Calculated Measures (in AWM) or in your SQL queries.
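As a sketch of the averaging-via-SUM idea, assuming you load both a total measure and a record-count measure into the cube and aggregate both with SUM, the average can then be derived at query time (view and column names here are hypothetical):

-- sales_total and sales_cnt are both SUM-aggregated measures
select time, product,
       sales_total / nullif(sales_cnt, 0) as sales_avg
from   sales_cube_view;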
For database parameter settings, check out these 3 discussions.
Olap Page Pool Cache
Re: Cube Build Time
Re: OLAP Cube comparison
Here is another trick to combine sum and average measures in the same cube. It relies on the AGGCOUNT function in OLAP DML, which you can look up in the reference guide.
Suppose that you have a cube with two measures, A and B, and that you want to aggregate A using SUM and aggregate B using AVG.
Step 1: Make the cube compressed and aggregate all measures (A and B) using SUM.
If you do this using AWM, then the solve specification should include a MAINTAIN COUNT clause. To double check, look at the user_cubes view for your cube (TEST in my example).

select consistent_solve_spec from user_cubes where cube_name = 'TEST';

CONSISTENT_SOLVE_SPEC
--------------------------------------------------------------------------------
SOLVE ( SUM MAINTAIN COUNT ALLOW OVERFLOW ALLOW DIVISION BY ZERO IGNORE NULLS OVER ALL )

You can hand edit the XML for the cube if this hasn't happened. Here is what you want to see in the XML for the cube.

<ConsistentSolve>
  <![CDATA[SOLVE ( SUM MAINTAIN COUNT OVER ALL )]]>
</ConsistentSolve>

Don't worry about the slight difference in syntax -- this is due to different printing routines in the client Java and the server C code.
Step 2: Verify that the cube's VARIABLE has the WITH AGGCOUNT option.
My cube is named TEST, so the variable is named TEST_STORED, and my cube is dimensioned by TIME and PRODUCT. You can run this in the OLAP Worksheet.
dsc test_stored

DEFINE TEST_STORED VARIABLE LOCKDFN NUMBER WITH NULLTRACKING WITH AGGCOUNT CHANGETRACKING <TEST_PRT_TEMPLATE <TEST_MEASURE_DIM TIME PRODUCT>>

Step 3: Define a new Calculated Measure, B_AVG, in the cube to get the average for measure B.
Select "OLAP DML Expression" as the "Calculation Type" and enter the following expression. Obviously you need to adjust for the cube and measure names. I am putting new lines into this for readability.
QUAL(
  NVL2(AGGCOUNT(TEST_STORED),
       TEST_STORED / AGGCOUNT(TEST_STORED),
       TEST_STORED),
  TEST_MEASURE_DIM 'B')

Step 4: Specify the $LOOP_VAR property on the new measure.
Execute the following in the OLAP Worksheet. (Again, correct for measure and cube names.) It instructs the server to loop the cube sparsely. If you don't do this, you will get dense looping and poor query performance. You only need to do this once (per calculated measure). If you save the AW to XML after this, then the LOOP_VAR value will be saved in the XML itself.
call set_property('$LOOP_VAR' 'TEST' 'B_AVG' 'TEST_STORED')

For reporting purposes you should look at measures A and B_AVG.
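To illustrate, a reporting query might look like the sketch below, assuming the cube is exposed through a view named TEST_VIEW (a hypothetical name) with columns for the dimensions and measures:

-- hypothetical view and column names; A is the SUM measure, B_AVG the calculated average
select time, product, a, b_avg
from   test_view;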
Guys - Thanks very much for your quick responses!!
Actually, it's not just this cube; there are other cubes that use compression as well but still eat up a lot of CPU. It's not a question of the time taken for the cube refresh, but the hardware team is a bit worried about the CPU consumption!! Utilization goes to 100% when other processes (or even some of the cubes alone) run along with this cube refresh.
We would need to understand whether the hardware is not capable of handling this load, or whether there is any server-level parameter to be tweaked.
Do we have any benchmark hardware configuration? Any ideas?