    PBCS Aggregation and Calculation

    user12087508

      We have a PBCS cube with around 24 GB of data. Our biggest dimension is the Project dimension, which is structured as follows:

      - Project
          Project_A (base hierarchy: 15,787 total members across all levels)
          Project_B (887 members across all levels, level 0 shared)
          Project_C (7,839 members across all levels, level 0 shared)
          Project_D (16,344 members across all levels, level 0 shared)

       

      We keep getting told that, because our Project dimension is so big, our aggregations and calculations will take a really long time. In my mind, since most of those members are shared, it should not be terribly long. It is a multi-currency cube with the structure below:

       

      - Account (dense): 1,819 members
      - Period (dense): 133
      - Level (dense): 44
      - Site (sparse): 1,102
      - Department (sparse): 1,224
      - Project (sparse): 40,857
      - Years (sparse): 4
      - Version (sparse): 10
      - Scenario (sparse): 25
      - Currency (sparse): 50

       

      Is this a really big cube? I have seen much bigger on-premises Planning cubes, but I am new to PBCS, so I wanted to hear some opinions.

       

      Thank you

        • 1. Re: PBCS Aggregation and Calculation
          USER1211

          What is the true block size? (You provided dense member counts, but did not say whether those are stored or total members.)
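
          For rough scale, here is the upper-bound arithmetic from the totals posted above (a sketch only: the true block counts stored dense members, which could be far fewer):

              # Upper bound on block size from the dense totals in the original
              # post; the real block uses stored members only.
              account, period, level = 1819, 133, 44           # total members per dense dim
              cells = account * period * level                 # potential cells per block
              print(cells, "cells,", cells * 8 / 1024, "KB")   # BSO cells are 8 bytes each
              # ~10.6 million cells, ~83,000 KB -- even a small stored fraction
              # of this dwarfs the 8-100 KB range the DBAG suggests aiming for.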

          How many years are you aggregating each time?

          Are you aggregating across multiple currencies?

          If you had only one main hierarchy in your Project dimension and ran your consolidation for a single year and a single currency, how long would it take?

          Have you looked at using an ASO cube to store the consolidated data?

          • 2. Re: PBCS Aggregation and Calculation
            Robert Angel

            Also, if data is input rather than loaded, have you looked into strategies to aggregate subsets of the database on save, based on the members on the form?
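
            As a sketch of that idea (hypothetical member and dimension names; in PBCS the form's page/POV members would arrive as Calculation Manager run-time prompts rather than Python arguments), the aggregation can be fenced to just the slice that was saved:

                # Template a focused aggregation script from the members on the
                # saved form, so only the edited branches are rolled up.
                def focused_agg(year, scenario, version, currency, site, dept, project):
                    # FIX on the form's point of view plus the ancestors of the
                    # edited sparse members, then aggregate only those branches.
                    return (
                        f'FIX ("{year}", "{scenario}", "{version}", "{currency}",\n'
                        f'     @IANCESTORS("{site}"), @IANCESTORS("{dept}"), @IANCESTORS("{project}"))\n'
                        f'    AGG ("Site", "Department", "Project");\n'
                        f'ENDFIX'
                    )

                print(focused_agg("FY18", "Forecast", "Working", "Local",
                                  "Site_101", "Dept_204", "PRJ_0007"))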

             

            And to continue 1211's thought: if the dense members are all stored, this will yield a very large block size, which could be part of your problem. See the Essbase Database Administrator's Guide for guidance on optimal block size.

             

            Also, your calculations will only take a long time if you are calculating a large subset of your database in one go. Again, if you use strategies built around your forms to calculate only what was input, this will help considerably.

             

            In terms of a 'big cube': Essbase can scale up to two groups of sparse dimensions, where each group holds fewer than 2^52 potential member combinations. Cubes bordering on that scale will still calculate quickly on smaller volumes of data; the sparse dimensions are indexes to the blocks, so accessing data is extremely rapid, provided you are not trying to calculate massive subsets of the database in one go.
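
            Putting the posted sparse totals against those limits (a back-of-envelope sketch, taking the two-group 2^52 figure above at face value):

                # Potential sparse member combinations (the ceiling on block
                # count) from the totals in the original post.
                sparse_totals = [1102, 1224, 40857, 4, 10, 25, 50]  # Site, Department, Project,
                                                                    # Years, Version, Scenario, Currency
                potential_blocks = 1
                for n in sparse_totals:
                    potential_blocks *= n
                print(f"{potential_blocks:.2e}")    # ~2.76e15 potential blocks
                print(potential_blocks < 2 ** 52)   # True -- inside even a single group's limit

            So the index is nowhere near the architectural ceiling; block size and the scope of each calculation are the things to chase.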

             

            Imagine you have a massive skyscraper and a dedicated cleaning and maintenance team. They can access any room in the building quickly thanks to rapid-access elevators and well-designed corridors. As a team they can clean or maintain any room you like very fast. Just don't seriously expect them to do the entire building in one go, or you will be waiting a long time...

             

            Essbase Hybrid Aggregation Mode & BSO Limits | EPM Adventures
            Essbase Users: Block Size vs Calculation Performance
            Optimal Size Block Storage (BSO)
            https://docs.oracle.com/cloud/farel9/financialscs_gs/FADAG/dcaoptcs.html