Relative speed of second Consolidate All With Data

User_MB094 Member Posts: 23 Blue Ribbon

Hi All,

We have a rollup that needs to be consolidated twice because some parent entity balances need to be written to a few corporate base entities.

After loading data, the second Consolidate All with Data is much, much faster than the first Consolidate All with Data. This has never struck me as strange, but I was wondering about the actual mechanics of why it is so much faster. If we are running a CAWD the second time, then the rules should be running on all entities again and impacting the entities all the way up the hierarchy.

Is the second consolidation so much faster because the vast majority of balances are not changing, so the parent entity balances do not actually have to be written again? i.e., the base entities are recalculated, but when those balances are rolled up to a parent entity, HFM "sees" that the parent entity balances are unchanged and does not take the time to overwrite the balances that are already there?

Any insight would be appreciated.

Jim

Best Answer

  • Jeo123 Member Posts: 508 Gold Badge
    Accepted Answer

    The most likely reason is that records are held in RAM after they're pulled from the database. This makes any subsequent actions on those records significantly faster; however, there's a limit to how much can be held in memory. After a while, or after too many records are requested, an algorithm called FreeLRU is run (you can see this in the system messages if you're curious). This releases the least recently used records from memory, and if they're needed again, they have to be pulled from the database. There's a rough sketch of the idea at the end of this reply.

    For that reason, when benchmarking, I usually try to do comparisons using a freshly started datasource, since the initial pull will then come entirely from disk.

    Also, a little side suggestion if you haven't looked into it yet: you might want to consider an impact status rule that targets the corporate entities. This would let you get away with just running a Consolidate on the second pass instead of recalculating all the base entities. We use this approach when we calculate our tax amount in our forecast scenarios, because our tax department gives us a rate and we apply it to the forecast income for the total company. The actual amount needs to be recorded at a base entity, but it is based on the consolidated results. So we include a check: if the recorded tax number equals the expected tax number, we're good; if not, the rule impacts the status of the base entity, so only the entity where the tax is recorded gets impacted. (There's an illustrative sketch of this kind of rule at the end of this reply.)

    We then run a Consolidate so that the one entity runs the calculate routine, records the correct tax number based on the new income, and consolidates up the hierarchy. It's a lot faster than reconsolidating all 2000 entities with an All With Data.

    Not sure if you can use that approach for your situation, but figured I'd share.
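
    To picture the FreeLRU behavior, here is a rough, self-contained sketch of a least-recently-used cache. This is plain VBScript you can run with cscript, not HFM's actual code, and the entity names and cache size are made up; it just shows why a record that is already in memory is cheap to touch again, and why the record that has gone unused the longest is the one that gets dropped when memory fills up.

        ' Conceptual sketch only (not HFM source code): a tiny least-recently-used
        ' cache. Records already in memory are served fast; once the cache is full,
        ' the least recently used record is evicted and must be re-read from "disk".
        Option Explicit

        Const MAX_RECORDS = 3          ' pretend memory only holds three records

        Dim cache, lastUsed, clock
        Set cache = CreateObject("Scripting.Dictionary")     ' key -> record data
        Set lastUsed = CreateObject("Scripting.Dictionary")  ' key -> last access tick
        clock = 0

        Function GetRecord(key)
            clock = clock + 1
            If cache.Exists(key) Then
                WScript.Echo key & ": served from memory (fast)"
            Else
                If cache.Count >= MAX_RECORDS Then FreeLRU
                cache(key) = "data for " & key                ' simulate a database read
                WScript.Echo key & ": pulled from database (slow)"
            End If
            lastUsed(key) = clock
            GetRecord = cache(key)
        End Function

        Sub FreeLRU()
            ' Evict whichever key has gone the longest without being touched.
            Dim k, oldestKey, oldestTick
            oldestTick = clock + 1
            For Each k In cache.Keys
                If lastUsed(k) < oldestTick Then
                    oldestTick = lastUsed(k)
                    oldestKey = k
                End If
            Next
            cache.Remove oldestKey
            lastUsed.Remove oldestKey
            WScript.Echo oldestKey & ": evicted (least recently used)"
        End Sub

        ' First pass pulls everything from the database; repeats hit memory.
        GetRecord "EntityA"
        GetRecord "EntityB"
        GetRecord "EntityA"   ' fast: still in memory
        GetRecord "EntityC"
        GetRecord "EntityD"   ' cache is full: EntityB (least recently used) is evicted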
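
    And to make the impact status idea more concrete, below is an illustrative skeleton of that kind of rule in the VBScript style that HFM rules use. This is a sketch only, not our actual rule: the account, rate, and entity names (PretaxIncome, TaxRate, TaxExpense, TotalCompany, TaxBaseEntity) are placeholders for your own metadata, and you should verify the exact point-of-view string that HS.ImpactStatus accepts (in particular, whether it can target an entity) against the HFM Administrator's Guide for your release.

        Sub Calculate()
            Dim dExpected, dRecorded

            ' Placeholder logic: only check at the top entity, where the
            ' consolidated income for the total company is known.
            If HS.Entity.Member = "TotalCompany" Then
                ' Expected tax = consolidated pretax income * rate supplied by the tax department.
                dExpected = HS.GetCell("A#PretaxIncome.I#[ICP Top].C1#[None].C2#[None].C3#[None].C4#[None]") * _
                            HS.GetCell("A#TaxRate.I#[ICP None].C1#[None].C2#[None].C3#[None].C4#[None]")
                dRecorded = HS.GetCell("A#TaxExpense.I#[ICP Top].C1#[None].C2#[None].C3#[None].C4#[None]")

                ' If the recorded tax no longer matches the expected amount, impact
                ' the base entity that carries the tax entry so that a plain
                ' Consolidate recalculates just that entity and rolls it up.
                ' (Verify this POV syntax for HS.ImpactStatus in your release.)
                If dRecorded <> dExpected Then
                    HS.ImpactStatus "E#TaxBaseEntity"
                End If
            End If
        End Sub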

Answers

  • User_MB094 Member Posts: 23 Blue Ribbon

    Jeo,

    Thanks very much for your reply - it makes sense to me that those records have already been pulled into RAM.

    I also appreciate the suggestion of using impact status to target the second consolidation so that a CAWD isn't needed. I will give it a try!


    Jim
