
Consolidation Tuning for message - contains at least 100000 records

Mikkis
Mikkis Member Posts: 223 Blue Ribbon
edited Jul 9, 2018 2:38PM in Financial Consolidation

Hi

Which consolidation setting change has the most effect (based on your experience) for the issue where a top parent takes 30 seconds to consolidate? We have 4 such parents, and they take up to 2 minutes to consolidate, while the rest of the 500 entities take 4-5 minutes for one period. A few things I observed:

1. For some entities I see the message ".....contains at least 100000 records".

2. The CPU uses only 1 processor out of 16 when it reaches each of those 4 top entities. I think this is normal, as each one is the parent of another entity and has to wait until its children consolidate (a small sketch at the end of this post illustrates this).

3. RAM utilization is only 40% of the 48 GB for the whole consolidation.

I tried various permutations of the consolidation settings but didn't see much improvement, so I'm hoping someone can help using the above info. I have already tuned the rules and achieved a 90% improvement, so there is not much left to do there.
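To make point 2 concrete, here is a small illustrative sketch (hypothetical entity names, not HFM code) of why the entities at the top of the tree always run one at a time: a parent can only start once all of its children have finished, so the last few levels serialize no matter how many cores are free.

```python
# Illustrative only: a hypothetical 5-entity hierarchy, not an HFM API.
from collections import defaultdict
from functools import cache

# Each entity maps to its children; E1-E3 are base entities.
children = {
    "TOP": ["MID"],
    "MID": ["E1", "E2", "E3"],
    "E1": [], "E2": [], "E3": [],
}

@cache
def level(entity: str) -> int:
    """1 + the longest chain of descendants below the entity."""
    return 1 + max((level(c) for c in children[entity]), default=0)

# Entities in the same wave could consolidate in parallel; each wave
# must wait for the one before it, so the top waves run single-threaded.
waves = defaultdict(list)
for e in children:
    waves[level(e)].append(e)

for lvl in sorted(waves):
    print(f"wave {lvl}: {sorted(waves[lvl])}")
# wave 1: ['E1', 'E2', 'E3']  <- can spread across cores
# wave 2: ['MID']             <- one entity, one core
# wave 3: ['TOP']             <- one entity, one core
```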

Thanks in advance

Mikkis

Answers

  • Chandra Bhojan-Oracle
    Chandra Bhojan-Oracle Posts: 198 Employee
    edited Jul 6, 2018 9:17AM

    Below are the 3 settings that will help you get better performance if they are tuned according to the RAM size; there are a few more as well. A hedged example of applying them follows the list.

    MaxNumDataRecordsInRAM

    MaxDataCacheSizeinMB

    MinDataCacheSizeinMB
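
    A minimal sketch of setting these, assuming they are registry DWORDs under the classic HFM server key. That path comes from older releases; in 11.1.2.4 confirm the location and sensible values against the webcast below before touching anything, and treat the numbers here as placeholders, not recommendations.

    ```python
    # Hedged sketch: the key path and values are assumptions, not advice.
    import winreg

    HFM_KEY = r"SOFTWARE\Hyperion Solutions\Hyperion Financial Management\Server"

    settings = {
        "MaxNumDataRecordsInRAM": 30_000_000,  # placeholder
        "MaxDataCacheSizeinMB": 6_000,         # placeholder
        "MinDataCacheSizeinMB": 2_000,         # placeholder
    }

    with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, HFM_KEY, 0,
                            winreg.KEY_SET_VALUE) as key:
        for name, value in settings.items():
            winreg.SetValueEx(key, name, 0, winreg.REG_DWORD, value)
            print(f"set {name} = {value}")
    ```

    The HFM services typically need a restart before changes like these take effect.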

    Please see the webcast below, which should help further.

    ADVISOR WEBCAST: HFM Performance Tuning in 11.1.2.4 held on January 30, 2018 (Doc ID 2339020.1)

    Thanks,

    Chandra

  • Mikkis
    Mikkis Member Posts: 223 Blue Ribbon
    edited Jul 6, 2018 11:16AM

    Fantastic, Chandra. I will take a look at your video and see if I can use the suggestions from it.

  • CBarbieri
    CBarbieri Member Posts: 1,011 Gold Trophy
    edited Jul 9, 2018 2:20PM

    1) For the 100,000 records message: this is just a message, not really a warning. It goes back to the days of 32-bit HFM, when it warned that the application might crash if it ran out of memory. HFM 4.0 introduced a paging mechanism that ensured the application would not crash, and HFM 11 introduced 64-bit, which effectively eliminated the possibility of running out of memory. You should just monitor the messages to look for runaway rules, which would be evident from sudden, significant increases in the record count (see the sketch after this list). I would worry if you have subcubes exceeding 500,000 or 1 million records, but if the data is legitimate, you can always assign more memory by increasing the settings Chandra shared.

    2) Is this 11.1.2.4 or earlier? 11.1.2.4 is far better at using multiple threads/cores through the whole consolidation process.

    3) Memory is rarely the bottleneck, and rarely the cause of performance problems. HFM rules, application design, database tuning, and infrastructure are the leading contributors to performance, in that order. I would say, however, that using 60% of 48 GB of RAM seems like an extreme amount of data, so something about your application design generates a lot of records. Most HFM applications will never use more than 8 GB of RAM. I have seen ones that use more than 40 GB, but that's like saying I have seen race cars exceed 220 MPH; most drivers never will.
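
    A minimal sketch of the monitoring described in point 1, assuming the system messages have been exported to a text file and contain the literal phrase "contains at least N records" (both assumptions; adjust the pattern to whatever your log actually shows):

    ```python
    # Hedged sketch: flag subcubes whose record count crosses the worry zone.
    import re

    THRESHOLD = 500_000  # the "start worrying" zone from point 1
    PATTERN = re.compile(
        r"(?P<subcube>\S.*?) contains at least (?P<count>\d+) records")

    def flag_large_subcubes(log_path: str) -> None:
        with open(log_path, encoding="utf-8") as fh:
            for line in fh:
                m = PATTERN.search(line)
                if m and int(m.group("count")) >= THRESHOLD:
                    print(f"check rules touching {m.group('subcube').strip()}: "
                          f"{m.group('count')} records")

    flag_large_subcubes("hfm_system_messages.txt")  # hypothetical export file
    ```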

    - Chris

  • Mikkis
    Mikkis Member Posts: 223 Blue Ribbon
    edited Jul 9, 2018 2:38PM

    Thanks, Chris, for your inputs. We are using 11.1.2.4, which is much faster than 11.1.2.1 when it comes to consolidations. I investigated the RAM utilization further, and though I wrote 40%, the HFM process is only taking 10-12 GB of RAM, which may be on the high end, but I guess we can live with that. The performance improvement I have seen so far is mainly due to the rules. I wanted to see if I can do any tuning on the infra side, just for the sake of it.

    Mikkis

This discussion has been closed.