I had a new look at my paper on the subject. Indeed, the paper does contain the sentence "Aggregating level-0 data input does not overwrite upper-level data input." I had to think hard about why I included it, because it is not correct. As far as I remember, it was inserted to reduce the complexity of the example.
In reality we had a situation like yours, but for the sake of the paper I simplified the process.
As far as I remember, we did not load a level-0 data file to set all level-0 members to 0 and then aggregate. Instead, we exported the level-0 data and copied the outline. Into the copy we loaded that data and aggregated it with SET AGGMISSG ON. We then exported all data in column format and loaded it back onto the original cube with the flip-sign option. All intersections with correct aggregates end up as 0; the others are inputs. From there you can follow the process from step 5 to get rid of the 0 values.
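The arithmetic behind the flip-sign step can be sketched outside Essbase. This is an illustrative Python snippet, not Essbase code, and the member names and values are hypothetical: every aggregate is recomputed from level-0 data, then the negated recomputed values are added to the original cube, so intersections whose stored value equals the true aggregate cancel to 0 and any nonzero remainder is an upper-level input or adjustment.

```python
# Stored values in the original cube (hypothetical members/values).
original = {
    "Q1": 110.0,                              # upper level, includes a +10 manual adjustment
    "Jan": 30.0, "Feb": 40.0, "Mar": 30.0,    # level-0 inputs
}

# The same intersections recomputed purely from level-0 data in the copied outline.
recomputed = {
    "Q1": 100.0,                              # 30 + 40 + 30, the "correct" aggregate
    "Jan": 30.0, "Feb": 40.0, "Mar": 30.0,
}

# "Load with flip-sign": add the negated recomputed values onto the original.
result = {m: original[m] - recomputed.get(m, 0.0) for m in original}

print(result)  # Q1 keeps the 10.0 adjustment; everything else cancels to 0.0
```

Anything still nonzero after this subtraction is exactly the upper-level input the process is trying to isolate.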
To capture the upper level input without underlying data, you need to redo the process as described in the paper.
I hope you do not have too many cubes to "treat" this way.
When you run the process, does it spike the CPU on the ESSSVR process or on the essmsh process? What is the total number of combinations you are pulling in the MDX statement? I have noticed that more recent Essbase releases have a much greater capacity for large sets in MDX.
Also, it might be helpful to see your MDX statement.
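As a rough way to answer the "total number of combinations" question: the tuple count of a CrossJoin grows multiplicatively across dimensions, so it can be estimated by multiplying the member counts per dimension. The dimension names and counts below are purely hypothetical:

```python
# Estimate the tuples an MDX query returns when it crossjoins
# member sets from several dimensions (counts are made up).
members_per_dimension = {
    "Account": 500,
    "Entity": 200,
    "Period": 12,
    "Scenario": 2,
}

total_combinations = 1
for dim, count in members_per_dimension.items():
    total_combinations *= count

print(total_combinations)  # 500 * 200 * 12 * 2 = 2,400,000 tuples
```

Even modest per-dimension sets multiply into millions of tuples, which is why the set size matters as much as the query text itself.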
Thanks for all this. I see how your method could work in our case, but since we are really struggling to get data out of the cube (that was the original issue in this thread), it's going to be almost impossible to apply. It's a neat method, and I'll make sure to keep it in mind if I face the same issue on a smaller cube.
What we decided is to keep a copy of that large cube, partitioned at every level with the new cube, for historical years (only historical data is poorly aggregated/has adjustments at every level). Not the prettiest design for a brand-new cube, but it seemed the best approach overall.
The problem is clearly the part that writes to the text file. The MDX statement runs in only 2 minutes, as I can see in the log; then it struggles to get the result out into a text file, and a few minutes after the MDX statement completes, when it should be writing to the text file, we get the memory allocation error.
Try decreasing the ASO database Pending Cache Size
To do this:
1. In Essbase Administration Services, right-click the application.
2. Choose Edit Properties.
3. Under the General tab, modify the Pending Cache Size Limit.
4. Stop and start the application for the changes to take effect.
I appreciate that you're trying to help, but did you even read the question in the OP? We're talking about a BSO database.