>>I could not get the BSO data to aggregate on the fly after I logged into the ASO cube to perform the retrieve.
^^^This is what I observed. It would sometimes (actually, most of the time) aggregate -- other times it would not. Super, super weird, with no real rhyme or reason to it. Or maybe the better explanation is that none of us could explain it.
I think that in a *batch* environment, with some (what "some" means is not totally clear to me, but I'll bet Glenn-my-big-brother-but-not-really can weigh in on this) time for ASO to do whatever ASO is doing (and maybe with a forced physical or logical merge of the slices), the BSO to ASO transparent partition approach works really well. It would be sort of interesting to know what the status was of all of the slices when you didn't get the results you expected.
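For what it's worth, that forced merge can be kicked off from MaxL as part of the batch. A sketch only -- the app.db name is a placeholder, not from the original post:

```
/* Physically merge all incremental slices into the main database slice.
   "asoApp.asoDb" is a made-up name - substitute your own. */
alter database asoApp.asoDb merge all data;

/* Or, less drastically, merge just the incremental slices
   into a single incremental slice. */
alter database asoApp.asoDb merge incremental data;
```

Running one of these (and letting any pending slice activity settle) before the retrieve window is the sort of "some time for ASO to do whatever ASO is doing" I mean.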
Again, I am so not an expert on this it is sort of funny, but what you outline would seem to be a really reasonable approach. Pity it seems to be so fraught with caveats.
You can definitely get the BSO level-zero data aggregated on the fly with an ASO transparent partition. You may, as Evgeniy noted above, have accidentally partitioned all levels, e.g. with @DESCENDANTS or similar. You must partition only the level-zero members for this to work (although technically you can also get away with an upper level in a dense BSO dimension that is dynamically calculated).
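To illustrate the area spec (this is a sketch, not a tested script -- the app/db names, host, and credentials are all placeholders, and your dimension names will differ), the trick is to use @LEVMBRS(dimension, 0) rather than @DESCENDANTS so only level-zero members land in the partitioned area:

```
/* Transparent partition sketch: BSO level-zero data as the source,
   ASO cube as the target users retrieve from.
   All names below are hypothetical. */
create transparent partition bsoApp.bsoDb
  area '@LEVMBRS("Market",0), @LEVMBRS("Product",0)'
  to asoApp.asoDb at localhost
  as admin identified by 'password'
  area '@LEVMBRS("Market",0), @LEVMBRS("Product",0)';
```

If the area spec pulls in upper-level members as well, the aggregate-on-the-fly behavior falls apart, which is consistent with what Evgeniy described.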
The problem I hit with this technique (in 18.104.22.168, anyway) is that, as Glenn said, there is the potential for an awful lot of data to be needed from BSO for any given retrieve - and there seems to be a hard limit in the product (see the MAX_REQUEST_GRID_SIZE setting) that I'm not sure can be worked around.
Fwiw, the BSO to ASO back to BSO system I described worked (mostly) in 22.214.171.124 but was quite problematic in 126.96.36.199. I didn't work on that aspect of the project (quite luckily for all involved, I suspect), but the partition back to BSO of upper-level members (it was a pretty cool idea) was largely abandoned because of those issues. Hmm, I wonder if playing with MAX_REQUEST_GRID_SIZE and MAX_RESPONSE_GRID_SIZE would have helped.
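For anyone who wants to experiment, those limits live in essbase.cfg. The values below are purely illustrative (not recommendations, and the app/db names are made up):

```
; essbase.cfg - illustrative entries only.
; Syntax: SETTING [appname [dbname]] n   (n = cells per grid)
MAX_REQUEST_GRID_SIZE  asoApp asoDb 500000000
MAX_RESPONSE_GRID_SIZE asoApp asoDb 10000000
```

A restart of the application is needed for config changes to take effect, and whether raising these actually gets around the partition-retrieve limit I hit is exactly what I never tested.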
You are making this way too difficult.
As Cameron said, why do you think it will be faster for data to get sent to BSO, then moved to ASO, where it still has to be added as a slice?
Let alone then making the trip back to BSO?
The answer is to investigate why sends to ASO are so slow, and that answer is simple: your sends are of REPLACEMENT data. ASO, however, must first convert this data to an incremental slice versus what was originally there. To do that it must query the ASO cube. That is where your time is being lost.
I am willing to bet your ASO cube is not in RAM. Is the .dat file bigger than the sum of the light blue portion of the memory tab in Resource Monitor when all cubes except this one are loaded (and have been queried at least once)?
Also, do you have aggregations on your ASO cube? I bet if you cleared the aggregations, all of the sends would be much faster. This will also make the .dat file smaller.
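If you want to test that theory, the aggregate views can be dropped with one MaxL statement (the app.db name below is a placeholder, and note this removes the views, not the input-level data):

```
/* Drop all aggregate views from an ASO database;
   input-level data is left intact. "asoApp.asoDb" is hypothetical. */
alter database asoApp.asoDb clear aggregates;
```

Compare send times before and after - if the aggregations were the drag, the difference should be obvious.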
Finally, if you design your ASO cube well, you will not need the aggregations to make your reports faster.
More memory, cleared aggregations, and a cleaned-up design - hmmm, sounds good to me. A lot easier than keeping those two cubes in sync and the partitions the same.
How do you design your ASO cube for faster performance? Well, there is this book you can buy ...
It's a side issue Dan, but I'm not convinced that aggregations make much difference to a spreadsheet send, except 'indirectly' if they significantly blow out the .dat file.
As I'm sure you know, Essbase doesn't reprocess the existing aggregate views if incremental data is loaded (although it does if you don't create a slice). It seems to me that Essbase has an algorithm for deciding whether aggregate views need to be created on an incremental slice - I've noticed that some incremental loads increase the number of aggregate views and some don't. For a spreadsheet-type send (a few thousand input cells) I'd be surprised if views are built, and if they are I'd be surprised if they take very long. Although I'm happy to be proved wrong. Maybe if the new slice is auto-merged with larger existing slices...
Anyway, I agree getting the whole thing into memory would be nice.
No, it does not reprocess the original aggregate views, but it does have to build aggregate slices. You can see that when you query the database slices: it lists the view_id and the slice_id, and there will be corresponding slice_ids for the various view_ids.
And I agree that building an aggregate of a small slice is not a problem. But when you have lots of users submitting lots of new slices, and when you have small slices that get combined into the existing slices (remember the 50k cell number - at least I think it was 50k, or was it 500k?), then you do have an issue, because during the de facto slice merge I believe the cube goes dark and queries are frozen until after the merge. Lots of users and lots of sends: that is where the problem comes in, and I believe it gets compounded by aggregate slices merging on the fly.
Neat, I didn't know that the 'query database' correlated slice_ids with view_ids.
Shutting off the cube during a 'background' merge is painful, because you really have no control over, or idea of, when it might happen. It'd be nice if that feature could be disabled.
This was our experience in 188.8.131.52 recently. We could not bring across a large enough area to do a high-level retrieve in an ASO cube, so we went with BSO, which actually wasn't an issue in our scenario.