Dear experts, thank you very much for your attention to my problem.
Jadu2, yes, the clearance is partial; I clear only the current month:
- clear data: about 4 hours 10 min
- build dimensions: less than 1 minute
- load data: less than 1 minute
- calc all: about 6 hours
garycris, thank you for your suggestion, I will test it.
You clear the current month. Is your Time dimension dense or sparse? I'll guess dense, and because of that you are hitting every block in the database. If you get a little creative, you might be able to get that part down to a few minutes: change Time to sparse; or export level-zero data, clear the database, reload, then clear the month; or export level-zero data, clear the database, and exclude the current month in a load rule when loading the data back in. Since you use a CALC ALL, I'll bet aggregating the level-zero data up will be quicker than your current time.
As a test, copy the cube, export level-zero data, then clear your test cube, load the level-zero data back in, and run your CALC ALL. I'm going to guess your calc will be much faster than what you currently have.
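If it helps, that test can be scripted in MaxL, roughly like this. This is only a sketch: the Sample.Basic application/database name, the credentials, and the file name are placeholders, and the exact import grammar can vary by release (this is from memory against the 7.x grammar).

```
/* MaxL sketch: preserve level-0 data, clear, reload, recalculate */
login admin password on localhost;

/* export only the level-0 blocks */
export database Sample.Basic level0 data to data_file 'lev0.txt';

/* clear all data, keeping the outline */
alter database Sample.Basic reset data;

/* load the level-0 data back in */
import database Sample.Basic data from data_file 'lev0.txt' on error abort;

/* run the default calc (CALC ALL unless overridden) */
execute calculation default on database Sample.Basic;

logout;
```

Time the import and the calc separately so you can see where the win comes from.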
I would say go the ASO route, but since you are on 7.1.2 it's harder than if you were on 11x.
Yes, Glenn's suggestion will suffice.
When you clear data partially and then run the load and calculation, there is a greater chance of the database getting fragmented.
Also, judging by your timings, I guess you are clearing all levels of data for the current month (Time being dense, I assume).
Creating a block and loading data is cheaper for Essbase than reading existing blocks and loading data into particular cells.
So it's better to change Time to sparse, so that when you clear the data for a month, Essbase removes all blocks pertaining to that month.
The other approach is: export level-0 data, clear the cube, load the level-0 data back, run the dimension build, load the current month's data, and perform a roll-up calc.
This will be a much better option than running partial batches.
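The batch above could look roughly like this in MaxL. Again only a sketch: the App.Db name, file names, rules-file names, and the RollUp calc-script name are all placeholders, and the import syntax differs slightly between releases.

```
login admin password on localhost;

/* 1. preserve level-0 data */
export database App.Db level0 data to data_file 'lev0.txt';

/* 2. a full clear is cheaper than a slow partial clear */
alter database App.Db reset data;

/* 3. dimension build from a rules file */
import database App.Db dimensions
    from data_file 'dims.txt' using rules_file 'dimbuild.rul'
    on error abort;

/* 4. reload the preserved level-0 data, then the current month */
import database App.Db data from data_file 'lev0.txt' on error abort;
import database App.Db data
    from data_file 'month.txt' using rules_file 'dataload.rul'
    on error abort;

/* 5. roll up with a calc script instead of CALC ALL */
execute calculation App.Db.RollUp;

logout;
```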
In either case you don't need to perform a CALC ALL; as I mentioned before, you can identify the non-aggregating dimensions and remove them from the calculation, since they don't need to roll up.
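A roll-up calc script along those lines might look like this. The dimension names are invented for illustration; substitute your own dense dimensions with formulas and your aggregating sparse dimensions, and simply leave out dimensions that don't roll up.

```
/* Roll-up script: calculate only what needs to aggregate.
   Non-aggregating dimensions (e.g. a Scenario-type dimension)
   are deliberately left out. */
SET UPDATECALC OFF;

/* dense dimensions with member formulas or rollups */
CALC DIM ("Accounts", "Time");

/* aggregating sparse dimensions only */
AGG ("Product", "Market");
```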
But I don't think having 12 dimensions alone makes the cube eligible for migration to ASO. You need to analyze further whether ASO will suit your requirements and whether your structure will meet ASO specifications.
Yes, the Time dimension is dense.
"As a test, copy the cube, export level zero data then clear your test cube, load the level zero data back in and run your calc all. I'm going to guess your calc will be much faster than what you currently have." - I did this; the CALC ALL operation takes about 1 hour 10 min (about 5 times faster than in my daily loading).
Thank you very much, dear gurus. First I'll tag the Time dimension as sparse and try to load the database.
^^^ Did you ever get this resolved? I.e., you had a database that calculated in an acceptable time, then something happened, and now it's slow? Yes, I know, you now have an alternate way of calculating the database and it provides a better calc time, but that doesn't get to the root cause of what was presumably a change in behavior. I'd personally want to know what caused that change.
This problem appeared about 1 week ago; I can't understand the reason. I did a restructure, I did a re-export, no effect.
The result is:
- Clear partial data: about 5 hours 20 minutes
- Building dimensions: about 5 minutes
- Import data: about 1 minute
- Calc all: about 2 hours
Total time: ~7.5 hours, which is acceptable.
So, what are the possible reasons?