11 Replies Latest reply on Jul 23, 2014 11:13 PM by DanPressman

    Essbase ASO Load Options


      We have an ASO cube. We clear data every day and do a fresh load every morning. The data source is SQL, and the query usually pulls in 15 million rows a day. We load during off hours and the loads complete fairly quickly: about 30 minutes for the dimension builds and the data load (again, the 15 million rows include Missing and zero values).


      There was a modification made to the table and to the SQL (per a business requirement), and the row count went from 15 million to 100 million. The data load still completes, but where it previously took 30 minutes it now takes 2 hours, which I suppose is reasonable considering it took 30 minutes to load 15 million rows.


      The issue is that if the load has to be kicked off during business hours, it takes 2 hours to complete, and as a result the cube is unavailable for those two hours (we kick all users out and disable connects during a data load). I am looking for options/suggestions/ideas for making this better in case the load needs to be kicked off mid-day. Are there other ways the data load time can be reduced? We have already optimized the drivers and so on, so I am looking at this from a cube design perspective, not an infrastructure perspective.




        • 1. Re: Essbase ASO Load Options

          I could suggest an option which we have implemented


          We always have a backup application on which we perform all of our build and load processing. Users cannot access this application


          Once the processing on the backup application is complete, we disable connects, log off users, and then swap the applications


          Let's say we have ASOSamp (Main Application)

          ASOSampB (Backup application)


          1. All the activities will happen on ASOSampB

          2. Disable connects, log off users, kill sessions

          3. Rename ASOSamp -> TMPASO

          4. Rename ASOSampB -> ASOSamp

          5. Rename TMPASO -> ASOSampB


          Our entire process takes around 2 hours, but the actual downtime for users is only 5-10 minutes
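
          Roughly, the swap looks like this in MaxL - a sketch only, run from a logged-in MaxL session (TMPASO is just a temporary name, as in the steps above):

              /* all build/load work has already happened on ASOSampB */
              alter application ASOSamp disable connects;
              alter system logout session on application ASOSamp force;
              alter system logout session on application ASOSampB force;

              /* the three-way rename */
              alter application ASOSamp rename to TMPASO;
              alter application ASOSampB rename to ASOSamp;
              alter application TMPASO rename to ASOSampB;

              alter application ASOSamp enable connects;

          Users reconnect to ASOSamp as soon as connects are re-enabled, so the downtime is only the rename window.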


          Hope it helps




          ORACLE | Essbase

          • 2. Re: Essbase ASO Load Options
            sunil k



            If I understand correctly, it is dimension build + data load times that you are trying to optimize and not just the data load.

            Technologically they are two different processes, with different optimization strategies.


            A dimension build is definite downtime for users and is generally avoided on main cubes. What Amarnath suggested is a very good way to minimize downtime for the dimension build.



            I am not sure whether you can carry out your data load independently of the dimension build, but if you can, try loading your database from files using a load buffer.


            This is what it would roughly look like

            - Create a text file from the database [this may or may not be a time saver depending on the network speed; you may continue to use direct SQL]

            - Load the data file into a buffer

            - Clear the data

            - Commit the buffer to the cube


            You may be able to minimize the time for which data is cleared from the app.
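
            Roughly, steps 2-4 look like this in MaxL (a sketch only - the app/database name and file paths are placeholders, and exact clear syntax varies a bit by version):

                /* 1. Initialize an ASO data load buffer */
                alter database ASOSamp.Sample initialize load_buffer with buffer_id 1;

                /* 2. Load the extract file into the buffer - users can still query */
                import database ASOSamp.Sample data from data_file '/exports/daily.txt'
                    to load_buffer with buffer_id 1 on error write to '/exports/daily.err';

                /* 3. Clear the data, then commit the buffer - the cube is only empty
                   for the short window between these two statements */
                alter database ASOSamp.Sample reset;
                import database ASOSamp.Sample data from load_buffer with buffer_id 1;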





            • 3. Re: Essbase ASO Load Options



              That sounds like a plan, except how would you then manage the provisioning for the application? Do you provision each time the application is renamed from ASOSampB to ASOSamp?



              • 4. Re: Essbase ASO Load Options

                Hi tedd,


                Yes, as Amarnath said, whenever an application is renamed or recreated, you have to take the pain of re-provisioning it




                • 5. Re: Essbase ASO Load Options

                  We never had a security issue, as we have a virtual cube that XREFs the data from the ASO and Planning applications; we provisioned only that virtual cube, so users never see those ASO applications


                  In your case, you can provision both applications with a metadata filter, and users will see the two runs of data (the current refresh and the previous refresh). This has always helped us, as it lets users understand what data has changed. Or, you can de-provision and re-provision each time. But I would suggest the former




                  ORACLE | Essbase

                  • 6. Re: Essbase ASO Load Options

                    A similar solution to the above, but MUCH FASTER!

                    Basically you start the same way: with a work-in-process (WIP) cube where you do your metadata rebuilds and your data loads. The trick is to ALWAYS name your databases the same - let's say "Fido" for now. In general, if the current app (I actually refer to it as the "published" app) is named Doggy, then the WIP app would be named DoggyW. Now, INSTEAD of renaming the cubes using MaxL (which causes additional problems, as the security follows the renamed cube), rename at the operating system level.


                    So after building the WIP cube, kick all users out of BOTH apps and rename the subdirectories:


                        Ren Doggy\Fido              Doggy\Fido_hold

                        Ren DoggyW\Fido             Doggy\Fido

                        Ren Doggy\Fido_hold         DoggyW\Fido          <<< This line was corrected 7/23/14 at 5:45 CDT thanks to the sharp eyes of Celvin Kattookaran and Teddd!




                    Then start up Doggy - this results in less than 3 SECONDS of downtime for your users.


                    In fact, in case there is ever an issue with the daily load, I put two more cubes in my rotation:


                        Ren Doggy\Fido              Doggy\Fido_hold

                        Ren DoggyW\Fido             Doggy\Fido

                        Ren Doggy2\Fido             DoggyW\Fido

                        Ren Doggy1\Fido             Doggy2\Fido

                        Ren Doggy\Fido_hold         Doggy1\Fido


                    Now you always have yesterday's cube and the day before's to "retreat" to in case there were problems.


                    This works on Windows and UNIX for ASO and for BSO.  Essbase does not know it happened.  I have used it with every version since version 5.





                    There is one slight additional task required if you have partitions on these cubes. Move the Fido.ddb file (the .ddb file is the one that holds the partition definitions) BACKWARDS:


                        Ren DoggyW\Fido\Fido.ddb            DoggyW\Fido\Fido_hold.ddb

                        Ren Doggy\Fido\Fido.ddb             DoggyW\Fido\Fido.ddb

                        Ren DoggyW\Fido\Fido_hold.ddb       Doggy\Fido\Fido.ddb

                    • 7. Re: Essbase ASO Load Options

                      ADDITIONAL SUGGESTION:

                      Speed up your data load by converting it to Essbase native format (such as you see in a non-columnar export).


                      I showed some very simple SQL that can be added above and below your current extract (using SQL Server syntax). It reduced an extract file containing 1.34 BILLION rows (4 billion data cells, 20 dimensions, total size 275 GB) down to just 31.8 GB. The load took only 1/10th as long!


                      I presented this on slide #52  in this KScope 13 presentation ODTUG : ODTUG Technical Resources
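
                      To give a rough idea of why the file shrinks (illustrative member names only - see the slides for the real SQL): a columnar extract repeats every member on every row for a single value, while a native-style file factors the repeated members out:

                          Columnar - one cell per row:
                              "Jan" "Actual" "Sales" "100-10" "New York"  678
                              "Feb" "Actual" "Sales" "100-10" "New York"  645

                          Native-style - shared members written once, values run together:
                              "Actual" "Sales" "100-10" "New York" "Jan" 678 "Feb" 645

                      With 20 dimensions the repeated member names dominate the file size, which is where most of the 275 GB -> 31.8 GB reduction comes from.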

                      • 8. Re: Essbase ASO Load Options

                        Hi Dan


                        That is a really good solution. I stumbled upon your statement

                        "Speed up your data load by converting it to Essbase Native format (Such as seen in a non columnar export)"

                        How do we convert it to the non-columnar format, considering we have a staging environment that does all the transformation and then loads the data to the ASO cube?




                        ORACLE | Essbase

                        • 9. Re: Essbase ASO Load Options

                          Amarnath, you did not read the last line of the post: "I presented this on slide #52 in this KScope 13 presentation ODTUG : ODTUG Technical Resources". Just click the link and look at slide #52.

                          • 10. Re: Essbase ASO Load Options

                            Thanks for the suggestions, Dan. But if I do this, in the end I will be left with one cube, Doggy\Fido. If I have to do the same exercise the next day, I will have to manually create the DoggyW\Fido cube, which defeats the point of automating the script below.

                            DanPressman wrote:


                                Ren Doggy\Fido                Doggy\Fido_hold

                                Ren DoggyW\Fido             Doggy\Fido

                                Ren Doggy\Fido_hold         Doggy\Fido



                            • 11. Re: Essbase ASO Load Options

                              Teddd you and Celvin Kattookaran caught my typo!!!  It should be as follows:


                                  Ren Doggy\Fido                Doggy\Fido_hold

                                  Ren DoggyW\Fido             Doggy\Fido

                                  Ren Doggy\Fido_hold         DoggyW\Fido

                              Celvin asked a great additional question (privately) that should be useful to everyone:

                                   What happens to the metadata that I built yesterday? DoggyW, where we did the metadata and data load yesterday, is now Doggy.


                                   Do you suggest copying (at the OS level) Doggy to DoggyW and then updating metadata on that one every day, or a MaxL copy?


                              You always do all of your MaxL work - metadata rebuilds and data loads - in DoggyW. If you are using only the two-application rotation (Doggy and DoggyW), then I like to leave DoggyW as it was before the rebuild and reload, so I can retreat to it if a problem is found with the new day's load. (In fact, in some cases I have built the whole renaming process into a second procedure which can be executed after a senior user has checked the cube. So DoggyW is rebuilt and reloaded but not "published" automatically - the second process does that.)

                              In any case, DoggyW's metadata and data are one day old. If your process does NOT rebuild metadata from scratch, but assumes it starts from the prior metadata state, then you MUST start your rebuild with a copy of Doggy\Fido.otl to DoggyW\Fido.otl.

                              Similarly for your data. If your data load is incremental, DoggyW is one day older than the current data published in Doggy. So you will either have to repeat the load of yesterday's data before you load today's increment into DoggyW, or export the data from the published Doggy and load it into DoggyW.
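
                              The export-and-reload option might look like this in MaxL (a sketch only - the file paths are placeholders):

                                  /* pull current level-0 data out of the published cube... */
                                  export database Doggy.Fido level0 data to data_file '/exports/fido_l0.txt';

                                  /* ...and load it into the WIP cube through a buffer */
                                  alter database DoggyW.Fido initialize load_buffer with buffer_id 1;
                                  import database DoggyW.Fido data from data_file '/exports/fido_l0.txt'
                                      to load_buffer with buffer_id 1 on error write to '/exports/fido_l0.err';
                                  import database DoggyW.Fido data from load_buffer with buffer_id 1;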

                              You could prepare DoggyW for the next update immediately after the publish rename, but as I said earlier, I like to keep it around to retreat to in case of problems. Frankly, that is one of the reasons I like the 4-cube round-robin approach I mentioned above, with Doggy, Doggy1, Doggy2, and DoggyW (I refer to them generically as published, one day old, two days old, and WIP). This allows you to do any prep work right after the publish process, so you are all ready to go when the next update request comes in.

                              Of course, intelligent use of incremental loads and slices really does obviate the need for much of this. And when you throw in the new "renegade members" capability, you will never have a problem with downtime - that is, if you can live without any new accounts/departments, etc., during the daytime; they will just go to the renegade member created on the fly.
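
                              For reference, the incremental-slice approach is only a couple of MaxL statements (a sketch - the names and paths are placeholders, and slices require a recent ASO version):

                                  /* load only today's increment, as its own slice, with users still on */
                                  alter database Doggy.Fido initialize load_buffer with buffer_id 1;
                                  import database Doggy.Fido data from data_file '/exports/incr.txt'
                                      to load_buffer with buffer_id 1 on error write to '/exports/incr.err';
                                  import database Doggy.Fido data from load_buffer with buffer_id 1 create slice;

                                  /* periodically fold the incremental slices into the main slice */
                                  alter database Doggy.Fido merge all data;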