We want to move level 0 data from a BSO Planning cube to an ASO reporting cube immediately after calculations have been executed. We would need something like @XWRITE with an ASO target, but unfortunately that is not possible.
So we want to use CDF_RunExe.jar with the class com.oracle.essbase.cdf.RunMAXL to run MaxL from a calc script or business rule. With this, it should be possible to execute a MaxL script from a calc script and thus replicate data via partitioning. Unfortunately, it doesn't work.
The calc script is executed successfully, but the MaxL script is not executed. In the Essbase log, we get the following entry:
Operating System: SunOS
Executing null null null null
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
[Tue Jan 3 10:16:07 2012]Local/ESSBASE0///62/Info(1051160)
Received Validate Login Session request
Maybe the permission settings for CDF_RunExe.jar are not correct? I am sure it can only be a small mistake.
I only found this CDF_RunExe.jar, and you don't actually need to register a CDF for it. You just call RUNJAVA in a calc script:
public static void main(com.hyperion.essbase.calculator.Context ctx, String[] args)
Executes an Essbase MAXL script on the server.
ctx - The Essbase calculator context. This is passed automatically by the Essbase calculator.
args - The name of the MAXL script.
RUNJAVA com.oracle.essbase.cdf.RunMAXL myScript;
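Pieced together from the signature above, a complete calc script wrapper would be just the one RUNJAVA statement (the script name "myScript.mxl" is a placeholder, and how RunMAXL resolves relative paths is an assumption - check the class documentation):

```
/* Minimal calc script: hand one MaxL script to the RunMAXL class.
   The script name below is a placeholder; path resolution rules
   for RunMAXL are an assumption here. */
RUNJAVA com.oracle.essbase.cdf.RunMAXL
    "myScript.mxl";
```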
As I wrote above, I tried this out. If you have code for a CDF that runs a MaxL script from a calc script, it would be great if you shared it with me!
Thank you for sending me the link, but the functions listed there (export, financial, date, data, email alert, etc.) do not suit my case. I have to execute a partition replication or run MaxL from a calc script.
How to replicate a partition from a Planning form.
-> How to run MaxL from a calc script or business rule.
We are getting ready to do real-time aggregation using a round trip from Planning to ASO (replicated) and back (transparent). The partition definitions are a bit hairy ... think carefully about what is pushed from BSO to ASO (level 0 of all dimensions; no overlap from different sources; leave out accounts, etc., that you don't care to aggregate) and what is pulled back on retrieves (everything that isn't pushed).
We've done a proof of concept of this with a VERY large and complex Planning app (workforce, currency, big data set). It is really exciting to see real-time full aggregation.
I solved the puzzle using Dragos Matachescu's HyperionRuntime class and got his @JRuntime_exec CDF working. Complete instructions, source code, and documentation are available at his site http://www.dragosmatachescu.com/
The HyperionRuntime.z_exec method returns a string of all the chatter from the shell (nice, but useless here). In a calc script, the calculation needs a number, so I used Dragos's @JString_StringToInt CDF to convert it. The number's value doesn't actually matter; I just need the exec function to return some numerical value for the calc script to validate.
Next, I figured out that I need to calculate a single cell in order to make the function run only one time. Without a really narrow FIX, it will thrash away running the essmsh script over and over and over. FIX on a single L0 member in every dimension but one, then calculate any single member of the remaining dimension in the calc block. Just pick any single cell that you don't care about. I chose Actual->Final->BegBalance->No (Project, Geo, Entity, Account)->HSP_InputValue->Year.
So now I have a PushPart.csc that looks like this:
, "No Project"
, "No Geography"
, "No Entity"
, "No Account"
The first argument to @JRuntime_exec is an array that invokes the shell; in this case, I passed the .mxl script on the command line.
The second argument is an array of instructions to be executed in the shell. If I hadn't used the script, I could have passed each line to MaxL this way. Use @LIST to build the array and "\n" to end each line. I just used "exit;\n" to make sure that the shell closes (i.e., insurance in case the exit statement were left out of the .mxl script).
The third argument is a separator for the strings returned from the shell. The return is pretty much ignored / lost in the calc script, so instead I SPOOL ON TO <logfile> in the .mxl script to have visibility into what is happening in the shell.
The last argument is the directory to start the shell in.
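Putting those four arguments together, here is a sketch of what a script like PushPart.csc could look like. Everything in it - member names, paths, the shell invocation, and the exact CDF signatures - is an assumption pieced together from the description above; Dragos's documentation has the authoritative signatures:

```
/* PushPart.csc - SKETCH ONLY; all names and paths are placeholders.
   Fix on one L0 member in every dimension but one so the assignment
   below fires exactly once. */
FIX ("Actual", "Final", "BegBalance"
   , "No Project"
   , "No Geography"
   , "No Entity"
   , "No Account")
   /* the cell's value is meaningless; the assignment exists only to
      force the exec to run and return something numeric */
   "HSP_InputValue" = @JString_StringToInt(
      @JRuntime_exec(
         /* arg 1: shell invocation, .mxl script on the command line */
         @LIST("/bin/sh", "-c", "essmsh /scripts/PushPart.mxl"),
         /* arg 2: lines fed to the shell; "exit;\n" as insurance */
         @LIST("exit;\n"),
         /* arg 3: separator for returned strings (ignored; see SPOOL) */
         "|",
         /* arg 4: working directory for the shell */
         "/scripts"));
ENDFIX
```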
The MaxL script logs in, updates the replicated partition, and updates the aggregation in the ASO cube. It runs in about 2 seconds.
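The MaxL side might be sketched as follows. The application and database names, credentials, and paths are placeholders; the two working statements (refresh replicated partition and execute aggregate process) are standard MaxL:

```
/* PushPart.mxl - sketch with placeholder names and credentials */
login admin password on localhost;
spool on to '/scripts/PushPart.log';

/* push changed L0 data across the replicated partition */
refresh replicated partition Plan.Plan1 to Rpt.Rpt1 updated data;

/* refresh the aggregation in the ASO target */
execute aggregate process on database Rpt.Rpt1;

spool off;
logout;
exit;
```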
I attached the calc script to a form and set it to Run on Save. There is a transparent partition back to the BSO cube so, voila, we get real time aggregation by round tripping to ASO.
I will be doing some stress testing to check for contention issues. So far it looks really good.
Did you ever get this functionality working? I have been toying with something similar, except that I am not leveraging partitioning. I am scraping a Planning form, passing the variables to a report script or a DATAEXPORT, and then loading the data into an ASO model, all leveraging CDFs.
The issue I have is that this works fine for one person, but as soon as multiple individuals submit data, it breaks, since the export file has the same name for all users. I need to pass variables into the filename, such as a file number, so that I can make each file unique.
For the push to ASO, I needed the data to be exported to a dynamic file name (an incrementing serial number); this is how I solved that problem. The technique is pretty complicated, but in a nutshell you need Dragos Matachescu's CDF packages, a File Counter account in your cube, two subvars (&TRANSFER_NUM and &TRANSFER_COMMAND), a calc script (like below), a load rule for the ASO cube, and two MaxL scripts (Set_Transfer_File.mxl and Load_ASO_ALL.mxl).
Users call this functionality up to 2,200 times per day. Most "push" operations complete in 5 to 30 seconds with 750 KB to 5 MB of data per push. Large push operations (such as agALL.csc below) take 1-4 minutes with 1.5 to 4 GB of data. These times are for fully aggregated data in the ASO cube; a transparent partition makes it available immediately in the BSO / Planning app.
CALC SCRIPT or CALC MGR BUS RULE:
/* *** Get next value of File Counter, run Set_Transfer_File.mxl (sets two subvars), and
increment File Counter in the database ******** */
, "No Geography"
, "No Entity"
, "No Project"
, "No Year"
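A sketch of how that counter step might look in the calc script. Everything here is an assumption reconstructed from the description - member names, the exec arguments, and especially how the counter value reaches Set_Transfer_File.mxl (the original post does not show that mechanism):

```
/* SKETCH ONLY - placeholder names throughout.
   Run Set_Transfer_File.mxl (which stamps &TRANSFER_NUM and
   &TRANSFER_COMMAND), then increment File Counter for the next user.
   How the current counter value is handed to the .mxl script is
   NOT shown here and is an open assumption. */
FIX ("Actual", "Final", "BegBalance"
   , "No Geography"
   , "No Entity"
   , "No Project"
   , "No Year")
   "File Counter" = "File Counter"
      /* force the exec to run; multiply by 0 to discard its return */
      + @JString_StringToInt(
           @JRuntime_exec(
              @LIST("/bin/sh", "-c",
                    "essmsh /scripts/Set_Transfer_File.mxl"),
              @LIST("exit;\n"), "|", "/scripts")) * 0
      /* increment the counter so the next push gets a new file name */
      + 1;
ENDFIX
```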
Details of the two MaxL scripts are left to the reader.
Set_Transfer_File.mxl is used to set the two subvars:
&TRANSFER_NUM needs an integer in double quotes, like "437".
&TRANSFER_COMMAND needs a line of code like DATAEXPORT "File" "," "D:\Data\Transfer\ACT_WRKFORCE_437.txt"
Cameron Lackpour has some blog posts on setting subvars with quotation marks.
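For the curious, the subvar-setting part of such a script might look like this. The application name and values are placeholders; wrapping the value in single quotes so the embedded double quotes survive is exactly the fiddly bit those blog posts cover:

```
/* Set_Transfer_File.mxl - sketch with placeholder app name and values.
   Single quotes around each value let the double quotes pass through. */
login admin password on localhost;

alter application Plan set variable 'TRANSFER_NUM' '"437"';
alter application Plan set variable 'TRANSFER_COMMAND'
   'DATAEXPORT "File" "," "D:\Data\Transfer\ACT_WRKFORCE_437.txt"';

logout;
exit;
```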
Load_ASO_ALL.mxl does a partial clear of the ASO cube, loads the text file to a buffer, and commits the buffer (with slices). The order of events is funny; I was not able to get it to work correctly when I loaded the buffer before the partial clear. It would be a bit better for users if we could do it in that order.
The partial clear is the toughest part (fussy MDX crossjoins!). Although the cubes are quite large by BSO standards, for ASO they are tiny. As a result, I did not need to explicitly build the aggregate after loading.
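The load side might be sketched like this. Database names, the rules file, the file path, and the MDX region are all placeholders; the clear-before-load ordering matches what worked in practice:

```
/* Load_ASO_ALL.mxl - sketch with placeholder names */
login admin password on localhost;

/* partial clear: the region is an MDX set (placeholder members) */
alter database Rpt.Rpt1 clear data in region
   'Crossjoin({[Actual]},{[FY13]})' physical;

/* stage the export file into a buffer, then commit it as a slice */
alter database Rpt.Rpt1 initialize load_buffer with buffer_id 1;
import database Rpt.Rpt1 data
   from data_file 'D:\Data\Transfer\ACT_WRKFORCE_437.txt'
   using server rules_file 'LdASO'
   to load_buffer with buffer_id 1
   on error abort;
import database Rpt.Rpt1 data from load_buffer with buffer_id 1
   override values create slice;

logout;
exit;
```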
There are several more issues, like how to handle @XREF functionality, how back-calcs of ratios are done, and how to know whether your BSO and ASO outlines are aligned. Shameless plug: I will be presenting this topic at Kaleidoscope 13 in June 2013. Two sessions on Wednesday - one session on real-time aggs in Planning, the second on pairing BSO to ASO. Similar concepts but different slants. Slides will be available from ODTUG.