You don't mention what version of Smart View you are on, as the answers might change based on version, so see my answers below.
Just installed the latest Smart View (Essbase) software (server and user components), and while testing I have noticed a few things that I hope someone can explain to me.
1) It appears that once you have connected a spreadsheet using SV (Essbase) and saved it, the spreadsheet remembers the connection even when closed. (I noticed this when trying to change a spreadsheet already saved against one database over to a different one.) Am I correct?
Yes, you are correct; it remembers the connection. If you want to delete the sheet connection, on the Smart View ribbon select the Sheet Info icon. From there you can delete the connections for a sheet or for the entire workbook.
2) I also noticed (we are using Office 2010) that Excel seems to crash a lot more now that SV is attached. Is this a known issue?
I've not had that experience; my Smart View seems to be pretty stable. I do get issues with PowerPoint and automation privileges.
3) When sending data to an ASO cube, when you change the numbers in the spreadsheet, the data grid turns orange - why?
To tell you the data in those cells has changed. It is similar to what Planning does.
4) Previous ASO cubes queued users when they were querying or sending data. Does the latest ASO version also queue, or does it allow concurrent requests like the BSO architecture?
As far as I know, there is no difference in how the ASO submit works. Since it is sending slices of data, it seems to be slower than BSO submits.
5) When sending data to an ASO cube, it appears that now it only sends any net changes that you have made; previous ASO data submissions sent the whole sheet of data whether it had changed or not. Is this correct?
I don't think this is correct. The way ASO submits work is to create slices of data that contain the differences between the original data and new data. It would not make sense to create a slice with a 0 value. When submitting data to ASO cubes, you should merge the slices when you can.
Hope someone can help me here
Thanks as always
not designing aggregations. When you do a submit to an ASO cube, it creates what is called a slice, which is an add-on to the original cube data. You might want to read about them in the database admin guide. There is a command in both MaxL and EAS to merge slices. It takes the existing slices and incorporates them back into the primary database. The number of slices can build up fast, and I don't know if there is a performance hit from that.
I do know that before you can create aggregations, you need to merge the slices
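For reference, here is what the merge statement mentioned above looks like in MaxL. I'm reusing the ACTUALS.REPORTS application/database names from later in this thread purely as an example; substitute your own.

```
/* Merge all incremental data slices into the main database slice */
alter database ACTUALS.REPORTS merge all data;

/* Or merge only the incremental slices into a single incremental slice */
alter database ACTUALS.REPORTS merge incremental data;
```

There is also a `remove_zero_cells` keyword on this statement that drops #MISSING-equivalent zero cells during the merge, which ties in with the point above about slices containing the differences between old and new data.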
but I am not sure whether merging the slices will help. What we do now for our reporting group is run a MaxL command: "execute aggregate build on database ACTUALS.REPORTS using view_file myview". My planners load our ASO cubes using Excel (thus creating many slices). We have not merged anything to date, but they are still able to query their data and consolidate the information, so I am not sure what merging the slices would do?
I do suggest you read the DBAG on slices. I would also be interested to know how many slices you have. You can find that in EAS under database -> Properties -> Statistics: "Number of incremental slices".
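If you prefer MaxL to clicking through EAS, I believe the same slice statistics can be pulled with the statement below (again using the ACTUALS.REPORTS names from this thread as a stand-in for your own):

```
/* List the incremental data slices and their sizes for an ASO database */
query database ACTUALS.REPORTS list aggregate_storage slice_info;
```

This is handy in a batch script if you want to decide programmatically whether a merge is overdue.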
Typically, when you try to create an aggregation with slices present, it gives you an error telling you that you need to merge the slices first.
Yeah, this was my thought too. The slices *must* be getting merged somewhere, because otherwise that aggregation command would fail. A restructure implicitly merges, so perhaps they're getting away with it because of a batch restructure, or maybe a routine export / rebuild / import.
That said, once the aggregate views are created they stick around (even as slices are added) unless explicitly dropped, so failure of that aggregation command might not have immediately obvious consequences to end-users.
TimG and GlennS,
Could the merge of slices be happening automatically as per the DBAG?
"When the data for a new data slice is in the data load buffer, Essbase scans the list of incremental data slices and considers whether to automatically merge them with the new slice. To qualify for an automatic merge, a slice must be smaller than 100,000 cells or smaller than two times the size of the new slice. Slices larger than 5,000,000 cells are never automatically merged. For example, if a new slice contains 300,000 cells, incremental data slices that contain 90,000 cells and 500,000 cells are automatically merged with the new cell. An incremental data slice that contains 700,000 is not."
It seems reasonable that sends of data would often be smaller than 100,000 cells, although I have to say that when I read the DBAG I am not sure whether the above means that all sends are merged into one additional slice that is still not merged into the database, or whether the data is in fact merged into the main slice that is the database.
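For what it's worth, here is a quick sketch of how I read those thresholds. This is just my interpretation of the DBAG passage quoted above, not Essbase's actual logic; the cell counts come straight from the quote.

```python
def auto_merge_candidates(new_slice_cells, incremental_slice_cells):
    """Return which existing incremental slices would qualify for an
    automatic merge with a new slice, per the DBAG rules quoted above:
    - a slice qualifies if it is smaller than 100,000 cells, or
      smaller than two times the size of the new slice
    - slices larger than 5,000,000 cells never qualify
    """
    candidates = []
    for cells in incremental_slice_cells:
        if cells > 5_000_000:
            continue  # never automatically merged, regardless of new slice size
        if cells < 100_000 or cells < 2 * new_slice_cells:
            candidates.append(cells)
    return candidates

# The DBAG's own example: a new slice of 300,000 cells.
# 90,000 qualifies (< 100,000); 500,000 qualifies (< 600,000);
# 700,000 does not.
print(auto_merge_candidates(300_000, [90_000, 500_000, 700_000]))
```

On my reading, this only tells us which slices get folded into the *new incremental* slice; it says nothing about merging into the primary database slice, which matches what was observed in testing below.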
I did some testing with this a while back on 18.104.22.168 and a) it didn't follow the rules exactly as documented (I fell out of my chair in surprise) and b) I could only make it merge the incremental slices into fewer incremental slices, not into the 'primary' slice.
I also found a case where one single 'Send' operation created two incremental slices. Try and get your head around that...