Hi Kash Mohammadi,
Thanks for your input.
Yes, I have gone through the deployment guide as well. Based on its notes, I have come up with my understanding of the sample directory structure. But in the examples shared in the guides, the EPM Oracle Instance home directory appears to be on a dedicated disk, not shared across the servers.
So that leaves my questions unanswered. Please let me know if I have missed something, though. :)
Edited by: 908516 on Apr 17, 2012 11:05 AM
Automatic failover works together with a load balancer such as F5's BIG-IP. You create a pool for two (or more) OHS servers, and each OHS server then routes to two or more WebLogic servers. For Shared Services you need a CIFS share for the import_export folder and one for the Reporting and Analysis filesystem store. Also, if you use FDM, you will have to create a share for the two FDM servers. That is obviously the 10,000-foot view. You can't specify the same instance in a different active member. Also, Unix works if a) you don't plan to use HFM for reporting and b) you don't use EPMA. There is a lot of detail, and it is not a question that can be answered simply in a short forum response.
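To make the shared-folder part of the above concrete, here is a rough sketch of how the CIFS shares could be mounted on each Unix server. The file server name, share names, mount points, and credentials file are all placeholders, not values from any guide:

```
# /etc/fstab entries (sketch) -- "fileserver", share names, and mount
# points are examples only; adjust uid/gid and options for your site.
# import_export share used by Shared Services (LCM):
//fileserver/import_export  /u01/epm/import_export  cifs  credentials=/etc/epm.cred,_netdev  0 0
# Filesystem store for Reporting and Analysis:
//fileserver/ra_store       /u01/epm/ra_store       cifs  credentials=/etc/epm.cred,_netdev  0 0
```

The point is simply that every member of the cluster mounts the same share at the same path, so whichever node is active sees the same files.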
The standard deployment guide is based on Windows 2008 R2, and I think the questions being raised are about installation on Unix.
As far as I am aware, deploying to a shared drive on Unix is one of the options, not something that is forced. Deploying to a shared location gives advantages such as only having to perform the installation once; each server's configuration would still have its own unique instance name.
Thanks for your input.
- I am a bit familiar with F5's BIG-IP load-balancing methods (round-robin, least connections, and dynamic ratio) and its support for intelligent session persistence.
- We can also manage load balancing via the WebLogic Admin Console and, as you have noted, via OHS as well, which I am not familiar with...
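To illustrate what those methods actually decide, here is a toy model of pool-member selection with round-robin, least-connections, and cookie-style session persistence. This is purely illustrative Python, not any vendor's API; class and parameter names are made up:

```python
import itertools

class Balancer:
    """Toy pool-member selection: round-robin or least-connections,
    with session persistence (a session sticks to the member that
    first served it). Names are illustrative only."""

    def __init__(self, members):
        self.members = list(members)
        self.active = {m: 0 for m in self.members}  # open connections per member
        self.sticky = {}                            # session id -> pinned member
        self._rr = itertools.cycle(self.members)

    def pick(self, session=None, policy="round_robin"):
        # Session persistence wins over the balancing policy.
        if session is not None and session in self.sticky:
            return self.sticky[session]
        if policy == "least_connections":
            member = min(self.members, key=lambda m: self.active[m])
        else:  # round_robin
            member = next(self._rr)
        if session is not None:
            self.sticky[session] = member
        self.active[member] += 1
        return member

lb = Balancer(["web1", "web2"])
print(lb.pick(session="a"))                    # round-robin pick for a new session
print(lb.pick(session="a"))                    # same member again (persistence)
print(lb.pick(policy="least_connections"))     # least-loaded member
```

The dynamic-ratio method would extend `pick` with per-member weights fed by health metrics; the skeleton above only shows the two simpler policies plus stickiness.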
This is a newbie question: wouldn't having three different agents managing load balancing complicate things? As OHS sits in front of the WebLogic servers, I guess they work together to provide load balancing, and configuring WebLogic for clustering should affect the OHS configuration as well. Is this how it works at a high level, or is it more complicated?
The EPM System Configurator creates the required cluster and adds servers to it when we deploy the Web applications in the final step of configuration, so we need not configure WebLogic clustering manually. But when and where does one configure load balancing?
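As a rough sketch of where the web-tier part of that load balancing lives: OHS routes to the WebLogic cluster through its WebLogic proxy plug-in, and the `WebLogicCluster` directive is the place where the list of clustered managed servers is configured (the plug-in round-robins across them and honors the session cookie for stickiness). Hostnames, ports, and the context path below are examples, not values from any guide:

```
# mod_wl_ohs.conf fragment (sketch) -- hostnames/ports are placeholders.
# OHS load-balances across the members listed in WebLogicCluster and
# fails over to surviving members if one goes down.
<LocationMatch ^/workspace>
    SetHandler weblogic-handler
    WebLogicCluster epmweb1.example.com:28080,epmweb2.example.com:28080
</LocationMatch>
```

The hardware balancer (e.g. BIG-IP) then balances in front of the OHS pool, one layer above this.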
Thanks again. Essbase infrastructure is indeed a vast topic, and an interesting one. :)