Had this posted on Network 54 and it was recommended that I post here as well.
During one of the lunch and learn sessions at KScope I texted in a question about running multiple instances in production. There seemed to be some confusion about the question, so I thought expanding it with additional details here might provoke some further conversation on the topic.
We are currently running a single Essbase instance with active-passive clustering in production on 22.214.171.124 (considering 126.96.36.199 for next year). For example purposes, let's say we have two business areas sharing the instance, each with 50 applications, for a total of 100 applications. Things run very smoothly most of the time, but from time to time during peak usage the single-threaded nature of the security file can slow things down. My thought was to run two instances of Essbase, one for each business area: one active on server A and one active on server B. Each would fail over to the other server.
Server A: Instance 1 active - 50 applications; Instance 2 passive
Server B: Instance 1 passive; Instance 2 active - 50 applications
I have seen that you can set up multiple instances on a server, but I have also seen that it is not recommended for production. Are they considering this scenario when they make that recommendation, or are there other reasons? I am fine if both instances fail over to the same server, as we run all applications on one server now and have plenty of memory. We would also use separate file systems and a separate port for the second instance (our end users do not know the port they log in on today, so that will not be an issue). Are there any concerns with this approach? Has anyone else tried it? We have successfully done this in our development region, but that does not have the usage that production has. It also seems a waste to have all the processors and memory on server B sitting idle; we are paying for it, so why not use it?
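For the separate-port piece, each instance needs its own agent port and a non-overlapping application-server port range in its essbase.cfg. A minimal sketch only; the port numbers below are illustrative, not anything Oracle prescribes:

```
; essbase.cfg for the SECOND instance (example values only)
AGENTPORT        1425     ; agent listener; must differ from instance 1's (default 1423)
SERVERPORTBEGIN  33768    ; start of the ESSSVR application process port range
SERVERPORTEND    34768    ; end of the range; must not overlap instance 1's range
```

With each instance pointed at its own ARBORPATH and file system, the two port ranges are what keep the agents and application processes from colliding when both instances land on the same box after a failover.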
Some additional information that was requested.
This is in addition to DR; having failover is required by internal audit/IT standards.
We have a NAS device and understand that it is still a single point of failure and likely a bottleneck, which is why we archive any modified databases each night along with packaging up rules files and calc scripts.
The servers are UNIX with 48 cores and 256+ GB of RAM.
About half or more of the databases require write back.
We have implemented similar solutions to take advantage of a server that would otherwise be redundant the majority of the time, and it does work well. I have not personally done this type of configuration on UNIX using OPMN, but I have on Windows using failover clusters. I find OPMN a bit poor in terms of management and lacking in functionality, so it may be worth considering Oracle Clusterware, depending on the flavour of UNIX.
You could also look at virtualisation options as an alternative.
We have an instance of OPMN set up for each Essbase instance. In the next week or two we will be running a performance test in our test region. We will be performing three test runs: one test with a group of applications running on one box/one instance, a second test with one box and the applications split between two instances, and a third test with two boxes and the applications split between the two instances (one instance on each box). I will try to remember to post any interesting results here and on Network 54.
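For anyone checking the per-instance setup during a test like this, OPMN exposes each managed Essbase instance through opmnctl. A rough sketch; the component names (Essbase1, Essbase2) are assumptions and depend on how each instance was registered in your environment:

```
# show the managed processes and their state for the local OPMN instance
opmnctl status

# start or stop one Essbase instance by its registered component name
opmnctl startproc ias-component=Essbase1
opmnctl stopproc ias-component=Essbase2
```

Since each Essbase instance has its own OPMN instance here, the commands would be run against the ORACLE_INSTANCE home for the instance being tested.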