3 Replies. Latest reply: Jun 13, 2012 12:08 AM by 807928

Shrinking Oracle DB Memory usage when failed over to secondary node?

lrp1 Newbie
We have a Sun Cluster pair of servers setup with Solaris 10.
We've set up two resource groups, with a database running on each server. They are configured to fail over, so in normal operation:
Server A -- runs Database 1
Server B -- runs Database 2

However, if Server A goes down, Database 1 fails over to Server B:
Server A -- down
Server B -- runs Database 1 & 2

Each database is configured to use only half of the server's memory, because it has to be able to run alongside the other database in a failover scenario.

My question is: is there any way to automatically restart a database with lower memory settings when it fails over? That way we wouldn't be wasting half of each server's memory except during a failover.

Obviously there are trade-offs to this approach, but I wanted to research whether it's even possible with Sun Cluster. The configuration of the database resource doesn't appear to have that kind of intelligence, so I assume I would have to script it in somehow. Does anybody else have experience with load balancing this way?
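To be concrete, by "restart with lower memory settings" I just mean something like the following (a rough sketch only; the SID, path and sizes are examples, not our real configuration):

    # Sketch only: ORACLE_SID, ORACLE_HOME and the 4G figure are examples.
    # Write the smaller SGA settings to the spfile, then bounce the instance
    # so they take effect (sga_max_size cannot be changed while it is up).
    ORACLE_SID=DB1;  export ORACLE_SID
    ORACLE_HOME=/u01/app/oracle/product/10.2.0/db_1;  export ORACLE_HOME

    $ORACLE_HOME/bin/sqlplus -s "/ as sysdba" <<EOF
    alter system set sga_max_size=4G scope=spfile;
    alter system set sga_target=4G scope=spfile;
    shutdown immediate
    startup
    EOF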
  • 1. Re: Shrinking Oracle DB Memory usage when failed over to secondary node?
    807928 Journeyer
    There is no easy way to achieve this, and please don't hack the existing HA-Oracle scripts, as that would make them unsupported.

    Are you sure that the complexity and risk this adds are worth the performance gain you get from the extra SGA size? Remember that you'd need to ensure the SGA sizes were increased back to full size when the second DB was moved back to its original node. Achieving that reliably is also quite hard once you start to consider all the failure scenarios.

    If you really are sure you want to do this, then you'd probably be best off writing a Generic Data Service (GDS) based resource whose start and stop scripts change these parameters and restart the second database; an outline of the plumbing is sketched below. The DB resources would then need a dependency set on these resources. Having said that, I would not go down this route, as I judge the risks and complexity too high and you'd end up lowering your availability.
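    Purely as an outline (every name and path below is a placeholder, and the exact CLI form should be checked against your cluster release; treat it as a sketch, not a recipe):

        # Sketch only: resource/group names and script paths are placeholders.
        # Register the Generic Data Service resource type once per cluster.
        clresourcetype register SUNW.gds

        # A GDS resource whose start/stop scripts adjust the SGA parameters.
        # Note a plain GDS expects its start command to leave a process behind
        # for PMF to monitor, so a one-shot script needs extra care in practice.
        clresource create -g ora-rg-1 -t SUNW.gds \
            -p Start_command="/opt/local/sga/resize_sga start" \
            -p Stop_command="/opt/local/sga/resize_sga stop" \
            -p Network_aware=false \
            db1-sga-rs

        # Make the HA-Oracle resource depend on it, so the SGA is adjusted
        # before the instance is brought up on this node.
        clresource set -p Resource_dependencies=db1-sga-rs db1-oracle-rs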

    Regards,

    Tim
    ---
  • 2. Re: Shrinking Oracle DB Memory usage when failed over to secondary node?
    lrp1 Newbie
    TimRead wrote:
    If you really are sure you want to do this, then you'd probably be best off writing a Generic Data Service (GDS) based resource [...] I would not go down this route, as I judge the risks and complexity too high and you'd end up lowering your availability.
    We brainstormed much the same ideas as you did and came up with a script which would (sketched below):
    - check whether it was running on its own primary node
    - check whether the other DB's resource group was online on its primary node
    - raise or lower its SGA footprint accordingly
    - start the instance
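    Roughly what we had in mind, assuming two pre-built parameter files rather than editing the spfile (the group names, paths, sizes and status parsing below are placeholders, not tested code):

        #!/bin/sh
        # Rough sketch of the start logic described above; everything named
        # here (groups, SID, paths, sizes) is an example, not a real config.

        MY_RG=ora-rg-1                  # resource group this database lives in
        OTHER_RG=ora-rg-2               # the other database's resource group
        THIS_NODE=`uname -n`

        ORACLE_SID=DB1;  export ORACLE_SID
        ORACLE_HOME=/u01/app/oracle/product/10.2.0/db_1;  export ORACLE_HOME

        # Primary node = first entry in our resource group's Nodelist
        # (the output parsing here is approximate).
        PRIMARY=`clresourcegroup show -p Nodelist $MY_RG | awk '/Nodelist:/ {print $2}'`

        # Is the other database's resource group online on this node too?
        SHARING=no
        if clresourcegroup status $OTHER_RG | grep "$THIS_NODE" | \
           grep -w Online >/dev/null 2>&1; then
            SHARING=yes
        fi

        # Full-size SGA only when we are alone on our own primary node;
        # otherwise start with the half-size parameter file.
        if [ "$THIS_NODE" = "$PRIMARY" ] && [ "$SHARING" = "no" ]; then
            PFILE=$ORACLE_HOME/dbs/init${ORACLE_SID}_full.ora
        else
            PFILE=$ORACLE_HOME/dbs/init${ORACLE_SID}_half.ora
        fi

        # Start the instance with the chosen memory settings.
        $ORACLE_HOME/bin/sqlplus -s "/ as sysdba" <<EOF
        startup pfile=$PFILE
        EOF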

    We were really hoping there was some existing facility in the resource manager to do this without developing that kind of custom start/stop script, and from your comments it appears there isn't. Given your credentials on the subject, I intend to take your advice seriously. Thank you for the feedback; we'll head in a different direction.
  • 3. Re: Shrinking Oracle DB Memory usage when failed over to secondary node?
    807928 Journeyer
    The other option, of course, is to consider using Oracle RAC to start with. You would then have instances running on both nodes all the time and could set the SGA sizes to fully utilize all the memory. Added to that, you'd remove the downtime associated with database failover.
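    For example (the instance names and sizes here are purely illustrative), with a shared spfile each RAC database's instances can be sized explicitly, so the memory on both nodes is in use all the time:

        # Illustration only: DB11/DB12 stand for the two instances of database DB1,
        # one on each node, each sized in the shared spfile.
        sqlplus -s "/ as sysdba" <<EOF
        alter system set sga_target=4G scope=spfile sid='DB11';
        alter system set sga_target=4G scope=spfile sid='DB12';
        EOF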

    Regards,

    Tim
    ---
