CRS resource restarted after recovery of master node
Hi,
We are running CRS to provide HA for MySQL databases (with custom action scripts to start/stop/check/clean the resources). I've been testing the behaviour of a 4-node cluster, and it was while performing the following test that one of the resources got restarted automatically:
The setup is the following: 3 databases running on the first node, 4 on the second, 1 on the third, and a fourth node on standby. The test consisted of fully loading their 10 GB InnoDB buffer pools, so that almost all the memory on one node (48 GB) was in use, and then crashing 3 of the nodes to leave only the node with 4 databases alive. The behaviour was as expected: only the second node was alive, running 4 databases, so the rest could not start there because they couldn't allocate 10 GB of memory for the buffer pool.
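For context on the custom action scripts mentioned above, here is a minimal sketch of the contract CRS expects: it invokes the script with the action name (start/stop/check/clean) as the first argument, and the script signals success or failure via its exit code. All paths, the socket, the defaults file, and the use of mysqld_safe/mysqladmin here are illustrative assumptions, not details from the original post.

```shell
#!/bin/sh
# Hypothetical CRS action script for a single MySQL instance.
# Paths below are placeholders; a real deployment would use one
# script (or one defaults file) per managed database.
MYSQLD_SAFE=/usr/bin/mysqld_safe
MYSQLADMIN=/usr/bin/mysqladmin
DEFAULTS_FILE=/etc/mysql/db1.cnf
SOCKET=/var/run/mysqld/db1.sock

action() {
  case "$1" in
    start)
      "$MYSQLD_SAFE" --defaults-file="$DEFAULTS_FILE" >/dev/null 2>&1 &
      # Only report success to CRS once the server actually answers.
      for i in 1 2 3 4 5 6 7 8 9 10; do
        "$MYSQLADMIN" --socket="$SOCKET" ping >/dev/null 2>&1 && return 0
        sleep 3
      done
      return 1
      ;;
    stop)
      "$MYSQLADMIN" --socket="$SOCKET" shutdown >/dev/null 2>&1 && return 0
      return 1
      ;;
    check)
      # CRS polls this periodically; non-zero triggers failover/restart.
      "$MYSQLADMIN" --socket="$SOCKET" ping >/dev/null 2>&1 && return 0
      return 1
      ;;
    clean)
      # Forcibly remove leftovers so the resource can start elsewhere.
      pkill -9 -f "mysqld.*$DEFAULTS_FILE" >/dev/null 2>&1
      return 0
      ;;
    *)
      return 1
      ;;
  esac
}

# CRS passes the action as $1; unknown/missing actions fail.
[ $# -gt 0 ] && { action "$1"; exit $?; }
```

The key point for the failover behaviour described above is the `start` branch: if the instance cannot come up on the surviving node (for example, because it cannot allocate its buffer pool), the script eventually returns non-zero and CRS records the start as failed.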