The way voting disk multiplexing is implemented means you need at least three voting disks. To avoid a single point of failure, your multiplexed voting disks should be located on physically independent storage devices with a predictable load well below saturation.
You can have up to 32 voting disks, but use the following formula to determine how many you actually need: v = 2f + 1, where v is the number of voting disks and f is the number of voting disk failures you want to survive.
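To make the arithmetic concrete, here is a small illustrative sketch (plain Python, nothing Oracle-specific) that evaluates the formula for a few failure counts:

```python
# Illustrative only: voting disks needed to survive f voting disk failures.
# The cluster needs a strict majority of voting disks online, hence v = 2f + 1.
def voting_disks_needed(f: int) -> int:
    return 2 * f + 1

for f in range(3):
    print(f"to survive {f} voting disk failure(s): {voting_disks_needed(f)} voting disks")
# -> 1, 3, 5
```

So surviving even a single voting disk failure already requires three disks, which is why the minimum for multiplexing is three rather than two.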
How come, in the above case, each instance can access only one or two voting disks? Isn't the whole set of shared disks accessible to both nodes?
If you had an even number of voting disks (say 2), it wouldn't help much if, during or after a split, each instance could still access only one voting disk: neither node would see a majority.
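A minimal sketch of the majority rule may help (assuming a node stays in the cluster only while it can access a strict majority of the voting disks; the function name is just for illustration):

```python
# Sketch of the strict-majority rule for voting disks (illustrative names).
def has_quorum(disks_visible: int, total_disks: int) -> bool:
    return disks_visible > total_disks // 2

# Two voting disks split 1-1 across a partition: neither node survives.
print(has_quorum(1, 2))   # False on both sides

# Three voting disks split 2-1: the side seeing two disks keeps quorum.
print(has_quorum(2, 3))   # True
print(has_quorum(1, 3))   # False
```

With an even count, a split can leave both sides short of a majority, whereas with an odd count at most one side can hold it.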