ASM Disks with MOUNT_STATUS=CLOSED After Adding Disks and Forgetting to Run /etc/init.d/oracleasm scandisks
I have an issue where 6 ASM disks have been added to (at least tied to) our DATA diskgroup, but they show MOUNT_STATUS=CLOSED, HEADER_STATUS=MEMBER, and GROUP_NUMBER=0. I currently have an SR open to hopefully resolve the issue, but does anyone have a workaround for this situation?
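For reference, the stuck disks can be listed from the ASM instance; a query along these lines (a sketch, assuming SYSASM access to the ASM instance) shows the exact combination described above:

```sql
-- Disks visible to ASM but not open by any diskgroup:
-- GROUP_NUMBER = 0 with HEADER_STATUS = 'MEMBER' matches the symptom.
SELECT group_number, name, path, mount_status, header_status
FROM   v$asm_disk
WHERE  mount_status  = 'CLOSED'
  AND  header_status = 'MEMBER';
```

If the 6 disks come back from this query on every node after a rescan, the problem is at the diskgroup level rather than disk discovery.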
I was able to perform the /etc/init.d/oracleasm scandisks command on the node I had forgotten to run it on before, and could then successfully add disks to a diskgroup, but I still need to clean up and reuse the 6 ASM disks that are currently unusable.
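For what it's worth, the per-node sequence I'd sketch before touching the unusable disks (assuming ASMLib; DATA07 is a made-up disk label, not from the original post) is to rescan and verify on every node first:

```shell
# On each cluster node, rescan so the ASMLib layer picks up all stamped disks.
/etc/init.d/oracleasm scandisks

# Verify a specific disk is stamped and visible (label is an example only).
/etc/init.d/oracleasm querydisk DATA07

# List everything ASMLib currently sees on this node.
/etc/init.d/oracleasm listdisks
```

Only once all nodes report the disks consistently would I consider destructive cleanup (e.g. deletedisk/createdisk to re-stamp), and only after confirming no diskgroup still references them.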
Also, if I plan on adding disks to a different cluster, and that cluster only uses 2 of its 4 nodes (the OS was upgraded on the 2 up nodes), will I be able to successfully add the disks if I run scandisks only on the 2 up nodes, or must I run it on all nodes, even though all Oracle processes are shut down on the 2 down nodes?