I have two servers (X3-2 running RHEL 6.3) connected over fiber to two 2540 arrays. The multipath configuration is the default one.
Each 2540 maps one identical volume to the server; from these two volumes I build an LVM RAID 1.
Each array has two controllers, and each controller has two fiber connections.
Multipath works well if I lose one, two or three fibers, or if I lose one controller.
But if I lose one 2540 completely (power failure, or both of its controllers), I lose my datastore: I also lose the connection through the second 2540 and no longer have access to my RAID 1.
The goal is to keep access to the storage when one 2540 fails entirely.
I don't understand whether my problem comes from the multipath configuration or from the LVM configuration.
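For reference, this is how I watch the path states while pulling fibers (standard device-mapper-multipath commands; device names and WWIDs will of course differ on another setup):

```shell
# Show every multipath map with the per-path status (active/failed)
multipath -ll

# Ask the running daemon directly (RHEL 6 interactive-key syntax)
multipathd -k"show paths"
```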
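One thing worth checking on the multipath side (this is an assumption about the cause, not something I have confirmed): with the default queue-if-no-path behaviour, I/O to a map whose paths are all dead can be queued indefinitely instead of returning an error, so LVM never sees the failure on the dead leg and the whole mirror stalls. A minimal /etc/multipath.conf fragment that makes I/O fail once all paths to one array are lost might look like:

```
defaults {
    # Retry for 5 polling intervals after the last path fails,
    # then return I/O errors so LVM can drop the dead mirror leg
    no_path_retry 5

    # Do not enable queue_if_no_path
    features "0"
}
```

After editing the file, the running maps need to be reloaded (e.g. with multipathd reconfigure) for the change to take effect.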
OK, so I think I lose access to the RAID because it was created with the defaults:
# lvcreate -L 55G -m1 -n lv0 vg0
This implicitly creates an old-style mirror. Since I have only two disks, the mirror log ends up on the mirrored disks themselves, so if I lose a disk my logical volume becomes unusable.
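If the on-disk log is the issue, the old-style mirror can also keep its log in memory instead (at the cost of a full resync after every activation). This is a possible workaround rather than something I have verified on the 2540 setup:

```shell
# Keep the mirror log in RAM ("core"); no on-disk log device is needed,
# but the mirror is fully resynchronized after each activation/reboot
lvcreate -L 55G -m1 --mirrorlog core -n lv0 vg0
```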
mirror_log_fault_policy = "allocate"
mirror_image_fault_policy = "remove"
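For context, these two policies live in the activation section of /etc/lvm/lvm.conf; the comments below are my reading of the lvm.conf(5) descriptions:

```
activation {
    # On log-device failure: try to allocate a replacement log
    # on remaining free space instead of dropping the mirror
    mirror_log_fault_policy = "allocate"

    # On mirror-image failure: remove the failed image, leaving
    # a linear volume on the surviving device
    mirror_image_fault_policy = "remove"
}
```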
So I tried to rebuild my LV, this time specifying the "raid1" type:
# lvcreate --type raid1 -L 55G -m1 -n lv0 vg0
But with "raid1" I get an error:
device-mapper: reload ioctl on failed: Invalid argument
Failed to activate new LV.
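That "Invalid argument" on the reload ioctl usually means the kernel's device-mapper does not offer the target the tools asked for; the raid1 segment type needs the dm-raid target, which may not be loaded on this kernel. A quick check (assuming standard dmsetup and modprobe):

```shell
# List the targets the running device-mapper provides;
# "raid" must appear for lvcreate --type raid1 to work
dmsetup targets

# If "raid" is missing, try loading the module and check again
modprobe dm-raid && dmsetup targets | grep raid
```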