What is your output of the following command as root:
lvm lvs --segments -o +devices
IMO it is not a good idea to place a single file system across 2 physical disks without using RAID 0+1: you double the exposure to a disk failure without mitigating it. How well will the file system survive when 1 of the 2 disks fails? Will the bits and pieces of the file system left on the single working disk still be useful in any way?
Rather, add the new disk as a new file system with its own mount point, and use symbolic links on the root file system to relocate (non-critical) content to the new file system (such as mail, logs, etc.).
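For what it's worth, a minimal sketch of that approach. The device name /dev/sdb1 and mount point /srv/data are assumptions for illustration; the actual relocation pattern is demonstrated on throw-away directories so it can run without root or a spare disk:

```shell
# Sketch, assuming the new disk shows up as /dev/sdb1 (adjust to your system):
#
#   mkfs.ext4 /dev/sdb1            # give the new disk its own file system
#   mkdir -p /srv/data
#   mount /dev/sdb1 /srv/data      # and add a matching /etc/fstab entry
#
# Then relocate non-critical trees with symbolic links. Shown here on
# temporary directories standing in for the real paths:
root=$(mktemp -d)   # stands in for /
data=$(mktemp -d)   # stands in for /srv/data on the new disk
mkdir -p "$root/var/log"
echo "old log entry" > "$root/var/log/messages"

mv "$root/var/log" "$data/log"        # move the tree to the new file system
ln -s "$data/log" "$root/var/log"     # leave a symlink behind on the root fs

cat "$root/var/log/messages"          # still readable through the symlink
```

If the second drive later dies, the root file system stays bootable; only the symlinked trees go with it.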
I would actually question whether or not more disks increase the risk of a disk failure. One disk can break as likely as one of two or more disks. The use of symbolic links instead of striping the file system protects from the complete loss of the whole enchilada if a volume member fails, but it does not reduce the risk of losing data.
The device /dev/hda is an IDE device, which is not used in server or production environments. This is probably a desktop PC or an old VMware virtual environment. As such, I think there was actually no need to create all these partitions to separate file systems on a single drive. Without these partitions, the user would not be in the situation of running out of root disk space.
I would actually question whether or not more disks increase the risk of a disk failure. One disk can break as likely as one of two or more disks.
Simple stats. Buying 2 lottery tickets instead of one gives you 2 chances to win the lottery prize, not 1, even though the odds of winning per ticket remain unchanged.
2 disks buy you 2 tickets in The-Drive-Failure lottery.
Back in the 90's, BT (British Telecom) had an 80+ node OPS cluster built with Pyramid MPP hardware. They kept a dedicated store of SCSI disks for replacing failed ones, as disk failures happened fairly often due to the sheer number of disks. (A Pyramid MPP chassis looked like a Xmas tree with all the SCSI drive LEDs, and BT had several.)
In my experience, one should rather expect a drive failure sooner than later, and have some kind of contingency plan in place to recover from the failure.
The use of symbolic links instead of striping the file system protects from the complete loss of the whole enchilada if a volume member fails, but it does not reduce the risk of losing data.
I would rather buy a single ticket in the drive failure lottery for a root drive than 2 tickets in this case. And using symbolic links to "offload" non-critical files to the 2nd drive means that its lottery ticket's prize is not a non-bootable server due to a toasted root drive.
Well, sorry, I don't get it. Why should more drives increase the risk of a drive failure? Each drive stands on its own and has its own risk of failure that does not depend on any other drive. As far as damage is concerned, yes, the more drives you combine, the more data you lose by losing just a single drive, but the chance of a drive failure should be the same regardless of the number of drives. RAID 1 only works because it is rather unlikely that 2 or more drives will fail at the very same time. A lottery, however, is different: the more numbers you throw into the same game, or the more games you play, the higher your chances, and if you don't play you can't lose.
Edit: I meant to say the chance of a particular drive failing should be the same, regardless of the number of drives.
Well, sorry I don't get it.
Spend some time with probability theory and you would. (The odds of heads/tails per coin toss and the number of coin tosses are 2 different elements that each contribute.)
I was looking at it from the perspective of a specific drive failing, which does not change, but the chance of any drive failing increases the more drives you have. It is, however, not double the risk with two drives.
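To make that concrete, a short sketch under the assumption that the drives fail independently, each with the same failure probability p over a given period:

```latex
% Probability that at least one of n independent drives fails,
% each with per-period failure probability p:
P(\text{at least one of } n \text{ drives fails}) = 1 - (1 - p)^n

% For n = 2 drives:
1 - (1 - p)^2 = 2p - p^2

% Worked example with an assumed p = 0.03 (3\% per year):
2(0.03) - (0.03)^2 = 0.06 - 0.0009 = 0.0591
```

So two drives give roughly a 5.91% chance of at least one failure versus 3% for one drive: slightly less than double, but very nearly double when p is small, which matches both posts above.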