Regarding OVM 2.x P2V.
I have yet to use OVM 3.0.3 P2V. FYI, "lomount" is not included in the "xen-tools" package; using "kpartx" is the preferred method for loopback mounts on OVM 3.0.3.
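A minimal sketch of the "kpartx" approach on OVM 3.0.3 (the image name "System.img" and mount point "root" are placeholders; requires root):

```shell
# Map the partitions inside the disk image to /dev/mapper entries.
# kpartx attaches the image to a free loop device and maps each partition.
kpartx -av System.img            # e.g. creates /dev/mapper/loop0p1, loop0p2, ...
mkdir -p root
mount /dev/mapper/loop0p1 root   # mount the first partition, as with lomount -partition 1
# ... work on the guest filesystem under ./root ...
umount root
kpartx -dv System.img            # tear down the partition mappings and loop device
```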
In general, OVM P2V (booting the source host from the OVM CD-ROM with "linux p2v") works well for simple Linux migrations.
Windows P2V migration is a bit more effort, and there are better tools available, e.g. VMware vCenter Converter. If you really want OVM, performing a V2V migration of a VMware P2V result is also a possibility.
Limitations of OVM P2V migration:
1) No resizing of filesystems. The entire block device is migrated to a disk image file.
2) P2V is offline; the source host is down for the duration of the migration.
3) HVM only. Personally I prefer HVM for all my VMs; some may beg to differ.
In a nutshell, OVM P2V boots up an HTTP server on the source host which serves up a "vm.cfg" file and
the block device. You download both to your OVM host via "wget".
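A sketch of that transfer from the OVM host side ($P2V_HOST and the URL paths are placeholders; the actual URLs are displayed on the source host's console once "linux p2v" is running):

```shell
# Fetch the generated VM config, then the (large) disk image.
wget http://$P2V_HOST/vm.cfg
# Run the image transfer under nohup so it survives the SSH session ending.
nohup wget http://$P2V_HOST/System-sda.img &
tail -f nohup.out   # watch transfer progress
```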
In general I have found Knoppix more portable for migrating legacy HP, IBM, and Dell Linux hosts. The ability to modify the network interface to force duplex is quite useful when you have issues with auto-negotiation. Knoppix is also quite forgiving with oddball legacy servers vs the JeOS that is OVM.
I also prefer to create the target disk image files on the OVM host, typically a "System.img" for "root" and "swap" and a "u01.img" for "/u01". Mounting the disk images from Dom0 is also employed, e.g. mkdir root ; lomount -diskimage System.img -partition 1 root
The source filesystem is mounted from Knoppix and migrated using "tar" over SSH, e.g.
mount /dev/sda1 /mnt/root ; cd /mnt/root ; tar czf - . | ssh root@$OVM_HOST "cd /OVS/$VM_NAME/root && tar xzpf -"
Note the "." so tar archives the current directory, and the double quotes so $VM_NAME expands on the Knoppix side.
Prior to starting the migrated VM you will have to update "grub.conf" and "device.map". I also find it useful to clear out the "blkid.tab" cache. Rebuilding "initrd" is also recommended.
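A sketch of those fix-ups, assuming the migrated root partition is mounted at ./root from Dom0; the source device name, guest device name, and kernel version are all illustrative placeholders (guest device naming depends on how the VM presents its disks):

```shell
# Point grub at the virtual disk instead of the old physical controller
# (here a HP cciss device; adjust to your source and guest device names).
sed -i 's|/dev/cciss/c0d0|/dev/hda|g' root/boot/grub/grub.conf
echo '(hd0) /dev/hda' > root/boot/grub/device.map

# Clear the stale blkid cache so device UUIDs are re-probed on first boot
# (path varies by distro release; /etc/blkid/blkid.tab on RHEL5-era systems).
rm -f root/etc/blkid/blkid.tab root/etc/blkid/blkid.tab.old

# Rebuild the initrd from inside the guest tree so the right drivers are included
# (kernel version is a placeholder -- use the guest's actual version).
chroot root mkinitrd -f /boot/initrd-2.6.18-8.el5.img 2.6.18-8.el5
```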
You can use a combination of "linux p2v" and Knoppix migration, e.g. "root" via "linux p2v" and "ssh | tar" for the data migration. It would also behoove you to verify the interface is plumbed properly, e.g. full duplex and Gbit. You could boot the migrated DomU from its "root" and then transfer the data, vs using the Dom0 "lomount" option; Dom0 could get pegged doing this migration.
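If auto-negotiation is suspect, the interface can be checked and forced from Knoppix with "ethtool", e.g. ("eth0" is a placeholder, and forcing speed/duplex requires the switch port to be configured to match):

```shell
# Show current link settings, including negotiated speed and duplex (requires root)
ethtool eth0
# Force gigabit full duplex and disable auto-negotiation
ethtool -s eth0 speed 1000 duplex full autoneg off
```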
The largest "linux p2v" block device migration I have performed was ~500G from a legacy HP server with a 100Mbit interface. Example
of a "linux p2v" 270G disk image transfer from a legacy HP server:
root@$OVM_HOST# tail -5 nohup.out
284519100K .......... .......... .......... .......... .......... 99% 2.54M 0s
284519150K .......... .......... .......... .. 100% 3.84M=16d24h
09:37:18 (206 KB/s) - `System-c0d1.img' saved 291347642880/291347642880
I believe the interface autonegotiated 10Mbit and half-duplex, "16d" == 16 days.
If I had used Knoppix I could have forced 100Mbit and full duplex.
Another advantage to the "ssh | tar" transfer is that only the contents of the filesystem on the block device are transferred, not the
entire block device: e.g. if /u01 uses 40G of a 50G block device, only the 40G is transferred rather than the entire 50G. If the legacy
system uses LVM for its block devices, then the "ssh | tar" method will migrate to a simple filesystem. There is no need for LVM on a VM.
Using LVM to manage block devices for a VM is another story.
The problem I seem to be experiencing is that when I boot to the VM Server 2.2.2 media, on the initial startup page where the instructions tell you to enter P2V, I get the message "could not find the kernel image". I just tried the VM Server 2.2.2 media on a 64-bit machine and had the same message. The target machines' BIOSes are set to enable virtualization.
I tried researching the error message on Google without much success. I need to convert a Windows 7 machine to run in Oracle VirtualBox and have not had much success with this application.
I have tried VMware and it works fine, although I could not export it to an OVF file and import it into Oracle. Anyway, my company won't foot the bill for VMware, as they have a 10-license minimum.
And thanks for the reply!