Looks interesting. I can't wait to let old OVM/Xen die its peaceful death.
I just wish Oracle had adopted a more recent oVirt version. There are lots of bugfixes between Oracle's chosen version (4.3.6) and the current oVirt release (4.3.9). I hope they won't go the same way as they did with OVM (i.e. a very old Xen version).
OLVM development is active, and we're targeting both an updated release of oVirt 4.3 and oVirt 4.4.
Thanks
Simon
Thanks for creating the documentation. Will this be the same approach for OVM guests using physical EMC LUNs? Wondering if there are other approaches for migration.
There's a dedicated document for the migration from OVM to OLVM; please see:
Thanks for this, Simon. Would you expect something to replace the OVM templates that were available for deploying things like Oracle Database, but for OLVM? Some Ansible playbooks, perhaps? I'm not happy to be migrating, but I think in the long run OLVM/oVirt-KVM will be the better choice, as I see it. This doc is certainly appreciated.
We're working on Templates for OL-KVM; no ETA at the moment.
Thanks for your reply, Simon. Do you mean the "Migration and Physical Disks management" point in the document?
Since we are going to use the same physical disks in both the OVM and KVM environments, will the disk names be the same in the new environment? Do you expect any data copy during the migration?
Regards,
Amar
thanks for your reply Simon. do you mean "Migration and Physical Disks management" point in the document. Since we are going use the same physical disk in OVM and KVM environment. will the disk name be same in the new environment? do you expect any data copy during the migration?Regards,Amar
If you follow the steps described in the document, your physical disks will be converted to virtual ones on OLVM.
Hi Simon, we don't want to use virtual ones; we want to retain the physical disks for performance reasons. Is there a procedure to use the same physical disks?
You should be able to present the same disk and get it associated with your VM running on KVM.
That said, I would suggest you evaluate switching to virtual disks, which will also give you the option of full VM snapshots.
It seems there is a symlink or other issue with the /usr/libexec/qemu-kvm file; the customer suggests reviewing the procedure below. https://www.ovirt.org/documentation/admin-guide/virt/console-client-resources.html
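For reference, a quick way to inspect whether that path is a symlink and where it resolves (standard coreutils commands):
# ls -l /usr/libexec/qemu-kvm
# readlink -f /usr/libexec/qemu-kvm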
We are six months later; OLVM is still only 4.3.6, meanwhile oVirt is at 4.3.10, RHV is already at 4.3.11, and oVirt 4.4 is already at version 4.4.3. I understand you have a lot of work replacing "redhat/ovirt" with "oracle" in the source code, but please, when you do finish and release, release an up-to-date version, not again some ancient version like you did with Xen/OVM!
Simon, the document states "Fiber Channel / iSCSI storage domains are not suitable for direct VM import/migration". Why is this? Eric
Hi Stuart. Just look at how each virtualization system uses an FC/iSCSI storage domain. OVM uses a shared file system (OCFS2) on top of the storage; OLVM uses LVM volumes as disk images for VMs. So they cannot be compatible. Regards, Nik
Hi Simon, I have just started using virt-v2v to migrate our Oracle VMs to KVM. Every time I got a timeout error, and finally I found that the time taken by ovirt-engine to create the disk image is 20 min on my site, whereas the timeout is hard-coded to only 5 min in rhv-upload-plugin.py.
What I did as a workaround: after launching virt-v2v, I quickly open the file /var/tmp/xxxx/rhv-upload-plugin.py and raise the timeout from timeout = 5*60 to timeout = 90*60. Although this fixed my problem, I have to do it each time I migrate a guest. Do you have a permanent solution for this? Here is the complete error I got:
[ 138.4] Copying disk 1/2 to qemu URI json:{ "file.driver": "nbd", "file.path": "/var/tmp/rhvupload.c5gUOS/nbdkit0.sock", "file.export": "/" } (raw)
nbdkit: python[1]: error: /var/tmp/v2v.aoOHD7/rhv-upload-plugin.py: open: error: ['Traceback (most recent call last):\n', ' File "/var/tmp/v2v.aoOHD7/rhv-upload-plugin.py", line 192, in open\n raise RuntimeError("timed out waiting for disk to become unlocked")\n', 'RuntimeError: timed out waiting for disk to become unlocked\n']
qemu-img: Could not open 'json:{ "file.driver": "nbd", "file.path": "/var/tmp/rhvupload.c5gUOS/nbdkit0.sock", "file.export": "/" }': Failed to read option reply: Unexpected end-of-file before all bytes were read
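For illustration, a minimal sketch of the workaround described above (the /var/tmp directory name is randomized per run, so the path below is a placeholder; it must be run right after launching virt-v2v, before the upload phase starts):
# sed -i 's/timeout = 5\*60/timeout = 90*60/' /var/tmp/v2v.XXXXXX/rhv-upload-plugin.py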
Hi, testing this migration process, we understand that the destination must be preallocated. Do you have further advice or a script to shrink LVM partitions?
Monitoring storage array I/Os during a migration, we notice a permanent **read** load on the destination. Why is that? I mean, performance is low. Source and destination are iSCSI volumes (Compellent storage, multi-tier: SSD + 10K disks). We have a 10Gb/s link, multipath (2 paths). On the **destination** there are writes at 25MB/s together with reads at 50MB/s. More reads than writes on a destination is unexpected.
Hi, just to point out that the option -oo rhv-direct=true is a life saver in terms of migration speed. (iSCSI)
Hi, do you have any comparison of how much faster it is when using "-oo rhv-direct"? Is there any other impact of using "-oo rhv-direct" besides bypassing the OLVM Engine?
I've (finally) managed to migrate one VM from OVM to OLVM with 1x100GB disk, and the operation (-oa preallocated) took ~1h30min. Would it be much faster if I used "-oo rhv-direct"?
Thanks.
The transfer rate was awful before (something like a few MB/s). We opened an SR at Oracle and support gave us that tip. This is Doc ID 2787926.1 if you have access.
[quote]
...With the default virt-v2v options, all converted data will pass through the engine host before being uploaded to the Storage Domain:
Oracle VM Server ---> Oracle KVM ---> OLVM Engine ---> Oracle KVM ---> Storage Domain
...
By using the option "-oo rhv-direct" the disk upload will go directly from the hypervisor to the Storage Domain without passing through the Engine:
Oracle VM Server ---> Oracle KVM ---> Storage Domain
...
[/quote]
This is just unbelievable, isn't it? I mean the default behaviour. Moving a 70GB sparse/200GB img file to a 200GB preallocated disk took 20 minutes, so that's ~210MB/s as regards the actual 70GB (600MB/s for the whole 200GB; I don't know if/how zeros are passing through). We're not saturating our 2x10Gb/s multipath links for sure; we're rather hitting some limitation at the storage array. We'll check if there is still room for improvement, but we can live with it for now.
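For illustration, the option is simply appended to the usual invocation; a minimal sketch with placeholder values (engine URL, storage domain name, and file paths are examples, not taken from this thread):
# virt-v2v -i libvirtxml myvm.xml -o ovirt-upload -oc https://engine.example.com/ovirt-engine/api -os mydomain -op /root/ovirt-admin-password -of raw -oo rhv-cafile=/root/ca.pem -oa preallocated -oo rhv-direct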
Yes, I've seen that note, thanks. Maybe I'll give it another shot and compare timings using "-oo rhv-direct".
Also about "-oa preallocated" - I'd wish I could migrate VMs with sparse allocation.. Preallocated uses just too much repository.. :(
Other than that, I have major problems with VMs on OLVM, as they are losing network connectivity (pings are lost randomly and machines are not accessible).
I've tested migration with "-oo rhv-direct" and without. In both cases the same VM @OVM with a 100GB disk was migrated to OLVM 4.3.10:
A) virt-v2v ... "-oa preallocated" took 1h 20min
B) virt-v2v ... "-oa preallocated -oo rhv-direct" took 16 min
...so, a significant improvement in migration time. Why isn't "rhv-direct" on by default??
Hi Simon, what will be the impact on the running VMs on my OVM host when I need to restart the xend service as part of a config change on the Xen host? Will they keep running and stay accessible, or do I need to shut them down?
service xend restart
If they will not be accessible or need to be shut down, then I will have to perform the service restart when I get a downtime window.
While executing that command there shouldn't be any issue for running VMs. My suggestion is to first try it on a test server or on an OVM server with no production VMs.
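For a quick sanity check (assuming the classic xm toolstack, which is consistent with xend being in use on these OVM servers), you can list the running domains before and after the restart and confirm nothing changed:
# xm list
# service xend restart
# xm list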
Hi, we're unable to migrate Windows 10 VMs. Error: "virt-v2v: error: inspection could not detect the source guest...No root device found in this operating system image." We opened an SR *3 months ago*; an internal bug was created 28 days ago. (Note that migrating Windows Server 2019 is OK.)
Can you please share the SR and bug numbers opened?
Hi, thanks for the quick reply!
SR 3-26920635121
++++ Internal ++++++
BUG 33531102 https://bug.oraclecorp.com/pls/bug/webbug_edit.edit_info_top?rptno=33531102 - Unable to migrate any guest VM running Windows 10 20H2 from OVM to OLVM.
It seems that we cannot reproduce the issue you're encountering. Suggested workaround (which should also come up on the Service Request):
- copy the Win10 virtual disk from OVM to temporary storage
- convert the virtual disk from "raw" (OVM) to qcow2 (KVM):
# qemu-img convert -f raw -O qcow2 image.img image.qcow2
- import the virtual disk into OLVM
- create a new VM leveraging the imported virtual disk
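As a quick follow-up check (standard qemu-img usage; image.qcow2 matches the example above), you can verify the converted image before importing it into OLVM:
# qemu-img info image.qcow2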
Hi, thanks for your help. We'll try that workaround. It is nevertheless flabbergasting that we are the only ones having this problem.
Hi, it works. Thanks. Worth adding that the disk is imported as "thin-provision". I'm nevertheless not sure if it makes any difference in terms of space allocation compared to the virt-v2v conversion, which imports a preallocated disk. Windows reports 57GB used out of 100GB, but OL KVM reports a virtual size of 100 GB and an actual size of 81 GB.
Hi Simon, just to update you here, as I asked what would happen to my running VMs if I restarted the xend service (Oct 27, 2021 1:56PM). Your reply was correct! Today I did it on my production environment and nothing happened to my running VMs. I could perform the required steps and restart the xend service without any issues and without any downtime. Thanks a lot!
Good to know, thanks for reporting back!
Back to the Windows 10 conversion/import workaround: it works, but uploading a disk through the OLVM browser GUI is agonizingly slow. Is there another way, or a "direct" option to turn on somewhere, just like virt-v2v's "rhv-direct"?
Hi, I'm having trouble migrating one specific VM from OVM to KVM. I've successfully migrated a few machines without any issues, but with this specific one the migration gets "stuck" on the step "Converting Oracle Linux Server 7.6 to run on KVM" (usually this step completes in a couple of seconds). I've waited for 20+ minutes with no progress, then I killed virt-v2v. Any ideas? :(
Full log:
[root@kvmhost1 ~]# export LIBGUESTFS_BACKEND=direct
[root@kvmhost1 ~]# virt-v2v -i libvirtxml source_VM.xml -o ovirt-upload -oc https://manager.uri/ovirt-engine/api -os repo01 -op /root/ovirt-admin-password -of raw -oo rhv-cluster="Default" -oo rhv-cafile=/root/ca.pem -oa preallocated -oo rhv-direct
Exception AttributeError: "'module' object has no attribute 'dump_plugin'" in <module 'threading' from '/usr/lib64/python2.7/threading.pyc'> ignored
[ 0.3] Opening the source -i libvirtxml source_VM.xml
[ 0.3] Creating an overlay to protect the source from being modified
[ 0.4] Opening the overlay
[ 3.7] Inspecting the overlay
[ 17.2] Checking for sufficient free disk space in the guest
[ 17.2] Estimating space required on target for each disk
[ 17.2] Converting Oracle Linux Server 7.6 to run on KVM
Thanks!
EDIT: Ran the migration again, and again it got stuck on the same step, but I've figured out that the process is still writing into /tmp/v2vovlc74f16.qcow2 (the system disk of the VM being migrated); so far ~150MB written, and still running. :( It's weird, though, as the other file /tmp/v2vovl24d9fd.qcow2 (the other virtual disk of the VM being migrated, which is also bigger!) is already completed with a size of 2.1 M...?
Hello, I am going to migrate Oracle VM too. Someone suggested I use Vinchin's solution. I wonder whether this could reduce my work, and is it safe to use? Any suggestions would be appreciated.
Hi, I don't use it with Oracle VM; however, we tested it with RHV/oVirt. It's useful and stable. I think it is a good backup solution for KVM-based VMs. You could test it.
Thanks for your reply. I'll try.
Is there an update on this for oVirt 4.4? I'm able to install oraclelinux-developer-release-el8, but there does not appear to be an ol8_developer_kvm_utils channel or a qemu-block-curl package.
I'm checking with the engineering team. Will get back to you as soon as possible.
The package name for OL8 is "qemu-kvm-block-curl" and it's available in the "kvm_utils" AppStream channel.
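For illustration, a minimal install sketch; the repo id ol8_kvm_appstream is an assumption, so confirm the exact id with "dnf repolist all" first:
# dnf config-manager --enable ol8_kvm_appstream
# dnf install qemu-kvm-block-curl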
Oracle Learning Center made a 6-minute video on how to follow the Oracle director's blog article on migrating OVM to OLVM: https://www.youtube.com/watch?v=DqUi9dOInts Sadly, it skips so much detail, and he goes so fast, that it's not useful. Any chance Oracle has another video that actually details, carefully, the steps given at https://blogs.oracle.com/scoter/post/how-to-migrate-oracle-vm-to-oracle-linux-kvm ? Haven't located one yet. (Also, one of the issues I am concerned about with that blog is that it requires "service xend restart", and a lot of folks might hit issues with the VMs doing that; eventually we have to move production.)
Just for curiosity's sake, I also attempted an OLVM import of an OVA file produced by the OVM "Export to Virtual Appliance" function on a VM in OVM. The OLVM import "almost" works; I even see the VM show up in my OLVM list of virtual machines for a few minutes, but then it disappears, and vdsm.log on the KVM host appears to show an LVM issue, even though the OLVM Events log shows the disk was added successfully. This OVA method would really be far simpler, if only it could be made to work fully between OVM and OLVM. (Note: when performing an OVM Export to Virtual Appliance, you then find the "package.ova" file under the OVM host /OVS/Repositories/... filesystem, and that is what I copied over to the KVM host for import in OLVM.) See the OLVM import Oracle document (Doc ID 2535963.1).
Also started to look at Doc ID 2624531.1, which appears to be a Python script version (using virt-v2v) of some of what the director's blog discusses(?), but it's written in Python 2 and needs Python 3 updates.
If you are using the director's blog method, the virt-v2v command line will require this option (-oa preallocated), otherwise it will fail. I have used it successfully this way.
bugzilla.redhat.com/show_bug.cgi?id=1600547
Bug 1600547 - Disk configuration (RAW Sparse) is incompatible with the storage domain type.
Richard W.M. Jones, 2018-07-13 07:56:53 UTC: "I think this is a known bug. You currently must use '-oa preallocated'. Eventually we want to modify oVirt so it does the right thing automatically."
I re-ran virt-v2v with the preallocated option, and it worked!
Hi, some time ago I migrated one machine with:
virt-v2v -i libvirtxml hostname.xml -o ovirt-upload -oc https://mgrurl/ovirt-engine/api -os repository01 -op /root/ovirt-admin-password -of raw -oo rhv-cluster="Default" -oo rhv-cafile=/root/ca.pem -oa preallocated -oo rhv-direct
With "preallocated" here I'd presume the migration would create a preallocated disk, but NO: the disks are allocated as Thin Provisioned (as somebody already mentioned a few pages back). What does "-oa preallocated" really do here? Does it extend the disk to its full size, or only to the size that was allocated when the migration started? Is it possible to have the disk created as preallocated?
The thing is that yesterday I had terrible problems with one machine which was migrated from OVM using virt-v2v. When I was copying some 300GB to this machine, I got lots of warnings in the terminal ("kernel: do_IRQ: 0.48 No irq handler for vector (irq -1)") and lots of OLVM events in the manager like "VM [hostname] is not responding.", "VM [hostname] has been paused.", "VM [hostname] has been paused due to no Storage space error.". During that time the machine was almost unresponsive and even losing pings (every 15 seconds, let's say, for the duration of the copy). Luckily it was a test machine, but if this happened on some PROD machines it would definitely affect service availability. Now I need to find out how to mitigate this. Any thoughts? Thanks!
The current need for "-oa preallocated" with the virt-v2v command is due to the following bug workaround (Internal Bug 30683581): OLVM: virt-v2v fails with "Disk configuration (RAW Sparse) is incompatible with the storage domain type" (Doc ID 2707098.1) https://support.oracle.com/epmos/faces/DocumentDisplay?parent=SrDetailText&sourceId=3-30563292831&id=2707098.1
I encountered this myself using OEL 8.6 and the latest OLVM/KVM package versions, so it is still an issue, but the workaround works. If, for example, you have a SAN on the backend which thin-provisions the space, then when you use virt-v2v with -oa preallocated on the OLVM side, the SAN still thin-provisions the space. If your SAN already thin-provisions, there is no need to attempt thin provisioning on the OLVM side. Anyway, you need the -oa preallocated workaround for virt-v2v to work.
We are trying to migrate from OVM (3.4.6.2265) with Oracle Linux 7.9 VMs to OLVM (version 4.4.10.7-1.0.17.el8) with Oracle Linux 8.6 KVM hosts. A Red Hat support page, https://access.redhat.com/articles/1351473, seems to indicate that RHEL7 -> RHEL8 is not supported. Would this be applicable to OL7 -> virt-v2v -> OL8 migrations?
ovirt-engine-4.4.10.7-1.0.17.el8.noarch
qemu-kvm-core-6.1.1-3.module+el8.5.0+20635+d56619be.x86_64
vdsm-4.40.100.2-1.0.12.el8.x86_64
supermin-5.1.19-10.module+el8.5.0+20635+d56619be.x86_64
On the Xen hypervisors:
libvirt-1.2.14-19.1.el6.x86_64
xen-4.4.4-222.0.15.el6.x86_64
Note, before upgrading to OLVM 4.4 with OL8, we were able to migrate OL7 VMs to OLVM using virt-v2v.
Thank you, Philip Fielder, Enterprise DevOps/Sandia Nat. Lab
That Red Hat document (https://access.redhat.com/articles/1351473) and its chart seem to be more about which VM guest versions (source side) and host versions (KVM target side) are supported. But you said you "migrate from OVM (3.4.6.2265) with Oracle Linux 7.9 VMs", which is supported with "OLVM (Version 4.4.10.7-1.0.17.el8) with Oracle 8.6 KVM" hosts.
As a comparison, I am currently and successfully migrating VM guests (using virt-v2v and the director's blog method) between the following environments:
Source: Oracle VM (OVM) Manager version 3.4.7.244 with VM guests running RHEL7 and Windows 2016.
Target: Oracle Linux Virtualization Manager (OLVM) version 4.4.10.7-1.0.10.el8 running on OEL 8.6 + UEK 5.4.17, with KVM hosts running Oracle Enterprise Linux (OEL) 8.6 with Unbreakable Enterprise Kernel (UEK) 5.4.17. (Note: make sure the OLVM manager server is a separate host from the KVM host servers, as the OLVM and KVM installs have different requirements.)
Side note: in conjunction with the director's blog instructions, what I wound up doing is making one of the Xen hosts the source and one of the KVM hosts the target server. As you use the director's blog, you will see what I mean.
PS: I have also migrated several Oracle RAC systems running on RHEL7 in the OVM environment to the new OLVM environment described in the previous post. Each virt-v2v migration involves creating (via the virsh command) an XML profile for each server, so you use either the first or last Oracle RAC server to carry over all the ASM disks, but for the others in the same RAC cluster you edit those disks out of the XML profile before migrating the VM. Then, after migration to OLVM, you use OLVM to add the ASM disks back (making sure to flag them [x] Shared) to the VMs needing them.
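For illustration, a minimal sketch of that per-node XML step (racnode2 is a placeholder VM name; which <disk> entries to remove depends on your ASM layout):
# virsh dumpxml racnode2 > racnode2.xml
(edit racnode2.xml to delete the shared ASM <disk> blocks, then feed it to virt-v2v with -i libvirtxml racnode2.xml)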
User_8CYHY - sounds like you have a lot of experience with moving things from Oracle VM to OLVM, specifically with RAC. You should write a blog post about it!