1 Reply Latest reply: Jun 15, 2011 3:16 PM by user8611443

    OVM-5004 when importing a virtual machine image on version 2.2.0

    user8611443
      Hi:

      I have a running production environment using OVM 2.2.0 software. I am trying to import a new virtual machine image using the web interface and I keep getting "OVM-5004 Invalid virtual machine config file -". When I look in the logs, I see this:


      ovs_query.log:

      "2011-06-15 10:40:37" ERROR=> get_vm_config: failed. vm_path('/OVS/running_pool/20_em11g') => Exception: failed:Exception: vm.cfg does not exist under path: /var/ovs/mount/20A34C60E3364D718EE0734B34925694/running_pool/20_em11g

      StackTrace:
      File "/opt/ovs-agent-2.3/OVSXXenVMConfig.py", line 2602, in xen_get_vm_config
      raise Exception("vm.cfg does not exist under path: %s" % vm_path)


      StackTrace:
      File "/opt/ovs-agent-2.3/OVSSiteVMConfig.py", line 1206, in get_vm_config
      raise Exception(rs)


      ovs_operation.log:

      "2011-06-15 10:40:34" INFO=> verify_image_v2v: /OVS/running_pool/20_em11g
      "2011-06-15 10:40:34" INFO=> verify_image_v2v: success:ovm;vm.cfg
      "2011-06-15 10:40:35" INFO=> verify_image_v2v: /OVS/running_pool/20_em11g
      "2011-06-15 10:40:35" INFO=> verify_image_v2v: success:ovm;vm.cfg
      "2011-06-15 10:40:35" INFO=> xen_correct_qos_cfg: vm_cfg('/var/ovs/mount/20A34C60E3364D718EE0734B34925694/running_pool/20_em11g/vm.cfg')=>success.
      "2011-06-15 10:40:35" INFO=> xen_correct_cfg: success. vm('/var/ovs/mount/20A34C60E3364D718EE0734B34925694/running_pool/20_em11g') => cfg_file('/var/ovs/mount/20A34C60E3364D718EE0734B34925694/running_pool/20_em11g/vm.cfg')

      So the config file is both OK and not OK? Also, the vm.cfg does exist exactly where the agent is looking:

      # ls -l /var/ovs/mount/20A34C60E3364D718EE0734B34925694/running_pool/20_em11g
      total 35897252
      -rw-rw-rw- 1 root root 28994112000 Jun 29 2010 oms.img
      -rw-rw-rw- 1 root root 7764664320 Jun 29 2010 System.img
      -rw-rw-rw- 1 root root 510 Jun 15 10:40 vm.cfg
      -rw-rw-rw- 1 root root 310 Jun 14 11:34 vm.cfg.orig


      Unfortunately, I can't reboot the server as it is a production system.

      Any ideas what is going on here? In case it helps, the software is the Oracle Enterprise Manager 11g template. I first tried to import it as a template and got the same error, so I copied it over from seed_pool to running_pool, generated a random MAC address, and edited the vm.cfg for my network connection. Here is the vm.cfg:

      bootloader = '/usr/bin/pygrub'
      disk = ['file:/var/ovs/mount/20A34C60E3364D718EE0734B34925694/running_pool/20_em11g/System.img,xvda,w',
      'file:/var/ovs/mount/20A34C60E3364D718EE0734B34925694/running_pool/20_em11g/oms.img,xvdb,w',
      ]
      memory = 5120
      name = '20_em11g'
      on_crash = 'restart'
      on_reboot = 'restart'
      uuid = 'c0baee91-f004-4fb8-b153-832653170c49'
      vcpus = 2
      vfb = ['type=vnc,vncunused=1,vnclisten=0.0.0.0,vncpasswd=ovsroot']
      vif = ['bridge=xenbr4,mac=00:16:3e:2f:14:46,type=netfront']
      vif_other_config = []
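
      For reference, the preparation steps were roughly along these lines (a sketch; the seed_pool directory name is a placeholder for whatever the unpacked template was called, and the printf line is just one way to pick a random MAC in the Xen 00:16:3e range):

      # cp -a /OVS/seed_pool/<template_dir> /OVS/running_pool/20_em11g
      # printf '00:16:3e:%02x:%02x:%02x\n' $((RANDOM%256)) $((RANDOM%256)) $((RANDOM%256))

      I then pasted the generated MAC into the vif line above and pointed the bridge at xenbr4.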

      Does anyone have any idea what I'm doing wrong?
        • 1. Re: OVM-5004 when importing a virtual machine image on version 2.2.0
          user8611443
          SOLVED - It turns out I had a totally different problem. I have two Oracle Virtual Servers and an OCFS2 OVS repository connected via iSCSI. My OCFS2 repository was read-only on one of my servers, due to my own error. I had expanded the iSCSI volume and expanded the OCFS2 file system on-line, but I only issued the commands on one of my virtual servers. When the other (un-expanded) virtual server tried to access the larger file system, it failed and put the file system in read-only mode. :-/
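
          A quick way to see this condition is to check /proc/mounts on each virtual server; when OCFS2 flips the repository to read-only, the mount options there should show ro instead of rw:

          # grep /var/ovs/mount /proc/mounts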

          The solution is:

          # multipath -l

          This gives you the name of the multipath device (in my case OVSData) and all of the individual path devices (sdX). For each of those devices, issue:

          # echo 1 >/sys/block/<device_name>/device/rescan

          This reloads the size of the device to the SCSI layer.
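
          If there are several path devices, a quick loop does the same thing (sdb and sdc here are only examples; substitute the devices that multipath -l actually listed):

          # for dev in sdb sdc; do echo 1 > /sys/block/$dev/device/rescan; done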

          # multipathd -k
          multipathd> resize map OVSData

          This reloads the size of the device to the multipath layer.
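
          (Depending on the multipath-tools version, you may also be able to pass the command directly rather than going through the interactive prompt, something like:

          # multipathd -k"resize map OVSData"

          but the interactive session above is what I actually used.)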

          If I had done this on both servers BEFORE I expanded the file system (tunefs.ocfs2 -S /dev/mapper/OVSData), I would have been fine. Since I did not, I then had to do the following (rough command sequence after the list):

          - shut down all virtual machines on the server
          - /etc/init.d/ovs_agent stop
          - umount /dev/mapper/OVSData
          - mount /dev/mapper/OVSData <repository directory> [in my case /var/ovs/mount/20A34C60E3364D718EE0734B34925694]
          - /etc/init.d/ovs_agent start
          - log in to VM Manager, go to the Server Pools tab, choose my pool, and click Restore, then OK
          - restart my virtual machines
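
          For the command-line part of that list, the sequence on the affected server boils down to roughly this (device name and mount point are the ones from my setup; the VM Manager Restore step still has to be done in the GUI):

          # /etc/init.d/ovs_agent stop
          # umount /dev/mapper/OVSData
          # mount /dev/mapper/OVSData /var/ovs/mount/20A34C60E3364D718EE0734B34925694
          # /etc/init.d/ovs_agent start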

          This got everything working properly again.