1 Reply Latest reply on May 16, 2013 7:06 PM by user12273962

    3.2 upgrade problem - repo unusable

      Hello everyone,

      I have a big problem; I'll do my best to explain it. Before the upgrade to 3.2 we were on 3.1.1, and our main storage repo was an iSCSI target on a NAS with two network interfaces. Because of limitations with the NAS, I made the mistake of allowing the target to be exposed on both interfaces, so the Storage tab in OVMM showed two identical physical disks.
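      To illustrate the setup (the IPs and IQN below are made up; the real values aren't important), discovering the same target through both NAS interfaces looks roughly like this, which is why the one LUN showed up as two disks:

      ```
      # Hypothetical example: one iSCSI target reachable via both NAS interfaces.
      iscsiadm -m discovery -t sendtargets -p 192.168.1.10
      # 192.168.1.10:3260,1 iqn.2005-10.com.example:nas.target0
      # 192.168.1.11:3260,2 iqn.2005-10.com.example:nas.target0
      # Logging in through both portals presents the same LUN twice,
      # so OVMM lists two "physical disks" for the same storage.
      ```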

      I created the repo on one of the disks and everything worked until I upgraded the servers and OVMM to 3.2 today. Somehow it now shows the storage initiators bound to one disk and the repo on the other. It worked before because I could simply mount /dev/mapper/xxxxxx2 instead of /dev/mapper/xxxxx1 (the repo is on xxxxx1), and since both names pointed at the same disk, it worked. Now it seems xxxxx2 no longer shows up in /dev/mapper/, so I can't present the repo to the servers, and I can't delete the pool because it says it still has a repo filesystem on it.

      I have pretty good experience with Oracle VM, having installed and maintained both OVM2 and OVM3 for a couple of years, but this problem has left me out of ideas. I don't need the old repo; I just need to migrate the VMs to a new repo on a different pool. I have done manual imports before, when moving from OVM2 to 3 and when VMs needed recovery, so I can do that; for 30 VMs it's doable. But I still can't remove the old repo/pool/last server from the pool, and as you can imagine I would like those back.
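      For reference, the pre-upgrade workaround amounted to something like the following (the xxxxx names stand in for the real device-mapper names, and the repo path is from memory):

      ```
      # On a 3.1.1 server, both device-mapper entries existed for the one disk:
      ls /dev/mapper/        # showed both xxxxx1 and xxxxx2
      multipath -ll          # listed the paths behind each mapped device
      # Mounting the repo via the second entry worked because it was the same LUN:
      mount /dev/mapper/xxxxxx2 /OVS/Repositories/<repo-uuid>
      # After the 3.2 upgrade, xxxxx2 no longer appears in /dev/mapper/,
      # so this workaround is gone.
      ```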

      I would like to avoid manual imports until I can remove those, since I don't want to cause duplicate UUIDs and other inconsistencies in the OVMM database. I'm hoping to avoid a clean OVMM install and a full VM import. Any ideas will be appreciated; I have tried too many things to list here, but maybe I missed something obvious.
      If I haven't explained it clearly, I will try again in follow-up posts. Thank you.

      Edited by: 939527 on May 16, 2013 11:22 AM