Bad HDD - Restoring VMs

user5933938 Member Posts: 15
edited Feb 7, 2019 4:31PM in Oracle VM Server for x86

We have an older Oracle VM Server, version 3.1.1. It works great for some of the testing our developers need. Recently the HDD that held our repository and all of our VMs died. Our backup method was manually copying all the necessary files whenever we felt we needed a backup, so we do have backups of the VMs and other related files. I've added a replacement disk (same size and model as the one that died) and set it up as a new repository in OVM, and it is recognized just fine, but I'm not quite sure how to actually import the backup data I have so we can get our VMs back up and running. Most internet searches come back with info about newer versions of OVM than what we have.

I see the VMs if I go to the server (under Server Pool) but obviously those VMs are for the setup that has the missing/bad disk. 

Thanks

Answers

  • budachst Member Posts: 1,832
    edited Jan 27, 2019 10:24AM

    AFAIR, you will have to make the new disk look like the old repo's disk. You can achieve that by manually formatting it with mkfs.ocfs2 and giving it the correct ID. However, a repo consists of more than just the vdisks and guest configs, so you might have a hard time re-creating it manually if you never made a complete backup of the drive/repo.

    As you're on 3.1.1, I think you could also bundle up the vdisk and guest config into a tar archive for each guest and import it as an arbitrary guest into the new repo.
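
    A rough sketch of what such a bundle could look like - the paths and file names here are purely illustrative, so point them at wherever your backed-up vm.cfg and disk image actually live:

    # hypothetical example - vm.cfg and the guest's disk image restored from your backup
    cd /backup/myguest
    tar -cvf myguest.tar vm.cfg myguest_disk.img

    Keeping each guest's config and vdisk together like that makes it easier to drop them back into the right places of a repo later, whichever import route you end up taking.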

    Cheers,

    budy

  • user5933938 Member Posts: 15
    edited Jan 28, 2019 9:42AM

    If I want to go back and try to give the new disk the same ID as the old disk, how would I do that exactly? I have the steps for formatting and such, but I just set up the new disk as a new repo. I didn't know it was an option to give it the same ID as the old failed disk.

    For the second option, I can tar up the vdisks and config files no problem. However, I do not see any option in OVM 3.1.1 to import a VM. I see an option to import a disk (via Repository), but not an entire VM. Newer versions such as OVM 3.4 do have that option, but I don't see it in 3.1.1.

    Thanks

  • budachst Member Posts: 1,832
    edited Jan 29, 2019 9:26AM

    Hi,

    okay - put your propeller hat on… you will need to dig up some information, especially the old UUID of the repo you lost. Usually there's a hidden file named .ovsrepo on each repo. This file contains the vital information, but the same data is also stored in the OVM database and on each OVS that had the repo mounted. So running the following on your OVS should also spit out the data of the lost repo:

    ovs-agent-db dump_db repository

    Run that and post the result - it's been a couple of years since I last dealt with OCFS2 volumes…
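
    If you still have any repo mounted, you can also peek at its hidden metadata file directly; a quick loop like this (assuming the default /OVS/Repositories/<uuid> mount points) shows what each mounted repo knows about itself:

    # print the .ovsrepo file of every repository that is currently mounted
    for r in /OVS/Repositories/*/; do echo "== $r"; cat "${r}.ovsrepo" 2>/dev/null; done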

  • user5933938 Member Posts: 15
    edited Jan 29, 2019 11:25AM

    Thanks for your replies.

    The issue is that the old repo disk is not working anymore, so the server doesn't see it. I physically removed it, installed the new disk and set it up as a new repo, thinking I could somehow import my backup data into it. Running the command you mentioned just gives me info on the new disk:

    [[email protected] ~]# ovs-agent-db dump_db repository
    {'0004fb00000300006fe662cd83bd6b7f': {'alias': u'JDE910_REPO1',
                                          'filesystem': 'ocfs2',
                                          'fs_location': '/dev/mapper/35000c50062c887c3',
                                          'manager_uuid': u'0004fb0000010000e7528f36343015ba',
                                          'mount_point': '/OVS/Repositories/0004fb00000300006fe662cd83bd6b7f',
                                          'version': u'3.0'}}

    The only info I have found for the old repo is as follows:

    Repository Name: Repository_JDE910
    Device: /dev/mapper/35000c50034edb1b7
    ID: 0004fb00000300003b7249e7240affcd

    Oracle VM Manager also still shows the old repo (as well as the new one).

  • budachst Member Posts: 1,832
    edited Jan 29, 2019 1:40PM

    Okay… so… this is a bit tricky, and you may want to test it on a spare drive first. There are two things that need to be worked out:

    - the device-mapper ID as on record in the OVM database

    - the volume label

    We can't control the dm-mapper ID, but once we get the label right, we could simply create a symlink under the old dm-mapper ID pointing at the new drive and have the OVS recognize the makeshift repo. It bothers me that the old repo is not noted in the OVS's local database, though. I assume this is a single-host setup, no?

    Can you please also run this command on your OVS, just to see what the label of the current repo's drive looks like:

    tunefs.ocfs2 -Q "UUID = %U\nNumSlots = %N\n LABEL = %V\nBlockSize = %B\nCluster-Size = %T\nCompatFlags = %M\n" /dev/mapper/35000c50062c887c3

    It should output something like this (taken from an old note I made when we were still on OCFS2 repos):

    [[email protected] ovmUtils]# tunefs.ocfs2 -Q "UUID = %U\nNumSlots = %N\n LABEL = %V\nBlockSize = %B\nCluster-Size = %T\nCompatFlags = %M\n" /dev/mapper/23236646265613764
    UUID = 0004FB000005000044D99444114C6AF9
    NumSlots = 32
    LABEL = OVS99444114c6af9
    BlockSize = 4096
    Cluster-Size = 1048576
    CompatFlags = backup-super strict-journal-super
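
    While you're at it, it helps to note which /dev/mapper name belongs to which physical disk, since we'll want to point things at the right device later; plain device-mapper tooling is enough for that, nothing OVM-specific:

    # map the device-mapper names to the block devices behind them
    ls -l /dev/mapper/
    ls -l /dev/disk/by-id/
    # if multipathd is in use, this also shows which SCSI devices back each map
    multipath -ll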

  • user5933938 Member Posts: 15
    edited Jan 29, 2019 1:49PM

    Yes, single-host setup. All local disks.

    Here are the results from the command on the current repo:

    [[email protected] /]# tunefs.ocfs2 -Q "UUID = %U\nNumSlots = %N\n LABEL = %V\nBlockSize = %B\nCluster-Size = %T\nCompatFlags = %M\n" /dev/mapper/35000c50062c887c3
    UUID = 0004FB0000050000AF48896C5E9ACA27
    NumSlots = 32
    LABEL = OVS8896c5e9aca27
    BlockSize = 4096
    Cluster-Size = 1048576
    CompatFlags = backup-super strict-journal-super

  • budachst Member Posts: 1,832
    edited Jan 30, 2019 2:28AM

    Hi,

    okay then… this is how you should be able to pull that stunt off… please note that this might or might not work, and I am not taking any responsibility for it. If there's any valuable data on your host, back it up! A mistake at any of the steps you're about to take can render your host inoperable, wipe any of the installed hard drives and make the heavens come down on you… ok, probably not the last one.

    For safety, I'd suggest performing these steps on a blank drive - don't use the new repo's drive, or remove the new repo first if you haven't made use of it yet. But before doing so, perform one last operation and copy off the .ovsrepo file that resides in the new repo's root - you might need it for reference.
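
    For example (the new repo's UUID is taken from your dump; the destination path is just a suggestion):

    # keep a copy of the new repo's metadata file for later reference
    cp /OVS/Repositories/0004fb00000300006fe662cd83bd6b7f/.ovsrepo /root/ovsrepo-new-repo.bak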

    Action plan:

    - get the path to the dm-mapper device of the drive you want to restore your data on. Cross-check the devices in /dev/disk/by-id against /dev/mapper

    - according to the information you provided, the drive should be formatted like this:

    mkfs.ocfs2 -b 4096 -C 1048576 -L OVS249e7240affcd  -U 0004FB00000300003B7249E7240AFFCD -N 32 /dev/mapper/<drive>

    - mount the ocfs2 volume and create a .ovsrepo file in its root, which contains these lines:

    OVS_REPO_UUID=0004fb00000300003b7249e7240affcd
    OVS_REPO_VERSION=3.0
    OVS_REPO_MGR_UUID=0004fb0000010000e7528f36343015ba
    OVS_REPO_ALIAS=Repository_JDE910

    - restore your data on the volume

    - unmount the volume

    - since the dm-mapper path will be different, you might need to create a symlink from the old drive's ID to the new one's:

    ln -s /dev/mapper/<new drive> /dev/mapper/35000c50034edb1b7

    - either check for new drives on your OVS via OVMM and/or reboot your OVS
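
    Putting those steps together, a rough end-to-end sketch - the UUID, label, manager UUID and alias are taken from the information you posted, <new drive> is a placeholder for whatever dm-mapper device your replacement disk really is, and /mnt is just an example mount point. Triple-check the device before formatting anything:

    # 1. format the replacement drive so it carries the old repo's identity
    mkfs.ocfs2 -b 4096 -C 1048576 -L OVS249e7240affcd -U 0004FB00000300003B7249E7240AFFCD -N 32 /dev/mapper/<new drive>
    # 2. mount it temporarily and recreate the repo metadata file
    mount -t ocfs2 /dev/mapper/<new drive> /mnt
    {
      echo OVS_REPO_UUID=0004fb00000300003b7249e7240affcd
      echo OVS_REPO_VERSION=3.0
      echo OVS_REPO_MGR_UUID=0004fb0000010000e7528f36343015ba
      echo OVS_REPO_ALIAS=Repository_JDE910
    } > /mnt/.ovsrepo
    # 3. restore your backed-up repo contents onto /mnt, then unmount
    umount /mnt
    # 4. make the old device path point at the new drive
    ln -s /dev/mapper/<new drive> /dev/mapper/35000c50034edb1b7
    # 5. rescan storage on the OVS via OVMM and/or reboot the OVS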

  • user5933938 Member Posts: 15
    edited Jan 30, 2019 1:36PM

    I'm not using the new repo I set up. I copied off its .ovsrepo file just in case. Can I just use that drive for this step:

    mkfs.ocfs2 -b 4096 -C 1048576 -L OVS249e7240affcd  -U 0004FB00000300003B7249E7240AFFCD -N 32 /dev/mapper/<drive>

    So basically I have two physical drives in the server: 1) the OS drive, and 2) this 2TB drive for the repository. Adding a third drive is not an option at this point because it's an older server and I do not have an extra drive caddy.

  • user5933938 Member Posts: 15
    edited Jan 30, 2019 2:13PM

    So I received this warning when I executed the command:

    [[email protected] /]# mkfs.ocfs2 -b 4096 -C 1048576 -L OVS249e7240affcd  -U 0004FB00000300003B7249E7240AFFCD -N 32 /dev/mapper/35000c50062c887c3
    mkfs.ocfs2 1.8.2
    WARNING!!! OCFS2 uses the UUID to uniquely identify a file system.
    Having two OCFS2 file systems with the same UUID could, in the least,
    cause erratic behavior, and if unlucky, cause file system damage.
    Please choose the UUID with care.
    Cluster stack: classic o2cb
    /dev/mapper/35000c50062c887c3 is mounted; will not make a ocfs2 volume here!

  • budachst Member Posts: 1,832
    edited Jan 31, 2019 12:16AM

    Yes, if you don't delete the repo first, you will receive this error: the volume is still mounted, and mkfs.ocfs2 refuses to format a mounted OCFS2 volume. Please remove the new repo in OVMM first.
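
    Once the repo is gone from OVMM, it's worth checking that the OVS has really let go of the device before re-running mkfs.ocfs2; for example (device and mount point taken from your earlier output):

    # confirm nothing is mounted on the device anymore
    mount | grep 35000c50062c887c3
    # if it still shows up, unmount it by hand
    umount /OVS/Repositories/0004fb00000300006fe662cd83bd6b7f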