I hope you started the FW upgrade with CAM > 6.5 and collected supportData beforehand.
In that case you can try:
/opt/SUNWsefms/bin/service -d <deviceid> -c recover label=<vol name> manager=<a|b> vdiskID=<vdisk idx> raidLevel=<0|1|3|5|6> capacity=<bytes> segmentSize=<bytes> offset=<blocks> readAhead=<0|1> [drives=<tXdriveY,tXdriveY...> vdiskLabel=<vdisk label>]
You get raidLevel, capacity, segmentSize, offset, and the drive order from the supportData (Storage_array_profile).
If you instead apply a saved configuration file, CAM just creates new volumes with the required configuration, but the data will be lost.
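To make the template above concrete, here is a hypothetical invocation assembled from it. Every value (device id, volume name, vdisk index, RAID level, capacity, segment size, offset, drive list, vdisk label) is a made-up placeholder; the real numbers must come from your own Storage_array_profile.

```shell
# Hypothetical example only -- all values are placeholders,
# take the real ones from your supportData (Storage_array_profile).
/opt/SUNWsefms/bin/service -d array00 -c recover \
    label=vol01 \
    manager=a \
    vdiskID=1 \
    raidLevel=5 \
    capacity=598923542528 \
    segmentSize=131072 \
    offset=0 \
    readAhead=1 \
    drives=t1drive1,t1drive2,t1drive3,t1drive4,t1drive5 \
    vdiskLabel=vdisk01
```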
Fortunately, I generated a supportData package before upgrading, and my CAM version is 184.108.40.206. In addition to your reply I found an article at http://www.tune-it.ru/web/bender/blogs/-/blogs/восстановление-томов-на-массивах-6000-и-2000-серии (in Russian; the title translates to "Recovering volumes on 6000- and 2000-series arrays"). For non-Russian speakers: the article describes a similar solution with the /opt/SUNWsefms/bin/service utility, but the author adds a note about the offset=<blocks> field: he multiplies the value from the profile by 512. I had several volumes stored on a single vdisk, and I'm not sure now. In both your template and the author's, the argument is clearly marked as being in blocks, and the value stored in the profile is also in blocks, not bytes (a piece of my profile: "+...GB (598923542528 bytes) Offset (blocks): ...+"). Is he right to multiply the value from the profile by 512?
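The blocks-vs-bytes confusion above can be checked with simple arithmetic. Assuming the usual 512-byte block (sector) size, a value the profile already reports in blocks should not be multiplied again; conversion is only needed for values given in bytes. A minimal sketch (the helper name is mine, not from the utility):

```python
BLOCK_SIZE = 512  # assumed bytes per block (the usual sector size)

def bytes_to_blocks(n_bytes: int) -> int:
    """Convert a byte count from the profile to a block count."""
    assert n_bytes % BLOCK_SIZE == 0, "capacities should be block-aligned"
    return n_bytes // BLOCK_SIZE

# The capacity quoted from the profile above, given in bytes:
capacity_bytes = 598923542528
print(bytes_to_blocks(capacity_bytes))  # 1169772544 blocks
```

If the profile field is labeled "Offset (blocks)", feeding that number straight into `offset=<blocks>` keeps the units consistent; multiplying by 512 would turn it into a byte count.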
Second question: does the service utility provide a way to change the WWNs of volumes and the Storage Array Identifier (SAID) of the whole device? I found that the previous license files are no longer accepted because of a different Feature Enable Identifier (I think it is calculated from the changed Storage Array Identifier; am I right?). The reason I want to restore the WWNs and mappings on the recreated volumes to match the profile (the mappings I can correct from the BUI) is to avoid problems with VxVM possibly misrecognizing them on the server side (target numbering changes) and then having to re-correct and re-import the VxVM disks and disk-group ownership.
I don't have experience with offsets other than 0, so I can't comment; it really needs testing.
But the system should not permit creating overlapping volumes. You will create volumes with increasing offsets, so try the offset from the profile first; if you get an error, multiply it by 512.
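Nik's point about increasing offsets suggests a sanity check: if the profile values are consistent (and the volumes are packed back to back), each volume's starting offset in blocks equals the previous volume's offset plus its capacity converted to blocks; a mismatch would hint that the units are wrong. All numbers below are illustrative, not from the actual array:

```python
BLOCK = 512  # assumed bytes per block

# Hypothetical (offset_blocks, capacity_bytes) pairs as read from a profile.
volumes = [
    (0,       1073741824),  # vol0: 1 GiB at offset 0
    (2097152, 2147483648),  # vol1: 2 GiB, starts where vol0 ends
    (6291456, 1073741824),  # vol2: 1 GiB, starts where vol1 ends
]

def offsets_consistent(vols):
    """Check that each volume starts exactly where the previous one ends."""
    expected = 0
    for offset_blocks, capacity_bytes in vols:
        if offset_blocks != expected:
            return False
        expected = offset_blocks + capacity_bytes // BLOCK
    return True

print(offsets_consistent(volumes))  # True for these made-up numbers
```

If the check fails with the profile's raw offsets but passes after scaling, that tells you which unit the utility actually expects.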
For the two other questions I have no answer, sorry.
Veritas writes its own label on every disk, so there should be no need to change the WWNs of the disks.
Since the license is bound to the array WWN, it may be hard to find a way to change it.
Edited by: Nik on 15.07.2011 6:43
Sorry for being absent for a long time.
I tried specifying the values both as taken from the profile and as multiplied by 512; both attempts failed with the error:
Completion Status: Operation failed due to invalid parameterization
I also tried some larger random values. I can only recreate the first volume, the one starting at the first sectors; on the second and later volumes I get this error. Luckily these volumes are not important to me, since they had not yet been used for data.
Now I have a problem with the other virtual disk, because one of the drives in that RAID group (RAID 5) failed. I cannot even delete this virtual disk in order to try restoring the volumes later with the service command (the drives are not in the Unassigned state); it fails with the error:
Virtual Disk 0 could not be deleted. A procedure could not complete successfully because a volume group was in the exported state.
Is there a way to delete the VD?
I also tried to force-import the VD with Service Advisor, but it complains: Legacy volume group.