As in the thread "StorageTek 6140 how to clear event logs?", I had a problem with event logs during a firmware upgrade procedure (upgrading from 06.60.22.10 to 07.xx with the Sun Controller Firmware Upgrade Tool, version 10.36.G1.02). Now I have an even worse result: my upgrade failed at the firmware activation stage. 30 minutes later (the interval given in the documentation) I checked the LEDs on the controllers. Controller A showed code "88" ("This ESM is being held in Reset by the other ESM") and its Service Action Required LED was on. I tried to reset it by power-cycling, but it would only boot with controller A alone (the second controller removed). After I re-added it in CAM, the firmware version was OK (07.15.11.17). I then inserted the second controller (its LEDs are fine, id/diags showing 85 after about 2-5 minutes of negotiation and boot), but then I discovered that there are no volumes at all.
The documentation says the upgrade tool makes a backup of the configuration before activating the new firmware, and the backup is there. Is there any chance to restore the configuration from it? The *.cfg file looks like a script, but not one for CAM (I tried to import it, but CAM complains that it is not in XML format). What kind of tool do I need now? Is it possible to restore the volumes, and even if it only recreates them, will they be created on their previous boundaries? Thanks in advance!
I hope you started the FW upgrade with CAM > 6.5 and collected supportData beforehand.
In that case you can try:
/opt/SUNWsefms/bin/service -d <deviceid> -c recover label=<vol name> manager=<a|b> vdiskID=<vdisk idx> raidLevel=<0|1|3|5|6> capacity=<bytes> segmentSize=<bytes> offset=<blocks> readAhead=<0|1> [drives=<tXdriveY,tXdriveY...> vdiskLabel=<vdisk label>]
raidLevel, capacity, segmentSize, offset, and the drive order you get from the supportData (Storage_array_profile).
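For illustration only, here is a hypothetical invocation of that template for a RAID-5 volume. Every value below is a placeholder, not taken from this array; the real label, vdisk index, capacity, segment size, offset, and drive order must come from the Storage_array_profile in your supportData:

```shell
# Hypothetical example -- all values are placeholders to show the syntax.
/opt/SUNWsefms/bin/service -d array00 -c recover \
    label=vol01 manager=a vdiskID=0 raidLevel=5 \
    capacity=598923542528 segmentSize=131072 \
    offset=0 readAhead=1 \
    drives=t1drive1,t1drive2,t1drive3,t1drive4,t1drive5
```

The drive list must be given in the same order as it appears in the profile, since the recover operation relabels the volume in place rather than rewriting data.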
If you just apply the saved configuration file, CAM will create new volumes with the required configuration, but the data will be lost.
Fortunately I generated a supportData package before upgrading, and my CAM version is 126.96.36.199. In addition to your reply I found an article (in Russian) at http://www.tune-it.ru/web/bender/blogs/-/blogs/восстановление-томов-на-массивах-6000-и-2000-серии . For non-Russian speakers: the article describes a similar solution using the /opt/SUNWsefms/bin/service utility, but the author notes that he multiplies the offset=<blocks> value taken from the profile by 512. I had some volumes stored on a single vdisk, and now I'm not sure: in both your template and the author's, the argument is clearly marked as being in blocks, and the value stored in the profile is also in blocks, not in bytes (a piece of my profile: "+...GB (598923542528 bytes) Offset (blocks): ...+"). Is he right to multiply the value from the profile?
- Second question: does the service utility provide a way to change the WWNs of volumes and the Storage Array Identifier (SAID) of the whole device? I found out that the previous license files are no longer accepted because of a different Feature Enable Identifier (I think it is calculated from the changed Storage Array Identifier, am I right?). The reason I want to change the WWNs and mappings (the mappings I can correct from the BUI) on the volumes recreated per the profile is to avoid VxVM misrecognizing them on the server side (target numbering change) and then having to re-correct and re-import the VxVM disks and disk group ownership.
I don't have experience with offsets other than 0, so I can't comment on it. Really, it needs a test.
But the system should not permit creating overlapping volumes. You will create the volumes with increasing offsets, so you can try the offset from the profile, and in case of an error, multiply it by 512.
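The arithmetic behind the "increasing offsets" advice can be sanity-checked offline. A minimal sketch with hypothetical numbers, assuming 512-byte blocks (the profile quoted above reports capacity in bytes but offsets in blocks):

```python
BLOCK_SIZE = 512  # bytes per block on the array

def next_offset_blocks(prev_offset_blocks, prev_capacity_bytes):
    """Offset of the next volume on the same vdisk, in blocks:
    the previous volume's offset plus its capacity converted to blocks."""
    return prev_offset_blocks + prev_capacity_bytes // BLOCK_SIZE

# Example values in the shape they appear in Storage_array_profile:
vol0_offset_blocks = 0
vol0_capacity_bytes = 598923542528  # from a line like "(598923542528 bytes)"

vol1_offset_blocks = next_offset_blocks(vol0_offset_blocks, vol0_capacity_bytes)
print(vol1_offset_blocks)        # offset of the second volume, in blocks

# If the tool rejects the block value, the tune-it.ru article suggests
# passing it multiplied by 512, i.e. as a byte count:
print(vol1_offset_blocks * BLOCK_SIZE)
```

If the recreated volumes land on their previous boundaries, each volume's offset printed this way should match the "Offset (blocks)" line in the profile.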
For the two other questions I don't have an answer, sorry.
Veritas writes its own label on every disk, so there should be no need to change the WWNs of the disks.
As the license is bound to the array WWN, it may be a problem to find a way to change it.
Edited by: Nik on 15.07.2011 6:43
Sorry for being absent for a long time.
I tried specifying the values both as they appear in the profile and multiplied by 512; both failed with the error: Completion Status: Operation failed due to invalid parameterization
- I also tried some bigger random values. I can recreate only the first volume, starting at the first sectors, but for the second and subsequent ones I get this error. Luckily these volumes are not important to me, because they had not been used for data yet.
Now I have a problem with another virtual disk, because one of the drives in that RAID group (RAID-5) failed. I cannot even delete this virtual disk in order to retry restoring its volumes later with the service command (the drives are not in the Unassigned state); it fails with the error: Virtual Disk 0 could not be deleted. A procedure could not complete successfully because a volume group was in the exported state.
- Is there a way to delete the VD?
I also tried to force-import the VD with Service Advisor, but it complains: Legacy volume group.