
unable to lucreate on vdisk

idawong Oct 8 2012
Hi,
I have a setup with:
Oracle VM Server for SPARC 2.2
Solaris 10 08/11 on both the guest and the primary LDom
The guest LDoms are built with ZFS, no ODS.
The guest consists of one SAN disk controlled by a zpool, and all file systems are ZFS.

I want to test Live Upgrade on a guest LDom, so I added a new vdisk, which is basically a SAN disk of identical size to the guest's root disk, presented in the same way. Both disks are SMI-labeled and are full disks, added to the primary's virtual disk service and presented to the guest as full disks.
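
For reference, this is roughly how the new disk was exported and the pool set up; the device path, volume name, service name, and domain name below are only placeholders, not my exact ones:

On the primary:
# ldm add-vdsdev /dev/dsk/c3t5d0s2 patchdisk@primary-vds0
# ldm add-vdisk patchdisk patchdisk@primary-vds0 guest1

In the guest:
# zpool create patch-rpool c0d1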

When I run:
# lucreate -c root-pool -n patched_BE -p patch-rpool
Analyzing system configuration.
No name for current boot environment.
Current boot environment is named <root-pool>.
Creating initial configuration for primary boot environment <root-pool>.
INFORMATION: No BEs are configured on this system.
The device </dev/dsk/c0d0s2> is not a root device for any boot environment; cannot get BE ID.
PBE configuration successful: PBE name <root-pool> PBE Boot Device </dev/dsk/c0d0s2>.
ERROR: ZFS pool <patch-rpool> does not support boot environments
ERROR: The disk used for ZFS pool is EFI labeled.
WARNING: The boot environment definition file </etc/lutab> was removed because the PBE was not successfully defined and created.

There is no boot environment yet, so I expected the "cannot get BE ID" message. But why does it think the disk is EFI-labeled?
After running lucreate, the disk does appear to have been labeled in EFI format.
Does anyone have a solution to this, or has anyone seen it before?
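
In case it helps anyone reproduce this or spot my mistake, this is roughly what I plan to check and retry next; the c0d1 device name is just a placeholder for the new vdisk as seen in the guest:

check the current label:
# prtvtoc /dev/rdsk/c0d1s2

put an SMI (VTOC) label back if it is EFI (format -e, then "label" and pick the SMI option):
# format -e c0d1

recreate the pool on slice 0, since a ZFS root pool needs a slice on an SMI-labeled disk, then retry:
# zpool destroy patch-rpool
# zpool create patch-rpool c0d1s0
# lucreate -c root-pool -n patched_BE -p patch-rpool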

Comments

986120
Roll back with this:

$GRID_HOME/crs/install/roothas.pl -deconfig -force -verbose

Then run root.sh again on one node and wait for it to complete. Once it has finished there, run it on the other nodes.
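
On a standalone (Oracle Restart) install the sequence looks roughly like this; the /u01/app/11.2.0/grid path is only an example, and on a clustered install the deconfig script is rootcrs.pl rather than roothas.pl:

# as root on the node where root.sh failed
GRID_HOME=/u01/app/11.2.0/grid
$GRID_HOME/crs/install/roothas.pl -deconfig -force -verbose
# then rerun the configuration script and let it finish before moving on
$GRID_HOME/root.sh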

--
Bala :)
munnsd1
Hi

Did you make progress with this issue? I am having the same error on a standalone Grid Infrastructure install. I have also tried all the posted solutions that seem even faintly relevant, to no avail. If you have cracked it, please share.

Stef
user6418608
Do you have an IPv6 entry in your /etc/hosts?
Something like ::1 localhost6.localdomain localhost6?
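
A quick way to check, assuming the standard hosts file location:

grep '::1' /etc/hosts
# a typical IPv6 loopback entry looks like:
# ::1   localhost6.localdomain localhost6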
SriniPonnala-HDS
Hi guys, I have run into the same problem. Is there a solution for this?

OS: OEL 6.3 x86_64
ASM 2.0
Grid Infra: 11.2.0.1

The root.sh script fails with:

Adding daemon to inittab
CRS-4124: Oracle High Availability Services startup failed.
CRS-4000: Command Start failed, or completed with errors.
ohasd failed to start: Inappropriate ioctl for device
ohasd failed to start: Inappropriate ioctl for device at /u01/app/oracle/product/11.2.0/grid/crs/install/roothas.pl line 296.
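
If it helps, I can post the output of the usual checks; the log path below reuses the Grid home from the error above, with <hostname> as a placeholder:

# confirm root.sh added the respawn entry for ohasd
grep ohasd /etc/inittab
# see whether init actually started it
ps -ef | grep init.ohasd | grep -v grep
# the OHASD log usually has more detail:
# /u01/app/oracle/product/11.2.0/grid/log/<hostname>/ohasd/ohasd.log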

Post Details

Locked on Nov 5 2012
Added on Oct 8 2012
998 views