
whenever cluster boots zpool disappears

hsakca · Apr 30 2015 — edited Jun 8 2015

Dear Community,

I have the following setup:

  1. Oracle Solaris 10  -> 5.10 Generic_147147-26 sun4v sparc
  2. Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
  3. Oracle Solaris Cluster 3.3u2 for Solaris 10 sparc
  4. Oracle Solaris Cluster Geographic Edition 3.3u2 for Solaris 10 sparc

I installed Oracle Solaris 10 with ZFS.

I have a pool called db for the mount point /oradata.

Whenever I reboot/power cycle my cluster, the ZFS pool disappears. Because of that, the cluster cannot start properly and the Oracle database resource/group fails.

Every time I reboot/power cycle I have to run the following manually:

zpool import db
clrg online ora-rg
...

What can be the reason?

The only thing I know is that the db zpool is imported by the ora-has resource, which I created as shown below (with the Zpools option):

# /usr/cluster/bin/clresourcegroup create ora-rg
# /usr/cluster/bin/clresourcetype register SUNW.HAStoragePlus
# /usr/cluster/bin/clresource create -g ora-rg -t SUNW.HAStoragePlus -p Zpools=db ora-has
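
For reference, a few commands that can help confirm whether the HAStoragePlus resource is actually importing the pool after a reboot (a minimal diagnostic sketch; the resource, group, and pool names are the ones from the configuration above):

# /usr/cluster/bin/clresourcegroup status ora-rg
# /usr/cluster/bin/clresource status ora-has
# /usr/cluster/bin/clresource show -p Zpools ora-has
# zpool import

The first two show whether ora-rg and the ora-has resource actually came online after boot, the show command confirms the Zpools property is still set to db, and zpool import with no arguments lists any exported pools that are available for import.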

---> zpool status under working conditions:

# zpool status db
  pool: db
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        db          ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            c0t2d0  ONLINE       0     0     0
            c0t3d0  ONLINE       0     0     0

errors: No known data errors

---> boot log:

Booting in cluster mode
impdneilab1 console login: Apr 21 17:12:24 impdneilab1 cl_runtime: NOTICE: CMM: Node impdneilab1 (nodeid = 1) with votecount = 1 added.
Apr 21 17:12:24 impdneilab1 sendmail[642]: My unqualified host name (impdneilab1) unknown; sleeping for retry
Apr 21 17:12:24 impdneilab1 cl_runtime: NOTICE: CMM: Node impdneilab1: attempting to join cluster.
Apr 21 17:12:24 impdneilab1 cl_runtime: NOTICE: CMM: Cluster has reached quorum.
Apr 21 17:12:24 impdneilab1 cl_runtime: NOTICE: CMM: Node impdneilab1 (nodeid = 1) is up; new incarnation number = 1429629142.
Apr 21 17:12:24 impdneilab1 cl_runtime: NOTICE: CMM: Cluster members: impdneilab1.
Apr 21 17:12:24 impdneilab1 cl_runtime: NOTICE: CMM: node reconfiguration #1 completed.
Apr 21 17:12:24 impdneilab1 cl_runtime: NOTICE: CMM: Node impdneilab1: joined cluster.
Apr 21 17:12:24 impdneilab1 in.mpathd[262]: Successfully failed over from NIC nxge1 to NIC e1000g1
Apr 21 17:12:24 impdneilab1 in.mpathd[262]: Successfully failed over from NIC nxge0 to NIC e1000g0
obtaining access to all attached disks

This post has been answered by hsakca on May 27 2015

Comments

BryanWood
You need to disable disk locking, which ordinarily is performed by the first VM to prevent any other VMs from corrupting your vmdk files via uncoordinated writes. You will have to shut down both of your VMs and edit the *.vmx file for each, adding lines like the following (settings taken from Workstation 6, but they should be nearly identical for VMware Player 3.x):

http://crosbysite.blogspot.com/2007/10/clustering-in-vmware-workstation-6.html

scsi1.sharedbus = "Virtual"
disk.locking = "false"
diskLib.dataCacheMaxSize = "0"
diskLib.dataCacheMaxReadAheadSize = "0"
diskLib.dataCacheMinReadAheadSize = "0"
diskLib.dataCachePageSize = "4096"
diskLib.maxUnsyncedWrites = "0"

A few comments:

- Your shared disks (scsi1) must be on a separate virtual SCSI bus from the boot disk (scsi0), to allow setting the sharedBus flag as shown above.
- You must also ensure the cache parameters and unsynced-writes settings are set so that all I/O is immediately flushed to the vmdk file and the other VM can immediately see the latest version of the data.
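
For illustration only, the shared-disk section of each VM's *.vmx might end up looking roughly like this (a sketch, not copied from a real configuration; the controller slot and the vmdk path are assumptions based on the disk discussed in this thread):

scsi1.present = "TRUE"
scsi1.sharedBus = "virtual"
disk.locking = "false"
scsi1:0.present = "TRUE"
scsi1:0.fileName = "F:\VM_RAC\sharerac\asm1.vmdk"
scsi1:0.mode = "independent-persistent"

Both VMs would point scsi1:0.fileName at the same vmdk; only the boot disk on scsi0 remains private to each VM.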
user8860348
Folks,

Hello. Thanks a lot for replying.

Because the two virtual machines rac1 and rac2 share the same disk F:\VM_RAC\sharerac\asm1.vmdk, which is what causes the problem, can we have rac1 and rac2 use different disks?

For example:
Let rac1 use disk F:\VM_RAC\sharerac\asm1.vmdk
Let rac2 use disk F:\VM_RAC\sharerac\asm2.vmdk

If yes, how do we have rac1 use asm1.vmdk and rac2 use asm2.vmdk?
BryanWood
Answer
Unfortunately no; Oracle RAC requires that all nodes have access to the same set of shared disks. If your database resides within ASM, each ASM instance (one per node) must also see the same set of disks in order to mount the ASM disk group containing the database's datafiles.
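
As a quick sanity check (a sketch only, assuming ASM is already running on both nodes and the ASM environment is set in the shell), you can list the disks each ASM instance sees and compare the output from rac1 and rac2:

$ asmcmd lsdsk

Both nodes should report the same disk paths; if one node cannot see all of the disks, the disk group may fail to mount on that node.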

Best Regards,
Bryan Wood
Marked as Answer by user8860348 · Sep 27 2020
user8860348
Folks,

Hello. Thanks a lot for replying.
I have edited the VMX files for rac1 and rac2. Both VMs can open at the same time now. Thanks again.