The path at which zone volumes are placed is controlled by zfs_volume_base in /etc/cinder/cinder.conf.
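For example (rpool/cinder below is just an illustrative dataset name, not necessarily your default):

[DEFAULT]
# Dataset under which the ZFS volume drivers create volumes
zfs_volume_base = rpool/cinder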
Thanks for your reply!
Maybe I did not express myself clearly.
I want to change the path (the zonepath) at which an instance of a Solaris non-global zone (NOT a volume in Cinder) is allocated, so that it goes to a specified pool.
Do you have any advice about this?
There is no direct support for configuring the zonepath, and indeed we'd discourage it as doing so with OpenStack is entirely untested. You could perhaps modify the brand template, which is /etc/zones/SYSdefault.xml, but you'd have to do that on each compute node. What problem are you hoping to solve by doing this?
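If you do experiment with the template anyway, the relevant change would be the zonepath attribute, along these lines (an illustrative fragment only; the exact contents of SYSdefault.xml vary by release, and /tank/zones is a hypothetical pool):

<!-- Illustrative fragment of /etc/zones/SYSdefault.xml; not verbatim -->
<zone name="default" zonepath="/tank/zones/%{zonename}" brand="solaris" ip-type="exclusive">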
When creating many instances, I want to allocate them to an array (e.g. a ZFS Storage Appliance) rather than the root disk, because my root disk is small, and instances allocated in the root pool will not support OpenStack instance migration.
I believe you are misunderstanding how zones are provisioned in Solaris OpenStack. The zone root pool is on iSCSI; things are only mounted into the zonepath under /system/zones. For example, on one of my compute nodes I have 32 guests deployed, and /system/zones is only using 77K:
# df -h /system/zones
Filesystem             Size   Used  Available  Capacity  Mounted on
rpool/VARSHARE/zones   547G    77K       320G        1%  /system/zones
There's no need to do what you're suggesting.
To place the iSCSI on the ZFS Storage Appliance, you'll need to configure the ZFS SA Cinder driver. We have a very comprehensive white paper on this topic at http://www.oracle.com/technetwork/server-storage/sun-unified-storage/documentation/openstack-cinder-zfssa-120915-2813178…
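In outline, the appliance-backed configuration in cinder.conf looks something like this (all values below are placeholders; confirm the option names against the white paper):

[DEFAULT]
volume_driver = cinder.volume.drivers.zfssa.zfssaiscsi.ZFSSAISCSIDriver
# Appliance management interface and credentials (placeholders)
san_ip = 192.0.2.10
san_login = openstack
san_password = secret
# Pool and project on the appliance that back the volumes
zfssa_pool = pool-0
zfssa_project = cinder
# iSCSI target portal and data interface(s) on the appliance
zfssa_target_portal = 192.0.2.11:3260
zfssa_target_interfaces = e1000g0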
I am confused about iSCSI. When a ZFS Storage Appliance is NOT used, is the zone root pool still on iSCSI?
If OpenStack instances are allocated in the root pool, don't they occupy space on the root disk?
In my configuration below, a small OpenStack instance (a Solaris non-global zone) occupies 1.70G of disk space in the root pool:
# zfs list -r rpool/VARSHARE/zones
NAME                                                                      USED  AVAIL  REFER  MOUNTPOINT
rpool/VARSHARE/zones                                                     1.70G   253G    32K  /system/zones
rpool/VARSHARE/zones/instance-00000001                                   1.70G   253G    33K  /system/zones/instance-00000001
rpool/VARSHARE/zones/instance-00000001/rpool                             1.70G   253G    31K  /system/zones/instance-00000001/root/rpool
rpool/VARSHARE/zones/instance-00000001/rpool/ROOT                        1.70G   253G    31K  legacy
rpool/VARSHARE/zones/instance-00000001/rpool/ROOT/solaris-2              1.70G   253G  1.54G  /system/zones/instance-00000001/root
rpool/VARSHARE/zones/instance-00000001/rpool/ROOT/solaris-2/var           114M   253G  98.2M  /system/zones/instance-00000001/root/var
rpool/VARSHARE/zones/instance-00000001/rpool/VARSHARE                    2.49M   253G  2.43M  /system/zones/instance-00000001/root/var/share
rpool/VARSHARE/zones/instance-00000001/rpool/VARSHARE/pkg                  63K   253G    32K  /system/zones/instance-00000001/root/var/share/pkg
rpool/VARSHARE/zones/instance-00000001/rpool/VARSHARE/pkg/repositories     31K   253G    31K  /system/zones/instance-00000001/root/var/share/pkg/repositories
rpool/VARSHARE/zones/instance-00000001/rpool/export                      99.5K   253G    32K  /system/zones/instance-00000001/root/export
rpool/VARSHARE/zones/instance-00000001/rpool/export/home                 67.5K   253G    32K  /system/zones/instance-00000001/root/export/home
Whether iSCSI is used is based on the volume_driver setting in cinder.conf; ZFSISCSIDriver uses iSCSI, ZFSVolumeDriver uses local ZFS volumes. As I said in the initial reply, where those drivers allocate storage is set by zfs_volume_base in cinder.conf.
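A minimal sketch of the driver setting (the full module path is an assumption based on the Solaris driver packaging, so verify it against your installation):

[DEFAULT]
# Local ZFS volumes on the Cinder node:
volume_driver = cinder.volume.drivers.solaris.zfs.ZFSVolumeDriver
# ...or iSCSI-backed ZFS volumes instead:
#volume_driver = cinder.volume.drivers.solaris.zfs.ZFSISCSIDriver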
Thank you again!
I think we are talking about two different things. You are talking about persistent volumes in Cinder; I'm talking about the ephemeral volume used as the root disk of an instance. Can I place the ephemeral volume of an instance (a Solaris non-global zone) in a designated location rather than in the rpool of the compute node?
Currently, if you deploy your OpenStack instances as non-global zones, the zone file systems are created in the root pool by default, and there is no way to modify this configuration at the moment. If you deploy kernel zones instead, then yes, you can customize your OpenStack configuration to create the kernel zone file systems in a separate pool.
I think this article will help clarify how OpenStack volumes are stored in ZFS file systems and pools:
I see, thanks all.
Cindy's correct about the single-node OpenStack deployment; however, in a multi-node setup with a separate Cinder volume service, the non-global zones will not be in the root pool of the compute node. It is important to understand that the Solaris Zones OpenStack driver does not use ephemeral volumes at all; the root pool is always on a Cinder volume, even in a single-node deployment.
I'm testing a single-node OpenStack deployment with the Cinder services below enabled.
root@Sun:~# svcs -a | grep cinder
disabled       16:19:10 svc:/application/openstack/cinder/cinder-backup:default
online         16:19:50 svc:/application/openstack/cinder/cinder-upgrade:default
online         17:33:39 svc:/application/openstack/cinder/cinder-db:default
online         17:33:52 svc:/application/openstack/cinder/cinder-api:default
online         17:33:54 svc:/application/openstack/cinder/cinder-scheduler:default
online         17:34:11 svc:/application/openstack/cinder/cinder-volume:setup
online         17:34:11 svc:/application/openstack/cinder/cinder-volume:default
But it seems that the instance (a non-global zone) is still placed in a ZFS dataset in the node's root pool. How can I set up the single-node deployment so that the root pool of an instance (a non-global zone) is placed on a Cinder volume rather than in the node's root pool? Thanks!
I'm sorry, I forgot that there's this one weird path through the driver: if it's a non-global zone and the Cinder service is local, we end up not using the Cinder volume and instead let the zones infrastructure use its default, which, as Cindy said, is only on the local root pool. There is no support for customizing this.
Thanks for your clarification!