SuperCluster zone can't mount non-global zone filesystems, why?

Hi,
I rebooted a zone on our Solaris SuperCluster, and since then we can't see the mounted filesystems, and the zone status changed to "incomplete".
For example, if I execute this command I get:
[email protected]:/zones/nxpaiosbprodtransa/root# zfs mount zonaspool/nxpaiosbprodtransa/rpool/ROOT/solaris-4
cannot mount 'zonaspool/nxpaiosbprodtransa/rpool/ROOT/solaris-4': dataset is exported to a local zone
and on the other hand:
[email protected]:/# zoneadm list -icv
  ID NAME                STATUS      PATH                       BRAND    IP
   0 global              running     /                          solaris  shared
   - nxpaiodiprodbatcha  installed   /zones/nxpaiodiprodbatcha  solaris  excl
   - nxpaiosbprodtransa  incomplete  /zones/nxpaiosbprodtransa  solaris  excl
   - nxpaiotdprodbca     installed   /zones/nxpaiotdprodbca     solaris  excl
   - nxpaiprodnismaster  installed   /zones/nxpaiprodnismaster  solaris  excl
   - nxpaiotdprodadm     installed   /zones/nxpaiotdprodadm     solaris  excl
As you can see, the zone with problems is nxpaiosbprodtransa (incomplete).
What should I do? How should I proceed?
Answers
-
You can't mount the file system in your global zone because the file system has the property "zoned" set to "on".
To see all mounted file systems across zones, use "df" with the "-Z" flag.
Regarding the "incomplete" status: there is nothing in your post that indicates what may have happened.
Andris
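As a sketch, both points can be checked from the global zone like this (the dataset name is taken from the error message in the question; both commands are Solaris-specific):

```shell
# Show whether the dataset is delegated to a non-global zone; when
# zoned=on, the global zone refuses to mount it.
zfs get zoned zonaspool/nxpaiosbprodtransa/rpool/ROOT/solaris-4

# List mounted file systems in all zones visible from here, not just
# the current zone (Solaris-specific df flag).
df -Z
```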
-
Hi Andris,
First, thank you very much for your response. How do I check and change the "zoned" property in order to mount the filesystems?
And why could that have happened after only one reboot of the zone (WebLogic OSB installed)?
Regards,
Oscar
-
At this moment I have:
Node A, with problems:
----------------------------
[email protected]:~# zfs list -o name,zoned,mountpoint -r zonaspool/nxpaiosbprodtransa
NAME ZONED MOUNTPOINT
zonaspool/nxpaiosbprodtransa off /zones/nxpaiosbprodtransa
zonaspool/nxpaiosbprodtransa/rpool on /rpool
zonaspool/nxpaiosbprodtransa/rpool/ROOT on legacy
zonaspool/nxpaiosbprodtransa/rpool/ROOT/solaris on /
zonaspool/nxpaiosbprodtransa/rpool/ROOT/solaris-1 on /
zonaspool/nxpaiosbprodtransa/rpool/ROOT/solaris-1/var on /var
zonaspool/nxpaiosbprodtransa/rpool/ROOT/solaris-2 on /
zonaspool/nxpaiosbprodtransa/rpool/ROOT/solaris-2/var on /var
zonaspool/nxpaiosbprodtransa/rpool/ROOT/solaris-3 on /
zonaspool/nxpaiosbprodtransa/rpool/ROOT/solaris-3/var on /var
zonaspool/nxpaiosbprodtransa/rpool/ROOT/solaris-4 on /
zonaspool/nxpaiosbprodtransa/rpool/ROOT/solaris-4/var on /var
zonaspool/nxpaiosbprodtransa/rpool/ROOT/solaris-backup-1 on /
zonaspool/nxpaiosbprodtransa/rpool/ROOT/solaris-backup-1/var on /var
zonaspool/nxpaiosbprodtransa/rpool/ROOT/solaris-backup-2 on /
zonaspool/nxpaiosbprodtransa/rpool/ROOT/solaris-backup-2/var on /var
zonaspool/nxpaiosbprodtransa/rpool/ROOT/solaris/var on /var
zonaspool/nxpaiosbprodtransa/rpool/VARSHARE on /var/share
zonaspool/nxpaiosbprodtransa/rpool/VARSHARE/pkg on /var/share/pkg
zonaspool/nxpaiosbprodtransa/rpool/VARSHARE/pkg/repositories on /var/share/pkg/repositories
zonaspool/nxpaiosbprodtransa/rpool/export on /export
zonaspool/nxpaiosbprodtransa/rpool/export/home on /export/home
zonaspool/nxpaiosbprodtransa/rpool/export/home/ecff6006 on /export/home/ecff6006
zonaspool/nxpaiosbprodtransa/rpool/export/home/eddc7975 on /export/home/eddc7975
zonaspool/nxpaiosbprodtransa/rpool/export/home/ehsc6161 on /export/home/ehsc6161
zonaspool/nxpaiosbprodtransa/rpool/export/home/emagent on /export/home/emagent
zonaspool/nxpaiosbprodtransa/rpool/export/home/nesm785k on /export/home/nesm785k
zonaspool/nxpaiosbprodtransa/rpool/export/home/nnmm9016 on /export/home/nnmm9016
zonaspool/nxpaiosbprodtransa/rpool/export/home/oracle on /export/home/oracle
zonaspool/nxpaiosbprodtransa/rpool/export/home/orarom on /export/home/orarom
zonaspool/nxpaiosbprodtransa/rpool/orabin on /rpool/orabin
zonaspool/nxpaiosbprodtransa/rpool/orabin/stage on /stage
But the MOUNTPOINT values aren't OK.
###########################
Node B, with good MOUNTPOINTs:
[email protected]:~# zfs list -o name,zoned,mountpoint -r zonaspool/nxpaiosbprodtransb
NAME ZONED MOUNTPOINT
zonaspool/nxpaiosbprodtransb off /zones/nxpaiosbprodtransb
zonaspool/nxpaiosbprodtransb/rpool on /zones/nxpaiosbprodtransb/root/rpool
zonaspool/nxpaiosbprodtransb/rpool/ROOT on legacy
zonaspool/nxpaiosbprodtransb/rpool/ROOT/solaris on /
zonaspool/nxpaiosbprodtransb/rpool/ROOT/solaris-1 on /
zonaspool/nxpaiosbprodtransb/rpool/ROOT/solaris-1/var on /var
zonaspool/nxpaiosbprodtransb/rpool/ROOT/solaris-2 on /
zonaspool/nxpaiosbprodtransb/rpool/ROOT/solaris-2/var on /var
zonaspool/nxpaiosbprodtransb/rpool/ROOT/solaris-3 on /
zonaspool/nxpaiosbprodtransb/rpool/ROOT/solaris-3/var on /var
zonaspool/nxpaiosbprodtransb/rpool/ROOT/solaris-4 on /zones/nxpaiosbprodtransb/root
zonaspool/nxpaiosbprodtransb/rpool/ROOT/solaris-4/var on /zones/nxpaiosbprodtransb/root/var
zonaspool/nxpaiosbprodtransb/rpool/ROOT/solaris-backup-1 on /
zonaspool/nxpaiosbprodtransb/rpool/ROOT/solaris-backup-1/var on /var
zonaspool/nxpaiosbprodtransb/rpool/ROOT/solaris-backup-2 on /
zonaspool/nxpaiosbprodtransb/rpool/ROOT/solaris-backup-2/var on /var
zonaspool/nxpaiosbprodtransb/rpool/ROOT/solaris/var on /var
zonaspool/nxpaiosbprodtransb/rpool/VARSHARE on /zones/nxpaiosbprodtransb/root/var/share
zonaspool/nxpaiosbprodtransb/rpool/VARSHARE/pkg on /zones/nxpaiosbprodtransb/root/var/share/pkg
zonaspool/nxpaiosbprodtransb/rpool/VARSHARE/pkg/repositories on /zones/nxpaiosbprodtransb/root/var/share/pkg/repositories
zonaspool/nxpaiosbprodtransb/rpool/export on /zones/nxpaiosbprodtransb/root/export
zonaspool/nxpaiosbprodtransb/rpool/export/home on /zones/nxpaiosbprodtransb/root/export/home
zonaspool/nxpaiosbprodtransb/rpool/export/home/ecff6006 on /zones/nxpaiosbprodtransb/root/export/home/ecff6006
zonaspool/nxpaiosbprodtransb/rpool/export/home/eddc7975 on /zones/nxpaiosbprodtransb/root/export/home/eddc7975
zonaspool/nxpaiosbprodtransb/rpool/export/home/ehsc6161 on /zones/nxpaiosbprodtransb/root/export/home/ehsc6161
zonaspool/nxpaiosbprodtransb/rpool/export/home/emagent on /zones/nxpaiosbprodtransb/root/export/home/emagent
zonaspool/nxpaiosbprodtransb/rpool/export/home/nesm785k on /zones/nxpaiosbprodtransb/root/export/home/nesm785k
zonaspool/nxpaiosbprodtransb/rpool/export/home/nnmm9016 on /zones/nxpaiosbprodtransb/root/export/home/nnmm9016
zonaspool/nxpaiosbprodtransb/rpool/export/home/oracle on /zones/nxpaiosbprodtransb/root/export/home/oracle
zonaspool/nxpaiosbprodtransb/rpool/export/home/orarom on /zones/nxpaiosbprodtransb/root/export/home/orarom
zonaspool/nxpaiosbprodtransb/rpool/orabin on /rpool/orabin
zonaspool/nxpaiosbprodtransb/rpool/orabin/stage on /zones/nxpaiosbprodtransb/root/stage
-
What can I do to check why my zone (nxpaiosbprodtransa) is in the INCOMPLETE status, if I only rebooted it like the other zones?
-
You get different outputs because the zones are in different states. One is running, whereas the other one is not.
Andris
-
How do I check and change the "zoned" property in order to mount the filesystems?
Use "zfs get" and "zfs set" commands.
"zfs get zoned <file system" lists the value of the "zoned" property
"zfs set zoned=off <file system>" enables you to mount the file system in your global zone
Andris
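For example, using the dataset name from the original error message (note that "zoned" is normally managed by the zones framework, so treat this as an inspection step rather than a fix):

```shell
# Inspect the current value of the "zoned" property.
zfs get zoned zonaspool/nxpaiosbprodtransa/rpool/ROOT/solaris-4

# Clear it so the dataset can be mounted in the global zone.
zfs set zoned=off zonaspool/nxpaiosbprodtransa/rpool/ROOT/solaris-4
zfs mount zonaspool/nxpaiosbprodtransa/rpool/ROOT/solaris-4
```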
-
What can I do to check why my zone (nxpaiosbprodtransa) is in the INCOMPLETE status, if I only rebooted it like the other zones?
My suggestion would be to open an SR with Oracle.
-
Hi,
We now have:
[email protected]:~# zfs list -o name,zoned,mountpoint -r zonaspool/nxpaiosbprodtransa
NAME ZONED MOUNTPOINT
zonaspool/nxpaiosbprodtransa off /zones/nxpaiosbprodtransa
zonaspool/nxpaiosbprodtransa/rpool on /zones/nxpaiosbprodtransa/root/rpool
zonaspool/nxpaiosbprodtransa/rpool/ROOT on legacy
zonaspool/nxpaiosbprodtransa/rpool/ROOT/solaris on /
zonaspool/nxpaiosbprodtransa/rpool/ROOT/solaris-1 on /
zonaspool/nxpaiosbprodtransa/rpool/ROOT/solaris-1/var on /var
zonaspool/nxpaiosbprodtransa/rpool/ROOT/solaris-2 on /
zonaspool/nxpaiosbprodtransa/rpool/ROOT/solaris-2/var on /var
zonaspool/nxpaiosbprodtransa/rpool/ROOT/solaris-3 on /
zonaspool/nxpaiosbprodtransa/rpool/ROOT/solaris-3/var on /var
zonaspool/nxpaiosbprodtransa/rpool/ROOT/solaris-4 on /zones/nxpaiosbprodtransa/root
zonaspool/nxpaiosbprodtransa/rpool/ROOT/solaris-4/var on /zones/nxpaiosbprodtransa/root/var
zonaspool/nxpaiosbprodtransa/rpool/ROOT/solaris-backup-1 on /
zonaspool/nxpaiosbprodtransa/rpool/ROOT/solaris-backup-1/var on /var
zonaspool/nxpaiosbprodtransa/rpool/ROOT/solaris-backup-2 on /
zonaspool/nxpaiosbprodtransa/rpool/ROOT/solaris-backup-2/var on /var
zonaspool/nxpaiosbprodtransa/rpool/ROOT/solaris/var on /var
zonaspool/nxpaiosbprodtransa/rpool/VARSHARE on /zones/nxpaiosbprodtransa/root/var/share
zonaspool/nxpaiosbprodtransa/rpool/VARSHARE/pkg on /zones/nxpaiosbprodtransa/root/var/share/pkg
zonaspool/nxpaiosbprodtransa/rpool/VARSHARE/pkg/repositories on /zones/nxpaiosbprodtransa/root/var/share/pkg/repositories
zonaspool/nxpaiosbprodtransa/rpool/export on /zones/nxpaiosbprodtransa/root/export
zonaspool/nxpaiosbprodtransa/rpool/export/home on /zones/nxpaiosbprodtransa/root/export/home
zonaspool/nxpaiosbprodtransa/rpool/export/home/ecff6006 on /zones/nxpaiosbprodtransa/root/export/home/ecff6006
zonaspool/nxpaiosbprodtransa/rpool/export/home/eddc7975 on /zones/nxpaiosbprodtransa/root/export/home/eddc7975
zonaspool/nxpaiosbprodtransa/rpool/export/home/ehsc6161 on /zones/nxpaiosbprodtransa/root/export/home/ehsc6161
zonaspool/nxpaiosbprodtransa/rpool/export/home/emagent on /zones/nxpaiosbprodtransa/root/export/home/emagent
zonaspool/nxpaiosbprodtransa/rpool/export/home/nesm785k on /zones/nxpaiosbprodtransa/root/export/home/nesm785k
zonaspool/nxpaiosbprodtransa/rpool/export/home/nnmm9016 on /zones/nxpaiosbprodtransa/root/export/home/nnmm9016
zonaspool/nxpaiosbprodtransa/rpool/export/home/oracle on /zones/nxpaiosbprodtransa/root/export/home/oracle
zonaspool/nxpaiosbprodtransa/rpool/export/home/orarom on /zones/nxpaiosbprodtransa/root/export/home/orarom
zonaspool/nxpaiosbprodtransa/rpool/orabin on /rpool/orabin
zonaspool/nxpaiosbprodtransa/rpool/orabin/stage on /zones/nxpaiosbprodtransa/root/stage
I executed, for example:
zfs set mountpoint=/zones/nxpaiosbprodtransa/root/rpool zonaspool/nxpaiosbprodtransa/rpool
one filesystem at a time, but
as you can see, even though each MOUNTPOINT now looks good, after a zone reboot the filesystems were still not mounted. WHY?
-
What you are doing is plain wrong. Please do not mess with the file systems.
As I have said before: you see different mount points because your respective zones are in different states. Part of the mountpoint that you see is determined at runtime of the zone (e.g. the zonepath that is prepended to the mountpoint as seen from inside the zone).
Try stopping a running zone and see how the output of the "zfs list" changes.
You should undo your changes and open an SR with Oracle to determine why your zone does not boot.
Andris
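To see this effect concretely, one could capture the "zfs list -o name,zoned,mountpoint" output for the same zone while it is running and after halting it, then compare the MOUNTPOINT column. A minimal sketch with stand-in sample data (the file names and the shortened dataset name zonaspool/z are hypothetical):

```shell
# Stand-in captures; in practice these would come from two runs of
# "zfs list -o name,mountpoint -r zonaspool/<zone>", one while the
# zone is running and one after "zoneadm -z <zone> halt".
cat > /tmp/running.txt <<'EOF'
zonaspool/z/rpool on /zones/z/root/rpool
zonaspool/z/rpool/ROOT on legacy
EOF
cat > /tmp/halted.txt <<'EOF'
zonaspool/z/rpool on /rpool
zonaspool/z/rpool/ROOT on legacy
EOF

# Join the two captures on the dataset name and report datasets whose
# mountpoint differs between the two zone states.
join /tmp/running.txt /tmp/halted.txt | awk '$3 != $5 {print $1 ": " $3 " -> " $5}'
```

Here only zonaspool/z/rpool would be reported, showing the zonepath prefix that the zones framework applies at runtime.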
-
Hi Andris,
I really appreciate your time. I will open an SR with Oracle.
Best regards,
Oscar