
Java Development Tools


[SOLVED] Show/hide options in af:navigationPane

user8959800 · Apr 2 2014 — edited Apr 8 2014

Hi.

I'm using JDeveloper 11.1.1.7.0

I have followed this tutorial http://docs.oracle.com/cd/E18941_01/tutorials/jdtut_11r2_43/jdtut_11r2_43.html to create a basic and plain left-side menu in my application.

The tutorial covers more advanced topics, but I have created a simple left-side menu with a list.

I have created an unbounded task flow, dragged and dropped a wildcard, then the views, then the control flows, ... It works perfectly.

In the JSF template, on the left side, I have this code:

<af:navigationPane id="pt_np1" hint="list" value="#{supervision_menu}" var="supervisionInfo">
  <f:facet name="nodeStamp">
    <af:commandNavigationItem text="#{supervisionInfo.label}" id="pt_cni1"
                              destination="#{supervisionInfo.destination}"
                              action="#{supervisionInfo.doAction}"/>
  </f:facet>
</af:navigationPane>

Now I need an advanced feature: I want to show/hide some options based on a parameter.

I need to execute a view object query with this parameter and process the results, so that I know which options should be shown or hidden.

So, my questions are:

  • How can I force this check to execute as the first step of the flow?
  • Both the generated menu.xml and af:commandNavigationItem have a visible attribute. Where should I place the condition?
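One common approach (a sketch only, not the exact ADF wiring) is to back the visible attribute with a managed bean that runs the check once, caches the result, and exposes a lookup per menu option. Everything below is hypothetical: the bean name MenuVisibilityBean, the method queryAllowedOptions, and the labels/parameter values are placeholders, and in a real application queryAllowedOptions would execute your parameterized view object (for example via an application module method) instead of returning a hard-coded set.

```java
import java.util.HashSet;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical managed bean: decides which menu options are visible.
public class MenuVisibilityBean {
    private final Map<String, Boolean> visibility = new ConcurrentHashMap<>();
    private boolean initialized = false;

    // Placeholder for executing the parameterized view object and
    // processing its rows into a set of visible option labels.
    private Set<String> queryAllowedOptions(String parameter) {
        Set<String> allowed = new HashSet<>();
        if ("supervisor".equals(parameter)) {
            allowed.add("Dashboard");
            allowed.add("Reports");
        } else {
            allowed.add("Dashboard");
        }
        return allowed;
    }

    // Lazily runs the check the first time the menu is rendered,
    // so it effectively executes as the first step of the flow,
    // and caches the answer for subsequent requests.
    public boolean isVisible(String optionLabel, String parameter) {
        if (!initialized) {
            for (String label : queryAllowedOptions(parameter)) {
                visibility.put(label, Boolean.TRUE);
            }
            initialized = true;
        }
        return visibility.getOrDefault(optionLabel, Boolean.FALSE);
    }
}
```

With such a bean registered in the task flow or faces-config, each af:commandNavigationItem (or menu.xml itemNode) could point its visible attribute at the bean, keyed by the stamped item's label; the exact EL depends on how your menu model exposes each item, so treat the wiring above as an assumption to adapt.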

Any help would be much appreciated. Thanks.





This post has been answered by Alejandro Tovar Lanz on Apr 3 2014

Comments

Nik
Hi.
Please show the current "zpool status rpool" output.

Generally, rpool uses a slice, not the whole disk.
So you need to:
1. Get the VTOC of c0d0:
prtvtoc /dev/rdsk/c0d0s0

2. Create the same slices on c0d2. Use format, or:
prtvtoc /dev/rdsk/c0d0s0 | fmthard -s - /dev/rdsk/c0d2s0

3. Attach the slice:
zpool attach -f rpool c0d0s0 c0d2s0

Regards.
KernelSteve-Oracle
Nik is correct, but to give some context around why you're getting the issue, it's because we don't support EFI in rpools. If you do not supply the slice number to the device when you create zpools or attach vdevs, ZFS will use the entire disk and as such apply an EFI label to the device. By specifying a slice, you're telling ZFS that you want to keep the current SMI/VTOC label (which must be correct prior to using the disk). ZFS won't copy the label from other disks within the pool.

HTH
904579
Hello,

I applied all the steps mentioned in this thread and am getting the error below while mirroring ZFS.


ggntestmirr:/> zpool attach -f rpool c0d0 c0d1
cannot label 'c0d1': EFI labeled devices are not supported on root pools.
ggntestmirr:/> zpool attach -f rpool c0d0s0 c0d1s0
cannot attach c0d1s0 to c0d0s0: new device must be a single disk
ggntestmirr:/>



ggntestmirr:/> zpool status
  pool: rpool
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          c0d0s0    ONLINE       0     0     0

errors: No known data errors
ggntestmirr:/> format
Searching for disks...done


AVAILABLE DISK SELECTIONS:
0. c0d0 <DGC-RAID5-0326 cyl 32766 alt 2 hd 64 sec 10>
/virtual-devices@100/channel-devices@200/disk@0
1. c0d1 <DGC-RAID5-0326 cyl 32766 alt 2 hd 64 sec 10>
/virtual-devices@100/channel-devices@200/disk@1
Specify disk (enter its number):
KernelSteve-Oracle
As per Nik's Step 3 you MUST specify slice 0 (s0). By specifying s0 you tell ZFS to use the SMI label instead of the EFI which happens when you provide the entire disk. Look at your 'zpool status -v' output and you'll see that it shows "c0d0s0".

Because you issued "zpool attach -f rpool c0d0 c0d1" first, ZFS would have put an EFI label on the c0d1 disk which is why the second command failed. Copy the SMI label from c0d0 to c0d1 and use "zpool attach -f rpool c0d0s0 c0d1s0" instead.
Nik
Hi.


It looks like your system runs in some virtual environment.

On x86, you need to create Solaris partitions on disk d1.

1. Use fdisk to see the partition table of c0d0, then create the same one on c0d1:
fdisk /dev/rdsk/c0d0p0
(view the table)

fdisk /dev/rdsk/c0d1p0
(create the same table)

2. Use format to create the same slices on c0d1 as on c0d0. You can use this command:

prtvtoc /dev/rdsk/c0d0s0 | fmthard -s - /dev/rdsk/c0d1s0

3. After that you can attach the disk to the mirror:
zpool attach -f rpool c0d0s0 c0d1s0

4. You need to install a boot block on c0d1 to make it available as an alternative boot device.
See man installboot.


Regards.
904579
Hi,

I formatted my mirror disk c0d1 again, labelled it in SMI mode, and issued the command below once more:

ggntestmirr:/> zpool attach -f rpool c0d0s0 c0d1s0
cannot attach c0d1s0 to c0d0s0: new device must be a single disk

I am still getting the same error.

This is a SPARC Solaris 10 machine.
KernelSteve-Oracle
As you're using a virtual environment (LDOMs??) this could very well be Bug ID 6852962 - "zpool attach on root pool in a guest LDOM fails with cannot attach new device must be a single disk". The bug is fixed in Kernel Patch 142909-17 or higher. What Kernel rev does your virtual environment have? (uname -a)
904579
Please find the uname -a output:

ggntestmirr:/> uname -a
SunOS ggntestmirr 5.10 Generic_141444-09 sun4v sparc sun4v
KernelSteve-Oracle
141444-09 is below 142909-17 where the issue is fixed. Try patching the system using the Solaris Patchset downloadable from My Oracle Support and see if you have the same issue after patching. See Doc ID 1273718.1 for step by step instructions for locating the Patchset.

Post Details

Locked on May 6 2014
Added on Apr 2 2014
10 comments
3,714 views