
APEX


Filtering an LOV by an APEX Page Item - Resolved

Shra1, Feb 16 2010 (edited May 18 2010)
The following issue has been resolved.

Finally, I found a way to resolve this issue. I created a new Page 0 item, P0_SITE_ID, and initialized it with the Page 11 site_id value on every page load (on the item's Source tab, set the source type to an item name and simply enter P11_SITE_ID). I then used :P0_SITE_ID as the filter in my named LOV, and the query works as expected now.
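For reference, the reworked LOV query would then look something like this (a sketch; the Page 0 item name P0_SITE_ID follows the description above):

<code>
SELECT phone_number d, phone_id r
  FROM phones p
 WHERE phone_id IN (SELECT s.phone_id
                      FROM site_phones_assigned s
                     WHERE s.site_id = :P0_SITE_ID)
 ORDER BY 1
</code>

Because P0_SITE_ID is populated from P11_SITE_ID on every page load, its session state is always available when the LOV query runs.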

Hi, I have a page item with a named LOV attached to it. The query below drives the named LOV. However, I need to apply a further filter to narrow the list by the value of a page item, :P11_SITE_ID, but the LOV doesn't return any rows. I tested the entire query at SQL> by hard-coding the value of :P11_SITE_ID, and it returns the expected rows with the filter applied.

When I arrive at Page 11, I set the value of :P11_SITE_ID from the invoking page. I printed the value of :P11_SITE_ID on Page 11 and it does show the assigned value (passed from the link column of the originating interactive report that invokes this page).

This LOV is on Page 11; the phone_id item is set to "Select List (named LOV)".

Named LOV SITE_ASSIGNED_PHONES_LOV query SQL below:

<code>
SELECT phone_number d, phone_id r
  FROM phones p
 WHERE phone_id IN (SELECT s.phone_id
                      FROM site_phones_assigned s
                     WHERE s.site_id = :P11_SITE_ID)
 ORDER BY 1;
</code>

The above LOV query doesn't filter on the :P11_SITE_ID value; the LOV returns no values on Page 11. However, if I remove the :P11_SITE_ID filter for testing, all site_phones_assigned values are displayed in the phone_id LOV at runtime.

Please advise how one can apply a page-item filter to a named LOV.

Thank you

-Shravan

Edited by: Shravan_Kumar on Feb 21, 2010 11:33 AM

Edited by: Shravan_Kumar on Feb 21, 2010 12:18 PM

Comments

Srini Chavali-Oracle
How big is the source database? How much downtime can you afford? A less complicated approach may be a simple expdp/impdp:

http://docs.oracle.com/cd/E11882_01/server.112/e23633/expimp.htm

HTH
Srini
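For scale, a simple schema-mode expdp/impdp along those lines might look like the following sketch (connect strings, schema name, and directory object are placeholders, not taken from the thread):

<code>
# On the source (HP-UX) host:
expdp system/password@sourcedb schemas=APP_OWNER \
  directory=DATA_PUMP_DIR dumpfile=app_owner_%U.dmp \
  parallel=4 logfile=app_owner_exp.log

# Copy the dump files to the target (Linux) host, then:
impdp system/password@targetdb schemas=APP_OWNER \
  directory=DATA_PUMP_DIR dumpfile=app_owner_%U.dmp \
  parallel=4 logfile=app_owner_imp.log
</code>

The %U substitution variable generates one dump file per parallel worker.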
mseberg
Hello;

I checked my notes; I did this from AIX 9 years ago. Thoughts:

On your Step 1 - Will archive logging be off on the new database until the data is moved? If yes, at what point will you turn it on?

On your Step 2 - I would consider using RESTRICTED mode instead.

On your Steps 5 and 6 - How are you moving? SCP? How much time did you budget for transfer?

On Step 10 - Will Data Pump skip empty tables and break something? ( How will you deal with invalid objects in general on the new system ? )

On Step 11 - Are there additional users to create on the Linux system?

Questions

1. Are there any scripts which need to move and which can be tested in advance?

2. Are there jobs/crons which need to be accounted for on the old and new systems? ( Off on old and on on new )

3. Was Linux set up with LVM? ( Given your large size, this might save headaches down the road )

4. What method will be used to compare objects between the old and the new?

5. Does your checklist have a start and stop time for each item?

6. Has the Net8 (SQL*Net) connectivity been tested in advance?

( Would keep Linux and Oracle on separate partitions if possible, then you could reinstall the OS without touching Oracle )

--------------------------------------------------------------------
Notes :

( Migration of an Oracle Database Across OS Platforms [ID 733205.1] )


How To Use RMAN CONVERT DATABASE on Source Host for Cross Platform Migration [ID 413586.1]

Cross-Platform Migration on Destination Host Using Rman Convert Database [ID 414878.1]


Creating a Duplicate Database on a New Host. [ID 388431.1]


Best Regards

mseberg
VishP-Oracle
Method

1) Install a Shell 11g database on the target system (Linux).

2) Make all tablespaces in the source database (10g on HP-UX) READ ONLY, with the exception of SYSTEM/SYSAUX/UNDO/TEMP.

3) Take a transportable tablespace export dump using either the old exp utility or Data Pump.

4) Take an export of all data schemas without tables/indexes.

5) Copy all datafiles from the source system (HP-UX) to the target system (Linux).

6) Copy all the export dumps from the source to the target system.

7) Make all the tablespaces in the source system (HP-UX) READ WRITE.

8) Convert the datafiles in the target system (Linux) from Big-Endian to Little-Endian using RMAN.

9) Import the data schemas.
(Reply: Why are you doing both transportable tablespaces and an import? You only need to import the metadata if you plan to use transportable tablespaces.)
10) Import all the tablespaces using either the old imp utility or Data Pump.

11) Create roles, public database links, public synonyms, etc. on the Linux system.
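The endian conversion in step 8 and the metadata moves in steps 3 and 10 can be sketched as follows (tablespace, file, and directory names are placeholders; the FROM PLATFORM string must match the source platform's row in V$TRANSPORTABLE_PLATFORM):

<code>
# Step 3, on the source (HP-UX), after the tablespaces are READ ONLY:
expdp system/password directory=DATA_PUMP_DIR \
  transport_tablespaces=USERS_TS dumpfile=tts_meta.dmp logfile=tts_exp.log

# Step 8, on the target (Linux), convert the copied datafile:
rman target /
RMAN> CONVERT DATAFILE '/stage/users_ts01.dbf'
        FROM PLATFORM 'HP-UX (64-bit)'
        FORMAT '/u01/oradata/LNXDB/users_ts01.dbf';

# Step 10, plug the converted datafiles in with the TTS metadata dump:
impdp system/password directory=DATA_PUMP_DIR dumpfile=tts_meta.dmp \
  transport_datafiles='/u01/oradata/LNXDB/users_ts01.dbf' logfile=tts_imp.log
</code>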


Are there any flaws, or any step that would not work, in the above methodology?

PS: The database contains a bunch of materialized views.


Thanks for your time...
Follow MOS notes:
733205.1
243304.1

I guess you cannot use RMAN CONVERT DATABASE because of the different endianness.

So use either transportable tablespaces or expdp/impdp.
829206
The database is 1.5 TB; that's the reason why I'm going for TTS. We can tolerate a downtime of 2 days.
Steve_Wood
Hi

I'm having to move a 3.4 TB database from a big-endian to a little-endian platform (HP-UX -> RHEL 5), both using ASM. I was planning the following, as the RMAN CONVERT command does not work at the database level between platforms of different endianness, and the XTTS/NFS method isn't going to work so well when using ASM with hundreds of tablespaces.

1. Use whichever of these import methods proves quicker: parallel expdp, then dbms_file_transfer.PUT_FILE to move the dump files to the target ASM instance, then impdp for the imports; or impdp over a network link using the NETWORK_LINK parameter.

2. Use a PL/SQL script to do an INSERT /*+ APPEND */ over a db link to insert all of the rows added to the source database since it was exported.
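A minimal sketch of step 2, assuming each table carries a last-modified timestamp column and a database link named src_link back to the source (both names are placeholders; the real change-capture condition depends on the schema):

<code>
-- Direct-path insert of rows added on the source since the export cutoff
INSERT /*+ APPEND */ INTO orders
  SELECT *
    FROM orders@src_link s
   WHERE s.last_updated > TO_DATE('2010-02-20 00:00', 'YYYY-MM-DD HH24:MI');
COMMIT;
</code>

Note that the hint needs the plus sign (/*+ APPEND */); without it, the comment is ignored and the insert falls back to conventional path.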

Will let you know if this is any good.

Steve
Locked on Jun 15 2010
Added on Feb 16 2010