
Oracle Database Express Edition (XE)


Problem connecting to embedded PL/SQL gateway

436645 · Sep 16 2009 — edited Sep 16 2009
I have Oracle XE installed on a box running Oracle Enterprise Linux 5.3 (64bit). It took quite a bit of effort to get it set up, but I got there in the end. I have upgraded the default Apex 2.2 installation to Apex 3.2.1 and configured it to use the embedded PL/SQL gateway on port 8500. I used port 8500 because I also have Apache Tomcat running and that is using port 8080.

My problem is that I can access the Apex web interface using the URL http://localhost:8500/apex, but not when I use the static IP assigned to my workstation which is 10.0.0.6. When I try the URL http://10.0.0.6:8500/apex, Firefox gives the error 'Firefox cannot establish a connection to the server 10.0.0.6:8500'.

I have tried turning off the system firewall but get the same problem. I also have SELinux installed, but it is running in permissive mode.

I think this is an Oracle XE/Apex related problem because I can access the Tomcat homepage by using either localhost or the IP 10.0.0.6 in the URL - therefore I think this rules out any firewalling/security issues.

I would like to be able to access the Apex setup from other computers, so it is quite important to get this problem resolved.

Does anyone have any ideas what the problem may be?
This post has been answered by AndyPol on Sep 16 2009

Comments

Gaz in Oz

The obvious question is:

Why?

Leading on from that, what business problem are you trying to solve with this?

Billy Verreynne

Using a PL/SQL loop to insert billions of rows is pure fantasy.

Once you have your billions of rows in your "dynamic" table(s), what then?

John_K

Obviously the size of your table is somewhat of a concern, but (regardless of the whys, etc.) I think I'd be looking at using partitioning for this, depending on your database version of course (e.g. 12.2 introduced automatic list partitioning).

SQL> create table xxjktst as (select owner, object_name, object_type from dba_objects);

Table created.

SQL> create table xxjktst_part (
  2    owner varchar2(128),
  3    object_name varchar2(128),
  4    object_type varchar2(23)
  5  ) partition by list (owner)
  6  (partition apps values ('APPS'),
  7   partition applsys values ('APPLSYS'),
  8   partition pub values ('PUBLIC'),
  9   partition the_rest values (default));

Table created.

SQL> insert into xxjktst_part select * from xxjktst;

432966 rows created.

SQL> select count(*) from xxjktst_part partition(apps);

  COUNT(*)
----------
    178654

SQL> drop table xxjktst_part;

Table dropped.

SQL> drop table xxjktst;

Table dropped.

You might also want to look into dbms_redefinition etc. But as you can see, you effectively now have many "small" tables, and you can query them either using the partition key, or by using "partition(partition_name)" syntax.
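For the dbms_redefinition route mentioned above, a minimal sketch might look like the following. This is an assumption-laden illustration, not a tested procedure: the SCOTT schema name is a placeholder, the interim table XXJKTST_PART must already exist with the partitioned structure, and a real run would also call copy_table_dependents and check for errors.

```sql
-- Hypothetical sketch: online redefinition of XXJKTST into the
-- list-partitioned layout shown earlier. Schema name is a placeholder.
begin
  -- Verify the table can be redefined (by rowid, as there is no primary key)
  dbms_redefinition.can_redef_table('SCOTT', 'XXJKTST',
                                    dbms_redefinition.cons_use_rowid);
  -- Start copying data into the interim partitioned table
  dbms_redefinition.start_redef_table('SCOTT', 'XXJKTST', 'XXJKTST_PART',
                                      options_flag => dbms_redefinition.cons_use_rowid);
  -- Swap the two tables over: the original name now has the partitioned layout
  dbms_redefinition.finish_redef_table('SCOTT', 'XXJKTST', 'XXJKTST_PART');
end;
/
```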

BluShadow

Just to note, partitioning is typically a costed option on the Oracle licence. However, I would expect a company storing so many billions of rows of data to be running a full enterprise database and to have a full-blown licence for everything, allowing them to do something like partitioning. That sort of thing would (obviously) have been picked up by the infrastructure architects who designed the requirements for the technology, knowing that so much data was going to be stored in the database.

John_K

Yes, I made the assumption that anywhere storing 300 billion rows would have pretty much any option they needed.

jaramill

What is your database version?  The full 4 digits.

Run this query:

select * from v$version;

user9294681

Well, the query generates 300 billion rows, and QA needs to be able to test different small chunks. Once the testing is completed, the data needs to be sent out as CSV files via FTP. Hence, multiple tables are needed so that each table can be converted to a CSV file and sent out.

Paulzip

1) Create a view / MView for the query.

2) Create filtered view(s) / MView(s) based on 1) by adding a where clause. This can be used for testing. You can even use synonyms to name these views whatever you want.

3) Do testing based on 2), then drop the view(s) from 2).

Done. Not difficult and not a hack.
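The three steps above could be sketched like this (all table, column, predicate and synonym names here are made up purely for illustration):

```sql
-- 1) View over the big query
create or replace view v_full_result as
  select /* ... the original 300-billion-row query ... */ * from source_data;

-- 2) Filtered view over it, plus an optional friendly synonym for QA
create or replace view v_test_chunk as
  select *
  from   v_full_result
  where  batch_id = 42;     -- whatever predicate carves out a small chunk

create synonym qa_extract for v_test_chunk;

-- 3) QA tests against QA_EXTRACT; afterwards:
--    drop synonym qa_extract;
--    drop view v_test_chunk;
```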

user9294681

Hi Paulzip;

I tried doing that, but I was getting serious performance issues. For instance, the 300 billion rows take a long time to run and consume a lot of tablespace and memory on the server. But then I realised during a test that if I filtered the 300 billion down into smaller chunks of 1 billion rows in several tables, and utilised some of the powerful optimisations you can apply to a table, I was able to retrieve 1 billion rows from each table in about a minute.

Furthermore, during the extraction to convert to CSV files to send out, it was faster to extract from a table which was heavily optimised.

Hence, that is why I am trying to go the table route instead, unless I am missing something.
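As an aside, on this sort of setup the per-table CSV extract described here is often done with a plain SQL*Plus spool. A minimal sketch, with invented table and column names (12.2's "set markup csv on" is not available on 12.1, hence the manual concatenation):

```sql
-- Hypothetical SQL*Plus spool of one chunk table to a CSV file
set heading off feedback off pagesize 0 linesize 32767 trimspool on
spool chunk_01.csv
select col1 || ',' || col2 || ',' || col3
from   chunk_01_table;
spool off
```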

user9294681

I am on Oracle 12.1.0.1.

Paulzip

Of course it will.  Even if each row takes one microsecond to fetch, it'll still take 83 hours.
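The arithmetic behind that estimate checks out; a quick sanity check in Python (the 1 µs/row figure is Paulzip's assumption, and an optimistic one for a client-side fetch):

```python
# Back-of-envelope check: 300 billion rows at 1 microsecond per row.
rows = 300_000_000_000
seconds = rows * 1e-6          # 1 µs per row
hours = seconds / 3600
print(f"{hours:.1f} hours")    # -> 83.3 hours
```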

Skip step 1 and go with step 2 with a large amount of filtering.

user9294681

Thanks Paulzip for the prompt response. Just to confirm, when you say filtered view, you mean a view with all the columns, and basically the user can call the view and use any required column as a filter? For instance:

create or replace view v_emp as
select * from employees

Calling the view with a filter:

select * from v_emp pv
where pv.ename = 'John'

@John_K

Thanks John_K for the posted example. I thought of creating one table with the 300 billion records and partitioning it based on the filters used. The concern was that if we have so many QA staff trying to access the table, it might really slow things down or create traffic. I do not know if I am right, but I am just looking for the most effective and efficient way.

EdStevens

user9294681 wrote:

Thanks Paulzip for the prompt response. Just to confirm, when you say filtered view, you mean a view with all the columns, and basically the user can call the view and use any required column as a filter? For instance:

create or replace view v_emp as
select * from employees

Calling the view with a filter:

select * from v_emp pv
where pv.ename = 'John'

@John_K

Thanks John_K for the posted example. I thought of creating one table with the 300 billion records and partitioning it based on the filters used. The concern was that if we have so many QA staff trying to access the table, it might really slow things down or create traffic. I do not know if I am right, but I am just looking for the most effective and efficient way.

If multiple sessions are looking at the same 300 billion rows, there will be "traffic", regardless of whether those rows are in a partition or a flat table. And all 300 billion rows will take the same amount of space whether partitioned or not.

Paulzip
Answer

user9294681 wrote:

Thanks Paulzip for the prompt response. Just to confirm, when you say filtered view, you mean a view with all the columns, and basically the user can call the view and use any required column as a filter? For instance:

create or replace view v_emp as
select * from employees

Calling the view with a filter:

select * from v_emp pv
where pv.ename = 'John'

Yes, you could create a parameterised view whilst doing the test phase.  Probably best via a package variable(s), which can be set per session.
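A parameterised view via a package variable, as suggested, might be sketched like this. The package, table and column names are illustrative only; the key point is that package state is per-session, so each QA session can set its own filter without affecting the others.

```sql
-- Hypothetical session-settable filter for a view
create or replace package filter_ctx as
  procedure set_owner(p_owner in varchar2);
  function  get_owner return varchar2;
end filter_ctx;
/
create or replace package body filter_ctx as
  g_owner varchar2(128);        -- package state is per-session
  procedure set_owner(p_owner in varchar2) is
  begin
    g_owner := p_owner;
  end;
  function get_owner return varchar2 is
  begin
    return g_owner;
  end;
end filter_ctx;
/
create or replace view v_filtered as
  select *
  from   big_table
  where  owner = filter_ctx.get_owner;

-- Each QA session then does, e.g.:
--   exec filter_ctx.set_owner('APPS')
--   select count(*) from v_filtered;
```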

Marked as Answer by user9294681 · Sep 27 2020
user9294681

thanks


Post Details

Locked on Oct 14 2009
Added on Sep 16 2009
4 comments
1,641 views