SQL & PL/SQL

Function call overhead

657156 · Sep 25 2009 (edited Sep 25 2009)

This is 11g on Linux, but reproducible on all platforms.

I think I am hitting a performance wall due to the overhead of calling a function for every row in a result set, and I'm wondering if anyone has a neat way around this. The simplest example is a function that does nothing; then compare the times, i.e.:


#
# Create function that does nothing, we are really just measuring the function call overhead
#
SQL> create or replace function do_nothing(x varchar2) return varchar2 is begin return x; end do_nothing;
/

Function created.

#
# Build a bogus table of strings to test
#
SQL> create table test (val varchar2(100));
SQL> begin
  for i in 1..1000000 loop
    insert into test (val) values (dbms_random.string('A',10));
  end loop;
  commit;
end;
/


#
# now time, comparing time diffs from no function versus with function
#
SQL> select max(val) from test;

MAX(VAL)
==========
zzzxgZcdxk

Elapsed: 00:00:00.06
SQL> select max(do_nothing(val)) from test;

MAX(DO_NOTHING(VAL))
================
zzzxgZcdxk

Elapsed: 00:00:03.59




So, is there any way to do this faster? I have tried it with a Java stored procedure and it takes exactly the same time, so I suspect the time is just the overhead of making, in this case, the one million function calls.
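For what it's worth, two standard techniques can reduce (though not eliminate) per-row PL/SQL call overhead. This is a sketch against the `do_nothing` example above, not something from the thread, and both help only when input values repeat:

```sql
-- 1. Declare the function DETERMINISTIC so Oracle is allowed to cache
--    one result per distinct input value during a fetch:
create or replace function do_nothing(x varchar2) return varchar2
  deterministic
is
begin
  return x;
end do_nothing;
/

-- 2. Wrap the call in a scalar subquery; the scalar subquery cache
--    can then reuse results for repeated input values:
select max((select do_nothing(val) from dual)) from test;
```

Against a million distinct random strings neither will help much, since the cost being measured is the SQL-to-PL/SQL context switch itself; in that case the real win is replacing the function with an equivalent SQL expression.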


And for extra credit: maybe I am doing something silly in the first place, but the reason we are doing this is to take Oracle timestamps and convert them into doubles representing "days from epoch", including sub-day precision with time zone conversion. Is there a way I can do this without a function call? Here is the real function I am using:

CREATE OR REPLACE FUNCTION DAYS_FROM_EPOCH (p_d1 timestamp, p_targetTZ varchar2 default 'GMT')
RETURN NUMBER is
  l_epoch timestamp;
  l_diff  interval day(9) to second(9);
BEGIN
  l_epoch := to_date('1/1/1970','mm/dd/yyyy');
  l_diff  := from_tz(p_d1,'GMT') - from_tz(l_epoch,p_targetTZ);
  return extract(day from l_diff)
       + extract(hour from l_diff)/24
       + extract(minute from l_diff)/24/60
       + extract(second from l_diff)/24/60/60;
END DAYS_FROM_EPOCH;
/



used like:

SQL> select days_from_epoch(systimestamp) from dual;

DAYS_FROM_EPOCH(SYSTIMESTAMP)
=======================
14512.4224
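The per-row call can be avoided entirely by expressing the same calculation in plain SQL. A sketch for the GMT case (assumption: sub-second precision is not needed, since casting a timestamp to DATE drops fractional seconds):

```sql
-- Days from epoch, GMT, computed inline with DATE arithmetic:
-- subtracting two DATEs yields a NUMBER of days including the
-- fractional (sub-day) part, so no PL/SQL call is needed per row.
select cast(sys_extract_utc(systimestamp) as date) - date '1970-01-01'
       as days_from_epoch
from dual;
```

For a column of timestamps with a non-GMT target zone, the same idea should extend using `from_tz(...) at time zone ...` before the cast, keeping the whole expression in SQL.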



Thanks for your help!

Comments

Gaz in Oz

You could use XMLQUERY()

Paulzip
Answer

I could easily do this for you, but you won't learn anything.  However, here's a good article on how to go about it.

Marked as Answer by muttleychess · Sep 27 2020
muttleychess

Wonderful!

It seems to be the same example.

Only, I would like to know how to get the XML declaration:

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>

mNem

something like ...

select '<?xml version="1.0" encoding="UTF-8" standalone="yes"?>'
       || xmlserialize(document ... as clob indent) as xml_output
....

Paulzip

muttleychess wrote:

Wonderful!

It seems to be the same example.

Only, I would like to know how to get the XML declaration:

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>

The prolog is added (and only needed) if you serialise the data to a BLOB.  It's simply for the consumer's sake so they know how to decode the BLOB into character data.  If you are serialising into a CLOB, you don't need it, however if you really want to, use mNem's concat approach.


Post Details

Locked on Oct 23 2009
Added on Sep 25 2009
6 comments
1,167 views