user12064076 wrote:
But for a medium size of data, is a pipelined table function not as good as a stored procedure that returns a collection of a user-defined type?

Define medium size? A few million? A few hundred million? Remember that today's larger databases deal with billions of rows. There is no such thing as a "medium-sized" data set, as data volumes can differ significantly from database to database.
user12064076 wrote:
In my application, I wrote stored procedures that return collections of user-defined object types to Java.

Flawed approach - using expensive private server memory (PL/SQL PGA) to cache SQL data and then push that data to the client, when the client can instead use the far more scalable and superior shared memory buffer cache.

user12064076 wrote:
With object types, we can remove most of the redundant data.

This statement does not make sense, as the same applies to SQL and cursors. And removing redundant data is best done at the SQL engine itself - via partition pruning, predicates, selecting only the applicable columns for the SQL projection, and so on.

user12064076 wrote:
This OO design reduces the network load and makes it easy for Java to parse.

Incorrect. It does not reduce network load - it may in fact increase it. As for Java "parsing" - that is a flawed approach from the onset if the client is required to parse data it receives from the database server. Parsing requires CPU time. Lots of parsing means a hefty CPU overhead that will degrade performance.

user12064076 wrote:
But the problem is that we have to pile up all the data in memory first and push it to the client as a whole. If it's too huge, ORA-22813 will occur. That's why I plan to resort to a pipelined table function.

How is that going to solve the problem?
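For context, a pipelined table function along the lines the poster describes would look something like this - a sketch only, where the TEmployee/TEmployeeCollection types, the emp table, and the column names are assumptions for illustration:

```sql
-- assumes these types exist:
--   create or replace type TEmployee is object( empno number, ename varchar2(30) );
--   create or replace type TEmployeeCollection is table of TEmployee;
create or replace function emp_pipe( p_dept_id in number )
  return TEmployeeCollection pipelined
is
begin
  for r in ( select empno, ename from emp where dept_id = p_dept_id ) loop
    pipe row( TEmployee( r.empno, r.ename ) );  -- rows are streamed to the consumer, not buffered whole in PGA
  end loop;
  return;
end;
/
```

Note that the client still consumes this via a cursor, e.g. select * from table( emp_pipe(:dept_id) ) - the pipeline does not remove the need for a cursor.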
user12064076 wrote:
If, as you said, I returned a cursor, it would be very difficult for Java to organize the data.

A pipelined table function needs to be used via a cursor. All DML statements received by Oracle from a client are parsed as cursors. The "pipeline" is created by the SQL engine - it needs SQL to execute the PL/SQL code via the SQL TABLE() function:

SELECT * FROM TABLE(table_function_name(parameter_list))

There is no need for PL/SQL code. No need for a pipeline. The client opens the cursor and fetches objects from a collection - the very same approach the client would have used when fetching from a cursor on a pipelined table function.
create or replace type TEmployee is object( .. property definitions ... );
create or replace type TEmployeeCollection is table of TEmployee;

select TEmployee( col1, col2, .., coln ) as EMP_COLLECTION
from   emp
where  dept_id = :0
and    salary > :1
and    date_employed >= :2
order by salary, date_employed
user12064076 wrote:
I am thinking about a question. Another stored procedure uses the result of the first one to do a calculation.

What type of calculation, and on what data?

user12064076 wrote:
If it returns a collection, it's very easy to achieve this. But if it returns a ref cursor, how should I implement this? Do I have to fetch them into a collection first?

A cursor can be processed using bulk processing - and this should be the default approach to cursors. Bulk collect from the cursor and process the collection. But only if that processing is too complex to do in SQL and needs PL/SQL instead.
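A minimal sketch of that bulk-collect pattern - the cursor query and the tasks table are assumptions; the LIMIT clause keeps PGA memory use bounded:

```sql
declare
  cursor c is
    select due_date, complete_date from tasks;  -- hypothetical query
  type TRows is table of c%rowtype;
  buf TRows;
begin
  open c;
  loop
    fetch c bulk collect into buf limit 100;  -- fetch in batches of 100 rows
    for i in 1 .. buf.count loop
      null;  -- process buf(i) here
    end loop;
    exit when c%notfound;
  end loop;
  close c;
end;
/
```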
user12064076 wrote:
Please note the statement is complex and is built with dynamic SQL, and looks like
where join conditions and filter conditions

I dislike nested tables like this - it adds complexity to the SQL projection and can have performance implications.

user12064076 wrote:
Stored procedure2 will calculate the statistics based on the due_date and complete_date of the result set of stored procedure1. To achieve this, I have to fetch the cursor into a collection first, and then use
select * from table(collection).

Why go to the SQL engine to process the collection? Can you not apply that SQL processing to the original SQL that creates the collection?
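To illustrate the alternative being suggested - compute the statistic in the original SQL statement instead of fetching into a collection and re-querying it via TABLE(). The table, column, and bind names below are assumptions:

```sql
-- instead of: open cursor -> fetch into collection -> select * from table(collection)
select avg( complete_date - due_date ) as avg_days_to_complete,
       count(*)                        as row_count
from   tables
where  dept_id = :0   -- same join and filter conditions as the original statement
```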
What is not clear is what the proc2 output is and what needs to be done with this output. Does it need to go back into some SQL table? Does it need to be passed to an external client? Written to file/XML/etc?
open refcursor for
  select due_date, complete_date, nested_table1, nested_table2
  from   tables
  where  join conditions and filter conditions;

loop
  fetch refcursor into buffer;
  -- no bulk collect, due to the nested tables being fetched
  exit when refcursor%notfound;

  -- process 1st nested collection, for example
  for i in 1 .. buffer.nested_table1.Count loop
    Proc2( buffer.due_date, buffer.complete_date, buffer.nested_table1(i) );
    ..
  end loop;
end loop;

close refcursor;