The approach is fundamentally flawed.
Fact: PL/SQL can write flat files.
Fact: You don't need Perl to create a flat file.
Fact: You likely don't even need the table.
Conclusion: you need to fix the root problem, instead of taking more symptom-fighting measures which don't solve anything.
If you insist on using a result table (you don't need one), it should be a global temporary table (GTT), which bypasses the buffer cache completely.
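For illustration, a minimal sketch of what such a GTT might look like; the table name and columns here are hypothetical, not from the original process:

```sql
-- Hypothetical result table recreated as a global temporary table.
-- Columns are illustrative placeholders for the real result columns.
CREATE GLOBAL TEMPORARY TABLE result_gtt (
  id       NUMBER,
  payload  VARCHAR2(4000)
)
ON COMMIT PRESERVE ROWS;  -- keep rows for the life of the session,
                          -- since the load and the export are separate steps
```

`ON COMMIT DELETE ROWS` would be the alternative if the whole load-and-export ran inside a single transaction.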
My bet is the 'complex PL/SQL data transforming+loading process' is a piece of disaster code. Your question already indicates this.
It points to a basic lack of understanding of how the buffer cache works. The buffer cache is not the cause; it is the victim of suboptimal code.
Senior Oracle DBA
Even if you continue to use a permanent table, it is unlikely the entire table will be stored in the buffer cache at once, or for the entire length of the processing. As the table rows are inserted, Oracle will flush the filled blocks to disk, and when the rows are selected it will retrieve those blocks from disk into the buffer cache as necessary.
If all the processing is done by one session you should look at using a GTT instead of a permanent table.
If several sessions are involved then you may need to continue to use the permanent work table, but you can reduce the demand on the buffer cache by making sure each step in the process is as efficient as possible. It seems like you have extra steps.
HTH -- Mark D Powell --
Edited by: Mark D Powell on May 15, 2012 8:34 AM (completed last sentence)
rahulras wrote:
I am on 11.2 on Linux.
I have a complex PL/SQL data transformation and loading process, which reads data from several input tables and inserts the final resultant records into a table (let's call it the result table). At the end of processing, this process typically produces 2 million records in the result table.
After the processing is complete, we create a REF CURSOR on the result table and pass the ref cursor to a Perl script, which generates a nice flat file from the data.

Why write a table at all? Just write the file directly using UTL_FILE from the PL/SQL procedure.
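A hedged sketch of the UTL_FILE approach; the directory object, file name, source table, and columns are all hypothetical placeholders (a real version would also need a `CREATE DIRECTORY` object and write permission on it):

```sql
-- Assumes an existing directory object, e.g.:
--   CREATE DIRECTORY out_dir AS '/some/path';
DECLARE
  f UTL_FILE.FILE_TYPE;
BEGIN
  -- open for writing with the maximum line size
  f := UTL_FILE.FOPEN('OUT_DIR', 'result.dat', 'w', 32767);

  -- hypothetical source query standing in for the real transformation output
  FOR r IN (SELECT id, payload FROM some_source) LOOP
    UTL_FILE.PUT_LINE(f, r.id || '|' || r.payload);
  END LOOP;

  UTL_FILE.FCLOSE(f);
EXCEPTION
  WHEN OTHERS THEN
    IF UTL_FILE.IS_OPEN(f) THEN
      UTL_FILE.FCLOSE(f);
    END IF;
    RAISE;
END;
/
```

This removes both the result table and the Perl step: the rows go straight from the transformation query to the flat file.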
Did you read what I wrote?
If so, why are you asking?
Apart from that: you can check this easily yourself, with little effort.
You are wasting people's time!!!
But then still: there is no need for a table, there is no need for Perl; there is just a need to hire some competent people who don't mistrust seniors and are willing to follow sound advice.
Senior Oracle DBA
All the buffer cache usage by this process will obviously kick out a lot of data from the buffer cache which might be useful.
No, not necessarily.
How can I reduce the buffer cache consumption by this report generation process? This will in turn save us a lot of I/O.
Reducing buffer cache usage does not necessarily reduce I/O.
I would also advocate what sybrand, sb and others are saying.
Why write to any table, temporary or otherwise, if you don't need to?
All that unnecessary undo & redo...
Perhaps see this article by Adrian Billington, particularly the section "asynchronous data unloading with parallel pipelined functions":
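A simplified sketch of the parallel pipelined-function idea; note the article's version has each parallel slave write its own file via UTL_FILE, whereas this stripped-down variant just pipes formatted lines, and the type, function, and column names are all hypothetical:

```sql
-- Collection type for the formatted output lines
CREATE OR REPLACE TYPE line_tab AS TABLE OF VARCHAR2(4000);
/

-- Parallel-enabled pipelined function: Oracle can partition the
-- ref cursor's rows across slaves with PARTITION ... BY ANY.
CREATE OR REPLACE FUNCTION to_lines (p_cur SYS_REFCURSOR)
  RETURN line_tab PIPELINED
  PARALLEL_ENABLE (PARTITION p_cur BY ANY)
IS
  v_id      NUMBER;
  v_payload VARCHAR2(4000);
BEGIN
  LOOP
    FETCH p_cur INTO v_id, v_payload;
    EXIT WHEN p_cur%NOTFOUND;
    PIPE ROW (v_id || '|' || v_payload);  -- emit a delimited line
  END LOOP;
  CLOSE p_cur;
  RETURN;
END;
/

-- Usage sketch: stream the transformation output through the function
-- SELECT column_value
-- FROM   TABLE(to_lines(CURSOR(SELECT id, payload FROM some_source)));
```

The point of the technique is that the unload work is spread across parallel slaves instead of funnelling everything through one session.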
It's a misunderstood topic because usage of TEMP is generally associated with direct path read/write (temp), as when workareas spill to temp, but the same is not true of temporary tables.
But to my question, "Inserting to/reading from a GTT, does it not use the buffer cache?", you two gave opposite answers.
Please don't take my word for it - test it yourself.
Trace a simple usage of a temp table.
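For instance, one way to run such a test with a 10046 trace at level 8 (waits included); the identifier, table, and statements below are illustrative:

```sql
-- Tag the trace file so it is easy to find in the trace directory
ALTER SESSION SET tracefile_identifier = 'gtt_test';

-- Enable extended SQL trace including wait events
ALTER SESSION SET events '10046 trace name context forever, level 8';

-- Exercise the temp table (hypothetical GTT and data)
INSERT INTO result_gtt SELECT level, 'x' FROM dual CONNECT BY level <= 100000;
SELECT COUNT(*) FROM result_gtt;

-- Turn tracing back off
ALTER SESSION SET events '10046 trace name context off';
```

Then inspect the raw trace (or a tkprof report) and see for yourself which waits, if any, the GTT access produces.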
Don't let these threads discourage you from testing it yourself, but:
GTT access makes no waits in 10046.
Why are blocks from temporary tables placed in buffer cache?
Of course, in 11gR2 it might be interesting to see whether you can get direct path reads via serial adaptive direct path reads, depending on the size of your temp table and your buffer cache, but that's a different matter.
Edited by: Dom Brooks on May 15, 2012 5:40 PM