
Oracle Database Discussions


Fetching CLOB column Faster

User_OCZ1T · Jul 30 2019 — edited Aug 5 2019

Hi. We are using version 11.2.0.4 of Oracle on Exadata. The query below is executed from Informatica and is just a "SELECT * FROM TAB1". It was taking ~1 hour to fetch ~135K rows, and from the SQL Monitor we found that all of the time was being spent on the client side fetching the data. We then discovered that one CLOB column was causing the issue: if we comment out the CLOB column (C10), the fetch finishes in a few seconds. As an alternative we used the DBMS_LOB.SUBSTR option below to fetch column C10, and that let the query finish in a few seconds as well. But we have suddenly started seeing this query fail with "ORA-06502: PL/SQL: numeric or value error: character string buffer too small", and it turns out it fails because a few of the values in column C10 are longer than 4000 bytes. So we want to understand: is there an alternative way to fetch the CLOB column here that does not fail for large values (>4000 bytes)?

Query:-

SELECT c1,c2,c3...c39 FROM TAB1;

Alternate option to fetch column C10:-

DBMS_LOB.SUBSTR (C10,(SELECT MAX (LENGTH (C10)) FROM TAB1)) C10

Error:-

ORA-06502: PL/SQL: numeric or value error: character string buffer too small

ORA-06512: at line 1
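For context: on 11.2 a VARCHAR2 expression evaluated in a SQL statement is limited to 4000 bytes, and DBMS_LOB.SUBSTR returns a VARCHAR2, so asking it for more than 4000 bytes of a longer CLOB raises exactly this error. A minimal reproduction sketch (hypothetical table T and column C, not the real TAB1):

CREATE TABLE t (c CLOB);
INSERT INTO t VALUES (TO_CLOB(RPAD('x', 4000, 'x')) || RPAD('y', 1000, 'y'));   -- ~5000 characters
COMMIT;

SELECT DBMS_LOB.SUBSTR(c, 5000, 1) FROM t;   -- fails with ORA-06502: the SQL VARCHAR2 limit is 4000 bytes on 11.2
SELECT DBMS_LOB.SUBSTR(c, 4000, 1) FROM t;   -- works, but silently truncates the value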

Below is the SQL Monitor report for one of the slow executions we used to see when the CLOB column was fetched in full:

Query:-

SELECT c1,c2,c3...c39 FROM TAB1;

Global Information

------------------------------

Status              :  EXECUTING                

Instance ID         :  4                        

SQL Execution ID    :  67108864                 

Execution Started   :  04/09/2018 06:02:49      

First Refresh Time  :  04/09/2018 06:02:49      

Last Refresh Time   :  04/09/2018 06:40:45      

Duration            :  2277s                    

Module/Action       :  SQL*Plus/-               

Program             :  sqlplus.exe              

Fetch Calls         :  26415                    

Global Stats

=================================================

| Elapsed |   Cpu   | Cluster  | Fetch | Buffer |

| Time(s) | Time(s) | Waits(s) | Calls |  Gets  |

=================================================

|    0.69 |    0.69 |     0.01 | 26415 |  27031 |

=================================================

SQL Plan Monitoring Details (Plan Hash Value=2531190874)

============================================================================================================================================================

| Id   |          Operation          |        Name        |  Rows   | Cost |   Time    | Start  | Execs |   Rows   | Activity | Activity Detail | Progress |

|      |                             |                    | (Estim) |      | Active(s) | Active |       | (Actual) |   (%)    |   (# samples)   |          |

============================================================================================================================================================

| -> 0 | SELECT STATEMENT            |                    |         |      |      2278 |     +0 |     1 |    26417 |          |                 |          |

| -> 1 |   TABLE ACCESS STORAGE FULL | TAB1               |    135K | 7212 |      2278 |     +0 |     1 |    26417 |          |                 |       6% |

============================================================================================================================================================

This post has been answered by AndrewSayer on Jul 31 2019

Comments

Dom Brooks
if we comment the CLOB column(C10) , the data fetch is finishing in few seconds.

Is this surprising?

A CLOB is a large object.

Exactly how long it takes to fetch will depend on how large the object is. The larger the object, the longer it will take.

And it will depend on client/network settings - you will basically be roundtripping on the CLOB.

If you traced this you would see exactly how many round trips etc.

Some client software has different options in how to deal with CLOBs.

There might also be some influence from whether the CLOB on the table is internally set to CACHE or NOCACHE etc.

So as an alternative we were using below SUBSTR option to fetch the column C10 and it was helping us to finish the query in few seconds

Really? Why would this be a valid way of returning an entire CLOB?

If it really is effective in limited circumstances, then it only works for CLOBs < 4000 characters, because you are then returning a VARCHAR2 - which means you are probably fetching that data differently in the client software. Client software deals with large objects via LOB locators, which is all it can do once the value is actually "large". And it will predictably fail for anything > 4000 characters.

Investigate what options Informatica has for dealing with CLOBs and what network settings you are currently using.

Do some tracing with the current setup so you can see exactly what the problem is (probably mainly network round-tripping, with some physical I/O), and then you can see what effect any change you make has.
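For example, one way to trace the fetching session (a sketch; the SID and serial# are placeholders you would look up in v$session first):

-- Enable extended SQL trace (with wait events) for the session doing the fetch,
-- run the fetch from Informatica / SQL*Plus, then disable and review the trace file with tkprof.
EXEC DBMS_MONITOR.SESSION_TRACE_ENABLE(session_id => 1234, serial_num => 5678, waits => TRUE, binds => FALSE);
-- ... run the fetch ...
EXEC DBMS_MONITOR.SESSION_TRACE_DISABLE(session_id => 1234, serial_num => 5678);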

Jonathan Lewis

One of the features of selecting LOBs is that the client code probably has to fetch the LOB Locator and then fetch the LOB data one item at a time.  Then there's probably a fixed size buffer the client uses for fetching the LOB so a single LOB may require multiple round-trips before the whole thing is acquired.

At some point in the past SQL*Plus used to fetch 80 bytes of a LOB at a time, and if the LOB was declared "nocache" each 80 byte chunk would require the block to be re-read.  The "workaround" to this was to "set long" and "set longchunksize" in SQL*Plus to larger values - possibly there's a similar configuration issue that you need to review in Informatica.
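For what it's worth, a quick SQL*Plus sketch of those settings (the values are only illustrative):

set long 200000           -- maximum amount of a LONG/CLOB value that SQL*Plus will retrieve and display
set longchunksize 200000  -- size of the pieces SQL*Plus uses when fetching that value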

As a test of hypothesis you could start the query running and check how many "SQL*Net message to client" and "SQL*Net more data to client" it takes to move a small subset of your larger LOBs.

Regards

Jonathan Lewis

User_OCZ1T

Looking at the CLOB column definition, it says "ENABLE STORAGE IN ROW CHUNK 8192 RETENTION NOCACHE". So in the table it is defined as NOCACHE.

In the SQL Monitor report posted above, the number of fetch calls was 26,415 for the 26,417 rows shown in the Rows (Actual) column.

I checked how many rows actually hold values longer than 4000 bytes, and it turns out to be very few (81 out of ~124K).

SELECT max(length(C10)), count(*) FROM TAB1 where LENGTH (C10)<4000

MAX(LENGTH(C10))    COUNT(*)

3823                     124876

SELECT max(length(C10)), count(*) FROM TAB1

MAX(LENGTH(C10))    COUNT(*)

18799                    124957

Now, comparing a CLOB fetch against a VARCHAR2(4000) fetch, I tried to compare the performance for rows with length < 4000 bytes. The CLOB column fetch is still significantly slower - why? Isn't it true that in the two scenarios below we are fetching the same amount of data from the CLOB column? So why is the CLOB fetch so much slower? I can see the same effect when I run it from the SQL*Plus client.

Fetching the CLOB column C10 as-is:

SQL Text

------------------------------

SELECT /*+monitor*/c1,c2,c3...

TAB1.C10 FROM TAB1 where LENGTH (C10)<4000

Global Information

------------------------------

Status              :  EXECUTING                

Instance ID         :  4                        

SQL ID              :  bwyw0v1h3pgxq            

SQL Execution ID    :  67108864                 

Execution Started   :  07/30/2019 12:45:54      

First Refresh Time  :  07/30/2019 12:45:54      

Last Refresh Time   :  07/30/2019 12:57:25      

Duration            :  691s                     

Module/Action       :  SQL*Plus/-               

Program             :  sqlplus.exe              

Fetch Calls         :  8526                     

Global Stats

=================================================

| Elapsed |   Cpu   |  Other   | Fetch | Buffer |

| Time(s) | Time(s) | Waits(s) | Calls |  Gets  |

=================================================

|    0.30 |    0.03 |     0.27 |  8526 |   8794 |

=================================================

SQL Plan Monitoring Details (Plan Hash Value=2531190874)

============================================================================================================================================================

| Id   |          Operation          |        Name        |  Rows   | Cost |   Time    | Start  | Execs |   Rows   | Activity | Activity Detail | Progress |

|      |                             |                    | (Estim) |      | Active(s) | Active |       | (Actual) |   (%)    |   (# samples)   |          |

============================================================================================================================================================

| -> 0 | SELECT STATEMENT            |                    |         |      |       693 |     +0 |     1 |     8534 |          |                 |          |

| -> 1 |   TABLE ACCESS STORAGE FULL | TAB1               |    6248 | 8594 |       693 |     +0 |     1 |     8534 |          |                 |       2% |

============================================================================================================================================================

Fetching the CLOB column C10 using the SUBSTR function:

SQL Monitoring Report

SQL Text

------------------------------

SELECT /*+monitor*/c1,c2,..DBMS_LOB.SUBSTR (C10,(SELECT MAX (LENGTH (C10)) FROM TAB1)) C10 FROM TAB1 where LENGTH (C10)<4000

Global Information

------------------------------

Status              :  DONE (ALL ROWS)          

Instance ID         :  4                        

SQL ID              :  g0ya736m3sgdu            

SQL Execution ID    :  67108864                 

Execution Started   :  07/30/2019 12:53:38      

First Refresh Time  :  07/30/2019 12:53:38      

Last Refresh Time   :  07/30/2019 12:53:45      

Duration            :  7s                       

Module/Action       :  SQL*Plus/-               

Program             :  sqlplus.exe              

Fetch Calls         :  26                       

Global Stats

===================================================================================================

| Elapsed |   Cpu   |    IO    | Concurrency | PL/SQL  |  Other   | Fetch | Buffer | Read | Read  |

| Time(s) | Time(s) | Waits(s) |  Waits(s)   | Time(s) | Waits(s) | Calls |  Gets  | Reqs | Bytes |

===================================================================================================

|    5.11 |    1.58 |     0.78 |        0.00 |    1.22 |     2.76 |    26 |  81960 |  151 |   1MB |

===================================================================================================

SQL Plan Monitoring Details (Plan Hash Value=3132348012)

================================================================================================================================================================

| Id |          Operation           |        Name        |  Rows   | Cost |   Time    | Start  | Execs |   Rows   | Activity |         Activity Detail         |

|    |                              |                    | (Estim) |      | Active(s) | Active |       | (Actual) |   (%)    |           (# samples)           |

================================================================================================================================================================

|  0 | SELECT STATEMENT             |                    |         |      |         6 |     +1 |     1 |        1 |    83.33 | Cpu (1)                         |

|    |                              |                    |         |      |           |        |       |          |          | SQL*Net more data to client (3) |

|    |                              |                    |         |      |           |        |       |          |          | direct path read (1)            |

|  1 |   SORT AGGREGATE             |                    |       1 |      |         1 |     +1 |     1 |        1 |          |                                 |

|  2 |    TABLE ACCESS STORAGE FULL | TAB1               |    125K | 7212 |         1 |     +1 |     1 |     125K |          |                                 |

|  3 |   TABLE ACCESS STORAGE FULL  | TAB1               |    6248 | 8594 |         7 |     +1 |     1 |     125K |    16.67 | Cpu (1)                         |

================================================================================================================================================================

Predicate Information (identified by operation id):

---------------------------------------------------

   3 - filter(LENGTH("C10")<4000)

Statistics

----------------------------------------------------------

         47  recursive calls

          0  db block gets

      81999  consistent gets

        163  physical reads

        572  redo size

   32115212  bytes sent via SQL*Net to client

        736  bytes received via SQL*Net from client

         26  SQL*Net roundtrips to/from client

          0  sorts (memory)

          0  sorts (disk)

     124876  rows processed

User_OCZ1T

One thing I noticed: irrespective of the size of the LOB values, it takes the same amount of time. That is, if I run the select for the ~81 rows that hold the larger CLOB values (length > 4000 bytes) and compare the time with a select for ~81 other rows with length < 4000 bytes, both consume the same amount of time.

I tried running the query for ~1000 rows with length < 4000 bytes after setting "set longchunksize 200000" and "set long 200000", but I see similar timings to the run below (i.e. without either of these set in SQL*Plus).

I fetched the CLOB column as-is for ~1000 rows with an arraysize of 5000 in SQL*Plus; below is the SQL Monitor report for that run.

Edited: I am not sure whether changing the storage option to CACHE will help us here, but I will try to test it on Dev.

Global Information

------------------------------

Status              :  DONE (ALL ROWS)          

Instance ID         :  4                        

SQL ID              :  5mr5nx1x81dfg            

SQL Execution ID    :  67108864                 

Execution Started   :  07/30/2019 14:57:58      

First Refresh Time  :  07/30/2019 14:57:58      

Last Refresh Time   :  07/30/2019 14:59:23      

Duration            :  85s                      

Module/Action       :  SQL*Plus/-               

Program             :  sqlplus.exe              

Fetch Calls         :  1000                     

Global Stats

=================================================

| Elapsed |   Cpu   |  Other   | Fetch | Buffer |

| Time(s) | Time(s) | Waits(s) | Calls |  Gets  |

=================================================

|    0.03 |    0.00 |     0.03 |  1000 |   1065 |

=================================================

SQL Plan Monitoring Details (Plan Hash Value=4129443724)

===========================================================================================================================================================

| Id |                Operation                |        Name        |  Rows   | Cost |   Time    | Start  | Execs |   Rows   | Activity | Activity Detail |

|    |                                         |                    | (Estim) |      | Active(s) | Active |       | (Actual) |   (%)    |   (# samples)   |

===========================================================================================================================================================

|  0 | SELECT STATEMENT                        |                    |         |      |        86 |     +0 |     1 |      999 |          |                 |

|  1 |   COUNT STOPKEY                         |                    |         |      |        86 |     +0 |     1 |      999 |          |                 |

|  2 |    TABLE ACCESS STORAGE FULL FIRST ROWS | TAB1               |     999 | 2534 |        86 |     +0 |     1 |      999 |          |                 |

===========================================================================================================================================================

Predicate Information (identified by operation id):

---------------------------------------------------

   1 - filter(ROWNUM<1000)

   2 - filter(LENGTH("C10")<4000)

Statistics

----------------------------------------------------------

          1  recursive calls

          0  db block gets

       1068  consistent gets

          3  physical reads

          0  redo size

     775428  bytes sent via SQL*Net to client

     306930  bytes received via SQL*Net from client

       2000  SQL*Net roundtrips to/from client

          0  sorts (memory)

          0  sorts (disk)

        999  rows processed

    

mtefft

We have run into this before, with JDBC and also with SSIS.

If a column is CLOB/NCLOB and it is not null in a particular row, then (as Jonathan pointed out) there will be multiple trips to fetch the column value for that row. You can't avoid that.

If the CLOB/NCLOB column is null in a particular row, then this does not happen.

On the other hand, if a column is VARCHAR/NVARCHAR, you will fetch many rows in a single trip.

You can see this behavior in a SQL trace.

What to do? If you have a rather small number of rows with the actual CLOB data > 4000 bytes (i.e. too large for VARCHAR) then you can do this:

SELECT
  CASE WHEN LENGTH(MY_CLOB_COLUMN) <= 4000 THEN CAST(MY_CLOB_COLUMN AS VARCHAR2(4000)) ELSE NULL END AS MY_CLOB_COLUMN_AS_VARCHAR2,
  CASE WHEN LENGTH(MY_CLOB_COLUMN) <= 4000 THEN NULL ELSE MY_CLOB_COLUMN END AS MY_CLOB_COLUMN_AS_CLOB,
  ... -- plus your other columns
FROM MY_TABLE;

(note: if NCLOB then use 2000 instead of 4000)

And the client application (Informatica) would have to marry these two columns (MY_CLOB_COLUMN_AS_VARCHAR2 and MY_CLOB_COLUMN_AS_CLOB) together.

So the result is that the vast majority of your rows would have nulls in their CLOB columns, so the multi-trip behavior drops dramatically.

But now your client has to deal with the two columns to get the single value you want. Don't try to put them back together within a SQL statement; you would just be putting yourself back where you started.

If most of your CLOBs are too big to fit in a VARCHAR - don't bother.

Jonathan Lewis

Just to clarify one important detail:

If you're running this hacked select from SQL*Plus then, whatever you set the arraysize to, Oracle will use single-row processing because the query includes a CLOB column in the select list. The reduction in roundtrips occurs because it always takes (at least) two round trips to fetch a row with a non-null CLOB - one to get the row with the LOB locator and one to fetch the actual LOB content (unless the locator is null or, maybe, says that the LOB is empty, but I'd have to check that).
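A simple way to see this from SQL*Plus (a sketch using the table and column names from this thread) is to compare the "SQL*Net roundtrips to/from client" statistic with and without the CLOB in the select list:

set arraysize 500
set autotrace traceonly statistics
SELECT c1      FROM tab1 WHERE ROWNUM <= 1000;   -- array fetching: a handful of roundtrips
SELECT c1, c10 FROM tab1 WHERE ROWNUM <= 1000;   -- CLOB in the select list: roughly two roundtrips per row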

There may be further roundtrips in SQL*Plus if you've done

set long N

set longchunksize M

where N and M are numeric (the defaults are 80) and N is larger than M and the size of some of your LOBs is larger than M.

Moreover, if your lobs are stored out of line and NOCACHE and it takes several chunks to read and display one lob then each of those reads will be a direct path read. There may be something about this on my blog (or old website, or in Practical Oracle 8i somewhere), but if not I'll write something up in the next few days.

Regards

Jonathan Lewis

Mustafa KALAYCI

I would like to add something not directly related to the OP's question. A CLOB of size 4000 will be stored in the LOB segment, not in the table: to stay in the row it must be less than 4000 bytes (3964 to be precise). If your data length is 4000, it will be stored in the LOB segment and will require the additional fetch, as Jonathan said.

Here are some examples about this: CLOB size matters!
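If it helps, the relevant LOB storage attributes can be checked in the dictionary (a sketch; TAB1 is the table name used in this thread):

SELECT column_name, segment_name, in_row, cache, chunk
FROM   all_lobs
WHERE  table_name = 'TAB1';
-- IN_ROW = YES means values below the ~4000-byte limit are kept in the table block;
-- CACHE  = NO  means out-of-line LOB reads bypass the buffer cache (direct path reads).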

AndrewSayer

Jonathan Lewis wrote:

Just to clarify one important detail:

If you're running this hacked select from SQL*Plus then, whatever you set the arraysize to, Oracle will use single-row processing because the query includes a CLOB column in the select list. The reduction in roundtrips occurs because it always takes (at least) two round trips to fetch a row with a non-null CLOB - one to get the row with the LOB locator and one to fetch the actual LOB content (unless the locator is null or, maybe, says that the LOB is empty, but I'd have to check that).

There may be further roundtrips in SQL*Plus if you've done

set long N

set longchunksize M

where N and M are numeric (the defaults are 80) and N is larger than M and the size of some of your LOBs is larger than M.

Moreover, if your lobs are stored out of line and NOCACHE and it takes several chunks to read and display one lob then each of those reads will be a direct path read. There may be something about this on my blog (or old website, or in Practical Oracle 8i somewhere), but if not I'll write something up in the next few days.

Regards

Jonathan Lewis

I thought that the new lobPrefetch option in sql*plus would prevent the extra fetches by including lob values in the standard fetch but it doesn’t seem to behave like that for me - it’s probably for chunks.

If you use ODP.NET, then you can use the InitialLOBFetchSize parameter to decide how much of a LOB to pull through in the same fetch as the rest of the row: https://docs.oracle.com/en/database/oracle/oracle-database/12.2/odpnt/DataReaderInitialLOBFetchSize.html#GUID-E015AF50-1…

I've used it in my own big fetches and it makes a huge difference.

I believe Informatica uses the ODBC driver which doesn’t have such an option unfortunately. Do you have to use Informatica for this?

User_OCZ1T

I tried modifying the query as you mentioned, but I am not seeing any improvement. Below are the SQL Monitor reports for fetching ~1000 rows of LOBs with length < 4000 bytes, both using the CASE approach and using the SUBSTR approach.

Using the CASE statement takes the same time as fetching the whole LOB:

SQL Monitoring Report

SQL Text

------------------------------

SELECT c1,c2...,CASE WHEN LENGTH(C10) <= 4000 then CAST(C10 AS VARCHAR2(4000)) ELSE NULL END AS MY_CLOB_COLUMN_AS_VARCHAR2, CASE WHEN LENGTH(C10) > 4000 then NULL ELSE C10 END AS MY_CLOB_COLUMN_AS_CLOB FROM TAB1 where LENGTH (C10)<4000 and rownum<1000

Global Information

------------------------------

Status              :  DONE (ALL ROWS)          

Instance ID         :  4                        

SQL ID              :  3m0j31tb3rd8x            

SQL Execution ID    :  67108864                 

Execution Started   :  07/31/2019 05:41:21      

First Refresh Time  :  07/31/2019 05:41:21      

Last Refresh Time   :  07/31/2019 05:42:46      

Duration            :  85s                      

Module/Action       :  SQL*Plus/-               

Program             :  sqlplus.exe              

Fetch Calls         :  1000                     

Global Stats

=================================================

| Elapsed |   Cpu   |  Other   | Fetch | Buffer |

| Time(s) | Time(s) | Waits(s) | Calls |  Gets  |

=================================================

|    0.02 |    0.00 |     0.02 |  1000 |   1065 |

=================================================

SQL Plan Monitoring Details (Plan Hash Value=4129443724)

===========================================================================================================================================================

| Id |                Operation                |        Name        |  Rows   | Cost |   Time    | Start  | Execs |   Rows   | Activity | Activity Detail |

|    |                                         |                    | (Estim) |      | Active(s) | Active |       | (Actual) |   (%)    |   (# samples)   |

===========================================================================================================================================================

|  0 | SELECT STATEMENT                        |                    |         |      |        86 |     +0 |     1 |      999 |          |                 |

|  1 |   COUNT STOPKEY                         |                    |         |      |        86 |     +0 |     1 |      999 |          |                 |

|  2 |    TABLE ACCESS STORAGE FULL FIRST ROWS | TAB1               |     999 | 2335 |        86 |     +0 |     1 |      999 |          |                 |

===========================================================================================================================================================

Predicate Information (identified by operation id):

---------------------------------------------------

   1 - filter(ROWNUM<1000)

   2 - filter(LENGTH("C10")<4000)

Statistics

----------------------------------------------------------

          1  recursive calls

          0  db block gets

       1068  consistent gets

          3  physical reads

          0  redo size

     608700  bytes sent via SQL*Net to client

     306929  bytes received via SQL*Net from client

       2000  SQL*Net roundtrips to/from client

          0  sorts (memory)

          0  sorts (disk)

        999  rows processed

Using the SUBSTR function, finishing in 3 seconds:

SELECT c1,c2....,DBMS_LOB.SUBSTR (C10,(SELECT MAX (LENGTH (C10)) FROM TAB1)) C10 FROM TAB1 where LENGTH (C10)<4000 and rownum<1000

999 rows selected.

Elapsed: 00:00:03.59

Execution Plan

----------------------------------------------------------

Plan hash value: 2117026829

------------------------------------------------------------------------------------------------------------

| Id  | Operation                             | Name               | Rows  | Bytes | Cost (%CPU)| Time     |

------------------------------------------------------------------------------------------------------------

|   0 | SELECT STATEMENT                      |                    |   999 |   195K|  2335   (1)| 00:00:29 |

|   1 |  SORT AGGREGATE                       |                    |     1 |   164 |            |       |

|   2 |   TABLE ACCESS STORAGE FULL           | TAB1               |   151K|    23M|  7212   (1)| 00:01:27 |

|*  3 |  COUNT STOPKEY                        |                    |       |       |            |       |

|*  4 |   TABLE ACCESS STORAGE FULL FIRST ROWS| TAB1               |   999 |   195K|  2335   (1)| 00:00:29 |

------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):

---------------------------------------------------

   3 - filter(ROWNUM<1000)

   4 - filter(LENGTH("C10")<4000)

Statistics

----------------------------------------------------------

         47  recursive calls

          0  db block gets

      41131  consistent gets

         14  physical reads

        572  redo size

      25154  bytes sent via SQL*Net to client

       1197  bytes received via SQL*Net from client

         68  SQL*Net roundtrips to/from client

          0  sorts (memory)

          0  sorts (disk)

        999  rows processed

Mustafa KALAYCI

So your CLOB data might simply be too big: fetching a few MB of data will of course be faster than fetching a few GB. Check the total length of your LOB data:

select sum(size_in_mb)/1024/1024 total_clob_size_in_mb,

       sum(case when size_in_mb > 4000 then 4000

                when size_in_mb <= 4000 then size_in_mb

           end

          ) size_for_first_4000_mb

from (select dbms_lob.getlength(col1)/1024/1024 size_in_mb from table1);

The first column is the total size and the second is the size when you fetch only the first 4000 characters; there is probably a huge difference.

User_OCZ1T

The total size of the CLOB column comes out to 5.76 MB, and the maximum size of any single CLOB value is ~0.04 MB.

select sum(size_in_mb) total_clob_size_in_mb,

       sum(case when size_in_mb > 4000 then 4000

                when size_in_mb <= 4000 then size_in_mb

           end

          ) size_for_first_4000_mb

from (select dbms_lob.getlength(C10)/1024/1024 size_in_mb from TAB1);       

5.76065254211426    5.76065254211426

select max(dbms_lob.getlength(C10))/1024/1024 size_in_mb from TAB1

0.0448904037475586

Mustafa KALAYCI

That's odd. Did you delete a large amount of data from this table? There might be too much empty space in your table or LOB segment. Use DBMS_SPACE and check the empty space in your table/LOB segment; I assume this could be the problem.

sample code:

DECLARE

  fs1_blocks  number;  fs1_bytes   number;

  fs2_blocks  number;  fs2_bytes   number;

  fs3_blocks  number;  fs3_bytes   number;

  fs4_blocks  number;  fs4_bytes   number;

  full_blocks number;  full_bytes  number;

  unformatted_bl number;  unformatted_by number;

BEGIN

  DBMS_SPACE.space_usage('TABLE_OWNER HEREEEE' ,

                         upper('TABLE_NAME OR SEGMENT NAME HEREEEE') ,

                         'TABLE' ,

                         unformatted_bl, unformatted_by,

                         fs1_blocks , fs1_bytes  ,

                         fs2_blocks , fs2_bytes  ,

                         fs3_blocks , fs3_bytes  ,

                         fs4_blocks , fs4_bytes  ,

                         full_blocks, full_bytes                        

                         );

  DBMS_OUTPUT.PUT_LINE('fs1_blocks : ' || fs1_blocks );  DBMS_OUTPUT.PUT_LINE('fs1_bytes  : ' || fs1_bytes  );

  DBMS_OUTPUT.PUT_LINE('fs2_blocks : ' || fs2_blocks );  DBMS_OUTPUT.PUT_LINE('fs2_bytes  : ' || fs2_bytes  );

  DBMS_OUTPUT.PUT_LINE('fs3_blocks : ' || fs3_blocks );  DBMS_OUTPUT.PUT_LINE('fs3_bytes  : ' || fs3_bytes  );

  DBMS_OUTPUT.PUT_LINE('fs4_blocks : ' || fs4_blocks );  DBMS_OUTPUT.PUT_LINE('fs4_bytes  : ' || fs4_bytes  );

  DBMS_OUTPUT.PUT_LINE('full_blocks: ' || full_blocks);  DBMS_OUTPUT.PUT_LINE('full_bytes : ' || full_bytes );

  DBMS_OUTPUT.PUT_LINE('unformatted_bl: ' || unformatted_bl);  DBMS_OUTPUT.PUT_LINE('unformatted_by : ' || unformatted_by );

END;

/

To check the LOB segment:

select segment_name from all_lobs where table_name = 'YOUR_TABLE_HERE';

then change the code above like this:

.....

  DBMS_SPACE.space_usage('TABLE OWNER HEREEEE' ,

                         upper('SEGMENT NAME FROM ALL_LOBS') ,

                         'LOB' ,

.....

User_OCZ1T

I am not able to fully understand the output, but I ran it for the exact LOB segment and below is the output for that segment.

fs1_blocks : 0

fs1_bytes  : 0

fs2_blocks : 0

fs2_bytes  : 0

fs3_blocks : 0

fs3_bytes  : 0

fs4_blocks : 0

fs4_bytes  : 0

full_blocks: 171991

full_bytes : 1408950272

unformatted_bl: 3868

unformatted_by : 31686656

AndrewSayer

933257 wrote:

I tried modifying the query as you mentioned, but I am not seeing any improvement. Below are the SQL Monitor reports for fetching ~1000 rows of LOBs with length < 4000 bytes, both using the CASE approach and using the SUBSTR approach.

Using the CASE statement takes the same time as fetching the whole LOB:

(SQL Monitor reports and statistics quoted from the post above, for both the CASE and the SUBSTR versions.)

You could use two queries: one to fetch the smaller LOBs as a VARCHAR2 - this will use standard array fetches.

And one query to fetch the remaining rows with LOBs that can't be SUBSTR'ed - this will hopefully be only a few rows.

User_OCZ1T

Initially I thought you were suggesting a UNION, but then I realized that the SUBSTR part will be a VARCHAR data type while the other part of the query will be a CLOB, so it has to be fetched twice by Informatica. Correct me if I am wrong.

Also, I tested the query after setting high values for longchunksize and long, but that didn't affect the query performance. The other thing I was thinking: should I test this by setting the CACHE option on the CLOB storage, and would that benefit us?
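For reference, a sketch of how that could be tested on a Dev copy (TAB1/C10 are the names from this thread; verify the exact syntax before running):

-- Read out-of-line LOB chunks through the buffer cache instead of direct path reads:
ALTER TABLE tab1 MODIFY LOB (c10) (CACHE);
-- and to revert:
ALTER TABLE tab1 MODIFY LOB (c10) (NOCACHE);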

AndrewSayer
Answer

933257 wrote:

Initially I thought you were suggesting a UNION, but then I realized that the SUBSTR part will be a VARCHAR data type while the other part of the query will be a CLOB, so it has to be fetched twice by Informatica. Correct me if I am wrong.

Also, I tested the query after setting high values for longchunksize and long, but that didn't affect the query performance. The other thing I was thinking: should I test this by setting the CACHE option on the CLOB storage, and would that benefit us?

Exactly, a UNION would need to use the same data type in both branches, so it won't work too well. Although if you can make it project the CLOB as NULL for the short rows it might be okay - it would probably confuse Informatica though!

I meant run:

select ..dbms_lob.substr(clobcol, 4000, 1) my_clob from my_table where dbms_lob.getlength(clobcol) <= 4000

and

select ..clobcol my_clob from my_table where dbms_lob.getLength(clobcol)>4000

Assuming 1 byte characters.

I don't think CACHE will help; the problem is in the network round trips, from what I can tell.

Marked as Answer by User_OCZ1T · Sep 27 2020
Jonathan Lewis

Your database spent 0.02 seconds working.

Your client or your network has spent the other 85.98 seconds.

If you want to check this, take a starting and an ending snapshot of v$sesstat and v$session_event for your session; don't rely only on the autotrace stats. This may give you some idea of where you are losing the time.
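For example, a sketch of that kind of before/after snapshot for the wait events (using a SQL*Plus substitution variable for the SID; the snapshot table name is made up):

-- Snapshot the session's wait events before the run:
CREATE TABLE evt_before AS
  SELECT event, total_waits, time_waited_micro
  FROM   v$session_event
  WHERE  sid = &&sid;

-- ... run the slow fetch from the other session ...

-- Report the deltas afterwards:
SELECT e.event,
       e.total_waits       - NVL(b.total_waits, 0)       AS waits,
       e.time_waited_micro - NVL(b.time_waited_micro, 0) AS micros
FROM   v$session_event e
LEFT JOIN evt_before b ON b.event = e.event
WHERE  e.sid = &&sid
ORDER  BY micros DESC;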

Regards

Jonathan Lewis

User_OCZ1T

Yes Jonathan, I see ~1 minute 25 seconds as the overall run duration, while the DB time is a fraction of a second. Below are the stats I fetched from gv$session_event for the SQL*Net events.

SELECT event, total_waits, time_waited_micro, max_wait

FROM gv$session_event

WHERE SID = 1350 and upper(event) like upper('%SQL*%') and inst_id=4

order by total_waits desc;

    

EVENT                          TOTAL_WAITS    TIME_WAITED_MICRO    MAX_WAIT

SQL*Net message to client      2024           1959                 0

SQL*Net message from client    2023           122900382            2173
Dom Brooks

Might not be relevant to Informatica but good article on the JDBC options for LOB prefetching here:

https://blog.aregger.io/jdbc-lob-prefetching/
