Low performance when reading large LOB

Hello everyone,
we need to read a file of about 70 MB from a database LOB and send it to clients. Dead simple:
<?php
$id = 'filename';
$c = oci_connect('scott', 'tiger', 'MYCONN');
$sql = "SELECT CONTENT FROM MYTABLE WHERE ID = :1";
$s = oci_parse($c, $sql);
oci_bind_by_name($s, ':1', $id);
$r = oci_execute($s);
$arr = oci_fetch_array($s, OCI_ASSOC);
if (is_object($arr['CONTENT'])) {
    // Stream the LOB to the client in 1 MB chunks
    while (!$arr['CONTENT']->eof()) {
        echo $arr['CONTENT']->read(1000000);
    }
}
All works fine, only the performance is terrible. For the 70 MB file, this script takes 40 seconds(!) and the transfer speed stays below 2 MB/s. The app server is connected to the database over gigabit Ethernet; the transfer speed between the two machines, measured with scp, is over 100 MB/s, as expected. Both the database and the app server are beefy, underutilized machines. I tried various chunk sizes for the read() method, but the variations in speed are insignificant. I also tried SQL*Net tracing on the client; no errors are reported. There is no special SQL*Net configuration on either the client or the server.
Anyone has any experience with tuning this? I am sure we can do much better.
Thanks for any hints,
Michal
Answers
-
Hi Michal!
Here are some comments:
- For me, between computers on the same floor, an 80 MB CLOB is fetched in 3 seconds when the PHP script doesn't display output. Writing the CLOB to a file via shell redirection added another second. My CLOB was created with default parameters. I used PHP 5.5 with Oracle 12c client libraries and database server.
- Character set conversion will slow down the transfer.
- The $arr['CLOBDATA']->read(x) chunk size does impact performance. There is an internal upper limit which caps x at 1 MB.
- Using Oracle 10g or later client libraries allows PHP OCI8 to use the OCILobRead2 call.
- Check chapter 12, "Performance Guidelines", of http://docs.oracle.com/cd/E16655_01/appdev.121/e17605.pdf
- Do the DB LOB storage parameters affect performance? E.g. the CHUNK size? For generic queries in a live system, I've seen altering the table and turning on CACHE help. I also have a note about the high water mark enqueue once having contention (this was fixed in 11.1.0.7 and 10.2.0.4). Again, this is unlikely to affect a simple benchmark script like the one you posted.
- Check the Oracle Net session data unit (SDU) size. An 8 KB size (the new default in Oracle 11g) is suitable for many applications, but for LOBs a bigger value might be better. It should be set to the same value in both the database server's sqlnet.ora and the tnsnames.ora file used by the PHP OCI8 client.
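For reference, raising the SDU on both ends might look roughly like this (a sketch only; the 32767 value, host name, and service name are illustrative, not from this thread — MYCONN is the connect identifier from Michal's script):

# sqlnet.ora on the database server
DEFAULT_SDU_SIZE = 32767

# tnsnames.ora on the PHP OCI8 client machine
MYCONN =
  (DESCRIPTION =
    (SDU = 32767)
    (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost.example.com)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = orcl))
  )

The effective SDU is negotiated at connect time as the smaller of the two sides' values, which is why both ends need to be raised.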
-
Hi Chris and thanks.
You've pointed me in the right direction with the last one, actually, because while looking into the SDU I came across this note:
Note: Starting with Oracle Database 11g, Oracle Net Services optimized bulk data transfer for components, such as Oracle SecureFiles LOBs and Oracle Data Guard redo transport services. The SDU size limit, as specified in the network parameter files, does not apply to these bulk data transfers.
I checked and, to my surprise, the LOB was defined as BASICFILE. When I rebuilt it as SECUREFILE, the performance immediately went up; I am now seeing transfer rates of about 20 MB/s and the script runs in 3-4 seconds, which is good enough. Problem solved.
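For anyone hitting the same issue, checking the storage type and converting can be sketched like this (MYTABLE and CONTENT are the names from the thread; the CACHE option is illustrative, and a MOVE rewrites the table, so plan for downtime or use online redefinition instead):

-- Check whether the LOB column is stored as SECUREFILE (YES/NO)
SELECT column_name, securefile
FROM   user_lobs
WHERE  table_name = 'MYTABLE';

-- Rebuild the LOB segment as SECUREFILE; indexes must be rebuilt afterwards
ALTER TABLE mytable MOVE LOB (content) STORE AS SECUREFILE (CACHE);

SECUREFILE also requires the tablespace to use automatic segment space management, which is the default in recent releases.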
Thanks for the kick,
Michal