We access a database that contains a CLOB column. We are using the OCCI C++ API to extract multiple rows from the database as quickly as possible from a SELECT statement. The live system returns 17000 rows; each row contains a Clob (which acts as a locator for the actual CLOB data). When iterating over the 17000 results, the system takes over 20 seconds to call 'next' for all result rows. This is dominated by the round-trip time of fetching each row from the database individually.
To significantly reduce this time, I am attempting to read multiple results from the ResultSet instance in one round trip by providing a buffer to populate. To achieve this I need to pass an array of Clobs to the setDataBuffer method of the ResultSet instance. I should then be able to call the 'next' method to populate the buffers with a specified number of results, e.g. 'resultSet->next(10);'.
According to the documentation, when specifying the buffer for a CLOB column I must provide the type 'OCCI_SQLT_CLOB'. Unfortunately, once a buffer of this type has been set, the application crashes as described below on the call to 'resultSet->next(10);'.
I have tested the same mechanism with the 'OCCIINT' type specified and it works as expected.
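For reference, this is roughly what the working integer case looks like. It is a sketch, not the exact production code: the table and column names are placeholders, error handling is omitted, and the setDataBuffer signature is used as documented in the OCCI reference (column index, buffer, type, element size, length array).

```cpp
#include <occi.h>
using namespace oracle::occi;

// Buffered fetch of an integer column: works as expected.
// [ENTER_TABLE_NAME] is a placeholder; replace with a real table.
void fetchIntsBuffered(Connection* conn)
{
    Statement* stmt = conn->createStatement("SELECT ID FROM [ENTER_TABLE_NAME]");
    ResultSet* rs = stmt->executeQuery();

    const unsigned int batchSize = 10;
    int buffer[batchSize];

    // Bind the int array as the fetch buffer for column 1.
    rs->setDataBuffer(1, buffer, OCCIINT, sizeof(int), NULL);

    ResultSet::Status status;
    do
    {
        // Each call fetches up to batchSize rows in a single round trip.
        status = rs->next(batchSize);
        unsigned int fetched = rs->getNumArrayRows();
        // ... consume buffer[0 .. fetched-1] ...
    } while (status == ResultSet::DATA_AVAILABLE);

    stmt->closeResultSet(rs);
    conn->terminateStatement(stmt);
}
```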
- Create a table with an occi::Clob column.
- Populate it with 20 rows of random occi::Clob data.
- Initialise the occi::Environment.
- Open an occi::Connection to the database.
- Create a statement to extract multiple rows, e.g. "SELECT [column] FROM [table]".
- Execute the statement and obtain the occi::ResultSet (see code below).
- Create an occi::Clob array and set it as the buffer on the occi::ResultSet instance (see code below).
- Call next(10) on the occi::ResultSet instance, which results in a pointer access violation inside the OCCI DLLs.
void reproduceClobCrash(Environment* env, Connection* conn)
{
    occi::Statement* stmt = conn->createStatement("SELECT DATA FROM [ENTER_TABLE_NAME]");
    occi::ResultSet* rs = stmt->executeQuery();
    // ... bind the Clob buffer and call rs->next(10) ...
}
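The buffer-binding step that triggers the crash looks roughly like the following. This is a sketch of the failing path, not a recommended pattern: the batch size, table name, and locator initialisation are illustrative, and OCCI_SQLT_CLOB is the buffer type the documentation calls for, as described above.

```cpp
// Continuation of the reproduction: bind an array of Clob locators
// as the fetch buffer for column 1, then request 10 rows at once.
const unsigned int batchSize = 10;
occi::Clob clobs[batchSize];
for (unsigned int i = 0; i < batchSize; ++i)
    clobs[i] = occi::Clob(conn);   // initialise each locator against the connection

// Per the documentation, CLOB buffers are bound with type OCCI_SQLT_CLOB.
rs->setDataBuffer(1, clobs, occi::OCCI_SQLT_CLOB, sizeof(occi::Clob), NULL);

rs->next(batchSize);   // <-- pointer access violation occurs here
```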