I have a technical question regarding the effect of fetchSize (which
can be set on a prepared JDBC statement via the setFetchSize() method)
on memory allocation when accessing Oracle databases through JDBC. We
are using Java 11, Oracle 12.2, and ojdbc7.jar 12.1.0.2.
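For reference, this is roughly how we set it per statement today (the connection URL, credentials, and query are placeholders):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class FetchSizeExample {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:oracle:thin:@//dbhost:1521/ORCLPDB", "user", "password");
             PreparedStatement ps = conn.prepareStatement(
                "SELECT id, payload FROM some_table WHERE status = ?")) {
            ps.setFetchSize(100); // rows fetched per round trip; the driver default is 10
            ps.setString(1, "OPEN");
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    // process row
                }
            }
        }
    }
}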
I have read the Oracle documentation below:
https://www.oracle.com/technetwork/database/application-development/jdbc-memory-management-12c-1964666.pdf
but the document seems to contradict itself on a crucial point.
The first few chapters suggest that in version 12 the driver was
improved so that it allocates memory based on the actual data being
fetched, rather than on the maximum datatype size multiplied by the
fetchSize. Yet a later chapter discussing the version 12 changes
suggests that setting fetchSize correctly is crucially important. So
it remains unclear what the effect of this parameter actually is.
How exactly does this parameter affect the memory allocation within
the driver?
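To make the memory question concrete, the kind of experiment we have in mind looks roughly like this (a rough sketch; heap deltas measured via Runtime are noisy, and the query is a placeholder, so the numbers would only be indicative):

import java.sql.*;

public class FetchSizeMemoryProbe {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:oracle:thin:@//dbhost:1521/ORCLPDB", "user", "password")) {
            for (int fetchSize : new int[] {10, 100, 1000, 8000}) {
                Runtime rt = Runtime.getRuntime();
                System.gc();
                long before = rt.totalMemory() - rt.freeMemory();
                try (PreparedStatement ps = conn.prepareStatement(
                        "SELECT * FROM some_large_table")) {
                    ps.setFetchSize(fetchSize);
                    try (ResultSet rs = ps.executeQuery()) {
                        while (rs.next()) { /* drain the result set */ }
                    }
                }
                long after = rt.totalMemory() - rt.freeMemory();
                System.out.printf("fetchSize=%d heapDelta=%d bytes%n",
                        fetchSize, after - before);
            }
        }
    }
}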
In our code base we have had performance problems when people forget
to set the fetchSize, because the Oracle driver default (10) is too
small and results in too many round trips. We want to increase this
default to something reasonable, but to determine that value we would
like to know what effect this variable has on memory allocation
within the driver itself. Can one blindly set it to something like
8,000, for example? What does that mean for queries that fetch only a
couple of rows? Are we wasting resources in the client applications
in that case?
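One option we are considering, rather than touching every call site, is raising the default through the Oracle-specific defaultRowPrefetch connection property; a rough sketch (URL and credentials are placeholders):

import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

public class DefaultPrefetchExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.setProperty("user", "user");
        props.setProperty("password", "password");
        // Oracle-specific property: sets the default rows per round trip
        // for all statements on this connection, unless a statement
        // overrides it with setFetchSize().
        props.setProperty("defaultRowPrefetch", "1000");
        try (Connection conn = DriverManager.getConnection(
                "jdbc:oracle:thin:@//dbhost:1521/ORCLPDB", props)) {
            // statements created here now default to 1000 rows per fetch
        }
    }
}

But this just pushes the same question up a level: we still need to know what a safe default is.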