
    Effect of FetchSize on memory allocation of clients in Oracle 12

    3503691

      I have a technical question regarding the effect of fetchSize (which can be set on a prepared JDBC statement using the method setFetchSize()) on memory allocation when accessing Oracle databases from JDBC. We are using Java 11, Oracle 12.2, and ojdbc7.jar 12.1.0.2.

       

      I have read the Oracle documentation below, but the document seems to contradict itself on a crucial point:

      https://www.oracle.com/technetwork/database/application-development/jdbc-memory-management-12c-1964666.pdf

       

      The first few chapters suggest that in version 12, improvements were made to the driver so that it allocates memory based not on the maximum datatype size and the fetchSize, but on the actual data being fetched. A later chapter discussing the changes in version 12 seems to suggest that setting the fetchSize correctly is crucially important.

       

      So it is still confusing what the effect of this parameter is. How exactly does this parameter affect the memory allocation within the driver?

       

      In our code base we have had performance problems when people forget to set the fetchSize, because the Oracle driver default is too small (10), resulting in too many round trips. We want to increase this size to something reasonable, but in order to determine that value we would like to know what effect this variable has on the memory allocations within the driver itself. Can one blindly set it to something like 8,000, for example? What does this mean for queries that fetch only a couple of rows? Are we wasting resources in the client applications in that case?
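
      For reference, this is roughly how the fetchSize is set per statement in our code (a minimal sketch; the connection handling, the table and column names, and the 8,000 figure are illustrative placeholders only):

          import java.sql.Connection;
          import java.sql.PreparedStatement;
          import java.sql.ResultSet;
          import java.sql.SQLException;

          // Minimal sketch: table/column names and the fetch size are placeholders.
          static void readOpenOrders(Connection connection) throws SQLException {
              try (PreparedStatement ps = connection.prepareStatement(
                      "SELECT id, description FROM orders WHERE status = ?")) {
                  ps.setFetchSize(8000);   // rows requested per database round trip
                  ps.setString(1, "OPEN");
                  try (ResultSet rs = ps.executeQuery()) {
                      while (rs.next()) {
                          // process rs.getLong("id"), rs.getString("description") ...
                      }
                  }
              }
          }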

       


        • 1. Re: Effect of FetchSize on memory allocation of clients in Oracle 12
          dsurber-Oracle

          It was not the intent to suggest that fetchSize didn't matter. FetchSize matters for all versions; the preallocated memory is some-constant + fetchSize * (per-row-size + sum(column-size)). We'll ignore some-constant and per-row-size; honestly I have no idea what they are. What really matters is column-size. Pre-12, column-size was O(max-size-to-store-value), and the max-size-to-store-value of a VARCHAR(4000) was O(4000) bytes. In 12 and later, column-size is 15 bytes.

           

          So let's say you have a query that returns 100 VARCHAR(100) values. Assume the default fetchSize of 10. Pre-12, the driver would preallocate 100K to execute this query: 100 * 100 * 10 (plus some overhead). In 12 and later it would be 15K: 15 * 100 * 10. Now assume the fetchSize is 10,000. The 12 and later driver would preallocate 15M: 15 * 100 * 10,000. So yes, the fetchSize matters even in 12 and later.
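
          A quick back-of-the-envelope helper makes that arithmetic explicit. This is only a sketch: it uses the 15-byte per-column figure above and ignores some-constant and per-row-size, so treat the result as a rough lower bound on the preallocation, not a precise figure.

              // Rough lower bound on the 12-and-later driver's preallocation,
              // ignoring some-constant and per-row-size as above.
              static long estimatePreallocation(int fetchSize, int columnCount) {
                  final int BYTES_PER_COLUMN = 15;   // 12 and later
                  return (long) fetchSize * columnCount * BYTES_PER_COLUMN;
              }

              // estimatePreallocation(10, 100)     ->     15,000  (~15K)
              // estimatePreallocation(10_000, 100) -> 15,000,000  (~15M)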

           

          Our experience is that fetch sizes much greater than 100 are rarely worthwhile. Sure, there are cases where significantly larger fetch sizes are faster, but most of the time the benefit is just not that great compared to the increase in footprint.
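
          If the underlying problem is people forgetting to call setFetchSize, one option is to set the default once at the connection level with the Oracle JDBC connection property defaultRowPrefetch. A minimal sketch follows; the URL and credentials are placeholders.

              import java.sql.Connection;
              import java.sql.DriverManager;
              import java.sql.SQLException;
              import java.util.Properties;

              // Sketch: URL and credentials are placeholders. defaultRowPrefetch sets
              // the fetch size for statements that never call setFetchSize explicitly.
              static Connection openWithDefaultPrefetch() throws SQLException {
                  Properties props = new Properties();
                  props.setProperty("user", "scott");
                  props.setProperty("password", "tiger");
                  props.setProperty("defaultRowPrefetch", "100");   // rows per round trip
                  return DriverManager.getConnection(
                          "jdbc:oracle:thin:@//dbhost:1521/ORCLPDB1", props);
              }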

           

          It's worth mentioning that when the 12 and later drivers retrieve the query results, they do have to allocate enough memory to actually store the results. In practice that is usually less than the amount required to store the maximum-size result, but not necessarily. For example, if your query returns only non-NULL DATEs, both the pre-12 and the 12-and-later drivers will need more or less the same amount of memory; the pre-12 drivers allocate it before execution and the 12-and-later drivers after execution. So setting the fetchSize to 10,000 will use well more than 15M once you include the memory needed to store the actual data. With 32K VARCHARs it can be a lot more. For what it is worth, the driver will happily handle such cases if the heap is big enough.