It was not the intent to suggest that fetchSize didn't matter. FetchSize matters for all versions; the preallocated memory is roughly some-constant + fetchSize * (per-row-size + sum(column-size)). We'll ignore some-constant and per-row-size; honestly, I have no idea what they are. What really matters is column-size. Pre-12, column-size was O(max-size-to-store-value), and the max-size-to-store-value of a VARCHAR(4000) is O(4000) bytes. In 12 and later, column-size is 15 bytes.
So let's say you have a query that returns 100 VARCHAR(100) values, and assume the default fetchSize of 10. Pre-12, the driver would preallocate about 100K to execute this query: 100 bytes * 100 columns * 10 rows (plus some overhead). In 12 and later it would be 15K: 15 * 100 * 10. Now assume a fetchSize of 10,000. The 12-and-later driver would preallocate 15M: 15 * 100 * 10,000. So yes, fetchSize matters even in 12 and later.
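The arithmetic above can be sketched as a quick back-of-the-envelope calculation. This only models the per-column term of the formula; the constant and per-row overheads are ignored, as above:

```java
public class PreallocEstimate {
    // Per-column preallocation estimate: bytes-per-column * columns * fetchSize.
    // Ignores the constant and per-row overheads mentioned above.
    static long prealloc(long bytesPerColumn, int columns, int fetchSize) {
        return bytesPerColumn * columns * fetchSize;
    }

    public static void main(String[] args) {
        // Pre-12: ~100 bytes per VARCHAR(100) column, 100 columns, fetchSize 10
        System.out.println(prealloc(100, 100, 10));    // 100000 (~100K)
        // 12 and later: 15 bytes per column
        System.out.println(prealloc(15, 100, 10));     // 15000 (~15K)
        // 12 and later with fetchSize 10,000
        System.out.println(prealloc(15, 100, 10_000)); // 15000000 (~15M)
    }
}
```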
Our experience is that fetch sizes much greater than 100 are rarely worthwhile. Sure, there are cases where significantly larger fetch sizes are faster, but most of the time the benefit just isn't that great compared to the increase in memory footprint.
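In plain JDBC, that guidance just means calling setFetchSize before executing the query. A minimal sketch, where the EMP table and the chooseFetchSize helper are illustrative inventions, not part of any Oracle API:

```java
import java.sql.*;

public class FetchSizeDemo {
    // Hypothetical helper capping the fetch size at 100, per the
    // guidance above that larger values are rarely worthwhile.
    static int chooseFetchSize(int requested) {
        return Math.max(1, Math.min(requested, 100));
    }

    // Sketch of applying the fetch size; assumes an open Connection to a
    // database with an EMP table (illustrative only).
    static void runQuery(Connection conn) throws SQLException {
        try (PreparedStatement ps =
                 conn.prepareStatement("SELECT ename FROM emp")) {
            ps.setFetchSize(chooseFetchSize(10_000)); // capped to 100
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    String name = rs.getString(1); // process each row
                }
            }
        }
    }

    public static void main(String[] args) {
        System.out.println(chooseFetchSize(10_000)); // prints 100
    }
}
```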
It's worth mentioning that when the 12-and-later drivers retrieve the query results, they still have to allocate enough memory to actually store those results. In practice that is usually less than the amount required to store the maximum-size result, but not necessarily. For example, if your query returns only non-NULL DATEs, both pre-12 and 12-and-later drivers will need more or less the same amount of memory; the pre-12 drivers just allocate it before execution and the 12-and-later drivers after. So setting the fetchSize to 10,000 will use well more than 15M once you include the memory needed to store the actual data, and with 32k VARCHARs it can be a lot more. For what it's worth, the driver will happily handle such cases if the heap is big enough.
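To make the 32k VARCHAR point concrete, here is a rough worst-case comparison for a single 32k VARCHAR column at fetchSize 10,000. This assumes one byte per character and ignores Java per-object overhead, so the real footprint can be higher still:

```java
public class ActualDataEstimate {
    // 12-and-later pre-execution allocation: 15 bytes per column per row.
    static long prealloc12(int columns, int fetchSize) {
        return 15L * columns * fetchSize;
    }

    // Worst-case post-execution data size if every value is at max length.
    static long worstCaseData(int maxBytesPerValue, int columns, int fetchSize) {
        return (long) maxBytesPerValue * columns * fetchSize;
    }

    public static void main(String[] args) {
        // One 32k VARCHAR column (max ~32767 bytes), fetchSize 10,000.
        System.out.println(prealloc12(1, 10_000));           // 150000 (~150K)
        System.out.println(worstCaseData(32_767, 1, 10_000)); // 327670000 (~327M)
    }
}
```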