      • 15. Re: CLOB: ORA-06502: PL/SQL: numeric or value error: character string buffer too small
        mathguy

        BluShadow wrote:

         

        mathguy, I think it's related more to the block sizes, especially on earlier versions of Oracle, where the recommendation was 2K or 4K; hence 4000 bytes, plus some extra for the additional bits of data needed.

         

        Perhaps, but that still doesn't make sense. For one thing, 4k is 4096, not 4000. What are the odds that the overhead is exactly 96 bytes? It seems clear to me that an effort was made to work with round numbers.

         

        Then, a block size of 4k would explain the exponent (12, since 4k = 2^12); except that block size should be related to row length. Most tables don't have just a single column of data type VARCHAR2(4000). So I don't see why the limit for VARCHAR2 strings should be related to block size; such a limit would perhaps make sense for row length instead.
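
        For concreteness, a quick sketch of that point (hypothetical table names; assumes the pre-12c limits, or MAX_STRING_SIZE = STANDARD on 12c and later). The 4000-byte cap is enforced per column, not per row, so it can't simply be a "fit the row in a block" limit:

            -- Several 4000-byte columns in one table are perfectly legal,
            -- even though the combined row easily exceeds a 4K block
            CREATE TABLE limit_demo (
              c1 VARCHAR2(4000),
              c2 VARCHAR2(4000),
              c3 VARCHAR2(4000)
            );

            -- One byte over the per-column cap is rejected outright
            CREATE TABLE limit_demo2 (c VARCHAR2(4001));
            -- ORA-00910: specified length too long for its datatype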

        • 16. Re: CLOB: ORA-06502: PL/SQL: numeric or value error: character string buffer too small
          BluShadow

          mathguy wrote:

           

          BluShadow wrote:

           

          mathguy, I think it's related more to the block sizes, especially on earlier versions of Oracle, where the recommendation was 2K or 4K; hence 4000 bytes, plus some extra for the additional bits of data needed.

           

          Perhaps, but that still doesn't make sense. For one thing, 4k is 4096, not 4000. What are the odds that the overhead is exactly 96 bytes? It seems clear to me that an effort was made to work with round numbers.

           

          Then, a block size of 4k would explain the exponent (12, since 4k = 2^12); except that block size should be related to row length. Most tables don't have just a single column of data type VARCHAR2(4000). So I don't see why the limit for VARCHAR2 strings should be related to block size; such a limit would perhaps make sense for row length instead.

           

           

          Well

          a) If the overhead were just 20 bytes, then they could have said, hey, let's make the largest string we can store in a block 4076 bytes... Duh... not a nice number. OK, so they just rounded it down to 4000. Not that hard to imagine: people like round numbers, and the remaining 76 bytes could then be used for "future enhancements" where there might be a need to store further overhead information. They planned for a future they couldn't predict. That would be the logical reasoning behind it.

           

          b) No, block size doesn't really relate to row length. Short rows allow multiple rows to fit in a single block; larger rows get chained across multiple blocks. What you wouldn't want is a single column spanning multiple blocks, which is why CLOBs are only stored inline up to about 4000 bytes (actually slightly less, something like 3996 bytes, but that's another argument for you LOL!) before they are stored outside the table, with just the LOB locator kept in the row. Beyond that, block sizes are really about the physical I/O of reading data from the disks, caching, and so on. You have to get down to the real low-level technicalities of how the database was originally designed back in the day and the physical limitations of the hardware at the time.
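
          And a small sketch of what I mean in (b), with made-up names: the (default) ENABLE STORAGE IN ROW clause keeps small CLOB values inside the table block, while larger values are moved out to the LOB segment and only the locator stays in the row.

            CREATE TABLE lob_demo (
              id  NUMBER,
              txt CLOB
            )
            LOB (txt) STORE AS (ENABLE STORAGE IN ROW);  -- the default: inline up to roughly 4000 bytes

            -- Small value: stays inline in the table block
            INSERT INTO lob_demo VALUES (1, RPAD('x', 2000, 'x'));

            -- Large value: stored out of line, only the LOB locator remains in the row
            INSERT INTO lob_demo VALUES (2, TO_CLOB(RPAD('x', 4000, 'x')) || RPAD('x', 4000, 'x'));

            COMMIT;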
