2 Replies Latest reply on Oct 7, 2013 11:45 PM by Sergiusz Wolicki-Oracle

    How to review implication of database character set change on PL/SQL code?

      We are converting an Oracle database character set from WE8ISO8859P1 to AL32UTF8. Before the conversion, I want to check the implications for PL/SQL code that uses byte-based SQL functions.

      What points should I consider while checking the implications for PL/SQL code?



      Through a Google search I could find three byte-based functions: SUBSTRB, LENGTHB, and INSTRB. What should I check if these functions are used in PL/SQL code?

      What should we check if the SUBSTR and LENGTH functions are used in PL/SQL code?

      What other functions should I check?

      What should I check in PL/SQL code where VARCHAR and CHAR type declarations exist?

      How do I check the implications of the database character set change to AL32UTF8 for byte-based SQL functions?

      Thanks in advance.

        • 1. Re: How to review implication of database character set change on PL/SQL code?
          Srini Chavali-Oracle

          Please review MOS Note 788156.1.


          You will have to perform a test conversion and test your application thoroughly to identify any issues that may be caused by the conversion.



          • 2. Re: How to review implication of database character set change on PL/SQL code?
            Sergiusz Wolicki-Oracle

            There is no quick answer.  Generally, the problem with PL/SQL code is that once you migrate from a single-byte character set (like WE8ISO8859P1) to a multibyte character set (like AL32UTF8), you can no longer assume that one character is one byte. Traditionally, column and PL/SQL variable lengths are expressed in bytes. Therefore, the same string of Western European accented letters may no longer fit into a column or variable after migration, as it may now be longer than the old limit (2 bytes per accented letter compared to 1 byte previously). Depending on how you dealt with column lengths during the migration, for example, if you migrated them to character length semantics, and depending on how relevant columns were declared (%TYPE vs explicit size), you may need to adjust maximum lengths of variables to accommodate longer strings.
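            To make the length issue concrete, here is a small sketch. Python is used purely as an illustration (this is not PL/SQL); the byte arithmetic is the same as what Oracle applies under BYTE length semantics:

```python
# Illustration only (Python, not PL/SQL): why a value that fits a
# byte-sized limit under WE8ISO8859P1 may overflow under AL32UTF8.
s = "Générale"  # 8 characters, two of them accented

print(len(s))                      # 8 characters in either character set
print(len(s.encode("latin-1")))    # 8 bytes under WE8ISO8859P1
print(len(s.encode("utf-8")))      # 10 bytes under AL32UTF8

# A VARCHAR2(8) variable (BYTE semantics, the usual default) holds this
# string before the migration but not after; VARCHAR2(8 CHAR) still does.
```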


            The use of SUBSTR, INSTR, and LENGTH and their byte equivalents needs to be reviewed. You need to understand what each function is used for. If SUBSTR is used to truncate a string to the maximum length of a variable, you may need to change it to SUBSTRB if the variable's length constraint is still declared in bytes; if the variable's maximum length is now expressed in characters, SUBSTR should remain. If SUBSTR is used to extract a functional part of a string (e.g. during parsing), possibly based on a result from INSTR, then you should use SUBSTR and INSTR independently of the database character set -- characters matter here, not bytes. On the other hand, if SUBSTR is used to extract a field from a SQL*Loader-like fixed-format input file (e.g. read with UTL_FILE), you may need to standardize on SUBSTRB to make sure that fields are extracted correctly along the defined byte boundaries.
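            The SUBSTR vs SUBSTRB distinction can be sketched the same way (again Python as a stand-in, not PL/SQL; note that Oracle's SUBSTRB additionally replaces a partial trailing character with blanks, which plain byte slicing does not):

```python
# Illustration only: character-based vs byte-based extraction, mirroring
# SUBSTR vs SUBSTRB once the data is stored as UTF-8 (AL32UTF8).
data = "étéXYZ"  # 6 characters; each 'é' is 2 bytes in UTF-8

by_char = data[:3]                  # SUBSTR-style: first 3 characters
by_byte = data.encode("utf-8")[:3]  # SUBSTRB-style: first 3 bytes

print(by_char)   # 'été' -- same result in any character set
print(by_byte)   # b'\xc3\xa9t' -- the second 'é' is cut in half
```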


            As you see, there is no universal recipe for handling these functions. Each use needs to be reviewed and understood, and it should be decided whether it is fine as-is or needs to be replaced with another form.