I suspect this is a character-set issue. You never mention which character set your .NET program uses, nor which one the Oracle database uses, so I will assume that in your .NET program it is NOT plain ASCII, but quite possibly UTF-8.
A byte is the only type where one character really is one byte; if your program, Windows, or database is configured with anything larger, then one character in a String variable no longer equals one byte.
You can use a Unicode API, or copy the 2,001-character string into a byte array, to confirm whether it really occupies 2,001 bytes, or whether it actually exceeds the 4,000-byte limit defined on the Oracle column.
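A minimal sketch of that check, in Python rather than .NET but showing the same principle: count the characters, then count the bytes they occupy once encoded. The string content here is just an assumption for illustration; Oracle's `VARCHAR2(4000)` limit is 4,000 bytes by default, not 4,000 characters.

```python
# 2,001 characters, none of them ASCII (U+6F22 is 3 bytes in UTF-8).
s = "\u6f22" * 2001

chars = len(s)                        # character count: 2001
utf8_bytes = len(s.encode("utf-8"))   # bytes actually stored as UTF-8

print(chars)       # 2001
print(utf8_bytes)  # 6003 -- well over a 4,000-byte column limit
```

So a string that looks safely under the limit when you count characters can still be rejected once the database counts bytes.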
Many experienced programmers and DBAs miss this point, thinking: "Even though my Windows/database character set is Chinese/Japanese/Polish, as long as I only use ASCII 0-127, each character takes 1 byte." No! That is NOT always the case; it varies with the character set, the programming language, the OS, and the database.
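To make that concrete, here is a small illustration (again in Python): whether an ASCII character occupies 1 byte depends entirely on the encoding in force, not on the character itself.

```python
a = "A"  # plain ASCII character

print(len(a.encode("utf-8")))      # 1 byte  -- ASCII stays 1 byte in UTF-8
print(len(a.encode("utf-16-le")))  # 2 bytes -- every character is >= 2 bytes in UTF-16
print(len(a.encode("utf-32-le")))  # 4 bytes -- fixed 4 bytes per character in UTF-32
```

So in a UTF-16 environment, even a pure-ASCII string doubles in size at the byte level.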
Oracle's UTF8 and UTF16 character sets do not conform to the current UTF-8 and UTF-16 standards, which have gone through several revisions. Like many vendors, Oracle often stays on an older version, because any change has to pass through millions of tests, and that delays adoption of the latest standard. (Oracle's legacy "UTF8" character set is actually CESU-8; the standards-compliant one is AL32UTF8.)
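A sketch of why that difference matters at the byte level: for a supplementary character (outside the Basic Multilingual Plane, here U+1F600), standard UTF-8 uses 4 bytes, while CESU-8, which Oracle's legacy "UTF8" character set follows, encodes each half of the UTF-16 surrogate pair separately, giving 6 bytes. The manual surrogate encoding below is just an assumption-free way to construct the CESU-8 form in Python, which has no built-in CESU-8 codec.

```python
s = "\U0001F600"  # a supplementary character (emoji, U+1F600)

utf8 = s.encode("utf-8")  # standard UTF-8: one 4-byte sequence

# Build the CESU-8 form by hand: take the UTF-16 surrogate pair and
# encode each 16-bit surrogate code unit as its own 3-byte sequence.
units = s.encode("utf-16-be")
cesu8 = b"".join(
    chr(int.from_bytes(units[i:i + 2], "big")).encode("utf-8", "surrogatepass")
    for i in range(0, len(units), 2)
)

print(len(utf8))   # 4
print(len(cesu8))  # 6
```

So the same string can occupy a different number of bytes in an Oracle "UTF8" column than your client-side UTF-8 byte count predicts, which is exactly the kind of mismatch that causes length errors.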
Sorry that I do not supply any code, but as a programmer you should figure this one (character sets) out yourself. You will hit this issue again and again over the next 10-30 years of your career, so you had better become an expert now.