We have a database that needs to handle Chinese characters. Currently, it is set up like this:
SQL> select * from NLS_DATABASE_PARAMETERS;
NLS_TIME_FORMAT HH.MI.SSXFF AM
NLS_TIMESTAMP_FORMAT DD-MON-RR HH.MI.SSXFF AM
NLS_TIME_TZ_FORMAT HH.MI.SSXFF AM TZR
NLS_TIMESTAMP_TZ_FORMAT DD-MON-RR HH.MI.SSXFF AM TZR
Which character set should I change it to so it can handle Chinese characters, and how do I do it? Thanks!
You need to create the database with the character set you want to use. You cannot simply convert it after it is created.
You could use AL32UTF8 as the character set to handle Chinese characters.
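To see which character sets the database is currently using, query NLS_DATABASE_PARAMETERS for the two relevant parameters: NLS_CHARACTERSET governs CHAR/VARCHAR2/CLOB storage, and NLS_NCHAR_CHARACTERSET governs the NCHAR types.

```sql
-- Show the database character set and the national character set
SELECT parameter, value
  FROM nls_database_parameters
 WHERE parameter IN ('NLS_CHARACTERSET', 'NLS_NCHAR_CHARACTERSET');
```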
See the Length Semantics section of the Globalization Support Guide for an explanation of how the character set affects the size of the columns you specify.
Character semantics is useful for defining the storage requirements for multibyte strings of varying widths. For example, in a Unicode database (AL32UTF8), suppose that you need to define a VARCHAR2 column that can store up to five Chinese characters together with five English characters. Using byte semantics, this column requires 15 bytes for the Chinese characters, which are three bytes long, and 5 bytes for the English characters, which are one byte long, for a total of 20 bytes. Using character semantics, the column requires 10 characters.
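The example above translates into column definitions like the following (a sketch; the table and column names are made up for illustration):

```sql
-- Byte semantics: size for the worst case in bytes
-- (5 Chinese characters x 3 bytes + 5 English characters x 1 byte = 20 bytes)
CREATE TABLE demo_byte (name VARCHAR2(20 BYTE));

-- Character semantics: size in characters, regardless of byte width per character
CREATE TABLE demo_char (name VARCHAR2(10 CHAR));
```

The default interpretation of a plain VARCHAR2(n) is controlled by the NLS_LENGTH_SEMANTICS parameter, so stating BYTE or CHAR explicitly avoids surprises.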
Please post details of your OS and database versions.
If the database is empty (or has data that is of no consequence), create a new database with the AL32UTF8 character set.
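For that route, the relevant part of the CREATE DATABASE statement is the CHARACTER SET clause (a minimal sketch; the database name is a placeholder and the many other clauses a real CREATE DATABASE needs are omitted — DBCA offers the same choice in its GUI):

```sql
CREATE DATABASE mynewdb
  CHARACTER SET AL32UTF8
  NATIONAL CHARACTER SET AL16UTF16;
  -- datafile, logfile, sysaux, undo, etc. clauses omitted for brevity
```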
If the database has data that you cannot lose, you will need to follow the character set migration steps in the docs.
The Database Migration Assistant for Unicode (DMU) is another tool that can be used.
Changing the character set of an existing database is not a trivial exercise; it should be done with care, and the application should be tested thoroughly afterwards.
Changing the NLS_CHARACTERSET to AL32UTF8 / UTF8 (Unicode) [ID 260192.1]