How can I load data from AL32UTF8 charset into database using WE8ISO8859P1?
I am trying to load a file that originates from an AL32UTF8 source onto our file server, and from there into our EBS database using SQL*Loader. Our EBS database NLS_CHARACTERSET is WE8ISO8859P1 and NLS_LANGUAGE is AMERICAN. I submit an EBS concurrent request that runs a SQL*Loader control file.

When a record contains a diacritic, it fails, and an extra character appears: 'JOSÉ' becomes 'JOSÃ‰', and no matter what we have tried, 'AMPLIACIÓN COLONIA' is converted to 'AMPLIACIÃ¿N COLONIA'. Our DBAs say that changing the character set of either database is not an option.

To confirm this is the cause, I edited the file and replaced the accented characters with non-accented ones; the program then ran without error and loaded all of the records.

Here is the message from the concurrent request's log file (the column named in the error is not the column containing the accent):
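To illustrate what we believe is happening (purely a sketch, not part of the load): the mangled strings look like the classic signature of UTF-8 bytes being reinterpreted in a single-byte character set. One plausible reason the error names a different, numeric column is that the extra byte shifts fixed-width field positions, so a stray byte lands in a zoned-decimal field.

```python
# Illustrative only: how 'JOSÉ' turns into 'JOSÃ‰' when its UTF-8 bytes
# are reinterpreted in a single-byte character set.
utf8_bytes = 'JOSÉ'.encode('utf-8')   # É is encoded as the two bytes 0xC3 0x89
# utf8_bytes == b'JOS\xc3\x89' -- one character became two bytes.

# Reading those bytes as Windows-1252 (a superset of ISO-8859-1 commonly
# used for display) yields exactly the mojibake seen in the rejected rows.
mangled = utf8_bytes.decode('cp1252')  # 'JOSÃ‰'
```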
Record 7: Rejected - Error on table TABLE_NAME, column COLUMN.
Invalid zoned decimal byte.
Record 5838: Rejected - Error on table TABLE_NAME, column COLUMN.
Invalid zoned decimal byte.
6273 Rows successfully loaded.
2 Rows not loaded due to data errors.
0 Rows not loaded because all WHEN clauses were failed.
0 Rows not loaded because all fields were null.
So far my team has tried the following in the SQL*Loader control file, without success:
- TRANSLATE(:column_name USING NCHAR_CS) and TRANSLATE(:column_name USING CHAR_CS)
- TRANSLATE(:column, 'ÁÀÂÄÃÅÇÉÈÊËÍÌÎÏÑÓÒÔÖÕØÚÙÛÜ', 'AAAAAACEEEEIIIINOOOOOOUUUU')
- CONVERT(:column, 'WE8ISO8859P1') and CONVERT(:column, 'WE8ISO8859P1', 'AL32UTF8'): the program ran without error, but the data was inserted incorrectly. It inserted ******* *******R¿ K**** instead of ******* *******RÍA K**.
- REPLACE(:column, 'Ó', 'O')
- utl_raw.cast_to_varchar2(nlssort(:column, 'nls_sort=binary_ai')): this resulted in more errors.
- Setting NLS_LANG, which resulted in a program error.
- Setting CHARACTERSET, which didn't work.
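For reference, when we tried CHARACTERSET, the control file looked roughly like this (table, column, and file names below are placeholders, not our real ones); my understanding is that the clause declares the encoding of the datafile so SQL*Loader converts to the database character set itself, and that it must appear right after LOAD DATA:

```
LOAD DATA
CHARACTERSET AL32UTF8
INFILE 'datafile.dat'
APPEND INTO TABLE TABLE_NAME
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
(COLUMN_A, COLUMN_B)
```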
We have also tried the following, which works in TOAD but doesn't work when run through EBS:
select TRANSLATE(convert(upper('string causing error in the file'),'WE8ISO8859P1'),'ÁÀÂÄÃÅÇÉÈÊËÍÌÎÏÑÓÒÔÖÕØÚÙÛÜ', 'AAAAAACEEEEIIIINOOOOOOUUUU') from dual
select TRANSLATE(convert(upper('string causing error in the file'),'US7ASCII'),'ÁÀÂÄÃÅÇÉÈÊËÍÌÎÏÑÓÒÔÖÕØÚÙÛÜ', 'AAAAAACEEEEIIIINOOOOOOUUUU') from dual
SELECT regexp_replace(regexp_replace('string causing error in the file','[[=O=]]+','O' ),'[[=E=]]+','E' ) FROM dual;
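One option we are considering is preprocessing the file outside EBS before SQL*Loader ever sees it, which generalizes the manual TRANSLATE mapping above. A sketch assuming Python is available on the file server (file names are placeholders, and characters without a Unicode decomposition, such as Ø, would still need an extra mapping):

```python
import unicodedata

def ascii_fold(text: str) -> str:
    """Decompose accented characters (É -> E + combining acute accent)
    and drop the combining marks, leaving plain ASCII letters."""
    decomposed = unicodedata.normalize('NFD', text)
    return ''.join(ch for ch in decomposed
                   if not unicodedata.combining(ch))

def convert_file(src: str, dst: str) -> None:
    """Read the UTF-8 source file and write an ASCII-safe copy for
    SQL*Loader. 'src'/'dst' are placeholder paths."""
    with open(src, encoding='utf-8') as f_in, \
         open(dst, 'w', encoding='ascii', errors='replace') as f_out:
        for line in f_in:
            f_out.write(ascii_fold(line))
```

For example, ascii_fold('AMPLIACIÓN COLONIA') gives 'AMPLIACION COLONIA', which would load cleanly.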
Any help on fixing this is appreciated.