Hi. They usually ask folks to change their "number" identifier to a name (Sam, Fred, Jill, ...) so we can be more personal. Welcome.
It's curious that you are not getting error messages. Is the loaded data garbled -- that is, the wrong data loaded to the wrong columns? Are only entire records being dropped? Is there some kind of load report? What does it say about records loaded vs. not loaded?
You need to find out which characters or records are causing the problem. Here's what I do when stumped. Take a small file, say 1000 records. Does it load all of them? If so, try 10,000; if not, try 500. Once I find a small sample that doesn't load, I try a binary-search approach. If 1000 records will not load, is there a problem in the first 500 or the second 500? If in the first 500, is there a problem in the first 250? And so on.
If it's a timing problem, or not related to a particular record or character, this won't help, but it's what I do first. Most of the time I get more records loading than I actually have in the data, rather than fewer, because someone has put end-of-line characters (EOLs) in descriptive fields like comments. You seem to have the opposite problem.
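In case it helps, the halving itself is easy to script. Assuming the source is a plain text file (data.csv and the output names below are just stand-ins), something like this splits a failing sample in two:

```shell
# Stand-in for the real export; seq/sed just fabricate 1000 numbered rows.
seq 1 1000 | sed 's/^/row-/' > data.csv

# Binary-search step: split the failing sample in half, then try
# loading each half separately to see which one carries the problem.
head -n 500  data.csv > first500.csv
tail -n +501 data.csv > second500.csv

wc -l < first500.csv     # 500
wc -l < second500.csv    # 500
```

Repeat the split on whichever half still fails until the bad records are cornered.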
Howard, thanks for the reply.
I've tried to import 1000 rows; about 200 rows were dropped again, without any error messages. All missing rows are consecutive and don't depend on any specific symbols. For example, rows 2079, 2080 ... 2111 are imported, then a gap appears and rows 2112-2192 are not imported. In short, it seems to depend on row position, not content.
I've imported this table with SQL*Loader and all rows were imported correctly. Could it be an internal APEX 2.1.xx application problem, or an ANSI/UTF-8 Cyrillic conflict?
I ask because when I query some VARCHAR2 columns from this imported table in APEX SQL Commands, an error appears: "ORA-06502: PL/SQL: numeric or value error: character string buffer too small". SQL*Plus and the APEX SQL Query Builder parse any queries on this table without errors.
Could these errors be related to each other?
Edited by: Andrew_K on 15.11.2012 11:40
1) First, is there a way to compare the rows loaded by APEX with those loaded by SQL*Loader? If you do SELECT APEX rows MINUS SELECT SQL*Loader rows, do you get nothing left? That is, does every APEX row that loads, load perfectly?
2) It seems to me there have to be some "no load" messages somewhere -- I just don't know where to look in APEX.
3) I know this gets tedious, but can you take the first subset of the 1000 rows where the 200 fail to load, extended to start 10 records before and end 10 records after those that fail? Then run with just those 220. I expect the 20 extra rows to load and the same 200 to fail.
4) What's the data like? How many columns? How many characters per row on average? Are there large, free-format "text" fields? Any unusual characters (punctuation?) in a record that fails to load?
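For item 1), if both tables can be exported as sorted lists of their key column (the file names below are made up), the same MINUS idea works outside the database with comm:

```shell
# Hypothetical exports: one ID per line, sorted, from each target table.
printf '101\n102\n103\n104\n' > sqlldr_ids.txt   # loaded by SQL*Loader
printf '101\n104\n'           > apex_ids.txt     # loaded by APEX

# Lines present in the first file but absent from the second:
# these are the rows SQL*Loader kept and APEX dropped.
comm -23 sqlldr_ids.txt apex_ids.txt             # 102, 103
```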
Best of Luck,
APEX 2.1? Are you back on APEX 2.1? Or does that mean something else?
I'm sure you have found the symptom -- the ORA-06502 error. Good for you! It's failing in the character translation. Exactly how it's failing, or how to correct it, I don't know.
Can you do the APEX import without the Cyrillic data and see if that works?
1. I've imported the table with APEX and SQL*Loader into two different tables ("PATIENTS", "PATIENTS_SL"). The encoding of the imported file was ANSI; character set: Cyrillic Windows-1251. APEX imported 46658 rows; SQL*Loader imported 69861 rows (+4 from my last post).
Thereafter the SELECT .. MINUS operator returns 23712 missing rows, all of them consecutive and located in groups, for example:
172 173 .... 255 256 421 422 423 .... 498 499 671 672 673 ....
I've noticed no significant text differences between the imported and non-imported rows.
2. Here's the APEX import detail report:
Overview: This page displays information about the text data file import. If any rows failed while importing, this page displays an error message.
Import Date: 16.11.12
Imported By: SYSTEM
Schema: SYSTEM
Table: PATIENTS_APEX
Rows Imported: 46658
Rows Failed: 0
Failed Rows: No failed rows found.
When trying to import the data in UTF-8 encoding, gibberish appears instead of the Cyrillic text.
3. Common table code:
CREATE TABLE "PATIENTS" (
  "ID" NUMBER CONSTRAINT P_ID_NN NOT NULL,
  "FULLNAME" VARCHAR2(4000) CONSTRAINT FULLNAME_NN NOT NULL,
  "DOB" DATE CONSTRAINT DOB_NN NOT NULL,
  "STREET" VARCHAR2(4000),
  "OCCUPATION" VARCHAR2(4000),
  "REGION" VARCHAR2(4000),
  "LOCATION" VARCHAR2(4000),
  "MIHP" VARCHAR2(4000),
  "SPI" VARCHAR2(4000),
  "GENDER" VARCHAR2(4000) CONSTRAINT GENDER_NN NOT NULL
)
/
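One note on the UTF-8 gibberish: that is exactly what Windows-1251 bytes look like when something decodes them as UTF-8. If iconv is available, a one-line check confirms it (the six bytes below are a made-up name, "Иванов", in CP1251):

```shell
# Six CP1251 bytes spelling the Cyrillic name "Иванов".
printf '\310\342\340\355\356\342' > cp1251_sample.bin

# Converting them explicitly from CP1251 to UTF-8 recovers the text;
# feeding them to anything that assumes UTF-8 shows garbage instead.
iconv -f CP1251 -t UTF-8 cp1251_sample.bin       # Иванов
```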
Here are my NLS parameters (in .csv):
PARAMETER;Database value;Instance value;Session value
NLS_CALENDAR;GREGORIAN;-;GREGORIAN
NLS_CHARACTERSET;AL32UTF8;-;-
NLS_COMP;BINARY;-;BINARY
NLS_CURRENCY;$;-;р.
NLS_DATE_FORMAT;DD-MON-RR;-;DD.MM.RR
NLS_DATE_LANGUAGE;AMERICAN;-;RUSSIAN
NLS_DUAL_CURRENCY;$;-;р.
NLS_ISO_CURRENCY;AMERICA;-;RUSSIA
NLS_LANGUAGE;AMERICAN;AMERICAN;RUSSIAN
NLS_LENGTH_SEMANTICS;BYTE;BYTE;BYTE
NLS_NCHAR_CHARACTERSET;AL16UTF16;-;-
NLS_NCHAR_CONV_EXCP;FALSE;FALSE;FALSE
NLS_NUMERIC_CHARACTERS;.,;-;,
NLS_RDBMS_VERSION;10.2.0.1.0;-;-
NLS_SORT;BINARY;-;RUSSIAN
NLS_TERRITORY;AMERICA;AMERICA;RUSSIA
NLS_TIME_FORMAT;HH.MI.SSXFF AM;-;HH24:MI:SSXFF
NLS_TIMESTAMP_FORMAT;DD-MON-RR HH.MI.SSXFF AM;-;DD.MM.RR HH24:MI:SSXFF
NLS_TIMESTAMP_TZ_FORMAT;DD-MON-RR HH.MI.SSXFF AM TZR;-;DD.MM.RR HH24:MI:SSXFF TZR
NLS_TIME_TZ_FORMAT;HH.MI.SSXFF AM TZR;-;HH24:MI:SSXFF TZR
Value of HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\ORACLE\KEY_XE\NLS_LANG:
4. When I delete all columns containing Cyrillic text, the table is imported completely with APEX (2 DATE-format columns, 3 VARCHAR2-format columns containing number data and some Cyrillic and Latin characters), but "ORA-06502: PL/SQL: numeric or value error: character string buffer too small" appears when I try to query a large number of rows.
Does that mean the error is not related to data format or encoding?
When I request large values (e.g. many rows), the same error appears.
ALL other utilities, including APEX query builder, parse this data without errors.
I can only suggest a couple of things you might look at. I wish I could be more help.
I strongly believe it's a character translation problem. Exactly what or how to fix ... not a clue.
1) When I've had data from the wrong character set, I've usually seen more records being found rather than fewer. This was because there were extra end-of-line/record (EoR) characters in the user-entered comments. But another time, it found few (or no) EoRs and errored out -- probably with a buffer-exceeded error.
2) Let's Believe the Error Messages! In this case, the "LACK" of an error message. Import says it only sees 46,658 records -- believe it. My guess: Import doesn't see EoL/EoR characters for some of the records. When this has happened to me -- I'm trying to recall -- maybe it was because end-of-field characters were not read correctly and the end-of-line/record character actually ended up inside one of the fields.
3) Also believe the error "ORA-06502: PL/SQL: numeric or value error: character string buffer too small". It absolutely does mean that there is a character-encoding problem! When it fails to find a character it recognizes as an end character, it eventually -- after dozens or hundreds of records -- exceeds the buffer capacity. ID is your only number field, right? So if the data from some other field were assigned there, I believe this is the error you would get.
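The byte arithmetic behind that buffer error is easy to check. Your NLS_LENGTH_SEMANTICS is BYTE and the database character set is AL32UTF8, where every Cyrillic letter costs two bytes, so a byte-sized buffer holds only half as many Cyrillic characters as you might expect (the sample name is invented):

```shell
# Six Cyrillic characters take twelve bytes in UTF-8, so a
# VARCHAR2(4000) column with BYTE semantics holds at most 2000
# such characters, not 4000.
printf '%s' 'Иванов' | wc -c     # 12
```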
Think about trying these.
1) Since DOB is a simple date field near the beginning and GENDER is the last field, please get a list of unique values with counts. I think you will see garbage, because the data is not being picked up properly aligned, and that will be because the end-of-field and/or EoR delimiters are not being found. [But if the data is out of alignment, why aren't we seeing load errors for the DOB (date) field?] Or, if ID is read from the data, use ID in place of DOB.
2) If you think you know 10 records which are not being read correctly -- or even just two records -- import just those and inspect the data in the fields carefully to see if/where the data is read out of alignment.
But these just confirm the problem; they don't fix it.
Perhaps we need an answer to these questions, too. What characters are you expecting to see for end-of-field and EoR in the Cyrillic data? What characters are actually present? I suppose you'd need a hex editor to view them.
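Lacking a hex editor, the standard od utility shows the actual bytes, delimiters included (the sample record below is invented):

```shell
# A sample record: semicolon field separators, Windows-style CRLF ending.
printf '1;Иванов;01.01.1970\r\n' > line.txt

# -c prints one byte per cell, so the ; separators and the \r \n
# end-of-record bytes become visible.
od -c line.txt
```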
Wish I could be more help!
1) When I request a few full rows containing Cyrillic text data, they return properly, but when I request a large number of rows (more than 30-40), or even 3-4 VARCHAR2 columns of all rows, the mentioned error appears.
2) Every time I try to import the table, different rows disappear. When I query for them one by one, they are displayed properly.
Can you explain a bit more about buffer capacity? Could it be the reason that a large number of rows is not displayed? SQL*Plus displays all columns properly and returns all the necessary rows in any quantity!
You've run the Import many times now. Is the load count always the same with the same dataset? I hope so, because then the error is reproducible. If not, the problem must be even more complex.
Buffer: (I hope this is not too elementary.) An early meaning of "buff" was to absorb a blow or hard hit. Thus buffer came to mean something "in between" absorbing the blow or just "something in between". Whenever things (objects, characters, signals, people) queue to be served, the place they wait can be called a buffer area or just a "buffer".
When I go to the store, I don't just get 1 can of soup and 1 can of beans. I get many because it's more efficient in terms of time and gas used. And I don't immediately prepare everything, I store most of it in the pantry (or cabinet) until needed. The pantry is my buffer. I don't buy a hundred cans -- although they might make me a sweet deal, because my buffer is not big enough to hold 100 cans.
Correspondingly, there has to be a place in memory (or other storage) to hold the characters before they are processed and assigned to the individual fields in the table. Those memory areas are called buffers. For data, the input device waits when the buffer is full. But there is a second consideration: the buffer must be big enough to hold the data of a single read operation. If a record -- all of which is read at one time -- is 100 characters, the buffer must be at least 100 characters. To be simplistic, suppose each read operation transfers characters to the buffer until an end-of-record (EoR) character is read, and there is a standard default buffer size of, say, 32K bytes. If the data characters don't translate correctly(?), the buffer begins to fill with the data of record after record after record until an EoR character is found. If there is more than 32K bytes of data between recognizable EoRs, the read fails with a "buffer size/space exceeded" error.
Since you don't see a buffer error on import, but you just lose records, it's AS IF not every record has an EoR character. However, that seems much, much too simplistic to be the real problem.
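That "missing EoR" effect is at least easy to demonstrate with toy data: when one record loses its terminator, line-oriented readers see it glued to the next record and the record count silently drops:

```shell
# Three records were intended, but record 2 lost its end-of-line byte.
printf 'rec1\nrec2rec3\n' > broken.txt

wc -l < broken.txt    # 2 -- one record has silently vanished
```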
1) What is the average record size? Are we talking 1000 chars? 10,000 chars? 25,000 chars?
Did you attempt any of the things I mentioned last time?
Think about trying these.
2) Since DOB is a simple date field near the beginning and GENDER is the last field, please get a list of unique values with counts. I think you will see garbage, because the data is not being picked up properly aligned, and that will be because the end-of-field and/or EoR delimiters are not being found. Or, if ID is read from the data, use ID in place of DOB.
3) If you think you know 10 records which are not being read correctly -- or even just two records -- import just those two and inspect the data in the fields carefully to see if/where the data is read out of alignment.
But these just confirm the problem; they don't fix it.
Perhaps we need an answer to these questions, too. 4) What characters are you expecting to see for end-of-field and EoR in the Cyrillic data? What characters are actually present? I suppose you'd need a hex editor to view them.
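For question 1) about record sizes, a single awk pass over the raw file reports the average and maximum record length (data.csv is a stand-in name; the three lines are fabricated):

```shell
# Toy stand-in for the export file.
printf 'short\na much longer record line\nmid-size row\n' > data.csv

# One pass: accumulate total length for the average, track the maximum.
awk '{ total += length($0); if (length($0) > max) max = length($0) }
     END { printf "avg=%.1f max=%d\n", total / NR, max }' data.csv
```

On the toy file this prints avg=14.0 max=25; run it on the real file to answer the question.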
Wish I could be more help!